COS 102 Problem Solving
Data: Data is a piece of basic raw facts about people, places, etc. that needs to be processed to
produce information. Examples include exam scores (e.g. 40, 75) and student names (e.g. Ado, Fatima).
Information: Information is the useful, desired form into which data is finally transformed after
undergoing a series of processes.
Data Processing: Data processing involves all the activities that, when performed, lead to the
production of the required information. These activities include data sourcing, collection, recording,
collation, classification, inputting, transformation and outputting. Data processing may be
manual or automated; automated data processing is referred to as electronic data
processing (EDP).
The use of computers for data processing has become universal, especially since the introduction
of the first microcomputer in 1975.
Methods of Data Processing
The following are the three major methods that have been widely used for data processing over
the years:
The Manual Method
The Mechanical Method
The Computer Method
The main features of the first four generations include:
First generation: unreliable; very costly; huge size; A.C. needed; non-portable.
Second and third generations: faster; lesser maintenance; still costly; A.C. needed.
Fourth generation: very cheap; use of PCs; pipeline processing; no A.C. needed.
The Fifth Generation: 1980 till date (the era of Artificial Intelligence)
In the fifth generation, VLSI technology became Ultra Large Scale Integration (ULSI)
technology, resulting in the production of microprocessor chips with ten million electronic
components. This generation is based on parallel processing hardware and AI (Artificial
Intelligence) software. AI is an emerging branch of computer science concerned with the means
and methods of making computers think like human beings. High-level languages such as C,
C++, Java, .Net, etc. are used in this generation.
AI includes: robotics, neural networks, game playing, the development of expert systems to
make decisions in real-life situations, and natural language understanding and generation. The main
features of the fifth generation are:
ULSI technology
Classification of Computer
The computer has passed through many stages of evolution from the days of the mainframe
computers to the era of microcomputers. Computers have been classified based on different
criteria. In this unit, we shall classify computers based on three popular methods.
Classification Based on Data Signal Type
There are basically three types of electronic computers. These are the Digital, Analog and Hybrid
computers.
Classification Based on Size
1. Super Computers
These are high-capacity machines with hundreds of thousands of processors that can perform
over one trillion calculations per second. They are the largest, most expensive and fastest
computers available. They are called "super" because they are used for tasks requiring the
processing of enormous volumes of data, such as forecasting weather, designing aircraft,
modeling molecules, simulating nuclear explosions, fluid dynamics calculations, nuclear
energy research, electronic design, analysis of geological data (e.g. in petrochemical
prospecting), film animation, etc.
2. Mainframe Computers
These were the only types of computers available until the late 1960s. Mainframe computers are
often shared by multiple users connected to the computer via terminals. They are expensive and
vary in size from small to medium to large depending on their use. A mainframe executes many
programs concurrently and supports many simultaneous users. Small mainframes are called
midsize computers or minicomputers. Mainframes are used by large organizations such as
banks, airlines, insurance companies and colleges for processing millions of transactions.
3. Minicomputers
These types of computers were smaller, less expensive, and less powerful than
a mainframe or supercomputer but more expensive and more powerful than a personal computer.
Minicomputers were used for scientific and engineering computations, business
transaction processing, file handling, and database management. Minicomputers as a distinct
class of computers emerged in the late 1950s and reached their peak in the 1960s and 1970s
before declining in popularity in the 1980s and 1990s.
4. Microcomputers
A microcomputer is also called a Personal Computer (PC). It is a small, common, portable and
relatively inexpensive computer designed for a single user, but PCs can be linked together to form a
network. Its technology is based on a microprocessor: an entire CPU placed on a single chip. PCs
come in several types: desktop PCs, tower PCs, laptops (notebooks) and personal digital assistants
(handheld computers or palmtops).
Classification Based on Purpose
1. Special-Purpose Computers
A special purpose computer is one that is designed to solve a restricted class of problems. Such
computers may even be designed and built to handle only one job. In such machines, the steps or
operations that the computer follows may be built into the hardware. Most of the computers used
for military purposes fall into this class. Other examples of special purpose computers include:
Computers designed specifically to solve navigational problems.
Computers used for process control applications in industries such as oil refinery,
chemical manufacture, steel processing and power generation
Computers used as robots in factories like vehicle assembly plants and glass industries.
2. General-Purpose Computers
A general-purpose computer is designed to solve a wide range of problems. General-purpose
computers are less efficient than special-purpose computers at any single task, because the
generality that makes them flexible also carries overhead.
Applications of Computers
There are many areas where computers are applied; virtually every aspect of human endeavor
now makes use of them. Among others are:
1. Computer in Education: Computer programs, or applications, exist to aid every level of
education, from programs that teach simple addition or sentence construction to programs
that teach advanced calculus e.g. Computer-Aided Instruction (CAI). Educators use
computers to track grades and communicate with students; with computer-controlled
projection units, they can add graphics, sound, and animation to their communications.
Computers have provided many facilities in the education system, such as e-learning, e-libraries,
and computer-based instructional materials such as smart boards, virtual classes, etc.
2. Computer in Research: Computers are used extensively in scientific research to solve
mathematical problems, investigate complicated data, or model systems that are too
costly or impractical to build, such as testing the air flow around the next generation of
aircraft.
3. Computer in the Health Sector: Computer-controlled devices can be used to monitor a patient's
condition, such as changes in temperature, heartbeat, etc. Computers with vast storage
facilities can store facts that complement the work of medical doctors in diagnosis.
These form the basis of what is called a knowledge-based system or an expert system.
2. Accuracy: Its accuracy is very high and its consistency can be relied upon. Errors
committed in computing are mostly due to human rather than technological weakness.
There are in-built error detecting schemes in the computer.
3. Storage: It has both internal and external storage facilities for holding data and
instructions. This capacity varies from one machine to the other. Memories are built up in
K (Kilo) modules where K=1024 memory locations.
4. Automatic: Once a program is in the computer's memory, it can run automatically each
time it is opened; the user has little or no further instruction to give.
5. Reliability: Being a machine, a computer does not suffer human traits of tiredness and
lack of concentration. It will perform the last job with the same speed and accuracy as the
first job every time even if ten million jobs are involved.
6. Flexibility: It can perform any type of task once it can be reduced to logical steps.
Modern computers can be used to perform a variety of functions like on-line processing,
multiprogramming, real time processing etc.
A. Computer Hardware
The term "computer hardware" describes the external and internal physical components needed
to carry out important tasks including input, output, storage, communication, processing and
more. A computer is made up of the Central Processing Unit (CPU) and the peripheral units
(keyboard, mouse, printer, scanner, etc.).
A computer system's central processing unit (CPU) is frequently referred to as its brain. Similar
to how the brain in humans directs all activity, in a computer system the CPU directs all
processing activities. The following are its primary components:
Control Unit: The control unit (CU) serves as the system's supervisor. The CU is in charge of
coordinating and synchronizing all tasks carried out by a computer system. The CU controls the
movement of data from one area of the CPU to another and vice versa, acting as a traffic cop.
The management of the process of loading and unloading programs and data from memory falls
under the purview of the control unit. It is also in charge of sequentially carrying out (executing)
program instructions. The concept of a "register" to store interim computational values is
included in this.
Arithmetic and Logic Unit: All of the operations take place in the arithmetic and logic unit
(ALU). The ALU performs logical analysis and decision-making in addition to doing
mathematical calculations. The capabilities that set a computer system apart from a calculator are
logical comparison and decision-making.
Memory: Primary memory is memory located close to the central processing unit which
stores data and programs. The device is shaped like a silicon chip, and data is stored there as
electronic pulses. The digits "1" and "0" represent the presence and absence of current,
respectively; this memory therefore stores information as binary 1s and 0s.
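As a brief illustrative sketch (the variable names here are illustrative, not from the notes), the following Python snippet shows how a value can be written as, and recovered from, the pattern of binary 1s and 0s in which memory stores it:

```python
# Illustrative sketch: a value viewed as the pattern of binary 1s and 0s
# in which memory stores it.
value = 13
bits = format(value, "08b")   # the 8-bit binary pattern for 13
print(bits)                   # 00001101

# Reading the pattern back as base 2 recovers the original value.
print(int(bits, 2))           # 13
```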
Input Hardware
The phrase "hardware" refers to all the tangible parts of a computer system that a user can touch,
such as the keyboard, visual display unit, system unit, mouse, printer, etc. A computer system's
input devices (hardware) are used to enter data into the system. Examples are:
Keyboards: The most frequent input devices are keyboards, which resemble typewriters. They
are made up of keys that stand in for letters, numbers, and unique symbols. They also contain
function keys, which vary in use based on the program being used, from F1 to F12.
Figure 2: Keyboard
Mouse: With the invention of the mouse, the movement restriction of older input devices was
eliminated. The need for an input device that can assist with data entry by picking an option
on the desktop came with the introduction of the GUI. With the aid of a mouse, a computer user
can now move the pointer freely in any direction on the screen, something that was previously
impossible. One type of mouse has a tracking ball that sends the signal to move the pointer on the
screen, while the other type is an optical mouse that detects movement and moves the pointer on the screen.
Figure 3: Mouse
Trackballs
Using a trackball, you can input motion information into computers and other electronics. Its top
has a rolling, moving ball that can move in any direction, acting as a mouse-like device. Instead
of moving the complete gadget, simply roll the movable ball on top of the trackball unit with
your hand to provide motion input. The main function of computer trackballs, which are often
used in place of mice, is to move the cursor around the screen. Similar to mice, computer
trackballs have buttons that can be used as left- and right-click buttons as well as for other
commands. Trackballs can be found in various electronic devices besides computers, such as
arcade games, mixing boards, and self-service kiosks, though they are most frequently used
with computers. The trackballs on these gadgets are frequently bigger than those seen on
computer input devices.
Figure 4: Trackball
Pointing sticks
Similar to a joystick, the pointing stick can move and control the computer cursor. It is designed
to sit just above the keys. A pointing stick is a practical substitute for a touchpad when
a laptop lacks the necessary space. For it to function as intended, the sensitivity of the pointing
stick must be adjusted to recognize the motions and taps intended for it.
Output Hardware
A computer system's output devices present the results of processing to the user. Examples are:
Speakers: These produce "soft" copy output in the form of sound, such as music. The built-in
speakers in most PC cases are used just for making system sounds, such as warning beeps and
action indicators. To play more sophisticated sounds on your PC, you need a set of external
speakers. Usually speakers come in pairs, and there is a plug that connects them to your sound
card. Arrange the speakers with one on the left and one on the right of your desk or work area to
get a stereo effect.
Printers: Unlike monitors, printers produce hard copy output on paper, such as letters, reports,
memos, payroll, cheques, etc.
Plotters: These are output devices that produce hard copy output like printers, with the
difference that their output is mostly graphic, such as bar charts, maps, organization charts, etc.
B. Computer Software
Software basically consists of programs: sequences of instructions that are performed to
accomplish a task. It is software that enables the hardware to be put to effective use. It is
sometimes said that "a computer without a program is an electronic idiot" because it can do nothing
constructive or profitable. There are two main categories of software: system software and
application software.
System Software
System software is a kind of software that serves as the intermediary between the hardware and
application software. It provides the needed environment for application software to run and for
the user to interact with the computer system. System software includes the operating system, the
translator and other utility software the computer needs to function. Examples of operating
systems are Windows, Mac OS, Linux, Ubuntu, Android OS, Chrome OS and Apple iOS.
Translators include compilers (e.g. Visual C#, Visual C# Express, Turbo C++, javac, etc.),
interpreters (e.g. the Python interpreter, PERL interpreter, Ruby interpreter, PHP interpreter, etc.)
and assemblers (e.g. Turbo Assembler, Microsoft Macro Assembler and High Level Assembler
(HLA)). Examples of utility software are backup software, anti-virus software, disk
defragmenting software, disk management software, etc.
Application Software
Application software is a software type that carries out specific tasks for the specific needs of
users; it falls into the category of either general-purpose or special-purpose application
software. A general-purpose application can be used to perform more than one specific task,
e.g. Microsoft Word can be used for typesetting, spell-checking, finding and replacing the
contents of a document, etc. A special-purpose application is used to accomplish specific tasks
in a specific domain, e.g. an insurance application performs functions specific to insurance.
People face problems every day—usually, multiple problems throughout the day. Sometimes
these problems are straightforward, sometimes, however, the problems we encounter are more
complex. For example, say you have a work deadline, and you must mail a printed copy of a
report to your supervisor by the end of the business day. The report is time-sensitive and must be
sent overnight. You finished the report last night, but your printer will not work today. What
should you do? First, you need to identify the problem and then apply a strategy for solving the
problem.
Practicing different problem-solving strategies can help professionals develop efficient solutions
to challenges they encounter at work and in their everyday lives. Each industry, business and
career has its own unique challenges, which means employees may implement different
strategies to solve them. If you are interested in learning how to solve problems more effectively,
then understanding how to implement several common problem-solving strategies may benefit
you.
Computer science is the study of problems, problem-solving, and the solutions that come out of
the problem-solving process. Given a problem, a computer scientist’s goal is to develop
an algorithm, a step-by-step list of instructions for solving any instance of the problem that might
arise. Algorithms are finite processes that if followed will solve the problem. Algorithms are
solutions. Solving problems is the core of computer science. Programmers must first understand
how a human solves a problem, then understand how to translate this “algorithm” into something
a computer can do, and finally how to “write” the specific syntax (required by a computer) to get
the job done. It is sometimes the case that a machine will solve a problem in a completely
different way than a human.
Creativity and problem solving play a critical role in computing. It is important to apply a
structured process to identify problems and generate creative solutions before a program can be
developed. Regardless of the area of study, computer science is all about solving problems with
computers. The problems that we want to solve can come from any real-world problem or
perhaps even from the abstract world. We need to have a standard systematic approach to solving
problems.
Since we will be using computers to solve problems, it is important to first understand the
computer’s information processing model. The model shown in the diagram below assumes a
single CPU (Central Processing Unit). Many computers today have multiple CPUs, so you can
imagine the above model duplicated multiple times within the computer.
Figure 7: Central Processing Unit
A typical single CPU computer processes information as shown in the diagram. Problems are
solved using a computer by obtaining some kind of user input (e.g., keyboard/mouse information
or game control movements), then processing the input and producing some kind of output (e.g.,
images, text, sound). Sometimes the incoming and outgoing data may be in the form of hard
drives or network devices. In regards to problem solving, we will apply the above model in that
we will assume that we are given some kind of input information that we need to work with in
order to produce some desired output solution. However, the above model is quite simplified.
For larger and more complex problems, we need to iterate (i.e., repeat) the input/process/output
stages multiple times in sequence, producing intermediate results along the way that solve part of
our problem, but not necessarily the whole problem. For simple computations, the above model
is sufficient. It is the “problem solving” part of the process that is the interesting part, so we’ll
break this down a little. There are many definitions for “problem solving”.
Problem Solving is the sequential process of analyzing information related to a given situation
and generating appropriate response options.
Problem-solving techniques in computer science refer to the methods used to find solutions to
complex issues using algorithmic or heuristic approaches. These techniques can be systematic,
analytical, or intuitive, encompassing traditional programming, machine learning, or artificial
intelligence methods.
There are some well-defined steps to follow when solving a problem. These include:
1. Understand the Problem
2. Formulate a Model
3. Develop an Algorithm
4. Write the Program
5. Test the Program
6. Evaluate the Solution
Consider the following example of how input/process/output works on a simple problem:
computing the average of a set of grades.
1. Input: get all the grades, possibly by typing them in via the keyboard or by reading them
from a USB flash drive or hard disk.
2. Process: compute the average of the grades.
3. Output: output the answer to the monitor, the printer, the USB flash drive or hard
disk, or a combination of any of these devices.
As you can see, the problem is easily solved by simply getting the input, computing something
and producing the output. Let us now examine the 6 steps to problems solving within the context
of the above example.
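The input/process/output model for this example can be sketched in Python (a hedged illustration; the function and variable names are assumptions, not part of the course material):

```python
# A minimal sketch of the input -> process -> output model for the
# grade-averaging example. Names like `grades` are illustrative.

def average(grades):
    """Process step: compute the mean of a list of numeric grades."""
    return sum(grades) / len(grades)

# Input step: in a real program the grades might be typed in at the
# keyboard or read from a USB flash drive or hard disk.
grades = [40, 75, 88, 62]

# Output step: send the answer to the monitor.
print("Average grade:", average(grades))
```

Here the input is hard-coded for simplicity; replacing the list with keyboard or file input does not change the process step.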
1. Understanding the Problem
It sounds strange, but the first step to solving any problem is to make sure that one understands
the problem about to be solved. One needs to know what input data is available, what output is
required, and what processing is needed to get from one to the other.
In the example given above, it is understood that the input is a bunch of grades. But we need to
understand the format of the grades. Each grade might be a number from 0 to 100 or it may be a
letter grade from A to F. If it is a number, the grade might be a whole integer like 73 or it may be
a real number like 73.42. We need to understand the format of the grades in order to solve the
problem.
We also need to consider missing grades. What if we do not have the grade for every student: for
instance, some were away during the test? Should we be able to include that person in our
average (i.e., they received 0) or ignore them when computing the average?
We also need to understand what the output should be. Again, there is a formatting issue. Should
the output be a whole or real number or a letter grade? Do we want to display a pie chart with the
average grade? The choice is ours.
Finally, one needs to understand the kind of processing that must be performed on the data. This
leads to the next step.
2. Formulating a Model
The next step is to understand the processing part of the problem. Many problems break down
into smaller problems that require some kind of simple mathematical computations in order to
process the data. In the example given, the average of the incoming grades is to be computed. A
model (or formula) is thus needed for computing the average of a bunch of numbers. If there is
no such “formula”, one must be developed. Often, however, the problem breaks down into
simple computations that is well understood. Sometimes, one can look up certain formulas in a
book or online if there is a hitch.
In order to come up with a model, we need to fully understand the information available to us.
Assuming that the input data is a bunch of integers or real numbers x1, x2, ⋯, xn, each representing
a grade percentage, the following computational model may apply:

Average = (x1 + x2 + ⋯ + xn) / n

where the result will be a number from 0 to 100. That is very straightforward, assuming that the
formula for computing the average of a bunch of numbers is known. However, this approach will
not work if the input data is a set of letter grades like B-, C, A+, F, D-, etc., because addition and
division cannot be performed on the 17 letters. This problem solving step must figure out a way
to produce an average from such letters. Thinking is required.
After some thought, we may decide to assign an integer number to the incoming letters as
follows:
A+ = 12, A = 11, A− = 10, and so on down the scale.
If it is assumed that these newly assigned grade numbers are y1, y2, ⋯, yn, then the following
computational model may be used:

Average = (y1 + y2 + ⋯ + yn) / n
The main point to understand about this step in the problem-solving process is that it is all about
figuring out how to make use of the available data to compute an answer.
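A sketch of this letter-grade model in Python follows. The full mapping below is an assumption that extends the three values given above (A+ = 12, A = 11, A− = 10); the notes do not list the remaining values.

```python
# Illustrative letter-grade model: map letters to numbers, average the
# numbers, then map the rounded average back to a letter.
# The full scale below is an assumed extension of the three values in the notes.
SCALE = {"A+": 12, "A": 11, "A-": 10, "B+": 9, "B": 8, "B-": 7,
         "C+": 6, "C": 5, "C-": 4, "D+": 3, "D": 2, "D-": 1, "F": 0}
LETTERS = {number: letter for letter, number in SCALE.items()}

def letter_average(letter_grades):
    numbers = [SCALE[g] for g in letter_grades]   # letters -> numbers y1..yn
    mean = sum(numbers) / len(numbers)            # average of the numbers
    return LETTERS[round(mean)]                   # nearest letter grade

print(letter_average(["B-", "C", "A+", "F", "D-"]))   # C
```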
3. Develop an Algorithm
Now that we understand the problem and have formulated a model, it is time to come up with a
precise plan of what we want the computer to do using an algorithm.
Algorithm is a precise sequence of instructions for solving a problem. Some of the more
complex algorithms may be considered randomized algorithms or nondeterministic algorithms
where the instructions are not necessarily in sequence and may not even have a finite number of
instructions. However, the above definition will apply for all algorithms that will be discussed in
this course.
Pseudocode
Pseudocode is often used as a way of describing a computer program to someone who doesn’t
understand how to program a computer. When learning to program, it is important to write
pseudocode because it helps to clearly understand the problem that one is trying to solve. It also
helps avoid getting bogged down with syntax details (i.e., like spelling mistakes) when writing
the program later (see step 4).
Although flowcharts can be visually appealing, pseudocode is often the preferred choice for
algorithm development because:
It can be difficult to draw a flowchart neatly, especially when mistakes are made.
Pseudocode fits more easily on a page of paper.
Pseudocode can be written in a way that is very close to real program code, making it
easier later to write the program (i.e., in step 4).
Pseudocode takes less time to write than drawing a flowchart.
Pseudocode will vary according to whoever writes it. That is, one person’s pseudocode is often
quite different from that of another person. However, there are some common control structures
(i.e., features) that appear whenever pseudocode is written. These features are shown along with
some examples:
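For instance, pseudocode for the grade-averaging algorithm might read (one illustrative style among many):

```
set total to 0
set count to 0
for each grade in the list of grades
    add grade to total
    add 1 to count
set average to total divided by count
output average
```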
The point is that there are a variety of ways to write pseudocode. The important thing to
remember is that the algorithm should be clearly explained with no ambiguity as to what order
the steps are performed in.
Whether using a flowchart or pseudocode, an algorithm should be tested by mentally going
through its steps to make sure no step or special situation is missed. Often, a
flaw will be found in an algorithm because a special situation that could arise was overlooked.
Only when one is convinced that the algorithm will solve the problem should the next step be
attempted.
4. Write the Program
Now that we have a precise set of steps for solving the problem, most of the hard work has been
done. The next step is to transform the algorithm from step 3 into a set of instructions that can be
understood by the computer.
Writing a program is often called "coding" or "implementing an algorithm", so the code (or
source code) is actually the program itself. Without much of an explanation, below is a program
(written in Processing) that implements the given algorithm for finding the average of a set of
grades. Note that the code is quite similar in structure to the pseudocode; however, the Processing
code is less readable and seems somewhat more mathematical:
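As an illustration (written in Python rather than Processing, and not the original course listing), such a program might look like:

```python
# Illustrative Python version of the grade-averaging program
# (not the original Processing listing from the course).

def average_of(grades):
    total = 0
    count = 0
    for g in grades:          # process: accumulate the sum and the count
        total = total + g
        count = count + 1
    return total / count

grades = [73, 85, 92, 60, 74]     # input (hard-coded for simplicity)
print("The average is", average_of(grades))   # output
```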
For now, the details of how to produce the above source code will not be discussed. In fact, the
source code would vary depending on the programming language that was used. Learning a
programming language may seem difficult at first, but it will become easier with practice.
The computer requires precise instructions in order to understand what it is being asked to do.
For example, removing one of the semicolon characters (;) from the program above will confuse
the computer about what it is being asked to do, because the semicolon character (;) is what it
understands to be the end of an instruction. Leaving one of them off will cause the program to
generate what is known as a compile-time error.
Compiling is the process of converting a program into instructions that can be understood by the
computer.
The longer your program becomes, the more likely you will have multiple compile errors. You
need to fix all such compile errors before continuing on to the next step.
5. Test the Program
Once a program is written and compiles, the next task is to make sure that it solves the problem
that it was intended to solve and that the solutions are correct.
Running a program is the process of telling the computer to evaluate the compiled instructions.
When a program is run and all is well, you should see the correct output. It is possible however,
that a program works correctly for some set of input data but not for all. If the output of a
program is incorrect, it is possible that the algorithm was not properly converted into a proper
program. It is also possible that the programmer did not produce a proper algorithm back in step
3 that handles all situations that could arise. Perhaps some instructions are performed out of
sequence. Whatever happened, such problems with the program are known as bugs.
Bugs are errors with a program that cause it to stop working or produce incorrect or undesirable
results.
It is the responsibility of the programmer to fix all the bugs present in a program. To
find bugs effectively, a program should be tested with many test cases (called a test suite). It is
also a good idea to have others test one’s program because they may think up situations or input
data that one may never have thought of.
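As an illustration (a hypothetical sketch; the function and test cases are assumptions), a small test suite for the grade-averaging function might look like:

```python
def average_of(grades):
    return sum(grades) / len(grades)

# A small test suite: each test case pairs an input with its expected output.
test_suite = [
    ([50, 100], 75.0),    # an ordinary case
    ([88], 88.0),         # a single grade
    ([0, 0, 0], 0.0),     # a boundary case: all zeros
]

for grades, expected in test_suite:
    assert average_of(grades) == expected, f"failed for {grades}"
print("all tests passed")
```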
6. Evaluate the Solution
Once the program produces a result that seems correct, the original problem needs to be
reconsidered to make sure that the answer is formatted into a proper solution to the problem. It
may be realised that the program does not solve the problem the way it was expected, or that
more steps are involved than first thought.
For example, if the result of a program is a long list of numbers, but the intent was to determine a
pattern in the numbers or to identify some feature from the data, then simply producing a list of
numbers may not suffice. There may be a need to display the information in a way that helps
visualise or interpret the results with respect to the problem; perhaps a chart or graph is needed.
It is also possible that when the results are examined, it is realised that additional data are needed
to fully solve the problem. Alternatively, the results may need to be adjusted to solve the
problem more efficiently (e.g., a game is too slow).
It is important to remember that the computer will only do what it is told to do. It is up to the
user to interpret the results in a meaningful way and determine whether or not it solves the
original problem. It may be necessary to re-do some of the steps again, perhaps going as far back
as step 1 again, if data were missing.
Problem Solving Techniques
There are several standard problem-solving techniques that you can employ irrespective of the
field of study in computer science. The first step, however, is always understanding the problem,
then you can choose the right strategy to solve it. Here are some of the basic problem-solving
methods that are particularly useful:
Divide and Conquer: This technique involves breaking a larger problem into smaller, more
manageable parts, solving each of them individually, and finally combining their solutions to get
the overall answer.
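A classic instance of divide and conquer is merge sort; the Python sketch below (an illustration, not part of the original notes) shows the divide, conquer and combine stages:

```python
# Merge sort: a classic divide-and-conquer algorithm. The list is split in
# half (divide), each half is sorted recursively (conquer), and the two
# sorted halves are merged (combine).

def merge_sort(items):
    if len(items) <= 1:                 # a list of 0 or 1 items is already sorted
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])      # solve each half independently
    right = merge_sort(items[mid:])
    merged = []                         # combine the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i]); i += 1
        else:
            merged.append(right[j]); j += 1
    merged.extend(left[i:])
    merged.extend(right[j:])
    return merged

print(merge_sort([75, 40, 88, 62, 51]))   # [40, 51, 62, 75, 88]
```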
Algorithm Design: This technique involves formalizing a series of organized steps into an
algorithm to solve a specific problem. Common approaches include greedy algorithms, dynamic
programming, and brute force.
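As a small sketch of a greedy algorithm (the coin denominations chosen here are illustrative), consider making change with the fewest coins by always taking the largest denomination that still fits:

```python
# A greedy algorithm sketch: make change by repeatedly taking the largest
# denomination that still fits. Greedy happens to give the optimal answer
# for this coin set, though not for every possible coin system.

def make_change(amount, denominations=(200, 100, 50, 20, 10, 5, 1)):
    coins = []
    for d in denominations:          # consider the largest denominations first
        while amount >= d:
            coins.append(d)
            amount -= d
    return coins

print(make_change(88))   # [50, 20, 10, 5, 1, 1, 1]
```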
Heuristics: These are rules of thumb or educated guesses that can help you find an acceptable, if
not the perfect, solution when the problem is too complex for a direct mathematical approach, or
when computational resources are limited. Heuristics are not guaranteed to yield the optimal
solution but are often good enough for practical purposes and can dramatically reduce the time
and resources needed to find a solution.
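A common example of a heuristic is the nearest-neighbour rule for routing problems; the Python sketch below (illustrative, not from the notes) builds a tour by always visiting the closest unvisited point. It does not guarantee the shortest tour, but it is fast and usually acceptable:

```python
# Nearest-neighbour heuristic for a travelling-salesman-style routing problem.

import math

def nearest_neighbour_tour(points):
    """Visit points starting from the first, always moving to the closest unvisited one."""
    unvisited = list(points[1:])
    tour = [points[0]]
    while unvisited:
        last = tour[-1]
        nxt = min(unvisited, key=lambda p: math.dist(last, p))
        unvisited.remove(nxt)
        tour.append(nxt)
    return tour

print(nearest_neighbour_tour([(0, 0), (5, 5), (1, 0), (1, 1)]))
```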
Note: Even though these techniques might sound simple, they form a cornerstone of, and often
appear within, the more complex problem-solving techniques used in higher-level computer science.