COS 102 Problem Solving

The document provides an overview of computing, detailing the history, generations, and classifications of computers. It explains the evolution from early mechanical devices to modern computers, highlighting key developments and features of each generation. Additionally, it categorizes computers based on data signal type, size, and purpose, emphasizing their applications and technological advancements.

CMP 102: PROBLEM SOLVING

1.1 INTRODUCTION TO COMPUTING


Today’s world is information-rich, and it has become a necessity for everyone to know
about the revolutionary tools that shape the digital age. This section introduces the reader of this
lecture note to the brief historical development of the computer, the different types of computers,
the characteristics of computers, and the uses and applications of computers.

1.2 THE COMPUTER, HISTORY AND GENERATIONS


A computer can simply be described as a powerful electronic device that accepts data
(input) in a prescribed format, applies a series of arithmetic and logical operations
to the data (processing), and produces the results of these operations (output) in a specific
format at very high speed, all under the control of a logical sequence of instructions called a
“program”.
From this description, a computer can be viewed as an input-process-output (IPO) system,
represented pictorially in the figure below:

Figure 1: Input Process Output

Data: Data are basic raw facts about people, places, etc. that need to be processed to
produce information. Examples include exam scores (e.g. 40, 75) and student names (e.g. Ado, Fatima).
Information: Information is the useful, desired form into which data is finally transformed after
undergoing a series of processes.
Data Processing: Data processing comprises all the activities that lead to the
production of the required information. These activities include data sourcing, collection, recording,
collation, classification, inputting, transformation and outputting. Data processing can be
manual or automated; automated data processing is carried out using electronic data
processing (EDP).
The use of computers for data processing has become universal, especially since the introduction of
the first microcomputer in 1975.
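The IPO model above can be sketched in a few lines of code. This is a minimal illustration, using the exam-score and student-name examples from the text; the average as the "information" produced is an assumption chosen for the sketch.

```python
# Input-process-output (IPO) in miniature: raw exam scores (data) are
# processed into an average (information).
scores = {"Ado": 40, "Fatima": 75}            # input: raw data

average = sum(scores.values()) / len(scores)  # process: arithmetic operations

print(f"Average score: {average}")            # output: information
# prints "Average score: 57.5"
```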
Methods of Data Processing
The following are the three major methods that have been widely used for data processing over
the years:
The Manual Method
The Mechanical Method
The Computer Method

The Manual Method


The manual method of data processing involves the use of chalk, wall, pen, pencil and the like.
These tools facilitate human effort in recording, classifying, manipulating,
sorting and presenting data or information. Manual data processing entails
considerable human effort; it is therefore cumbersome, tiresome, boring,
frustrating and time-consuming. Furthermore, data processed manually is
prone to human error, and when errors occur, the reliability, accuracy,
neatness, tidiness and validity of the data are in doubt. The manual method also does not allow
large volumes of data to be processed on a regular and timely basis.

The Mechanical Method


The mechanical method of data processing involves the use of machines such as the typewriter,
Roneo duplicating machines, adding machines and the like. These machines facilitate human effort in
recording, classifying, manipulating, sorting and presenting data or information. Mechanical
operations are basically routine in nature, with virtually no creative thinking involved. They are
noisy, hazardous, error-prone and untidy, and the mechanical method does not allow
large volumes of data to be processed continuously and in a timely manner.

The Computer Method


The computer method of carrying out data processing has the following major features:
 Data can be processed steadily and continuously
 The operations are practically noiseless
 There is a store where data and instructions can be held temporarily or permanently
 Errors can be easily and neatly corrected
 Output reports are usually very neat and can be produced in various forms, with
graphs, diagrams, pictures etc.
 Accuracy and reliability are highly enhanced

1.2.2 Brief History of Computer


Around 300 BC, people in southwest Asia invented an early calculating device called the abacus.
In 1642, Blaise Pascal, a French philosopher, mathematician and physicist, developed a
mechanical calculator to speed up arithmetic calculations for his father, a tax official.
Gottfried Wilhelm Leibniz (also Baron Gottfried Wilhelm von Leibniz), a German philosopher and
mathematician, invented a calculating machine in 1672 capable of multiplying, dividing, and
extracting square roots, and he is considered a pioneer in the development of mathematical logic.
In the 1820s, British mathematician Charles Babbage began developing his Difference Engine, a
mechanical device that could perform simple mathematical calculations and record the results on metal
plates. He was unable to complete it because of inadequate funding; however, in 1991 British
scientists, following Babbage's detailed drawings and specifications, constructed the Difference
Engine.
In the 1830s Babbage began developing his Analytical Engine, which was designed to carry out
more complicated calculations and even included a memory. Unfortunately, there was no way to
build the machine with 19th-century technology. Babbage's book On the Economy of
Machinery and Manufactures (1832) initiated the field of study known today as operational
research.
The first general-purpose electronic computer in America, called the Electronic Numerical
Integrator and Computer (ENIAC), was introduced at the University of Pennsylvania in 1946.
Two of its inventors, the American engineers J. Presper Eckert Jr. and John Mauchly, went
on to build the first electronic computer for commercial use, the UNIVAC (UNIVersal
Automatic Computer), at the Remington Rand Corporation. The delivery of this machine
marked the beginning of the computer era.
The Generations of Computer
A generation refers to a stage of improvement in the development of a product, and the term is
also used for the successive advancements in computer technology. With each new generation, the
circuitry has become smaller and more advanced than in the generation before it. As a result of
this continuous improvement in miniaturization, speed, power and memory, new applications are
constantly being developed that affect the way we live, work and play. The
following are the five main generations of computers.
The First Generation: 1946-1958 (The Vacuum Tube Years)
First-generation computers were huge, slow, expensive and often undependable. These
computers used vacuum tubes as the basic components of memory and of the CPU circuitry. The
vacuum tubes produced a great deal of heat, just as light bulbs do, and frequently burned out.
Batch-processing operating systems were mainly used in this generation.
Punched cards, paper tape and magnetic tape were used as input and output media, and the
computers in this generation were programmed in machine code.
Despite the gigantic air conditioners used to cool these computers, vacuum tubes still overheated
regularly. The main features of the first generation of computers are:
 Vacuum tube technology

 Unreliable

 Supported machine language only

 Very costly

 Generated a lot of heat

 Slow input and output devices

 Huge size

 Needed air conditioning

 Non-portable

 Consumed a lot of electricity

The Second Generation: 1959-1964 (The Era of the Transistor)


In 1947, three scientists working at AT&T's Bell Labs, John Bardeen, William Shockley and
Walter Brattain, invented the transistor, which came to replace the vacuum tube. The transistor
performs the same function as a vacuum tube, but transistors were found to be more reliable and
cheaper, and to conduct electricity faster and better. They were also much smaller and gave
off virtually no heat compared to vacuum tubes.
In this generation, magnetic cores were used as primary memory, with magnetic tape and
magnetic disks as secondary storage devices. Assembly language and high-level programming
languages such as FORTRAN and COBOL were used. The computers used batch
processing and multiprogramming operating systems. The main features of the second generation
include:
 Use of transistors

 Reliable in comparison to first generation computers

 Smaller size as compared to first generation computers

 Generated less heat as compared to first generation computers

 Consumed less electricity as compared to first generation computers

 Faster than first generation computers

 Still very costly

 Air conditioning needed

 Supported machine and assembly languages

The Third Generation: 1965-1970 (Integrated Circuits)


Robert Noyce of Fairchild Corporation and Jack Kilby of Texas Instruments independently
discovered the amazing attributes of the integrated circuit (IC). Placing large numbers of
transistors on a single chip vastly increased the power of a single computer and lowered its cost
considerably. Most electronic devices today use some form of integrated circuit placed on a
printed circuit board, sometimes called a motherboard.
A single IC holds many transistors, resistors and capacitors along with the associated circuitry.
This development made computers smaller, more reliable and more efficient. In this generation,
remote processing, time-sharing and multiprogramming operating systems were used. High-level
languages (FORTRAN II to IV, COBOL, PASCAL, PL/1, BASIC, ALGOL-68, etc.) were used
during this generation. The main features of the third generation are:
 Integrated Circuit technology

 More reliable in comparison to previous two generations


 Smaller size

 Generated less heat

 Faster

 Less maintenance

 Still costly

 Air conditioning needed

 Consumed less electricity

 Supported high-level languages

The Fourth Generation: 1971-1980 (The Microprocessor)


This generation is characterized both by the jump to monolithic integrated circuits, with millions
of transistors put onto one integrated circuit chip, referred to as Very Large Scale Integration
(VLSI), and by the invention of the microprocessor, a single chip that could do all the
processing of a full-scale computer. Putting millions of transistors onto a single chip allowed
computers to perform more calculations at faster speeds: because electricity travels about a
foot in a billionth of a second, the smaller the distances inside the chip, the greater the speed
of the computer.
Computers in this generation became more powerful, compact, reliable and affordable; as a
result, they gave rise to the personal computer (PC) revolution. Time-sharing, real-time,
network and distributed operating systems were used in this generation, along with high-level
languages such as C, C++ and dBASE. The main features of the fourth generation are:
 Microprocessor technology

 Very cheap

 Portable and reliable

 Use of PC's

 Very small size

 Pipeline processing
 No air conditioning needed

 Concept of internet was introduced

 Great developments in the fields of networks

 Computers became easily available

The Fifth Generation: 1980 to date (The Era of Artificial Intelligence)
In the fifth generation, VLSI technology became Ultra Large Scale Integration (ULSI)
technology, resulting in the production of microprocessor chips with ten million electronic
components. This generation is based on parallel-processing hardware and artificial intelligence
(AI) software. AI is a branch of computer science concerned with the means and
methods of making computers exhibit human-like thinking. High-level languages such as C,
C++, Java and .NET are used in this generation.
AI includes robotics, neural networks, game playing, the development of expert systems to
make decisions in real-life situations, and natural-language understanding and generation. The main
features of the fifth generation are:

 ULSI technology

 Development of true artificial intelligence

 Development of Natural language processing

 Advancement in Parallel Processing

 Advancement in Superconductor technology

 More user friendly interfaces with multimedia features

 Availability of very powerful and compact computers at cheaper rates

Classification of Computer
The computer has passed through many stages of evolution, from the days of the mainframe
to the era of the microcomputer. Computers have been classified based on different
criteria; in this unit, we shall classify computers using three popular methods.
Classification Based on Data Signal Type
There are basically three types of electronic computers: the digital, the analog and the hybrid
computer.

1. The Digital Computer


A digital computer represents its variables in the form of digits. The data it deals with, whether
representing numbers, letters or other symbols, are converted into binary form on input to the
computer. After processing, the binary digits are converted back into alphanumeric
form for output for human use. Because business applications such as inventory
control, invoicing and payroll deal with discrete values (separate, disunited, discontinuous), they
are best processed on digital computers. As a result, digital computers are the type most widely
used in commercial and business settings today.
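The digitize-process-restore cycle described above can be made concrete for a single character. This sketch uses Python's built-in `ord`, `chr` and `format` functions; the 8-bit width is an assumption for illustration.

```python
# A digital computer converts symbols to binary on input and back to
# alphanumeric form on output.
binary = format(ord("A"), "08b")   # input: the letter 'A' becomes '01000001'
restored = chr(int(binary, 2))     # output: the bit pattern becomes 'A' again
print(binary, restored)
# prints "01000001 A"
```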

2. The Analog Computer


An analog computer measures rather than counts. This type of computer sets up a model of a
system; the common type represents its variables as electrical voltages and sets up circuits
analogous to the equations connecting the variables. The answer can be obtained either by using
a voltmeter to read the value of the required variable, or by feeding the voltage into a plotting
device. Analog computers hold data in the form of physical variables rather than numerical
quantities. In theory, an analog computer gives an exact answer because the answer has not been
approximated to the nearest digit; in practice, when we read the answer with a digital voltmeter,
the accuracy obtained is often less than what the analog computer itself could provide.
The analog computer is almost never used in business systems. It is used by scientists and
engineers to solve systems of partial differential equations, and in the control and monitoring of
systems in such areas as hydrodynamics and rocketry.

3. The Hybrid Computer


In some cases, the computer user may wish to have the output of an analog computer
processed by a digital computer, or vice versa. To achieve this, a hybrid machine is set up in
which the two are connected, with the analog computer regarded as a peripheral of the digital
computer. Such a hybrid system attempts to gain the advantages of both the digital
and the analog elements in the same machine. This kind of machine is usually a special-purpose
device built for a specific task. It needs a conversion element that accepts analog
inputs and outputs digital values; such converters are called digitizers. A converter from digital
back to analog is also needed. The hybrid computer has the advantage of giving real-time
response on a continuous basis. Complex calculations can be handled by the digital elements,
which require a large memory and give accurate results after programming. Hybrid computers
are mainly used in aerospace and process-control applications.

Classification based on Size

1. Super Computers
These are high-capacity machines with hundreds of thousands of processors that can perform
over one trillion calculations per second. They are the largest, most expensive and fastest
computers available. They are called “super” because they are used for tasks requiring the
processing of enormous volumes of data, such as forecasting weather, designing aircraft,
modeling molecules, simulating nuclear explosions, fluid-dynamics calculations, nuclear-energy
research, electronic design, the analysis of geological data (e.g. in petrochemical prospecting),
film animation, etc.

2. Mainframe Computers
Mainframes were the only type of computer available until the late 1960s. Mainframe computers
are often shared by multiple users connected to the computer via terminals; they are expensive
and vary in size from small through medium to large, depending on their use. A mainframe
executes many programs concurrently and supports the simultaneous execution of many
programs. Small mainframes are called midsize computers or minicomputers. Mainframes are
used by large organizations such as banks, airlines, insurance companies and colleges for
processing millions of transactions.

3. Minicomputers

These types of computers were smaller, less expensive, and less powerful than
a mainframe or supercomputer but more expensive and more powerful than a personal computer.
Minicomputers were used for scientific and engineering computations, business
transaction processing, file handling, and database management. Minicomputers as a distinct
class of computers emerged in the late 1950s and reached their peak in the 1960s and 1970s
before declining in popularity in the 1980s and 1990s.
4. Microcomputers
The microcomputer is also called a personal computer (PC). It is a small, common, portable and
relatively inexpensive computer designed for a single user, but PCs can be linked together to
form a network. Its defining technology is the microprocessor: an entire CPU placed on a single
chip. PCs come in several types: desktop PCs, tower PCs, laptops (notebooks) and personal
digital assistants (handheld or palmtop computers).

Classification based on Purpose

1. Special-Purpose Computers
A special purpose computer is one that is designed to solve a restricted class of problems. Such
computers may even be designed and built to handle only one job. In such machines, the steps or
operations that the computer follows may be built into the hardware. Most of the computers used
for military purposes fall into this class. Other examples of special purpose computers include:
 Computers designed specifically to solve navigational problems.

 Computers designed for tracking airplanes or missiles

 Computers used for process control applications in industries such as oil refinery,
chemical manufacture, steel processing and power generation

 Computers used as robots in factories like vehicle assembly plants and glass industries.

Attributes of Special-Purpose Computers


 Special-purpose computers are usually very efficient for the tasks for which they are
specially designed.
 They are much less complex than general-purpose computers. The simplicity of
the circuitry stems from the fact that provision is made only for limited facilities.
 They are much cheaper than the general-purpose type, since they involve fewer
components and are less complex.
2. General-Purpose Computers
General-purpose computers are computers designed to handle a wide range of problems.
Theoretically, a general-purpose computer can be adapted, by means of some easily alterable
instructions, to handle any problem that can be solved by computation. In practice, however,
there are limitations imposed by memory size, speed and the type of input/output devices.
Examples of areas where general-purpose computers are employed include the following:
 Payroll
 Banking
 Billing
 Sales analysis
 Cost accounting
 Manufacturing scheduling
 Inventory control
Attributes of General-Purpose Computers
 General-purpose computers are more flexible than special purpose computers. Thus, the
former can handle a wide spectrum of problems.

 They are less efficient than the special-purpose computers due to such problems as the
following:

 They have inadequate storage

 They have low operating speed

 Coordination of the various tasks and subsections may take time

 General-purpose computers are more complex than special purpose computers.

Applications of Computers
Computers are applied in many areas; in virtually every aspect of human endeavor,
applications for them have been found. Among others are:
1. Computer in Education: Computer programs, or applications, exist to aid every level of
education, from programs that teach simple addition or sentence construction to programs
that teach advanced calculus, e.g. Computer-Aided Instruction (CAI). Educators use
computers to track grades and communicate with students; with computer-controlled
projection units, they can add graphics, sound and animation to their communications.
Computers have brought many facilities to the education system, such as e-learning, e-
libraries, and computer-based instructional tools such as smart boards and virtual classes.
2. Computer in Research: Computers are used extensively in scientific research to solve
mathematical problems, investigate complicated data, or model systems that are too
costly or impractical to build, such as testing the air flow around the next generation of
aircraft.

3. Computer in Health Sector: Computer-controlled devices can be used to monitor a patient’s
condition, such as changes in temperature, heartbeat, etc. Computers with vast storage
facilities can store facts that complement the work of medical doctors in diagnosis.
These form the basis of what is called a knowledge-based system or expert system.

An expert system is a computer program designed to operate at the level of an expert in
a particular field, e.g. MYCIN, CADUCEUS.
Other areas of computer application include diagnostic systems, patient-monitoring
systems, pharmacy information systems and even surgery.
4. Computer in Industry: The application of computers simplifies the problems of planning,
control, production and overhead cost. Robots are used in industry for a variety of
tasks that are repetitive and easily programmed, e.g. simulation, process control,
computer-aided design (CAD), etc.

5. Computer in Defense: The military employs computers in sophisticated communications


to encode and unscramble messages, and to keep track of personnel and supplies. Some
applications are in missile control, military operations and planning, smart weapons, etc.

6. Computer in Automobiles: Computers in automobiles regulate the flow of fuel, thereby


increasing gas mileage, and are used in anti-theft systems.

7. Computer in Business: Computers are applied in payroll calculations, budgeting, sales
analysis and financial forecasting; they are used to track inventories with bar codes and
scanners, check the credit status of customers, transfer funds electronically, etc.

8. Computer in Banking: Banking today is almost totally dependent on computers. ATMs
make it easier for customers to deal with banks; in addition, online banking allows
customers to make transactions from the comfort of their rooms, open new accounts,
pay bills, etc.

9. Computer in Communication: Today, computers have simplified communication between
people; different platforms are available to aid the transfer of messages, such as
e-mail, Usenet, Telnet, video conferencing, etc.
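The expert systems mentioned under the health applications above (e.g. MYCIN) encode if-then rules that mimic an expert's decisions. The sketch below is a toy illustration only; its rules and thresholds are invented for the example and are not medical knowledge.

```python
# A toy rule-based sketch in the spirit of a medical expert system.
# The rules and thresholds are illustrative assumptions, not medical advice.
def diagnose(temperature_c, heart_rate):
    """Apply simple if-then rules to two patient readings."""
    findings = []
    if temperature_c > 38.0:       # rule 1: high temperature
        findings.append("fever")
    if heart_rate > 100:           # rule 2: fast heartbeat
        findings.append("rapid heartbeat")
    return findings or ["no abnormal findings"]

print(diagnose(39.2, 110))  # both rules fire
print(diagnose(36.5, 72))   # no rule fires
```

A real expert system separates the rule base from the inference engine, so domain experts can add rules without changing the program logic.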
Characteristics of Computer
The advantages offered by computers today are enormous; computing has permeated most of
what people do, and our dependence on computers is growing in all aspects of our lives. Some
of the advantages of the computer are demonstrated in the following characteristics:
1. Speed: The computer can manipulate large data at incredible speed and response time
can be very fast.

2. Accuracy: Its accuracy is very high and its consistency can be relied upon. Errors
committed in computing are mostly due to human rather than technological weakness.
There are in-built error detecting schemes in the computer.
3. Storage: It has both internal and external storage facilities for holding data and
instructions. This capacity varies from one machine to the other. Memories are built up in
K (Kilo) modules where K=1024 memory locations.
4. Automatic: Once a program is in the computer’s memory, it can run automatically each
time it is invoked; the user has little or no further instruction to give.
5. Reliability: Being a machine, a computer does not suffer human traits of tiredness and
lack of concentration. It will perform the last job with the same speed and accuracy as the
first job every time even if ten million jobs are involved.
6. Flexibility: It can perform any type of task, once the task can be reduced to logical steps.
Modern computers can be used to perform a variety of functions such as on-line processing,
multiprogramming, real-time processing etc.
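The Storage characteristic above notes that memory is built up in K (kilo) modules, where K = 1024 locations. The short sketch below shows how the common larger units follow from that base.

```python
# Memory sizes build on K = 1024, as noted under the Storage characteristic.
K = 1024
print(f"1 KB = {K} bytes")          # 1 KB = 1024 bytes
print(f"1 MB = {K * K} bytes")      # 1 MB = 1024 KB
print(f"1 GB = {K * K * K} bytes")  # 1 GB = 1024 MB
```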

1.3 THE COMPONENTS OF COMPUTER


A computer consists of both hardware and software. The hardware comprises the physical
equipment and electronic devices that make up the visible computer, while the software
comprises the programs that enable the hardware components to operate effectively and
provide many useful services. It has often been said that “a computer without a program is an
electronic idiot”, because on its own it can do nothing constructive or profitable.

A. Computer Hardware

The term "computer hardware" describes the external and internal devices needed to carry out
important tasks including input, output, storage, communication, processing and more. A
computer is made up of the central processing unit (CPU) and the peripheral units (keyboard,
mouse, printer, scanner, etc.).

Central Processing Unit:

A computer system's central processing unit (CPU) is frequently referred to as its brain. Similar
to how the brain in humans directs all activity, in a computer system the CPU directs all
processing activities. The following are its primary components:

Control Unit: The control unit (CU) serves as the system's supervisor. The CU is in charge of
coordinating and synchronizing all tasks carried out by a computer system. The CU controls the
movement of data from one area of the CPU to another and vice versa, acting as a traffic cop.
The management of the process of loading and unloading programs and data from memory falls
under the purview of the control unit. It is also in charge of sequentially carrying out (executing)
program instructions. The concept of a "register" to store interim computational values is
included in this.

Arithmetic and Logic Unit: All arithmetic and logic operations take place in the arithmetic and
logic unit (ALU). The ALU performs logical analysis and decision-making in addition to
mathematical calculations; it is logical comparison and decision-making that set a computer
system apart from a calculator.
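The two kinds of ALU work can be shown side by side. The pass mark of 50 below is an assumption invented for the example; the point is that the comparison, not the addition, is the decision-making that a calculator lacks.

```python
# The ALU handles both arithmetic and logic.
a, b = 40, 75

total = a + b        # arithmetic operation
a_passed = a >= 50   # logical comparison: did score a reach the (assumed) pass mark?
b_passed = b >= 50

print(total, a_passed, b_passed)
# prints "115 False True"
```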

Memory: Primary memory is memory closely connected to the central processing unit that
stores data and programs. The device is a silicon chip, and data are stored in it as
electronic pulses: the digits "1" and "0" represent the presence and absence of current,
respectively, so this memory stores information as binary 0s and 1s.

Input Hardware
The term "hardware" refers to all the tangible parts of a computer system that a user can touch,
such as the keyboard, visual display unit, system unit, mouse, printer, etc. A computer system's
input devices (hardware) are used to enter data into the system. Examples are:

Keyboards: The most common input devices are keyboards, which resemble typewriters. They
are made up of keys that stand for letters, numbers and special symbols. They also contain
function keys, F1 to F12, whose use varies with the program being run.

Figure 2: Keyboard

Mouse: The invention of the mouse removed the movement restrictions of older input devices.
The need for an input device that can assist data entry by picking an option on the desktop
came with the introduction of the graphical user interface (GUI). With the aid of a mouse, a
computer user can move the pointer in any direction on the screen, something that was
previously impossible. One type of mouse has a tracking ball that sends the signal to move the
pointer on the screen, while the other type, the optical mouse, detects movement optically and
moves the pointer accordingly.

Figure 3: Mouse

Trackballs
Using a trackball, you can input motion information into computers and other electronics. A
trackball is a mouse-like device with a rolling ball on top that can move in any direction.
Instead of moving the whole device, you simply roll the ball on top of the trackball unit with
your hand to provide motion input. The main function of computer trackballs, which are often
used in place of mice, is to move the cursor around the screen. Like mice, computer
trackballs have buttons that serve as left- and right-click buttons and for other
commands. Trackballs are found in various electronic devices besides computers, such as
arcade games, mixing boards and self-service kiosks, though they are most frequently used
with computers. The trackballs on these other devices are frequently bigger than those on
computer input devices.

Figure 4: Trackball

Pointing sticks
Similar to a joystick, the pointing stick moves and controls the computer cursor. It is designed
so that its height is just above the keys. A pointing stick is a practical substitute for a touchpad
when a laptop lacks the necessary space. For it to work as intended, the sensitivity of the
pointing stick must be adjusted so that it recognizes the motions and taps intended for it.

Figure 5: Pointing Stick

The Output Unit


The output unit consists of the devices through which we get information from the computer.
This unit is the link between the computer and its users. Output devices translate the bits and
bytes understood by the computer into a form humans can understand.
1. Monitor (Visual Display Unit): This provides “soft” copy, i.e. temporary output for display. It
forms images from tiny dots, called pixels, arranged in a rectangular grid; the sharpness
of the image depends on the number of pixels. Monitors are either monochrome or colored.
A monochrome monitor displays a single color (such as green or white on black), while a
colored monitor sends signals that are decomposed into various colors. Colored monitors are
usually more expensive than monochrome ones.

Speakers: These produce “soft” copy output in the form of sound, such as music. The built-in
speaker in most PC cases is used just for making system sounds, such as warning beeps and
action indicators. To play more sophisticated sounds on your PC, you need a set of external
speakers. Speakers usually come in pairs, with a plug that connects them to your sound
card. Arrange the speakers with one on the left and one on the right of your desk or work area
to get a stereo effect.

Printers: Unlike monitors, printers produce hard-copy output on paper, such as letters, reports,
memos, payroll, cheques, etc.

Plotter: Plotters are output devices that, like printers, produce hard-copy output, with the
difference that their output is mostly graphic: bar charts, maps, organization charts, etc.

B. Computer Software

Software is basically programs: sequences of instructions that are performed to accomplish a task. It is software that enables the hardware to be put to effective use. It is sometimes said that “a computer without a program is an electronic idiot”, because on its own it can do nothing constructive or profitable. There are two main categories of software: system software and application software.

System Software
System software is a kind of software that serves as the intermediary between the hardware and
application software. It provides the needed environment for application software to run and for
the user to interact with the computer system. System software includes the operating system, translators and other utility software the computer needs to function. Examples of operating systems are Windows, macOS, Linux, Ubuntu, Android, Chrome OS and Apple iOS. Translators include compilers (e.g. Visual C#, Visual C# Express, Turbo C++, javac), interpreters (e.g. the Python, Perl, Ruby and PHP interpreters) and assemblers (e.g. Turbo Assembler, Microsoft Macro Assembler and High Level Assembler (HLA)). Examples of utility software are backup software, anti-virus software, disk defragmenters and disk management software.

Application Software
Application software carries out specific tasks for the specific needs of users, and falls into one of two categories: general-purpose or special-purpose. General-purpose software can be used to perform more than one task; Microsoft Word, for example, can be used for typesetting, spell-checking, finding and replacing content in a document, and so on. Special-purpose application software accomplishes specific tasks in a specific domain; insurance software, for example, performs functions specific to insurance.

2.1 INTRODUCTION TO PROBLEM SOLVING

People face problems every day, often several in a single day. Sometimes these problems are straightforward; at other times they are more complex. For example, say you have a work deadline, and you must mail a printed copy of a
report to your supervisor by the end of the business day. The report is time-sensitive and must be
sent overnight. You finished the report last night, but your printer will not work today. What
should you do? First, you need to identify the problem and then apply a strategy for solving the
problem.
Practicing different problem-solving strategies can help professionals develop efficient solutions
to challenges they encounter at work and in their everyday lives. Each industry, business and
career has its own unique challenges, which means employees may implement different
strategies to solve them. If you are interested in learning how to solve problems more effectively,
then understanding how to implement several common problem-solving strategies may benefit
you.

2.2 PROBLEM SOLVING

Computer science is the study of problems, problem-solving, and the solutions that come out of
the problem-solving process. Given a problem, a computer scientist’s goal is to develop
an algorithm, a step-by-step list of instructions for solving any instance of the problem that might
arise. Algorithms are finite processes that, if followed, will solve the problem. Algorithms are
solutions. Solving problems is the core of computer science. Programmers must first understand
how a human solves a problem, then understand how to translate this “algorithm” into something
a computer can do, and finally how to “write” the specific syntax (required by a computer) to get
the job done. It is sometimes the case that a machine will solve a problem in a completely
different way than a human.

Creativity and problem solving play a critical role in computing. It is important to apply a
structured process to identify problems and generate creative solutions before a program can be
developed. Regardless of the area of study, computer science is all about solving problems with
computers. The problems that we want to solve can come from any real-world problem or
perhaps even from the abstract world. We need to have a standard systematic approach to solving
problems.

Since we will be using computers to solve problems, it is important to first understand the
computer’s information processing model. The model shown in the diagram below assumes a
single CPU (Central Processing Unit). Many computers today have multiple CPUs, so you can imagine this model duplicated multiple times within the computer.
Figure 7: Central Processing Unit

A typical single CPU computer processes information as shown in the diagram. Problems are
solved using a computer by obtaining some kind of user input (e.g., keyboard/mouse information
or game control movements), then processing the input and producing some kind of output (e.g.,
images, text, sound). Sometimes the incoming and outgoing data may be in the form of hard
drives or network devices. With regard to problem solving, we will apply the above model in that
we will assume that we are given some kind of input information that we need to work with in
order to produce some desired output solution. However, the above model is quite simplified.
For larger and more complex problems, we need to iterate (i.e., repeat) the input/process/output
stages multiple times in sequence, producing intermediate results along the way that solve part of
our problem, but not necessarily the whole problem. For simple computations, the above model
is sufficient. It is the “problem solving” part of the process that is the interesting part, so we’ll
break this down a little. There are many definitions for “problem solving”.

Problem Solving is the sequential process of analyzing information related to a given situation
and generating appropriate response options.

Problem-solving techniques in computer science refer to the methods used to find solutions to
complex issues using algorithmic or heuristic approaches. These techniques can be systematic,
analytical, or intuitive, encompassing traditional programming, machine learning, or artificial
intelligence methods.

Steps to Solving Problems

There are some well-defined steps to follow when solving a problem. These include:

1. Understand the Problem

2. Formulate a Model

3. Develop an Algorithm

4. Write the Program

5. Test the Program

6. Evaluate the Solution

Consider the following example of how input/process/output works on a simple problem:

Example: Calculate the average grade for all students in a class.

1. Input: get all the grades … possibly by typing them in via the keyboard or by reading them
from a USB flash drive or hard disk.

2. Process: add them all up and compute the average grade.

3. Output: output the answer to either the monitor, to the printer, to the USB flash drive or hard
disk … or a combination of any of these devices.
As you can see, the problem is easily solved by simply getting the input, computing something, and producing the output. Let us now examine the six steps to problem solving within the context of the above example.

1. Understand the Problem

It sounds strange, but the first step to solving any problem is to make sure that one understands
the problem about to be solved. One needs to know:

 What input data/information is available?
 What does the data/information represent?
 In what format is the data/information?
 What is missing from the data provided?
 Does the person solving the problem have everything needed?
 What output information needs to be produced?
 In what format should the result be: text, picture, graph?
 What other requirements are needed for the computation?

In the example given above, it is understood that the input is a bunch of grades. But we need to
understand the format of the grades. Each grade might be a number from 0 to 100 or it may be a
letter grade from A to F. If it is a number, the grade might be a whole integer like 73 or it may be
a real number like 73.42. We need to understand the format of the grades in order to solve the
problem.

We also need to consider missing grades. What if we do not have a grade for every student, for instance because some were away during the test? Should such students be included in the average (i.e., counted as 0) or ignored when computing it?

We also need to understand what the output should be. Again, there is a formatting issue. Should
the output be a whole or real number or a letter grade? Do we want to display a pie chart with the
average grade? The choice is ours.

Finally, one needs to understand the kind of processing that must be performed on the data. This
leads to the next step.

2. Formulating a Model
The next step is to understand the processing part of the problem. Many problems break down into smaller problems that require some kind of simple mathematical computation in order to process the data. In our example, the average of the incoming grades is to be computed, so a model (or formula) is needed for computing the average of a set of numbers. If no such formula exists, one must be developed. Often, however, the problem reduces to simple computations that are well understood, and certain formulas can be looked up in a book or online when one gets stuck.

In order to come up with a model, we need to fully understand the information available to us. Assuming that the input data is a set of integers or real numbers 𝑥1, 𝑥2, ⋯ , 𝑥𝑛, each representing a grade percentage, the following computational model may apply:

𝐴𝑣𝑒𝑟𝑎𝑔𝑒1 = (𝑥1 + 𝑥2 + 𝑥3 + ⋯ + 𝑥𝑛)/𝑛

where the result will be a number from 0 to 100. That is very straightforward, assuming that the formula for computing the average of a set of numbers is known. However, this approach will not work if the input data is a set of letter grades like B−, C, A+, F, D−, etc., because addition and division cannot be performed on letters. This problem-solving step must figure out a way to produce an average from such letters. Thinking is required.

After some thought, we may decide to assign an integer number to the incoming letters as
follows:

𝐴+ = 12    𝐴 = 11    𝐴− = 10

𝐵+ = 9     𝐵 = 8     𝐵− = 7

𝐶+ = 6     𝐶 = 5     𝐶− = 4

𝐷+ = 3     𝐷 = 2     𝐷− = 1    𝐹 = 0

If it is assumed that these newly assigned grade numbers are 𝑦1, 𝑦2, ⋯ , 𝑦𝑛, then the following
computational model may be used:

𝐴𝑣𝑒𝑟𝑎𝑔𝑒2 = (𝑦1 + 𝑦2 + 𝑦3 + ⋯ + 𝑦𝑛)/𝑛

Where the result will be a number from 0 to 12.


As for the output, if it is to be represented as a percentage, then 𝐴𝑣𝑒𝑟𝑎𝑔𝑒1 can be used directly, or (𝐴𝑣𝑒𝑟𝑎𝑔𝑒2/12) ∗ 100 can be computed, depending on which form of input was originally available. If a letter grade is preferred as output, then one may compute (𝐴𝑣𝑒𝑟𝑎𝑔𝑒1/100 ∗ 12), equivalently (𝐴𝑣𝑒𝑟𝑎𝑔𝑒1 ∗ 0.12), or use 𝐴𝑣𝑒𝑟𝑎𝑔𝑒2 directly, and then map that number through a “lookup table” that gives a letter grade for each value from 0 to 12.
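The model above translates almost directly into code. The following Python sketch is ours, not part of the original notes; the names LETTER_TO_NUMBER, average_of_letters and to_percentage are invented for illustration:

```python
# Lookup table assigning an integer to each letter grade (A+ = 12 ... F = 0).
LETTER_TO_NUMBER = {
    "A+": 12, "A": 11, "A-": 10,
    "B+": 9,  "B": 8,  "B-": 7,
    "C+": 6,  "C": 5,  "C-": 4,
    "D+": 3,  "D": 2,  "D-": 1,
    "F": 0,
}

def average_of_letters(letters):
    """Compute Average2: the mean of the numbers assigned to the letter grades."""
    numbers = [LETTER_TO_NUMBER[g] for g in letters]
    return sum(numbers) / len(numbers)

def to_percentage(average2):
    """Convert an Average2 value (0..12) back to a percentage (0..100)."""
    return average2 / 12 * 100
```

For example, average_of_letters(["A+", "F"]) gives 6.0, and to_percentage(6.0) gives 50.0.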

The main point of this step in the problem-solving process is that it is all about figuring out how to make use of the available data to compute an answer.

3. Develop an Algorithm

Now that we understand the problem and have formulated a model, it is time to come up with a
precise plan of what we want the computer to do using an algorithm.

An algorithm is a precise sequence of instructions for solving a problem. Some of the more complex algorithms are randomized or nondeterministic algorithms, in which the instructions are not necessarily performed in sequence and may not even be finite in number. However, the above definition will apply to all algorithms discussed in this course.

To develop an algorithm, the instructions must be represented in a way that is understandable to a person who is trying to follow the steps involved. Two commonly used representations for an algorithm are (1) pseudocode and (2) flowcharts. Consider the following example for solving the problem of a broken lamp: first as a flowchart, and then in pseudocode.
Figure 8: Flowchart for a broken Lamp

Pseudocode

1. IF lamp works, go to step 7.

2. Check if lamp is plugged in.

3. IF not plugged in, plug in lamp.

4. Check if bulb is burnt out.

5. IF bulb is burnt, replace bulb.

6. IF lamp still doesn’t work, buy new lamp.

7. Quit ... problem is solved.

Note: pseudocode is a simple and concise sequence of English-like instructions to solve a problem.

Pseudocode is often used as a way of describing a computer program to someone who doesn’t
understand how to program a computer. When learning to program, it is important to write
pseudocode because it helps to clearly understand the problem that one is trying to solve. It also
helps avoid getting bogged down with syntax details (i.e., like spelling mistakes) when writing
the program later (see step 4).

Although flowcharts can be visually appealing, pseudocode is often the preferred choice for
algorithm development because:

 It can be difficult to draw a flowchart neatly, especially when mistakes are made.
 Pseudocode fits more easily on a page of paper.
 Pseudocode can be written in a way that is very close to real program code, making it
easier later to write the program (i.e., in step 4).
 Pseudocode takes less time to write than drawing a flowchart.

Pseudocode will vary according to whoever writes it. That is, one person’s pseudocode is often
quite different from that of another person. However, there are some common control structures
(i.e., features) that appear whenever pseudocode is written. These features are shown along with
some examples:

 Sequence: Listing instructions step-by-step in order (often numbered)


1. Make sure switch is turned on
2. Check if lamp is plugged in
3. Check if bulb is burned out
4. ……
 Condition: Making a decision and doing one thing or something else depending on the
outcome of the decision.
If lamp is not plugged in
then plug it in
If bulb is burned out
then replace bulb
Else buy new lamp
 Repetition: repeating something a fixed number of times or until some condition occurs
Repeat
get a new light bulb
put it in the lamp
Until lamp works or no more bulbs left
Repeat 3 times
Unplug lamp
Plug into different socket
…..
 Storage: storing information for use in instructions further down the list
x ← a new bulb
count ← 8
 Transfer of Control: being able to go to a specific step when needed
If bulb works
then goto step 7
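These control structures map directly onto constructs in real programming languages. As an illustrative sketch only (the function names are invented, and the lamp’s state is simulated with simple values rather than physical checks), the lamp logic could be written in Python as:

```python
def diagnose_lamp(plugged_in, bulb_burnt):
    """Condition: choose one action or another depending on the lamp's state."""
    if not plugged_in:
        return "plug in lamp"
    elif bulb_burnt:
        return "replace bulb"
    else:
        return "buy new lamp"

def try_sockets(working_sockets, max_tries=3):
    """Repetition and storage: try up to max_tries sockets until the lamp works.

    working_sockets simulates the physical test "does the lamp work in this
    socket?"; it is a set of socket numbers where the lamp would light up.
    Returns how many tries were needed, or None if no socket worked."""
    tries = 0                          # storage: a value kept for later steps
    for socket in range(max_tries):    # repetition: a fixed number of times
        tries += 1
        if socket in working_sockets:  # condition inside the loop
            return tries               # lamp works: stop trying
    return None                        # gave up
```

For instance, diagnose_lamp(True, True) returns "replace bulb", and try_sockets({1}) returns 2 because the lamp only lights in the second socket tried.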

The point is that there are a variety of ways to write pseudocode. The important thing to
remember is that the algorithm should be clearly explained with no ambiguity as to what order
the steps are performed in.

Whether using a flowchart or pseudocode, an algorithm should be tested by mentally working through its steps to make sure no step or special situation has been missed. Often, a flaw will be found in an algorithm because some special situation that could arise was overlooked. Only when one is convinced that the algorithm will solve the problem should the next step be attempted.

4. Writing the Program

Now that we have a precise set of steps for solving the problem, most of the hard work has been
done. The next step is to transform the algorithm from step 3 into a set of instructions that can be
understood by the computer.

Writing a program is often called “coding” or “implementing an algorithm”, so the code (or source code) is the program itself. Without much explanation, below is a program that implements the given algorithm for finding the average of a set of grades. Note that the code is quite similar in structure to the algorithm, although it is less readable and somewhat more mathematical:
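A sketch of such a program, written here in Python rather than Processing (and with invented names), might look like this:

```python
def average(grades):
    """Compute the average of a list of numeric grades (0 to 100)."""
    total = 0
    for g in grades:           # process: add them all up
        total += g
    return total / len(grades)

if __name__ == "__main__":
    # input: in a full program these might be typed in or read from a file
    grades = [73, 88.5, 91, 64]
    # output: display the answer on the monitor
    print("Average grade:", average(grades))
```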
For now, the details of how to produce the above source code will not be discussed. In fact, the
source code would vary depending on the programming language that was used. Learning a
programming language may seem difficult at first, but it will become easier with practice.

The computer requires precise instructions in order to understand what it is being asked to do. For example, removing one of the semicolon characters (;) from the program above will confuse the computer, because the semicolon is what it understands to mark the end of an instruction. Leaving one off will cause the program to generate what is known as a compile-time error.

Compiling is the process of converting a program into instructions that can be understood by the
computer.

The longer your program becomes, the more likely it is to contain multiple compile errors. You need to fix all such compile errors before continuing to the next step.

5. Test the Program

Once a program is written and compiles, the next task is to make sure that it solves the problem
that it was intended to solve and that the solutions are correct.

Running a program is the process of telling the computer to evaluate the compiled instructions.
When a program is run and all is well, you should see the correct output. It is possible, however, that a program works correctly for some sets of input data but not for all. If the output of a program is incorrect, it is possible that the algorithm was not properly converted into a program. It is also possible that the programmer did not produce, back in step 3, an algorithm that handles all situations that could arise, or that some instructions are performed out of sequence. Whatever happened, such problems with a program are known as bugs.

Bugs are errors with a program that cause it to stop working or produce incorrect or undesirable
results.

It is the responsibility of the programmer to find and fix all the bugs present in a program. To find bugs effectively, a program should be tested with many test cases (called a test suite). It is also a good idea to have others test one’s program, because they may think up situations or input data that one would never have thought of.
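A test suite can be as simple as a list of inputs paired with their expected outputs. The sketch below is ours (it assumes an average function like the one in this chapter’s example):

```python
def average(grades):
    """The function under test (a stand-in for the example program's code)."""
    return sum(grades) / len(grades)

# A tiny test suite: each case pairs an input with its expected output.
test_suite = [
    ([100], 100.0),        # a single grade
    ([0, 100], 50.0),      # extreme values
    ([70, 80, 90], 80.0),  # a typical small class
]

for grades, expected in test_suite:
    assert average(grades) == expected, f"wrong answer for {grades}"
print("all tests passed")
```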

Debugging is the process of finding and fixing errors in program code.

Debugging is often a very time-consuming chore for a programmer. However, painstakingly and carefully following steps 1 through 3 should greatly reduce the number of bugs in a program, making debugging much easier.

6. Evaluating the Solution

Once the program produces a result that seems correct, the original problem needs to be reconsidered to make sure that the answer is formatted into a proper solution to the problem. It may be realised that the program does not solve the problem in the way expected, or that more steps are involved than first thought.

For example, if the result of a program is a long list of numbers, but the intent was to determine a
pattern in the numbers or to identify some feature from the data, then simply producing a list of
numbers may not suffice. There may be a need to display the information in a way that helps
visualise or interpret the results with respect to the problem; perhaps a chart or graph is needed.
It is also possible that when the results are examined, it is realised that additional data are needed
to fully solve the problem. Alternatively, the results may need to be adjusted to solve the
problem more efficiently (e.g., a game is too slow).

It is important to remember that the computer will only do what it is told to do. It is up to the
user to interpret the results in a meaningful way and determine whether or not it solves the
original problem. It may be necessary to redo some of the steps, perhaps going as far back as step 1, if data were missing.
Problem Solving Techniques

There are several standard problem-solving techniques that you can employ irrespective of the
field of study in computer science. The first step, however, is always understanding the problem,
then you can choose the right strategy to solve it. Here are some of the basic problem-solving
methods that are particularly useful:

Divide and Conquer: This technique involves breaking a larger problem into smaller, more
manageable parts, solving each of them individually, and finally combining their solutions to get
the overall answer.
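A classic instance of divide and conquer is merge sort. The sketch below is our own example, not from the notes:

```python
def merge_sort(items):
    """Sort a list by splitting it, sorting each half, and merging the halves."""
    if len(items) <= 1:              # small enough to solve directly
        return items
    mid = len(items) // 2
    left = merge_sort(items[:mid])   # divide: solve each half separately
    right = merge_sort(items[mid:])
    merged = []                      # combine: merge the two sorted halves
    i = j = 0
    while i < len(left) and j < len(right):
        if left[i] <= right[j]:
            merged.append(left[i])
            i += 1
        else:
            merged.append(right[j])
            j += 1
    return merged + left[i:] + right[j:]
```

For example, merge_sort([5, 2, 9, 1]) returns [1, 2, 5, 9].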

Algorithm Design: This technique involves formalizing a series of organized steps into an
algorithm to solve a specific problem. Common approaches include greedy algorithms, dynamic
programming, and brute force.

Heuristics: These are rules of thumb or educated guesses that can help you find an acceptable, if
not the perfect, solution when the problem is too complex for a direct mathematical approach, or
when computational resources are limited. Heuristics are not guaranteed to yield the optimal
solution but are often good enough for practical purposes and can dramatically reduce the time
and resources needed to find a solution.
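As an illustration (ours, not from the notes), consider the greedy method of making change: always take the largest coin that fits. It is fast and usually adequate, but not guaranteed optimal for every coin system:

```python
def greedy_change(amount, coins):
    """Make change by always taking the largest coin that still fits.

    A heuristic: quick and usually good enough, but not always optimal."""
    result = []
    for coin in sorted(coins, reverse=True):  # largest coins first
        while amount >= coin:
            amount -= coin
            result.append(coin)
    return result

# With coins {1, 3, 4} and amount 6, greedy gives [4, 1, 1] (three coins),
# although [3, 3] (two coins) would be optimal: acceptable, not best.
```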

Recursive Thinking: Recursion is predicated on solving a problem by breaking it down into smaller instances of the same problem. The idea is that, eventually, you will get to a problem that is small enough to solve directly.
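A standard first example of recursive thinking is the factorial function; this sketch is ours:

```python
def factorial(n):
    """n! defined recursively: the problem is expressed as a smaller
    instance of itself until the base case is reached."""
    if n <= 1:                       # base case: small enough to solve directly
        return 1
    return n * factorial(n - 1)      # recursive case: shrink the problem
```

For example, factorial(5) returns 120.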

Note: Even though these techniques might sound simple, they form a cornerstone of, and are often hidden within, the more complex problem-solving techniques used in higher-level computer science.
