
Chapter 03: HISTORY AND CLASSIFICATION OF COMPUTERS
EARLY COMPUTING DEVICES
EARLY COMPUTING DEVICES
Abacus: The earliest device that qualifies as a digital computer
is the Abacus, also known as the Soroban, invented around 600 BC.
This device permits the user to represent numbers by the position
of beads on a rack. Simple addition and subtraction can be carried
out rapidly and efficiently by positioning the beads appropriately.
Napier's Bones: Another manual calculating device was Napier's
Bones, designed by John Napier, a Scottish scientist. It used a set
of eleven rods called ‘bones’ with numbers carved on them. It
was designed in the early 17th century, and its upgraded versions
were in use even around 1890.
Pascaline: Designed by Blaise Pascal in 1642, the Pascaline was
the first mechanical calculating machine. It used a system of
gears, cogwheels and dials for carrying out repeated additions
and subtractions.
Punched cards: In the early 19th
century a Frenchman, Joseph
Jacquard invented a loom that
used punched cards to
automatically control the
manufacturing of patterned cloth.
Jacquard's idea of storing a
sequence of instructions on the
cards is also conceptually similar
to modern computer programs.
Difference Engine: The year 1822 could be considered a golden
year in the history of Computer Science. It was in that year
that an Englishman, Charles Babbage, a Professor of Mathematics
at Cambridge University, developed and demonstrated a working
model of a mechanical computer called the Difference Engine.
It could solve complex algebraic equations and was built on the
principle of the gearing wheels of an earlier era. It was also
able to produce mathematical and statistical tables correct up
to 20 digits.
Analytic Engine: Encouraged by the success of the Difference Engine,
Babbage, in 1833, came out with his new idea of the Analytic
Engine. It could store 1,000 numbers of 50 decimal digits each. It
was to be capable of performing the basic arithmetic functions
for any mathematical problem, and it was to do so at an
average speed of 60 additions per minute. Unfortunately, he
was unable to produce a working model of this machine,
mainly because the precision engineering required to
manufacture it was not available during that period. However,
his efforts established a number of principles that have been
shown to be fundamental to the design of any digital computer.
Many features of the Engine, such as punched-card instructions,
internal memory and an arithmetic unit to perform calculations,
were incorporated in the early computers designed a hundred
years later.
ADA: Babbage's disciple, a brilliant mathematician, Lady Ada Lovelace,
the daughter of the famous English poet Lord Byron, developed
programs for Babbage's machine. She is widely considered the first
programmer in the world, and the programming language ADA is
named after her. Babbage's Engines were more closely related to
modern computers than any of their predecessors, and many people
today credit Charles Babbage as the real father of computers.
Punched cards: Herman Hollerith came up with the concept of punched
cards, which were extensively used as input media in early digital
computers. Business machines and calculators made their appearance
in Europe and America towards the end of the nineteenth century.
Hollerith Machine: In the 1880s the Census Bureau of the
United States appointed Herman Hollerith to develop a
technique for speeding up the processing of census data. It
was a major breakthrough of the 19th century when he
developed a machine that used punched cards to store
the census data. This innovation reduced the processing
time from 8 years to less than 3 years. In the 1890s
many other countries, such as Canada, Australia, and Russia,
also used the Hollerith Machine for processing their census
data. Later, many other large organizations, such as insurance
companies, used the Hollerith Machine to speed up their
processing activity. In the 1890s Hollerith left the Census
Bureau and started the Tabulating Machine Company,
which, in 1911, merged with several other firms to form
what later became the International Business Machines (IBM) Corporation.
In the late 1930s and early 1940s many projects were
underway. In this period, under the direction of George Stibitz
of Bell Telephone Laboratories, five large-scale computers
were developed. These computers were built using
electromechanical relays and were called the Bell Relay
Computers. They were capable of performing calculations with
high speed and accuracy.
The world's first electro-mechanical computer was developed by
Dr Howard Aiken of Harvard University and produced by IBM
in the year 1944. This computer used more than 3,000 relays,
was 50 feet long and 8 feet high, and was named the Automatic
Sequence Controlled Calculator, also called the Mark-1. It was a
quite fast machine for its day: it could add two numbers in 0.3
seconds and multiply two numbers in 4.5 seconds. This computer
may be regarded as the first realization of Babbage's Analytical
Engine. IBM went on to develop the Mark-II through Mark-IV.
The Computer Generations
1. First-Generation Computers (1942-1955)
First-generation computing involved massive computers using hundreds or
thousands of vacuum tubes for their processing and memory circuitry. These
large computers generated enormous amounts of heat; their vacuum tubes
had to be replaced frequently. Thus, they had large electrical power, air
conditioning, and maintenance requirements. First-generation computers
had main memories of only a few thousand characters and millisecond
processing speeds. They used magnetic drums or tape for secondary storage.
Examples of some of the popular first-generation computers include ENIAC,
EDVAC, UNIVAC-I, IBM-701, IBM-650, and the IAS Machine.
2. Second-Generation Computers (1955-1964)
Second-generation computing used transistors and other solid-
state, semiconductor devices that were wired to circuit boards
in the computers. Transistorized circuits were much smaller
and much more reliable, generated little heat, were less
expensive, and required less power than vacuum tubes. Tiny
magnetic cores were used for the computer's memory, or
internal storage. Many second-generation computers had
main memory capacities of less than 100 kilobytes and
microsecond processing speeds. Removable magnetic disk
packs were introduced, and magnetic tape emerged as the
major input, output, and secondary storage medium for large
computer installations. Examples of some of the popular
second-generation computers include IBM-1620, IBM-7094,
CDC-1604, CDC-3600, UNIVAC-1108, PDP-1 and NCR-304.
3. Third-Generation Computers (1964-1975)
Third-generation computing saw the development of computers that use
integrated circuits, in which thousands of transistors and other circuit
elements are etched on tiny chips of silicon. Main memory capacities
increased to several megabytes and processing speeds jumped to
millions of instructions per second (MIPS) as telecommunications
capabilities became common. This made possible the widespread
use of operating system programs that automated and supervised
the activities of many types of peripheral devices, and allowed
mainframe computers to process several programs at the same
time, frequently involving networks of users at remote terminals.
Integrated circuit technology also made possible the development and
widespread use of small computers called minicomputers in the third
computer generation. Examples of some of the popular third-
generation computers include the IBM-360 Series, IBM-370 Series,
ICL-2900 Series, Honeywell-6000 Series, PDP-8 and VAX.
[Images: an integrated circuit; the IBM-360 Series]
4. Fourth-Generation Computers (1975-2000)
Fourth-generation computing relies on the use of LSI (large-scale integration)
and VLSI (very-large-scale integration) technologies that cram hundreds
of thousands or millions of transistors and other circuit elements on
each chip. This enabled the development of microprocessors, in which
all of the circuits of a CPU are contained on a single chip with
processing speeds of millions of instructions per second. Main memory
capacities ranging from a few megabytes to several gigabytes can also
be achieved by memory chips that replaced magnetic core memories.
Microcomputers, which use microprocessor CPUs and a variety of
peripheral devices and easy-to-use software packages to form small
personal computer (PC) systems, or client/server networks of linked PCs
and servers, are a hallmark of the fourth generation of computing,
which accelerated the downsizing of computing systems. Examples of
some of the popular fourth-generation computers include DEC-10,
STAR-1000, PDP-11, CRAY-1 (Supercomputer) and CRAY X-MP.
5. Fifth-Generation Computers (2000-…)
Computer scientists and engineers are now talking about developing
fifth-generation computers, which can ‘think’. The emphasis is
now shifting from developing reliable, faster and smaller but
‘dumb’ machines to more ‘intelligent’ machines. Fifth-generation
computers will be highly complex knowledge processing
machines. Japan, USA and many other countries are working on
systems which use Artificial Intelligence. Automatic Programming,
Computational Logic, Pattern Recognition and Control of Robots
are among the techniques used in these computers. The speed
of these computers will be billions of instructions per second,
and they will have unimaginable storage capacities. These
computers will be interactive.
Moore’s Law
In 1965 Gordon E. Moore predicted, based on data available at that time, that the density of
transistors in integrated circuits would double at regular intervals of around 2 years. Based
on the experience from 1965 to date, his prediction has been found to be
surprisingly accurate. In fact, the number of transistors per integrated circuit chip has
approximately doubled every 18 months. This observation has been called
"Moore's Law". In the next Figure we have given two plots.
One gives the number of transistors per chip in Dynamic Random Access Memory along the
y-axis and years along x-axis. Observe that the y-axis is a logarithmic scale and the x-axis
a linear scale.
The second plot gives the number of transistors in microprocessor chips. Observe that in
1974 the largest DRAM chip held 16 Kbits, whereas in 1998 it held 256 Mbits, an increase
of about 16,000 times in 24 years. The increase in the number of components in
microprocessors has been similar. It is indeed surprising that the growth has been sustained
over 30 years. The availability of large memory and fast processors has in turn increased
the size and complexity of systems and applications software. It has been observed that
software developers have always consumed the increased hardware capability faster
than the growth in hardware. This has kept up the demand for hardware.
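As a quick sanity check of the figures quoted above, the DRAM jump from 16 Kbits (1974) to 256 Mbits (1998) and the doubling period it implies can be computed in a few lines (the years and capacities are the ones given in the text):

```python
import math

# DRAM capacities and years quoted in the text
bits_1974 = 16 * 1024          # 16 Kbit chip (1974)
bits_1998 = 256 * 1024 * 1024  # 256 Mbit chip (1998)
years = 1998 - 1974            # 24 years

growth = bits_1998 // bits_1974         # total growth factor
doublings = math.log2(growth)           # number of doublings in that span
period_months = years * 12 / doublings  # implied doubling period

print(growth)                   # 16384, i.e. about 16,000 times
print(round(period_months, 1))  # about 20.6 months per doubling
```

The observed period works out slightly above the 18-month figure, which is one reason Moore's Law is usually quoted as a doubling every 18 to 24 months.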
Another interesting point to
be noted is the increase in
disk capacity. In 1984 disk
capacity in PCs was around 20
MB, whereas it was 20 GB in
2000, a 1,000-fold increase in
about 16 years, again doubling
roughly every 18 months, which
is similar to Moore's law.
The implication of Moore's
law is that in the foreseeable
future we will be getting more
powerful computers at
reasonable cost. It will be up
to our ingenuity to use this
increased power of
computers effectively.
CLASSIFICATION OF COMPUTERS
Computers can be divided into the following
categories by functional criteria (data
representation):
1.Digital Computers
2.Analog Computers
3. Hybrid Computers
1. Digital computers
A digital computer, as the name suggests, works with digits. In
other words, a digital computer is a counting device. All the
expressions are coded into binary digits (0s and 1s) inside the
computer, which stores and manipulates them at very high
speed. At the hardware level, a digital computer can perform
essentially one arithmetic operation, addition; the other
operations of subtraction, multiplication and division are
performed with the help of the addition operation. Digital
computer circuits are designed and fabricated by the
manufacturers and are quite complicated. A digital computer
manipulates data according to the instructions (program) given
to it in a certain language.
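The claim that subtraction and multiplication reduce to addition can be illustrated with two's-complement arithmetic, the representation digital hardware commonly uses. The sketch below (in Python, assuming a hypothetical 8-bit word size for illustration) negates a number by inverting its bits and adding 1, so a - b becomes a + (-b):

```python
BITS = 8                 # assume an 8-bit machine word for illustration
MASK = (1 << BITS) - 1   # 0b11111111: keeps results within the word size

def negate(x):
    # Two's-complement negation: invert all bits, then add 1.
    # No subtraction is used anywhere.
    return ((x ^ MASK) + 1) & MASK

def subtract(a, b):
    # a - b is computed as a + (-b), using only addition.
    return (a + negate(b)) & MASK

def multiply(a, b):
    # Multiplication as repeated addition.
    result = 0
    for _ in range(b):
        result = (result + a) & MASK
    return result

print(subtract(9, 4))   # 5
print(subtract(4, 9))   # 251, i.e. -5 in 8-bit two's complement
print(multiply(6, 7))   # 42
```

Real hardware uses faster circuits (carry-lookahead adders, shift-and-add multipliers), but the principle is the same: an adder plus bit manipulation covers all four arithmetic operations.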
2. Analog Computers
Analog computers represent numbers by a physical quantity i.e.
they assign numeric values by physically measuring some
actual property, such as the length of an object, or the
amount of voltage passing through a point in an electric
circuit. Analog computers derive all their data from some
form of measurement. Though effective for some
applications, this method of representing numbers is a
limitation of the analog computers. The accuracy of the data
used in an analog computer is directly related to the precision
of its measurement. Speedometers, Voltmeters, Pressure
Gauges, Slide Rules, Flight Simulators for training pilots and
Wall Clocks are some examples of analog computers.
3. Hybrid Computers
Hybrid computers combine the best features of analog and digital
computers. They have the speed of analog computers and the
accuracy of digital computers. They are usually used for special
problems in which input data derived from measurements is
converted into digits and processed by the computer. Hybrid
computers, for example, control National Defense and passenger
flight radars. Consider a hybrid computer system in a hospital
Intensive Care Unit (ICU): an analog device may measure a
patient's heart function, temperature and other vital signs. These
measurements may then be converted into numbers and
supplied to a digital device, which may send an immediate signal
to a nurse's station if any abnormal readings are detected.
We can also classify computer systems into the
following categories using capacity and
performance criteria (size, cost, speed and
memory):
Supercomputers
Mainframe computers
Minicomputers, or Midrange computers
Workstations
Microcomputers, or Personal computers
1. Supercomputers
Supercomputers are the most powerful computers made, and
physically they are some of the largest. These systems are built
to process huge amounts of data, and the fastest
supercomputers can perform more than a trillion calculations per
second. Some supercomputers, such as the Cray T-90 system,
can house thousands of processors. This speed and power
make supercomputers ideal for handling large and highly
complex problems that require extreme calculating power, such
as numerical weather prediction, design of supersonic aircraft,
design of drugs and modeling of complex molecules.
The Japanese supercomputer Fugaku was ranked the world's most
powerful in 2020.
2. Mainframe Computers
The largest type of computer in common use is the
mainframe. Mainframe computers are used in large
organizations like insurance companies and banks where
many people need frequent access to the same data,
which is usually organized into one or more huge
databases. Mainframes are being used more and more
as specialized servers on the World Wide Web, enabling
companies to offer secure transactions with customers
over the Internet. If we purchase an airline ticket over
the Web, for example, there is a good chance that our
transaction is being handled by a mainframe system.
3. Mini Computers
First released in the 1960s, minicomputers got
their name because of their small size compared
to other computers of the day. The capabilities of a
minicomputer are between those of mainframes
and personal computers. (For this reason,
minicomputers are increasingly being called
midrange computers). Like mainframes,
minicomputers can handle much more input and
output than personal computers can.
4. Microcomputers or Personal Computers
The terms microcomputer and personal computer are interchangeable, but PC,
which stands for personal computer, sometimes has a more specific
meaning. In 1981, IBM called its first microcomputer the IBM-PC. Within a few
years, many companies were copying the IBM design, creating “clones” or
“compatibles” that were meant to function like the original. For this reason, the
term PC has come to mean the family of computers that includes IBMs and
IBM compatibles. The vast majority of microcomputers sold today are part of
this family. One source of the PC’s popularity is the rate at which improvements
are made in its technology. Microprocessors, memory chips, and storage devices
make continual gains in speed and capacity, while physical size and price
remain stable, or in some cases are reduced. For example, compared to the
typical PC of ten years ago, a machine of the same price today will have about
ten times as much RAM, about 100 times more storage capacity, and a
microprocessor at least 100 times as fast. What’s more, many analysts believe
that this pace of change will continue for another 10 or 20 years.
