
INTRODUCTION TO INFORMATION TECHNOLOGY

Application of Computer Technologies

JANUARY 2024
REFERENCE MATERIAL COLLECTIONS
[email protected]
Information and Communications Technology

Chapter one
COMPUTER SYSTEM

A computer system is an electronic machine which accepts data and applies a prescribed set of instructions
to produce desired results.

[Diagram: Data → Computing function → Results]

The electronic machine is referred to as computer hardware, and the set of instructions is referred to as a
computer program.
The processing performed on the data is determined by a program: the machine is provided with an
appropriate program in order to perform a specified computation, and the same machine can perform a wide
variety of computations.
Any computation can be expressed as a sequence of instructions selected from a small set (i.e. a
program). The machine executes the instructions, using the provided data, to generate the results.
There are two categories of machines:
- Programmable Machine
- Hardware programmed machine

Programmable Machine
Programmable refers to the ability of a device or system to be programmed or customized to perform
specific tasks or functions. It allows you to write and execute instructions or code to control the behavior
and functionality of the device, making it adaptable and flexible.

"Programmable devices" refer to electronic devices or systems that can be configured or instructed to
perform specific tasks by using a set of instructions known as a program or code.

Hardware programmed machine


A hardware-programmed machine has its function fixed by configuring the device on a printed circuit board.
All major electronics today have four main components: a clock to keep processing ticking; a processor
that reacts to certain electric impulses and outputs other electric impulses; volatile memory to hold
the current outputs of the processor (the 'state of operations'); and non-volatile memory to permanently
save the results of computations.

System
The term system is derived from the Greek word systēma, which means an organized relationship among
functioning units or components. A system exists because it is designed to achieve one or more
objectives.
A system is therefore a set of interacting or interdependent components forming an integrated whole, or a
set of elements (often called 'components') and relationships which are different from the relationships of
the set or its elements to other elements or sets. In a system the different components are connected with
each other and they are interdependent.
A component is either an irreducible part or an aggregate of parts, also called a subsystem. The simple
concept of a component is very powerful.

Interdependent components may refer to physical parts or managerial steps, known as subsystems of a
system. Most systems share common characteristics, including:
 A system has structure: it contains parts (or components) that are directly or indirectly related to
each other.
 A system has interconnectivity: the parts and processes are connected by structural and/or
behavioral relationships.
 A system has behavior: it exhibits processes that fulfill its function or purpose.
The term system may also refer to a set of rules that governs structure and/or behavior.
The systems approach is an organized way of dealing with a problem.

The system takes input from outside, processes it, and sends the resulting output back to its
environment. The arrows in the figure show this interaction between the system and the world outside of
it.
The concept of a system is shown below:

[Diagram: input → system → output, exchanged with the environment across the system boundary]
For example, just as with an automobile or a stereo system, with proper design, we can repair or
upgrade the system by changing individual components without having to make changes throughout the
entire system.
The components are interrelated; that is, the function of one is somehow tied to the functions of the
others.
A system has a boundary, within which all of its components are contained and which establishes the
limits of a system, separating it from other systems.

Environment: The environment is the 'supersystem' within which an organization operates. It excludes
input, processes and outputs. It is the source of external elements that impinge on the system. All
systems have a boundary and operate within an environment.


EVOLUTION OF COMPUTERS
First Generation of Computer (1937 – 1946):
In 1943 an electronic computer name the Colossus was built for the military. Other developments
continued until in 1946 the first general– purpose digital computer, the Electronic Numerical Integrator
and Calculator (ENIAC) was built.
Computers of this generation could only perform single task, and they had no operating system.

Characteristics:
i. These computers were as large as a room.
ii. They used vacuum tubes to perform calculations.
iii. They used internally stored instructions, called a program.
iv. They used capacitors to store binary data and information.
v. They used punched cards for input and output of data and information.
vi. They had about one thousand (1,000) circuits per cubic foot.
vii. These computers generated a lot of heat.

Second Generation of Computer (1947 – 1962):


Second-generation computers used transistors instead of vacuum tubes,
which were more reliable. In 1951 the first computer for commercial use was
introduced to the public: the Universal Automatic Computer (UNIVAC 1). In
1953 the International Business Machine (IBM) 650 and 700 series computers
made their mark in the computer world.

Characteristics:
i. The computers were still large, but smaller than the first generation.
ii. They used transistors in place of vacuum tubes to perform calculations.
iii. They were produced at a reduced cost compared to the first generation.
iv. They used magnetic tape for data storage.
v. They used punched cards for input and output of data and information; the keyboard was also
introduced as an input device.
vi. These computers still generated a lot of heat, so air conditioning was needed to maintain
a cool temperature.
vii. They had about one thousand circuits per cubic foot.

Third Generation of Computer (1963 – 1975):


The invention of the integrated circuit brought us the third generation of
computers. With this invention computers became smaller, more powerful, more
reliable, and able to run many different programs at the same time.

Characteristics:
i. They used large-scale integrated circuits for both data processing and storage. They
had a hundred thousand circuits per cubic foot.
ii. Computers were miniaturized, that is, reduced in size compared to the previous generation.
iii. The keyboard and mouse were used for input, while the monitor was used as an output device.


Examples:
i. IBM system 360
ii. UNIVAC 9000 series.

Fourth Generation of Computer (PC 1975 – Current)


At this point of technological development, the size of the computer was redefined to what we call the
Personal Computer (PC). This was when the first microprocessor was created by Intel. The
microprocessor was a very large scale integration (VLSI) circuit which contained thousands of
transistors. Transistors on one chip were capable of performing all the functions of a computer's central
processing unit.

Characteristics:
i. Possession of a microprocessor, which performs all the tasks of a computer system in use today.
ii. The size and cost of computers were reduced.
iii. Increased speed of computers.
iv. Very large scale integration (VLSI) circuits were used.
v. They have millions of circuits per cubic foot.

Examples:
i. IBM system 3090, IBM RISC6000, IBM RT.
ii. HP 9000.
iii. Apple Computers.

Fifth Generation of Computers (Present and Beyond)


Fifth-generation computing devices, based on artificial intelligence (AI),
are still in development, although some applications, such as voice
recognition, facial detection, and fingerprint recognition, are in use today.

Characteristics:
i. Extremely large scale integration.
ii. Parallel processing.
iii. High-speed logic and memory chips.
iv. High performance; micro-miniaturization.
v. Ability of computers to mimic human intelligence, e.g. voice
recognition, facial detection, fingerprint recognition.
vi. Satellite links, virtual reality.
vii. They have billions of circuits per cubic foot.

Examples:
i. Supercomputers
ii. Robots
iii. Facial detection systems
iv. Fingerprint scanners


Types of Computers
There are three distinct families of computing device available to us today: analog, digital, and hybrid.
These types of computer operate on quite different principles. Hence there are three categories of
computers:
 Analog computer system
 Digital computer system
 Hybrid computer system

Analog Computer system


An analog computer is a form of computer that uses the continuously-changeable aspects of physical
phenomena such as electrical, mechanical, or hydraulic quantities to model the problem being solved.

Analog computers are used to process analog data. Analog data is continuous in nature rather than
discrete or separate. Such data includes temperature, pressure, speed, weight, voltage, depth, etc.
These quantities are continuous, having an infinite variety of values.

Analog computers were the first computers developed, and they provided the basis for the development
of modern digital computers.
Analog computers do not require any storage capability because they measure and compare quantities in
a single operation. Output from an analog computer is generally in the form of readings on a series of
dials (e.g., the speedometer of a car) or a graph on a strip chart.

Analog computers are widely used for certain specialized engineering and scientific applications, for
the calculation and measurement of analog quantities. They are frequently used to control processes, such as
those found in an oil refinery, where flow and temperature measurements are important.

The first electronic analog computers were developed in the USA, and initially they were used in
missiles, airplane design, and flight control.
Analog computers remain in use for some specific applications, like the flight computer in aircraft, ships,
and submarines, and in some appliances in our daily life, such as the refrigerator, speedometer, etc.

Examples
Examples of analog computers include the astrolabe, oscilloscope, autopilot, telephone lines, speedometer,
etc.

Digital computer systems


A digital computer, as its name implies, works with digits to represent numerals, letters, or other special
symbols. Digital computers operate on inputs which are of an ON-OFF type, and their output is also in the form
of ON-OFF signals. Normally, an ON is represented by a 1 and an OFF by a 0. So we can
say that digital computers process information based on the presence or absence of an
electrical charge, or, as we prefer to say, a binary 1 or 0. This computer system works with letters of the
alphabet, special characters, and numbers, all of which can be coded into 0s and 1s.

The digital computer is a digital system that performs various computational tasks. The
word digital implies that the information in the computer is represented by variables that take a limited
number of discrete values. These values are processed internally by components that can maintain a
limited number of discrete states.
Digital computers use the binary number system, which has two digits: 0 and 1. A binary digit is called
a bit. Information is represented in digital computers in groups of bits. By using various coding
techniques, groups of bits can be made to represent not only binary numbers but also other discrete
symbols, such as decimal digits or letters of the alphabet.
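
To make this concrete, here is a minimal sketch in Python (our own illustration; the notes themselves contain no code) showing the same group of bits read first as a binary number and then as a character code:

# The bit pattern 1000001 read as a binary number...
value = 0b1000001
print(value)                     # 65

# ...and the same bits read as a character code (ASCII/Unicode).
print(chr(value))                # A
print(format(ord('A'), '08b'))   # 01000001 - from the character back to its bits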

Types of Digital Computers


1. Micro Computer
The personal computers, laptops, tablets, and mobile phones that we use in our daily life fall under the
category of microcomputers. These types of computers consist of a microprocessor and various
input/output devices embedded on a printed circuit board. They are less complex and comparatively
inexpensive. Microcomputers are advantageous because they are flexible, portable, and loaded with
features.
Examples
A smartphone is a microcomputer that consists of a processor, a memory element, and various
input/output devices embedded on a printed circuit board. The processor takes real-life input from
the user, converts it into digital form, processes it, and outputs the data in a user-understandable format.

2. Mini Computer
Minicomputers are most popular for their multiprocessing capability. This means that minicomputers can
use multiple processors connected to a common memory and the same peripheral devices to perform
different tasks simultaneously. Minicomputers have greater complexity than microcomputers. They are
also known as mid-range computers and are generally used for applications like engineering
computations, data handling, etc.

3. Mainframe Computer
Mainframe computers were first introduced during the early 1930s and were properly functional by
1943. They are known for their exceptional data handling capacity and reliability. This quality is
exploited in applications where bulk data handling and security are the main concern, for example in banks
and during a census. The Harvard Mark 1 was the first mainframe computer.

Examples
The Automated Teller Machine (ATM) is a perfect example of a system powered by mainframe computers. Data
handling capacity and reliability are the two most notable traits of mainframe computers, which is why they
are employed in applications where security and flawless operation are of prime concern.

4. Super Computer
Supercomputers are mostly used in scientific and research-related applications. They require an entire
room for their setup and operation. Supercomputers employ thousands of processors that work together
to perform trillions of calculations per second; therefore, they require external cooling pipes to manage
the generated heat. They are fast, accurate, and secure. Some of the prominent organizations that make
use of supercomputers are the National Nuclear Security Administration, NASA, ISRO, etc.

Hybrid computers
A hybrid computer is a combination of digital and analog computers. It combines the best features of both
types: it has the speed of an analog computer and the memory and accuracy of a digital computer. It can
process both continuous and discrete data, accepting analogue signals and converting them into digital
form before processing. Hybrid computers are used mainly in specialized applications where both kinds of
data need to be processed.
For example, a petrol pump contains a processor that converts fuel flow measurements into quantity and
price values. In a hospital Intensive Care Unit (ICU), an analog device measures a patient's blood
pressure and temperature, which are then converted and displayed in the form of digits; hybrid computers
are also used in airplanes, for scientific calculations, and in defense and radar systems. This category of
computer system contains a measuring unit which measures, converts the data into computer signals, and
transmits them as input for processing.
This feature of measurement and conversion makes the hybrid computer very fast and useful in areas
such as oil refining equipment, military equipment such as missile systems, medical equipment, etc.

Examples of Hybrid Computers


Monitoring Machine
A patient monitoring system is one of the finest innovations in the modern world. Typically, a monitoring
machine is able to evaluate body parameters such as heart rate, breathing rate, blood pressure, SPO2,
and body temperature. A number of sensors attached to the body are responsible for capturing the
analogue signals from the body. These signals are then converted into digital data and sent for further
processing. The controller processes this digital data and presents it in a form that is interpretable by the
doctor. The process is then repeated at a user-allotted interval. This hybrid computer is a major asset
because monitoring all these body parameters separately and frequently would be very difficult.

Ultrasound Machine
Ultrasound refers to high-frequency sound waves that a normal human cannot hear directly. These sound
waves are very advantageous in medical technology. An ultrasound machine consists of a transducer
probe that transmits the high-frequency sound; this sound travels freely until it strikes an obstacle, at
which point it is reflected back. The reflected signal is picked up by the same transducer probe and sent
for processing to generate the output in image form. Ultrasound devices are neither completely analogue
nor completely digital; therefore, an ultrasound machine can be placed in the category of hybrid
computers.

Electrocardiogram Machine
An electrocardiogram (ECG) machine is designed to measure heart activity. It makes use of 12-13 sensors
that pick up the body's signals and translate them into digital data. This digital data is then processed by the
controller, and the output is generated, usually in the form of an electrocardiograph. Body signals are
analogue in nature, and the output is generated in both analogue and digital form; therefore, the ECG
machine is an example of a hybrid computer.

Computers in hospitals
Computers play a crucial role in the health and medical sectors. In health care, they enable hospitals to
maintain records of their millions of patients and to contact them about treatment, appointments,
medicine updates, or disease updates.

In hospitals, computers are radically changing diagnostic methods. They are used for multiple tasks such
as maintaining patient information and records, live monitoring of patients, X-rays, and many more. They
are also helpful for configuring lab tools and monitoring blood pressure, heart rate, etc. Computers are
advantageous because they enable doctors to exchange patient data easily with other medical specialists.
Advanced surgical devices are based on robotics that helps surgeons conduct surgeries and complex
operations remotely.

Common uses of computers in hospitals.

1. Medical and patient data


A computer keeps patient data organized, secure, and easily accessible. Traditional file systems are
problematic: medical staff require fast access to patient records in an emergency, and finding a patient's
file in a cabinet wastes time. During busy shifts, in the hustle and bustle, paper-based filing systems
become completely disorganized, which sometimes causes the loss of patient data.

2. Monitor patients
Computers in hospitals are also used to monitor blood pressure, heart rate, and other critical medical
equipment. The computer monitoring system also collects useful data from patients. This data can be
accessed for future reference or can be further used for studies.

3. Research and studies


Doctors can now consult medical databases to learn more about a particular disease. The presence
of computers in healthcare is enhancing the knowledge that can be accessed and used by medical
personnel.

4. Inventory
For a patient's treatment, it is important to know which medicines are in stock. It is crucial to keep the
inventory list up to date, because if the doctor prescribes a medicine that is out of stock, the patient's
recovery can be slowed. Inventory management is therefore very important for health clinics and
hospitals, and computers enable inventory managers to monitor stock levels.

5. Computers for surgical procedures


Computers in operation theatres are helpful in saving lives. Surgeons depend upon computer systems for
performing complex procedures.

6. Medical imaging and equipment


Computers are often used to control the medical equipment that performs important medical exams such
as ultrasound, CT scan, MRIs, blood tests, etc. Additionally, doctors use computers to show their results
and explain the condition and treatment. Computers also allow 3D drafting and imaging.

7. Communication and telemedicine


Telemedicine is the practice of caring for patients remotely when the patient and provider are not
physically present together. With computers and smartphones, doctors are now able to communicate with
fellow medical professionals and patients. Telemedicine also plays an important role during natural
disasters.


Chapter two
COMPUTER HARDWARE
This is the physical computer or touchable parts of the computer system. The study of a computer
hardware system covers:
 functions,
 components and their interconnections
Computer hardware is the physical part of a computer, including its digital circuitry, as
distinguished from the computer software that executes within the hardware. The hardware of a
computer is infrequently changed, in comparison with software and data, which are "soft" in
the sense that they are readily created, modified, or erased on the computer.
The main components of computer hardware Include:
 Input unit
 Processing unit
 Output unit
 Storage unit

INPUT UNIT
The main function of input hardware is to capture raw data and convert it into computer usable form as
quickly and efficiently as possible.

Categories of input hardware.


One of the easiest ways to categorize input hardware is according to whether or not a keyboard is used to
initially capture the data.

Keyboard Entry
A computer keyboard is an electromechanical component designed to create special standardized
electronic codes when a key is pressed. The codes are transmitted along the cable that connects the
keyboard to the computer system.

Direct Entry
Direct entry involves non-keyboard input devices. These data capture systems minimize the amount of
human activity required to get data into a computer system. The most common ones include card readers,
scanning devices, pointing devices, voice input devices, etc.

Card readers
The technology has evolved from the use of punched cards to smart cards. Smart cards are designed to be
carried like credit cards. The card is swiped through a special card-reading point-of-sale terminal, and the
user then enters a password on the keyboard.

Scanning Devices
Scanning devices are designed to read data from source documents into computer-usable form. These
devices use a light-sensitive mechanism to read data. Examples include barcode readers, optical mark
readers, magnetic character readers, fax machines, etc.

Pointing devices
These devices allow the user to identify and select the necessary command or option by moving the
cursor to a certain location on the screen or tablet. They are used in menu-driven software. These
devices include light pens, the mouse, touch screens, digitizers, etc.

Voice input Devices


Also known as voice recognition systems. They convert spoken words into electrical signals.

PROCESSING UNIT
The processing unit is referred to as the central processing unit (CPU) or simply the microprocessor. A
microprocessor is a tiny, enormously powerful, high-speed electronic brain etched on a single silicon
semiconductor chip, which contains the basic logic, storage, and arithmetic functions of a computer.
It receives and decodes instructions from input devices like keyboards and disks, then sends them over a
bus system consisting of microscopic etched conductive "wiring" to be processed by its arithmetic and
logic unit. The results are temporarily stored in memory cells and then released.
In 1970, Intel Corporation introduced the first dynamic RAM, which increased IC memory by a factor of
four. These early products identified Intel as an innovative young company. However, their next
product, the microprocessor, was more than successful, setting in motion an engineering feat that
dramatically altered the course of electronics.

First Generation (1971-73)


The microprocessors that were introduced from 1971 to 1973 were referred to as the first generation
systems.
In early 1971, Intel engineer Ted Hoff came up with an unnamed 4-bit central processing unit which
would become known as a "microprocessor". In mid 1971, Intel successfully completed the project.
Remarkably, the four-bit 4004 microprocessor, composed of 2,300 transistors etched on a tiny chip,
could execute 60,000 instructions per second. The Intel 4004 was the first commercially available single-
chip microprocessor in history. First-generation microprocessors processed their instructions serially, i.e.,
they fetched the instruction, decoded it, and then executed it.


Second Generation (1974-78)


The second generation marked the beginning of very efficient 8-bit microprocessors. Some of the
popular processors were:
i. Intel's 8080
ii. Motorola's 6800 and 6809
iii. Zilog's Z80

Third Generation (1979-80)


The third generation, introduced in 1979, was represented by Intel's 8086 and the Zilog Z8000, which
were 16-bit processors with minicomputer-like performance.

The hardware is programmable, consisting of a general-purpose arithmetic and logic unit able to
execute a small set of simple/basic functions selectable by control signals, and an instruction interpreter
which reads instruction codes one at a time and generates an appropriate sequence of control signals.
As computer technology evolved from one generation to the next, the size of this unit became
smaller and smaller while its processing capacity increased tremendously.
Every time the system became smaller, cheaper, and more powerful, new groups of users presented
themselves and new areas of application opened up.

Fourth Generation (1981-1995)


Microprocessors entered their fourth generation with designs surpassing a million transistors. This era
marked the beginning of 32-bit microprocessors. Intel introduced the 432, which was a bit problematic,
and then the Intel 80386 was launched.

Fifth Generation (1995- Till date)


Microprocessors in their fifth generation employed decoupled superscalar processing, and their designs
soon surpassed 10 million transistors. In this generation, PCs are a low-margin, high-volume business
dominated by a single microprocessor. In this age the emphasis is on introducing chips that carry on-chip
functionalities and on improvements in the speed of memory and I/O devices, along with the introduction
of 64-bit microprocessors. Intel leads the show here with the Pentium, Celeron, and more recently dual-
and quad-core processors working at up to 3.5 GHz.

Future of microprocessor
The future looks bright for the microprocessor: with constant advances in technology, it is improving
every year. We will soon see more miniature devices that are powerful, energy efficient, and will surely
blow your mind. On the other hand, microprocessor hardware improvements are becoming more and more
difficult to accomplish, as even Gordon Moore believes the exponential upward curve in microprocessor
hardware advancements "can't continue forever." The future winners are far from clear today, so it is too
early to predict the outcome.

The main elements of a processor.


Although the physical construction of different processors varies, they typically consist of three main
sections:
 Arithmetic logic unit
 Control unit
 Internal memory

(i) Arithmetic logic unit.


The arithmetic-logic unit (ALU) performs all arithmetic operations (addition, subtraction,
multiplication, and division) and logic operations. Logic operations test various conditions
encountered during processing and allow for different actions to be taken based on the results. The data
required to perform the arithmetic and logical functions are inputs from the designated CPU registers
and operands. The ALU relies on basic items to perform its operations. These include number
systems, data routing circuits (adders/subtracters), timing, instructions, operands, and registers.

(ii) Control Unit (C.U).


The control unit maintains order within the computer system and directs the flow of
traffic (operations) and data. The control unit selects one program statement at a time from
the program storage area, interprets the statement, and sends the appropriate electronic impulses
to the arithmetic-logic unit and storage section to cause them to carry out the instruction. The control
unit does not perform the actual processing operations on the data. Specifically, the control
unit manages the operations of the CPU, be it a single-chip microprocessor or a full-size mainframe.
Like a traffic director, it decides when to start and stop (control and timing), what to do (program
instructions), where to keep information (memory), and with what devices to communicate (I/O). It
controls the flow of all data entering and leaving the computer. It accomplishes this by
communicating or interfacing with the arithmetic-logic unit, memory, and I/O areas.

(iii) Internal storage


Primary storage (main memory): the primary storage section (also called internal storage, main
storage, main memory, or just memory) serves four purposes:
- To hold data transferred from an I/O device to the input storage area, where it remains until the
computer is ready to process it.
- To hold both the data being processed and the intermediate results of the arithmetic-logic operations.
This is a working storage area within the storage section, sometimes referred to as scratch-pad memory.
- To hold the processing results in an output storage area for transfer to an I/O device.
- To hold the program statements transferred from an I/O device, in the program storage area.

Each computer's CPU can have different cycles based on different instruction sets, but will be similar to
the following cycle:

1. Fetching the instruction


The next instruction is fetched from the memory address that is currently stored in the program counter
(PC), and stored in the instruction register (IR). At the end of the fetch operation, the PC points to the
next instruction that will be read at the next cycle.

2. Decode the instruction
The decoder interprets the instruction. During this cycle the instruction inside the IR (instruction
register) gets decoded.

3. Read the effective address

In the case of a memory instruction (direct or indirect), the execution phase occurs at the next clock pulse.
If the instruction has an indirect address, the effective address is read from main memory, and any
required data is fetched from main memory to be processed and then placed into data registers. If the
instruction is direct, nothing is done at this clock pulse. If this is an I/O instruction or a register
instruction, the operation is performed (executed) at this clock pulse.

4. Execute the instruction


The control unit of the CPU passes the decoded information as a sequence of control signals to the
relevant function units of the CPU to perform the actions required by the instruction such as reading
values from registers, passing them to the ALU to perform mathematical or logic functions on them, and
writing the result back to a register. If the ALU is involved, it sends a condition signal back to the CU.

The result generated by the operation is stored in main memory or sent to an output device. Based
on the condition of any feedback from the ALU, the program counter may be updated to a different address,
from which the next instruction will be fetched.
The cycle is then repeated.
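
To make the cycle concrete, the following minimal Python sketch (our own illustration; the four-instruction set and its encoding are invented for the example, not a real machine's) repeatedly fetches, decodes, and executes instructions from a small memory:

# A toy CPU: each memory word holds an (opcode, operand) pair.
memory = {
    0: ("LOAD", 7),     # ACC <- 7
    1: ("ADD", 5),      # ACC <- ACC + 5
    2: ("STORE", 100),  # memory[100] <- ACC
    3: ("HALT", None),
}
pc = 0    # program counter
acc = 0   # accumulator register

while True:
    opcode, operand = memory[pc]   # 1. fetch the instruction the PC points at
    pc += 1                        #    the PC now points at the next instruction
    if opcode == "LOAD":           # 2. decode ... 4. execute
        acc = operand
    elif opcode == "ADD":
        acc += operand
    elif opcode == "STORE":
        memory[operand] = acc
    elif opcode == "HALT":
        break                      # stop cycling

print(memory[100])  # 12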

Note in the diagram that the CU directs the CPU through a sequence of different states. The speed with
which the CPU cycles from state to state is governed by a device called the system clock. Clock speed is
often measured in gigahertz (GHz) where a gigahertz is one billion cycles per second. Thus, a 2.9 GHz
processor could execute 2.9 billion cycles in one second.
[Diagram: Arithmetic Logic Unit]


The CPU utilizes two types of memory, RAM and ROM. RAM, or random access memory, stores both
the instructions and data that the CPU acts on to accomplish some task using the operating cycle just
described. RAM is dynamic, meaning that as the task progresses, the contents of RAM change. This
allows RAM to store any results that the task might produce. RAM is volatile, meaning that its contents
are lost if power to the computer is interrupted or turned off.

On the other hand, ROM is neither dynamic nor volatile. Its contents are fixed during the manufacturing
process for the computer and can't be changed during processing. In addition, its contents are not lost
when the power to the computer is interrupted or turned off. Although ROM is not directly involved in
the normal CPU operating cycle described above, it does serve a very important function in that it
contains all the information necessary to start or restart the computer. Starting the computer when the
power is off is called booting the computer. Restarting the computer when it already has power turned
on is called rebooting the computer.

Information is moved between the CPU and memory across a connection called a bus. The amount of
information that can be moved to or from memory at one time is referred to as the bus width.

Microprocessors used in handheld devices


Today, handheld devices - including smartphones, tablets, and portable media players - have rather
powerful microprocessors, so much so that they can compete with a desktop computer. Processors these
days are also coming with more cores: initially processors were single-core, followed by dual-core,
quad-core, hexa-core, octa-core, and now even deca-core. Most processors nowadays are 64-bit.
With the Graphics Processing Unit (GPU) being included inside mobile processors, these devices can now
offer high-quality graphics, virtual reality capability, 3D capability, and 4K recording, and the improved
processor technology also means greater power efficiency.

Microprocessors used in general-purpose computing (regular desktops and laptops):

In this segment, Intel and AMD are the market leaders. Intel chips are said to be the best for gaming and
other predominantly single-threaded tasks.

Microprocessors used in high-performance computing (supercomputers):


It basically refers to aggregating computing power in a manner that delivers much higher performance
than one could get out of a typical desktop computer or workstation in order to solve large problems in
science, engineering, or business. So it is definitely more complex than a simple desktop computer.
These computers, or supercomputers as they are called, are computers with a higher level of
performance compared to a general-purpose computer. Supercomputers play an important role in
computational science, and are used for many computationally intensive tasks in areas such as quantum
mechanics, weather forecasting, climate research, oil and gas exploration, molecular modeling
(computing the structures and properties of chemical compounds, macromolecules, polymers and
crystals), and physical simulations (simulations of the early moments of the universe, airplane and
spacecraft aerodynamics, the detonation of nuclear weapons, and nuclear fusion).

OUTPUT UNIT
The output unit converts digital signals to human-readable form. These devices fall into two main categories:
 Hard copy output
 Soft copy output

Hard copy
Refers to information that has been recorded on a tangible medium such as paper or microfilm. The
principal hard copy output devices include:
 Printers
 Microfilm recorders
 Graphic plotters

Printer
A printer is a device capable of producing characters, symbols and graphics on paper.
There are many ways of classifying printers; the most common is based on the mode of printing.

There are two main categories of printers.


 Impact printer
 Non-impact printer

Impact printer
An impact printer forms print images by pressing an inked ribbon against the paper with a hammer-like
mechanism. It prints a single-color image, the color of the ribbon.
Examples include dot matrix and line printers, etc.

Non-impact printer
Non-impact printers form images without striking the paper, for example by spraying ink onto it. The ink
is contained in an electronic container called a cartridge. There are different sizes and designs of cartridge
for different types of printer; the cartridges are either black or color.
Examples include inkjet printers, laser printers, etc.

Microform recorder
A microform recorder reduces output to between 24 and 48 times smaller and then records it onto a
microform. Microforms are documents photographically reduced onto film.
A typical 16mm roll will hold the equivalent of 3,000 A4 pages. One typical microfiche will hold the
equivalent of about 98 A4 pages.

Graphic plotters
Graphic plotters are used in printing maps and structural and architectural designs.

Softcopy
Softcopy refers to output displayed on a computer screen.
The principal softcopy output devices are screens and voice output systems.

Computer screen
A computer screen is commonly known as a visual display unit (VDU) or monitor.
They fall into two categories:
- Monochrome monitor – displays only single-color images.
- Color monitor – displays multi-color images.

Voice output
The most common voice output devices are the small loudspeakers commonly fitted in desktop computers.

BACKING STORAGE
Backing storage is also known as secondary or auxiliary storage. It holds data and instructions until they
are deleted. The contents of backing storage are not directly accessed by the CPU; they must be copied to
main memory first.
Two main components constitute a secondary storage unit.
 Storage medium
 Read / write device

Commonly used technologies include.


 magnetic tape
 magnetic disk
 optical disk
 flash disk
These technologies were copied from the music industry, with modifications to suit computer use.

Magnetic tape
Magnetic tape is a narrow plastic strip coated with magnetic material, just like the tape in a tape recorder;
data is written to or read from the tape sequentially as it passes the magnetic heads.

Uses
Magnetic tapes are often used to make a copy of hard discs for back-up reasons. This is automatically
done overnight on the KLB network and the tapes are kept in a safe place away from the server.

Advantages

Magnetic tape is relatively cheap and tape cassettes can store very large quantities of data.
Disadvantages
Accessing data is very slow and you cannot go directly to an item of data on the tape as you can with a
disc. It is necessary to start at the beginning of the tape and search for the data as the tape goes past the
heads (serial access).

Magnetic Disk
The main category of magnetic disk is the hard disk.

Hard discs
The hard disc is the principal storage of a computer system and is always assembled in the system unit.
Data is stored by magnetizing the surface of flat, circular plates called platters. The platters rotate
constantly at very high speed. A read/write head floats on a cushion of air a fraction of a millimetre above
the surface of the disc. The drive is inside a sealed unit because even a speck of dust could cause the
heads to crash.
Programs and data are held on the disc in blocks formed by tracks and sectors. These are created when
the hard disc is first formatted, and this must take place before the disc can be used. Discs are usually
supplied pre-formatted.
For a drive to read data from a disc, the read/write head must move in or out to align with the correct
track (the time to do this is called the seek time). Then it must wait until the correct sector approaches
the head.
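
As a rough worked example of the rotational part of that wait (our own sketch; the 7,200 RPM spindle speed is an assumed, typical figure, not taken from these notes), the average delay is half a revolution:

# Average rotational latency: on average the head waits half a revolution.
rpm = 7200                                # assumed typical spindle speed
seconds_per_revolution = 60 / rpm
average_latency_ms = (seconds_per_revolution / 2) * 1000
print(round(average_latency_ms, 2))       # about 4.17 ms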

Uses
The hard disc is usually the main backing storage medium for a typical computer or server. It is
used to store:
- The operating system (e.g. Microsoft® Windows)
- Applications software (e.g. word-processor, database, spreadsheet, etc.)
- Files such as documents, music, video, etc.
A typical home/school microcomputer would have a disc capacity of up to 80 gigabytes.

Advantages
Very fast access to data. Data can be read directly from any part of the hard disc (random access). The
access speed is about 1000 KB per second.

Disadvantages
None really! However, few home users back up the data on their home computer hard drive, so it can be a
real disaster when the drive eventually fails.

Optical discs
The data is read by a laser beam reflecting or not reflecting from the disc surface.
Like a floppy disc, a CD-ROM only starts spinning when requested and it has to spin up to the correct
speed each time it is accessed. It is much faster to access than a floppy but it is currently slower than a
hard disc.

Uses
Most software programs are now sold on CD-ROM.
Advantages
CD-ROMs hold large quantities of data (650 MB).
They are relatively tough, as long as the surface does not get too scratched.

Chapter three
SOFTWARE
Software is a set of logically related programs which control the computer hardware.
A program is a group of logically related instructions which directs a computer on how to perform very
specific processing tasks.
The instructions tell the computer what to do step by step, and the sequence of instructions determines
the order in which they are executed.

Software falls into two major categories:

 System software
 Application software

SYSTEM SOFTWARE
System software is the software that helps the computer system to function and perform all its tasks.
It is a set of programs designed to enable the computer to manage its own resources (i.e.
hardware and application software). It includes the operating system, which manages the hardware and
software resources of the system, as well as the various utility programs that help to maintain and
optimize the system.
System software jobs typically involve working with these different components to ensure they function
correctly and efficiently. This can include troubleshooting and resolving issues and developing new
features and enhancements.
There are several different types of system software that we will look at in more detail very shortly:
 Operating Systems
 Utility programs
 Library programs are a compiled collection of subroutines
 Translator software (Compiler, Assembler, Interpreter)

Operating system
An operating system is a collection of programs that make the computer hardware conveniently available
to the user and also hide the complexities of the computer's operation. The operating system (such as
Windows 7 or Linux) interprets commands issued by application software (e.g. word processors and
spreadsheets). The operating system is also an interface between the application software and the computer.
Without the operating system, the application programs would be unable to communicate with the computer.
Since system software runs at the most basic level of your computer, it is called "low-level" software. It
generates the user interface and allows the operating system to interact with the hardware. Fortunately,
you don't have to worry about what the system software is doing since it just runs in the background. It's
nice to think you are working at a "high-level" anyway

Functions of operating system


An operating system is sometimes called an executive or supervisor. It performs the following functions.

User Interface.
The operating system provides an interface between the user and the computer system.

Job management.
It controls the running of programs: which one gets executed first, and which next.
In a small computer, the operating system responds to interactive commands from the user and loads the
requested application program into memory for execution. On larger computers, programs are run in shifts.

Task management
Task management controls the simultaneous execution of programs; this is referred to as multitasking.
Operating systems of this category have the ability to prioritize programs so that one job gets done before
another.
In order to provide users at terminals with the fastest response time, batch programs can be put on the
lowest priority and interactive programs given the highest priority.
Multitasking is accomplished by executing instructions for one function while data is coming into or
going out of the computer for another. Large computers are designed to overlap these operations, and data
can move simultaneously in and out of the computer through separate channels, with the operating
system governing these actions.
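
To illustrate just the prioritization idea (a minimal Python sketch of our own; real operating-system schedulers are far more elaborate), jobs can be held in a priority queue so that interactive work is always dispatched before batch work:

import heapq

# Lower number = higher priority, so interactive jobs outrank batch jobs.
jobs = []
heapq.heappush(jobs, (9, "batch: payroll run"))
heapq.heappush(jobs, (1, "interactive: handle keystroke"))
heapq.heappush(jobs, (1, "interactive: refresh terminal"))

while jobs:
    priority, job = heapq.heappop(jobs)  # always take the highest-priority job
    print(priority, job)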

Data Management
The operating system keeps track of data on the disk; the application program does not know where it is
or how to get it. When a program is ready to accept data, it signals the operating system, which finds the
data and delivers it to the program. Conversely, when the program is ready to output, the operating system
transfers the data from the program onto the available space on disk.

Device Management
The operating system controls input from and output to the peripheral devices.
It is responsible for providing control and management of all devices, not just disk drives. When a new
type of peripheral is added to the computer, the operating system is updated with a new driver for that
device. The driver contains the specific instructions necessary to run the device. The operating system
calls the drivers for input and output, and the drivers talk to the hardware.

Categories of operating system


Operating systems fall into two major categories:
- Single-user operating systems
- Multi-user operating systems

Single user
Supports one user at a time, e.g. on a PC.
This category can further be classified into:
- Single user single tasking
- Single user multitasking

Single user single tasking


Supports one task at a time in main memory.
Examples: PC DOS, MS-DOS.

Single user Multitasking.


Allows a single user to handle several tasks simultaneously, so that more than one application program
can be in RAM at once.
Examples: Windows operating systems.

Multi-user operating system.
Allows a number of users to use the computer system simultaneously. This computer system is called
a host computer.
Each user is connected to the host computer via a user station or terminal. The operating system allows
the users to share hardware devices, data and software.
Examples: Windows 98/2000/NT, Unix, Novell, etc.

Utility programs
Utility programs are small, powerful programs with limited capability; they are usually operated by the
user to maintain the smooth running of the computer system. Uses include file management, diagnosing
problems, and finding out information about the computer. Notable examples of utility programs
include copy, paste, delete, file searching, the disk defragmenter, and disk cleanup. There are also
other types that can be installed separately from the operating system.

Library Programs
Library programs are compiled libraries of commonly-used routines. On a Windows system they usually
carry the file extension .dll and are often referred to as run-time libraries, because they are called upon by
running programs when they are needed. When you program using a run-time library, you typically add a
reference to it either in your code or through the IDE in which you are programming.
Some library programs are provided within operating systems like Windows or along with development
tools like Visual Studio. For example, it is possible to download and use a library of routines that can be
used with Windows Media Player. This includes things like making playlists, functions and procedures
for accessing and manipulating the music library (which is a binary file) and playback routines.
Using library programs saves time when programming. It also allows the programmer to interact with
proprietary software without having access to its source code.
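
As a small illustration of that time saving (a Python sketch of our own, with Python's standard library standing in for a run-time library):

import math
import statistics

# Calling tested library routines instead of re-implementing them.
print(math.sqrt(2))                   # square root from the math library
print(statistics.mean([3, 5, 7, 9]))  # 6 - the average, from the statistics library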

Language Translators
Whatever language or type of language we use to write our programs, they need to be in machine code
in order to be executed by the computer. There are three main categories of translator used:

Assembler

An assembler is a program that translates the mnemonic codes used in assembly language into the bit
patterns that represent machine operations. Assembly language has a one-to-one equivalence with
machine code: each assembly statement can be converted into a single machine operation.
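
That one-to-one mapping can be sketched in a few lines of Python (our own toy example; the mnemonics and opcode values are invented, not a real instruction set):

# Each mnemonic corresponds to exactly one machine opcode (invented encoding).
OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STO": 0b0011, "HALT": 0b1111}

def assemble(statement):
    # Translate one assembly statement into one 8-bit machine word.
    parts = statement.split()
    opcode = OPCODES[parts[0]]
    operand = int(parts[1]) if len(parts) > 1 else 0
    return (opcode << 4) | operand   # opcode in the high bits, operand in the low

for line in ["LOAD 7", "ADD 5", "STO 9", "HALT"]:
    print(line, "->", format(assemble(line), "08b"))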

Compiler

A compiler turns the source code that you write in a high-level language into object code (machine
code) that can be executed by the computer.

The compiler is a more complex beast than the assembler. It may require several machine operations to
represent a single high-level language statement. As a result, compiling may well be a lengthy process
with very large programs.

Interpreter
Interpreters translate the source code at run-time. The interpreter translates statements one-at-a-time as
the program is executed.
Interpreters are often used to execute high-level language programs whilst they are being developed
since this can be quicker than compiling the entire program. The program would be compiled when it is
complete and ready to be released.

Interpreters are also used with high-level scripting languages like PHP, Javascript and many more.
These instructions are not compiled and have to be interpreted either by the browser (in the case of
Javascript) or by interpreters on the server (in the case of PHP).
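
The statement-at-a-time behaviour can be pictured with a toy interpreter (a minimal Python sketch of our own; the two-statement 'language' is invented for illustration):

# A toy interpreter: each statement is decoded and executed as it is reached,
# with no separate compilation step.
def interpret(program):
    variables = {}
    for statement in program:
        op, *args = statement.split()
        if op == "SET":                # e.g. SET x 5
            variables[args[0]] = int(args[1])
        elif op == "PRINT":            # e.g. PRINT x
            print(variables[args[0]])

interpret(["SET x 5", "SET y 7", "PRINT x", "PRINT y"])  # prints 5, then 7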

APPLICATION SOFTWARE
Application software is the software that allows the computer to be applied to a particular problem.
Application software would not normally be concerned with managing the resources of the computer,
such as memory, time spent processing jobs, or dealing with input and output. Application software is
used for a particular purpose or application, such as order processing, payroll, stock control, games,
creating websites, image editing, word-processing, and so on.

Software acquisition
There are many approaches to acquiring application software; the most popular classification is based on
the mode of acquisition. Thus there are two main categories:
- Tailor-made application software
- Purchased application software

a) Tailor-made Application Software


This category of application software is developed by an organization's programmers or by hired
consultants/companies according to user requirements or specifications, e.g.:
- Management information systems
- Expert systems

b) Purchased Application Software.


This software is designed and developed by software companies for various targeted fields. Once
developed, it is marketed to potential users.
Such software includes:
 Office applications
 Statistical applications
 Publishing applications
 Accounting applications
 Engineering Applications etc .

Office Applications
These enable the user to handle office tasks, e.g. word processors, spreadsheets, database managers, etc.
Word processor – enables the user to create, edit and produce documents. Example: Ms Word.
Spreadsheet – an electronic worksheet which enables the user to capture data and develop specialized
reports. Example: Ms Excel.
Database management – allows the user to store large amounts of records and manipulate them with
great flexibility to produce meaningful management reports. Example: Ms Access.
Presentation – enables the user to make presentations. Example: Ms PowerPoint.

Software Developments
Programming Languages
Programming languages are the primary tools for creating software. The concept of language
generations, sometimes called levels, is closely connected to the advances in technology that brought
about computer generations. So far, five generations of programming languages have been defined. These
range from machine-level languages (1GL) to the languages necessary for AI and neural networks (5GL). A
brief introduction to each of the five generations is given below.

First Generation Programming Language


The first generation of programming languages refers to machine language. This is the lowest level of
programming language. All commands and data values are given in ones and zeros, corresponding to
the "on" and "off" electrical states in a computer.
Advantage
 These languages talk directly to the hardware, i.e. they are directly executed.
Disadvantage
 These programs are not easy for humans to read, write, or debug.
 Programs are not portable.

Second Generation Programming Language


The second generation of languages is also low-level and is known as assembly language.
Assembly languages are symbolic programming languages that use symbolic notation to represent
machine-language instructions. Symbolic programming languages are strongly connected to machine
language and to the internal architecture of the computer system on which they are used. They are called
low-level languages because they are so closely related to the machine. Nearly all computer systems
have an assembly language available for use.
Assembly language uses mnemonic codes, or easy-to-remember abbreviations, rather than numbers.
Examples of these codes include A for add, CMP for compare, MP for multiply, and STO for storing
information into memory.
Advantage
 The programming process became easier with the development of assembly language, a
language that is logically equivalent to machine language but is easier for people to read, write,
and understand.
 Programs can be very efficient in terms of execution time and main memory usage. Nearly every
instruction is written on a one-for-one basis with machine language.
Disadvantage
 Programs are not portable

Third Generation Programming Language


A programming language in which the program statements are not closely related to the internal
characteristics of the computer is called a high-level language.
High-level languages fall somewhere between natural languages and machine languages, and were
developed to make the programming process more efficient. High level Languages made it possible for
scientists and business people to write programs using familiar terms instead of obscure machine
instructions.

Advantage
 The programming process became easier because the programmers concentrated on the solution
rather than the coding
 Programs are portable

Disadvantage
 Programs execute less efficiently than equivalent low-level programs.
 A translator is needed to translate the symbolic statements of a high-level language into
computer-executable machine language.
Fourth Generation Programming Language
With each generation, programming languages have become easier to use and more like natural
languages. However, fourth-generation languages (4GLs) seem to sever connections with the prior
generations because they are basically nonprocedural. In a nonprocedural language, users define only
what they want the computer to do, without supplying all the details of how it is to be done (see the
sketch after the list below). General characteristics of 4GLs are:
- Closer to human languages
- Portable
- Database supportive
- Simple and requires less effort than 3GL
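As a minimal illustration of the nonprocedural style, the sketch below uses SQL, a classic 4GL, through Python's built-in sqlite3 module; the table and data are hypothetical.

    # A minimal sketch of the nonprocedural (declarative) 4GL style:
    # the SQL query states WHAT we want; the database engine decides HOW.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE staff (name TEXT, dept TEXT, salary REAL)")
    conn.executemany("INSERT INTO staff VALUES (?, ?, ?)",
                     [("Ann", "IT", 900.0), ("Ben", "HR", 700.0), ("Cy", "IT", 800.0)])

    # No loops or search logic are written by the programmer.
    for (name,) in conn.execute(
            "SELECT name FROM staff WHERE dept = 'IT' ORDER BY salary DESC"):
        print(name)  # Ann, then Cy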

Advantage
 The programming process becomes much easier because much of the detailed coding is generated
automatically.
 Programs are portable

Disadvantage
 The programmer has less control over execution details, so programs can be less efficient.
 A translator is needed to translate the statements into computer-executable machine
language.
Object-Oriented Languages
In object-oriented programming, a program is no longer a series of instructions, but a collection of
objects. These objects contain both data and instructions, are assigned to classes, and can perform
specific tasks. With this approach, programmers can build programs from pre-existing objects and can
use features from one program in another. This results in faster development time, reduced
maintenance costs, and improved flexibility for future revisions. Some examples of object-oriented
languages are: C++, Java, and Ada.
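The sketch below, in Python, illustrates the idea of an object as a bundle of data and instructions assigned to a class; the class and its behavior are invented for illustration.

    # A minimal sketch of object-oriented programming: an object bundles
    # data (attributes) with instructions (methods) in one class.
    class BankAccount:
        def __init__(self, owner, balance=0.0):
            self.owner = owner      # data
            self.balance = balance  # data

        def deposit(self, amount):  # instructions that act on the data
            self.balance += amount
            return self.balance

    # Objects are created from the class and can be reused across programs.
    account = BankAccount("Ann")
    print(account.deposit(500.0))  # 500.0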
Fifth Generation Natural Languages
Natural language programming is a subfield of AI that deals with the ability of computers to understand
and process human language. It is an interdisciplinary field that combines linguistics, computer science,
and artificial intelligence.
These are the languages used for writing programs for artificial intelligence, neural networks, etc.
Natural language processing (NLP) is the ability of a computer program to understand human language
as it's spoken and written -- referred to as natural language.

NLP techniques enable machines to understand, interpret, and generate human language, making it
possible to process and analyze vast amounts of textual data. Python, being a versatile and powerful
programming language, has emerged as a popular choice for NLP tasks due to its rich ecosystem of
libraries and frameworks.
NLP is used in a variety of applications, including machine translation, chatbots, and voice recognition.

Popular programming languages for natural language processing

 Python. Python is considered the Swiss Army knife of programming because of its versatility.
 Java. Java is another commonly used programming language in the field of natural language
processing.
 R. While R is best known for statistical learning, it is also widely used for natural language
processing.

These languages point to the future of programming.
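As a minimal taste of NLP in Python (the language named above), the sketch below tokenizes a sentence and counts word frequencies using only the standard library; real NLP systems build far richer analyses on top of steps like these.

    # A minimal sketch of a basic NLP step: tokenize text into words
    # and count their frequencies, using only the standard library.
    import re
    from collections import Counter

    def tokenize(text):
        # Lowercase the text and extract alphabetic word tokens.
        return re.findall(r"[a-z']+", text.lower())

    sentence = ("NLP enables machines to understand, interpret, "
                "and generate human language.")
    tokens = tokenize(sentence)
    print(tokens[:4])                      # ['nlp', 'enables', 'machines', 'to']
    print(Counter(tokens).most_common(2))  # most frequent tokens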

SOFTWARE DEVELOPMENT
Software development is the process of computer programming, documenting, testing, and bug fixing
that results in a software product. In a broader sense, software development includes everything involved
from the conception of the desired software through to its final manifestation. Therefore,
software development may include research, new development, prototyping, modification, reuse, re-
engineering, maintenance, or any other activities that result in software products.
Software can be developed for a variety of purposes, the three most common being to meet specific
needs of a specific client/business (the case with custom software), to meet a perceived need of some set
of potential users (the case with commercial and open source software), or for personal use (e.g. a
scientist may write software to automate a task).

Evolution of Software Development


Software development, the process of creating computer programs to perform various tasks, has evolved
significantly since its inception. This evolution has been driven by technological advancements,
changing needs, and the growing complexity of the digital world. Let’s explore the key stages in the
evolution of software development

1. The Pioneering Days (1940s-1950s)


In the early days of computing, software development was a manual and highly technical process.
Computer programmers wrote machine-level instructions, dealing directly with the hardware. This era
was marked by:
Key Points:
Manual Coding: In the beginning, software was crafted through manual coding, where programmers
wrote machine-level instructions by hand.
Limited Hardware: Hardware limitations forced developers to write efficient and compact code.

Use: Software development was in its infancy, primarily used for scientific and military purposes.

Applications:
 Scientific calculations and simulations.
 Military and defense systems.
 Business data processing.

2. The Birth of High-Level Languages (1950s-1960s)
Grace Hopper developed the first compiler, which translated symbolic code into machine code, making
programming more accessible.
The introduction of high-level programming languages like Fortran, COBOL, and LISP revolutionized
software development. Key developments included:
Key Points:
High-Level Languages: The introduction of high-level programming languages like Fortran, COBOL,
and BASIC made coding more accessible.
Compiler and Interpreter: Compilers and interpreters translated high-level code into machine code,
simplifying the coding process.

Use: Business applications and database management systems gained prominence.

Applications:
 Commercial data processing.
 Early database management systems.
 Development of operating systems.

3. The Personal Computer Revolution (1970s-1980s)


The C programming language, developed by Dennis Ritchie at Bell Labs, revolutionized software
development, leading to the creation of Unix.
Microsoft’s Disk Operating System (MS-DOS) became the standard operating system for personal
computers.
The term “virus” was coined to describe self-replicating code, a new challenge for software developers.

The advent of personal computers brought software development to a broader audience. This era
witnessed:
Key Points:
Personal Computers: The advent of personal computers brought software development to a broader
audience.
Graphical User Interfaces (GUI): Graphical interfaces like Windows and Macintosh OS improved user
experience.
Use: Expansion into home computing, gaming, and word processing.
Applications:
Word processing software (e.g., MS Word).
Early PC games (e.g., Pong and Pac-Man).
Development of GUI-based operating systems.

4. The Internet Age (1990s-2000s)


The World Wide Web transformed software into a global, interconnected entity. Key developments
included:
Key Points:
World Wide Web: The birth of the World Wide Web transformed software into a global, interconnected
entity.
Client-Server Architecture: Client-server models allowed users to interact with web applications.
Use: E-commerce, online communication, and web-based applications.
Applications:
Development of web browsers (e.g., Netscape Navigator).
E-commerce platforms (e.g., Amazon and eBay).
Email and instant messaging services.

5. Milestones of the 2000s
2001: Apple introduced Mac OS X, combining a Unix-based architecture with user-friendly
interfaces, influencing modern operating systems.
2008: The release of the Apple App Store marked the start of the mobile app era, transforming software
development.
2009: Bitcoin, a decentralized digital currency, introduced blockchain technology, opening up new
possibilities for software applications.

Evolution of Software Development in 2010s:


2010: DevOps practices became widespread, promoting collaboration between software development
and IT operations.
2013: Docker was released, popularizing containerization and changing how applications are developed
and deployed.
2015: The term “Artificial Intelligence” gained widespread attention as machine learning and AI became
integral to software development.

Artificial Intelligence and Automation:


AI will play an increasingly significant role in software development, from code generation and
debugging to automating routine tasks.
Automated testing and quality assurance processes will become more advanced, improving code
reliability and reducing errors
Cybersecurity and Privacy Emphasis:
With an increasing number of cyber threats, software development will place a strong focus on security
and privacy, including encryption and identity management.
Edge Computing and IoT:
Software development will increasingly focus on edge computing to process data closer to where it’s
generated, supporting real-time applications in IoT, autonomous vehicles, and more.

Evolution of Software Development in 2020s:


2020: The COVID-19 pandemic accelerated the need for remote work and digital solutions, driving
innovation in software development.
2022: Quantum computing advanced significantly, offering new opportunities and challenges for
software development.
The evolution of software development has been a dynamic journey marked by numerous technological
breakthroughs and paradigm shifts. It continues to shape our modern world, offering new possibilities
and challenges with each passing year.

The Rise of Mobile and Apps (2000s-Present)


The proliferation of smartphones and app stores introduced a new era of software development.
Significant developments included:
Key Points:
Mobile Devices: The rise of smartphones and tablets led to a new era of software development.
App Stores: App stores, such as the Apple App Store and Google Play, centralized distribution.
Use: Mobile apps for various purposes, from social networking to navigation.
Applications:
Mobile gaming apps (e.g., Angry Birds).
Social media applications (e.g., Facebook and Instagram).
Navigation and productivity apps (e.g., Google Maps and Microsoft Office).

6. Cloud Computing and AI (Present and Beyond)


The present era is characterized by cloud computing and the integration of artificial intelligence (AI)
into software development:
Key Points:
Cloud Computing: Cloud platforms offer scalable and accessible resources for software development.
Artificial Intelligence: AI and machine learning are integrated into software, enabling automation and
intelligent decision-making.
Use: Cloud-based services, AI-driven applications, and IoT

Applications:
Cloud-based storage and computing (e.g., Amazon Web Services).
AI-powered virtual assistants (e.g., Siri and Alexa).
Internet of Things (IoT) applications for smart homes and cities.
(Figure: year-wise evolution of software development)

Software Development Jobs


Many jobs that use software development skills include software developers, engineers, and system
administrators. These professionals use their skills to develop and maintain software applications, and
they also use their skills to troubleshoot and fix software issues.

A software developer job involves designing, creating, testing, and maintaining software applications.
They may work in various industries, including computer science, engineering, information technology
and business.


Chapter Three
DATA COMMUNICATION
Data communication is the process of transferring data from one place to another or between two
locations. It allows electronic and digital data to move between two networks, no matter where the two
are located geographically.
The fundamental purpose of data communications is to exchange information between users' computers,
terminals and applications programs. In its simplest form data communications takes place between two
devices that are directly connected by some form of point-to-point transmission medium.

Computer network
Computer networking refers to interconnected computing devices that can exchange data and share
resources with each other. These networked devices use a system of rules, called communications
protocols, to transmit information over physical or wireless technologies.
Computer network is built with two basic blocks: nodes or network devices and links. The links connect
two or more nodes with each other. The way these links carry the information is defined by
communication protocols.
Objectives of Deploying a Computer Network
a) Resource sharing
A network allows data and hardware to be accessible to every pertinent user, across departments,
geographies, and time zones. It facilitates sharing of network resources such as data, hardware
facilities, and software.
b) Resource availability
A network ensures that resources are not trapped in inaccessible silos and are available from multiple
points.
c) Performance management
When one or more processors are added to the network, the system's overall performance improves
and growth is accommodated.
d) Cost savings
Since networks enable employees to access information in seconds, they save operational time and,
subsequently, costs. Centralized network administration also means that fewer investments need to
be made in IT support.
e) Increased storage capacity
Network-attached storage devices are a boon for employees who work with high volumes of data.
With businesses seeing record levels of customer data flowing into their systems, the ability to
increase storage capacity is necessary in today’s world.
f) Streamlined collaboration & communication
Networks have a major impact on the day-to-day functioning of a company. Employees can share
files, view each other’s work, sync their calendars, and exchange ideas more effectively.
g) Reduction of errors
Networks reduce errors by ensuring that all involved parties acquire information from a single
source, even if they are viewing it from different locations.
h) Secured remote access
A secure network ensures that users have a safe way of accessing and working on sensitive data,
even when they’re away from the company premises.


Key components of computer Network


 Network Devices
 Communication Link
 Communication Protocols
 Network Security
 Software

1. Network Devices
Network devices or nodes are computing devices that need to be linked in the network. Some network
devices include:
Computers, mobiles, and other consumer devices: These are end devices that users directly and
frequently access. For example, an email originates from the mailing application on a laptop or mobile
phone.

 Terminal
Also referred to as a workstation or remote site, a terminal is used for accessing network resources.
Terminals fall into three main categories:
Dumb terminal – Used for input and retrieval only, e.g. an ATM.
Smart terminal – Used for input, retrieval and limited processing.
Intelligent terminal – Used for input, retrieval, limited processing and storage of data.

 Servers: These are application or storage servers where the main computation and data
storage occur. A server handles processing requests from terminals and stores application
programs and computer communication programs. Computer systems
used as “host” computers vary considerably in terms of size and capability.

 Routers: Routing is the process of selecting the network path through which the data
packets traverse. Routers are devices that forward these packets between networks to
ultimately reach the destination. They add efficiency to large networks.
 A hub
Is a device into which you can connect all devices on a network so that they can talk to each
other. Because it broadcasts to every port, it can add delays to your network.
 Switch
Is used to connect all devices on a home network so that they can talk to each other. Switches
regulate traffic, providing more efficient traffic flow.
 A bridge
Is used to allow traffic from one network segment to the other. When network segments are
combined into a single large network, paths exist between the individual network segments.
These paths are called routes, and devices like routers and bridges keep tables which define how
to reach a particular computer in the network.
 Repeaters: are electronic devices that receive network signals and clean or strengthen
them. A hub is a repeater with multiple ports. A switch is a multi-port bridge:
multiple data cables can be plugged into a switch to enable communication with multiple
network devices.
 Gateways: Gateways are hardware devices that act as ‘gates’ between two distinct
networks. They can be firewalls, routers, or servers.
 Modem
Modem accepts digital signals and converts them to analog and vice versa.
To allow digital data to be transmitted over telephone lines, the data is converted (i.e. modulated) to
analog signals and transmitted; upon reaching its destination, the analog signal is demodulated
back to digital signals.
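The sketch below illustrates the modulation idea in Python: digital bits are mapped to analog tones (frequency-shift keying). The frequencies and rates used are illustrative assumptions, not those of any particular modem standard.

    # A minimal sketch of modulation: map each bit to an audio tone
    # (frequency-shift keying). All numbers here are illustrative.
    import math

    def modulate(bits, f0=1200, f1=2200, sample_rate=8000, baud=400):
        samples = []
        per_bit = sample_rate // baud  # samples used to carry one bit
        for bit in bits:
            freq = f1 if bit else f0   # choose the tone for this bit
            for n in range(per_bit):
                samples.append(math.sin(2 * math.pi * freq * n / sample_rate))
        return samples

    signal = modulate([1, 0, 1, 1])
    print(len(signal), "analog samples represent 4 bits")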

2. Communication Links
Is a facility by which data is transmitted between locations in a computer network. The channel may be
one or a combination of transmission media below.

Links are the transmission media which can be of two types:


 Wired: Examples of wired technologies used in networks include coaxial cables, phone lines,
twisted-pair cabling, and optical fibers. Optical fibers carry pulses of light to represent data.
 Wireless: Network connections can also be established through radio or other electromagnetic
signals. This kind of transmission is called ‘wireless’. The most common examples of wireless
links include communication satellites, cellular networks, and radio and technology spread
spectrums.

Wired
 Twisted-pair cable
 Coaxial cable
 Fiber optic cable
The already established telephone networks, built largely on twisted-pair cable, permit data to be transmitted over the network.

Coaxial cable
Consists of a hollow outer conductor which surrounds a single inner conductor, with insulation
separating the two conductors.
Permits high-speed data transmission with minimal signal distortion.

Fiber optic cable


Digital signals are converted to light and transmitted along the cable.

Advantages
 Fiber optic cables have much lower error rates than telephone cables.
 Transmits data far faster than microwave systems.
 It is resistant to illegal data interception (tapping).
 The cable material is relatively cheap.

WIRELESS TECHNOLOGIES
Wireless networks provide a convenient wireless connection without cables using radio waves to
transmit data between devices.
Wireless networks have become integral to our daily lives, enabling us to access the internet, connect
devices, and communicate wirelessly.
From the convenience of Wi-Fi in our homes to the deployment of wireless networks in various
industries, wireless technology has revolutionized how we stay connected.

Some of the Wireless Technologies include:


a. Bluetooth and BLE (Bluetooth Low Energy)
Bluetooth and BLE are used for everything from fitness and medical wearables like smartwatches to
smart home devices like home security systems, where data is communicated to smartphones. They
work quite effectively with very short-range communications.
Bluetooth technology enables short-range wireless communication between devices such as
smartphones, laptops, and tablets. It uses low-power radio waves to exchange data. Bluetooth is a
standard used for short-range wireless connections between smartphones and peripherals like wireless
headphones, earbuds, and speakers.

b. WiFi
WiFi has played a critical role in providing high-throughput data transfer in homes and for enterprises
— it’s another well-known IoT wireless technology. It can be quite effective in the right situations,
though it has significant limitations with scalability, coverage, and high power consumption.
The high energy requirements often make WiFi a poor solution for large networks with battery-operated
sensors, such as smart buildings and industrial use. Instead, it’s more effective with devices like smart
home appliances. The latest WiFi technology, WiFi 6, does offer improved bandwidth and speed, though
it’s still behind other available options. And it carries security risks that other options don’t.

c. Zigbee
Zigbee is a leading wireless standard behind IoT devices like smart home equipment, consumer
electronics, healthcare gear, and industrial equipment.
Zigbee is a wireless technology specifically designed for low-power, low-data-rate applications. It is
commonly used in smart home devices and industrial settings.

e. LPWAN (Cat-M1/NB-IoT)
Low power wide area networks (LPWAN) provide long-range communication using small, inexpensive
batteries. This family of technologies is ideal for supporting large-scale IoT networks where a
significant range is required. However, LPWANs can only send small blocks of data at a low rate.
LPWANs are ideally suited for use cases that don’t require time sensitivity or high bandwidth, like a
water meter for example. They can be quite effective for asset tracking in a manufacturing facility,
facility management, and environmental monitoring. Keep in mind that standardization is important to
ensure the network’s security, interoperability, and reliability.

f. LoRaWAN (Low Power Wide Area Network)


LoRaWAN is a powerful and emerging technology. It’s similar to Bluetooth, but it offers a longer range
for small data packets with low power consumption. LoRaWAN manages the communication
frequencies, power, and data rate for all connected devices. So, LoRaWAN sensors communicate to a
cellular gateway to send data to the cloud.

g. NFC
Near-field communication (NFC) is a short-range wireless connectivity technology that uses magnetic
field induction to enable communication between devices when they're touched together or brought
within a few centimeters of each other. This includes authenticating credit cards, enabling physical
access, transferring small files and jumpstarting more capable wireless links.

h. RFID
Radio Frequency Identification (RFID) uses radio frequency signals to track and monitor objects and
assets efficiently and accurately. RFID systems consist of tags, equipped with unique identification
numbers, that can be attached to or embedded in objects like credential badges and wearables for
sporting events, parking tags to hang in automobiles, loyalty cards, and warehousing labels. RFID tags
exchange information wirelessly but do not connect to the internet directly. RFID technology requires a
gateway and cellular connectivity to send data to a cloud platform.
RFID is a wireless technology that uses radio waves to identify and track objects. It consists of a tag
with a unique identifier and a reader that captures the tag's data.

i. Cellular
One of the best-known wireless technology is cellular, particularly in the consumer mobile market. A
cellular network is a system of radio waves dispersed across land in the shape of cells, with a base
station permanently fixed in each cell. These cells work together to provide greater geographic radio
coverage.
Every base station is connected to the mobile switching centre to create a call and mobility network by
connecting mobile phones to wide area networks.

Types of Cellular Networks


Cellular networks are an essential component of contemporary telecommunications, giving mobile
devices wireless connectivity. Different standards and technology distinguish the many types of cellular
networks. Several important kinds are as follows:

First Generation (1G): Introduced in the 1980s, the first generation of cellular networks primarily
offered analogue voice communication.

Second Generation (2G): These networks brought text messaging (SMS) and better audio quality,
signaling the shift to digital technologies.

Third Generation (3G): These networks significantly increased data transmission speeds, making
multimedia apps, video calling, and mobile internet access possible.

Fourth Generation (4G): These networks are intended to offer reduced latency, increased support for
multimedia applications, and quicker data transfer rates. For 4G networks, Long-Term Evolution (LTE)
is a standard.

Fifth Generation (5G): The newest Generation of cellular technology, known as 5G (Fifth Generation)
networks, are intended to offer much higher data transmission speeds, lower latency, and support for a
large number of connected devices. To accomplish these improvements, 5G uses various frequency
bands, including millimeter-wave frequencies.

Sixth Generation 6G: Although it is still at the conceptual stage, 6G is anticipated to deliver higher data
speeds, reduced latency, and additional features not seen in 5G. Advanced artificial intelligence and
terahertz frequencies are two possible technological components.

j. Microwave Link
Microwave means very short wave. The microwave frequency spectrum runs from 1 GHz to 30 GHz.
Lower frequency bands are congested, hence the demand for point-to-point microwave communication.
A Microwave Link in Electronic Communication performs the same functions as a copper or optic fiber
cable, but in a different manner, by using point-to-point microwave transmission between repeaters.

A Microwave Link in Electronic Communication terminal has a number of similarities to a coaxial cable
terminal. Where a cable system uses a number of coaxial cable pairs, a Microwave Link in Electronic
Communication will use a number of carriers at various frequencies within the bandwidth allocated to
the system. The effect is much the same, and once again a spare carrier is used as a “protection” bearer
in case one of the working bearers fails. Finally, there are interconnections at the terminal to other
microwave or cable systems, local or trunk.

The antennas most frequently used are those with parabolic reflectors. Hoghorn antennas are preferred
for high-density links, since they are broadband and low-noise. They also lend themselves to so-called
frequency reuse, by means of separation of signals through vertical and horizontal polarization.

The towers used for Microwave Link in Electronic Communication range in height up to about 25 m,
depending on the terrain, length of that particular link and location of the tower itself. Such link
repeaters are unattended, and, unlike coaxial cables where direct current is fed down the cable, repeaters
must have their own power supplies. The 200 to 300 W of dc power required by a link is generally
provided by a battery. In turn, the power is replenished by a generator, which may be diesel, wind-
driven or, in some (especially desert) locations, solar. The antennas themselves are mounted near the top
of the tower, a few meters apart in the case of space diversity.

Basically, microwave links are cheaper and have better properties for TV transmission, although coaxial
cable is much less prone to interference.
Microwave technologies can be a very secure form of communication when a signal needs to be
transmitted over a short distance.

k. Satellite systems
A Satellite System is defined as a network comprising satellites, system control centers, gateways, and
terminals that enable direct communication channels between satellites and terminals, as well as
connections to land networks through feeder links. The system is managed and monitored by a central
control center, with satellites classified as intelligent or dumb based on their functionality.

Communications satellites are radio-relay stations in space. They serve much the same purpose as the
microwave towers one sees along the highway. The satellites receive radio signals transmitted from the
ground, amplify them, and retransmit them back to the ground. Since the satellites are at high altitude,
they can “see” across much of the earth. This gives them their principal communications advantage: the
ability to span large distances. Ground links, such as microwave relays, are inherently limited in their
ability to cover large distances by the terrain.

Satellite Networks
Satellite systems provide voice, data, and broadcast services with widespread, often global, coverage to
high-mobility users as well as to fixed sites. They have the same basic architecture as cellular systems,
except that the base stations are satellites orbiting the earth. Satellites are characterized by their orbit
distance from the earth: low-earth orbit (LEO) at 500 to 2,000 km, medium-earth orbit (MEO) at 10,000
km, and geosynchronous orbit (GEO) at 35,800 km.
Satellites with lower orbits have smaller coverage areas, and these coverage areas change over time so
that satellite handoff is needed for stationary users or fixed-point service.

Satellite technologies
To date, there are over 1,200 satellites orbiting Earth to support three main categories of systems: (1) to
collect data (including weather data, pictures, etc.), (2) to broadcast location-related information, and (3)
to relay communications. The first category deals with remote sensing that enables large-scale data
collection for monitoring both short- and long-term phenomena, including emergencies and disasters.
Major uses of the collected data include mapping and cartography, weather and environmental
monitoring, and change detection.

Applications
Satellite systems that provide vehicle tracking and dispatching (OMNITRACs) are commercially
successful. Satellite navigation systems (the Global Positioning System or GPS) are very widely used. A
new wireless system for Digital Audio Broadcasting (DAB) has recently been introduced in Europe.

Satellite Surveillance Systems


Government/military agencies use satellite-based surveillance systems to perform reconnaissance
operations domestically and globally.
Reconnaissance satellites are generally ones that observe the Earth and can zoom in to view everything
and anything from space to Earth’s surface. Generally, used for military operations or law enforcement
intelligence, these systems allow for high-resolution viewing and visual eavesdropping and, at times,
audio capture and recognition.

3. Communication protocols
A communications protocol is a set of formal rules describing how to transmit or exchange data,
especially across a network.
These protocols act like a common language that allows computers and networks to interact with each
other. For instance, if a user wants to send an e-mail to another, the user composes the e-mail on a
personal computer, including the message and any attachments.
Once the user sends the e-mail, multiple actions take place so that the receiver gets it: the message
moves over the network and reaches the recipient. The protocols define how the message is to be
packaged so that it can move over the system, how the receiving computer can check for errors, and
so on.

A standardized communications protocol is one that has been codified as a standard. Some common
protocols include WiFi (IEEE 802.11), the internet protocol suite (TCP/IP), and the Hypertext Transfer
Protocol (HTTP).
TCP/IP is a conceptual model that standardizes communication in a modern network. It defines four
functional layers for these communication links (a short sketch follows the list below):
 Network access layer: This layer defines how the data is physically transferred. It includes how
hardware sends data bits through physical wires or fibers.
 Internet layer: This layer is responsible for packaging the data into understandable packets and
allowing it to be sent and received.
 Transport layer: This layer enables devices to maintain a conversation by ensuring the
connection is valid and stable.
 Application layer: This layer defines how high-level applications can access the network to
initiate data transfer.
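A minimal sketch of these layers in action, using Python's standard socket module: the program works at the application layer (HTTP), while TCP (transport) and IP (internet) handle delivery underneath. The host name is illustrative.

    # A minimal sketch: application-layer data (an HTTP request) handed
    # to the transport layer (TCP) via Python's standard socket API.
    import socket

    HOST, PORT = "example.com", 80  # illustrative server

    with socket.create_connection((HOST, PORT), timeout=5) as sock:
        # TCP delivers these bytes reliably and in order; IP routes them.
        sock.sendall(b"GET / HTTP/1.1\r\nHost: example.com\r\n"
                     b"Connection: close\r\n\r\n")
        reply = sock.recv(1024)  # first bytes of the server's response
    print(reply.split(b"\r\n")[0])  # e.g. b'HTTP/1.1 200 OK'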

Communication Protocols in Internet of Things(IoT)


IoT-based devices are more susceptible to threats, so security loopholes can be reduced by
using the correct protocols. Communication protocols in IoT ensure the
best possible security for the data being exchanged among the IoT-connected devices.
The main benefits of IoT communication protocols are high quality, credibility, interoperability,
innovation flexibility & global scalability. IoT protocols are available in two types mainly IoT network
protocols and IoT data protocols.
The list of Top 10 IoT Communication Protocols includes the following.
 WiFi
 SigFox
 Bluetooth
 LoRaWAN
 NFC (Near Field Communication)
 Z wave
 Zigbee
 OPC- UA
 Cellular
 TCP/IP (Transmission Control Protocol/Internet Protocol): A foundational protocol suite for
internet communication, responsible for data transmission and routing. Examples include HTTP
for web browsing and SMTP for email.
 HTTP (Hypertext Transfer Protocol): Used for transferring web pages and related files over the
internet. It governs the request-response cycle between web clients and servers.

4. Network Security
Network security encompasses all the steps taken to protect the integrity of a computer network and the
data within it. Network security is important because it keeps sensitive data safe from cyber-attacks and
ensures the network is usable and trustworthy. Successful network security strategies employ multiple
security solutions to protect users and organizations from malware and cyber-attacks, like distributed
denial of service.
A network is composed of interconnected devices, such as computers, servers and wireless networks.
Many of these devices are susceptible to potential attackers. Network security involves the use of a
variety of software and hardware tools on a network or as software as a service. Security becomes more
important as networks grow more complex and enterprises rely more on their networks and data to
conduct business. Security methods must evolve as threat actors create new attack methods on these
increasingly complex networks.
Security is critical when unprecedented amounts of data are generated, moved, and processed across
networks.

Why is network security important?

Network security is critical because it prevents cybercriminals from gaining access to valuable data and
sensitive information. When hackers get hold of such data, they can cause a variety of problems,
including identity theft, stolen assets and reputational harm.
The following are four of the most important reasons why protecting networks and the data they hold is
important:
Operational risks
An organization without adequate network security risks disruption of its operations. Businesses also
rely on networks for most internal and external communication.

Financial risks for compromised personally identifiable information (PII).


Data breaches can be expensive for both individuals and businesses. Data breaches and exposure also
can ruin a company's reputation and expose it to lawsuits.

Financial risk for compromised intellectual property


Organizations can also have their own intellectual property stolen, which is costly. The loss of a
company's ideas, inventions and products can lead to loss of business and competitive advantages.

Regulatory issues
Many governments require businesses to comply with data security regulations that cover aspects of
network security

How does network security work?


Network security is enforced using a combination of hardware and software tools. The primary goal of
network security is to prevent unauthorized access into or between parts of a network.

5. Software
Software falls into two categories:
-Operating system
-Application software

Operating system
Controls data transmission in the network by receiving data, establishing contact with terminals and
processing any line errors which may occur.

Browser
A browser is an interactive program that permits a user to view information from the World Wide Web
(WWW).

World Wide Web (WWW) – a large-scale, on-line repository of information that users can
access using a browser.

Categories Of Computer Networks


Computer networks fall into two main categories: Local area network (LAN) and Wide area network
(WAN)

Local area network


This is a network which serves a small geographical area and is limited to a specific application.


Wide Area Network


WAN is a computer network which covers a large geographical area with multiple applications.
It involves linkage of very many networks, hence it is a network of networks.
WAN can be categorized into the following categories:
 Metropolitan network
 National network
 Regional network
 Global network


Application or use of Internet


Electronic mail
Has features comparable to those of the postal service, which include:

Each user has a ‘mailbox’ which is accessed via a computer terminal.

Information ‘Browsing’
End users, often working from PCs, are able to search for and find information of interest.

Newsgroups
Under a facility on the internet called “Usenet”, individuals can gain access to a very wide range of
information topics. The Usenet software receives “postings” of information and transmits new postings
to users who have registered their interest in receiving the information. User groups include
professional bodies.

Remote computer access
A user can easily access any computer within the network as long as its address is known.

Evolution of networking
The evolution of networking has been one of the most transformative technological advancements in
modern history. Today, networking plays an integral role in the way we communicate, conduct business,
and connect. In this blog, we will explore the history, evolution, present stage, and future of networking,
and highlight how networks are part of our daily lives.
Networking has come a long way since the early days of communication as shown in the table above. In
the mid-20th century, computer networks were developed, allowing computers to communicate with
each other over long distances.
The advent of the internet in the 1990s brought about a new era of networking. The internet allowed
people to connect on a global scale, and the development of the World Wide Web made it easy to access
information and services from anywhere in the world. The growth of social media platforms, mobile
devices, and cloud computing has further accelerated the evolution of networking.

Present Stage
Today, networking is an essential part of our daily lives. We use networks to communicate with friends
and family, access information and entertainment, conduct business, and even control our homes. Social
media platforms like Facebook, Twitter, and Instagram allow us to connect with people around the
world and share our experiences. Mobile devices like smartphones and tablets have made it easy to stay
connected on the go, while cloud computing has made it possible to access data and services from
anywhere in the world.
Businesses and governments also rely heavily on networking to operate efficiently. Networks allow
businesses to connect with customers, partners, and suppliers around the world, and to collaborate on
projects in real time.

Future
The future of networking is bright, with new technologies promising to bring about even more
transformative changes. One of the most exciting developments is the emergence of 5G networks, which
will offer faster speeds, lower latency, and greater reliability than current networks. This will enable new
applications like autonomous vehicles, virtual and augmented reality, and smart cities.
Other emerging technologies like the Internet of Things (IoT), artificial intelligence (AI), and
blockchain are also poised to revolutionize networking. IoT devices will enable the creation of smart
homes, smart cities, and even smart factories, while AI will help us better manage and analyze the vast
amounts of data generated by these devices. Blockchain technology, on the other hand, will enable
secure and transparent transactions between parties, without the need for intermediaries.

Chapter Four
ARTIFICIAL INTELLIGENCE
Artificial intelligence is a field of science concerned with building computers and machines that can
reason, learn, and act in such a way that would normally require human intelligence or that involves data
whose scale exceeds what humans can analyze.
AI is a broad field that encompasses many different disciplines, including computer science, data
analytics and statistics, hardware and software engineering, linguistics, neuroscience, and even
philosophy and psychology.

What is intelligence?
All but the simplest human behavior is ascribed to intelligence, while even the most
complicated insect behavior is usually not taken as an indication of intelligence. What is the difference?
Consider the behavior of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her
burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only
then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behavior is
revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on
emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—
conspicuously absent in the case of the wasp—must include the ability to adapt to new circumstances.

Psychologists generally characterize human intelligence not by just one trait but by the combination of
many diverse abilities.

The Basic Components of AI


Research in AI has focused chiefly on the following components of intelligence: learning,
reasoning, problem solving, perception, and using language.

Learning
Learning is a crucial component of AI as it enables AI systems to learn from data and improve
performance without being explicitly programmed by a human. AI technology learns by labeling data,
discovering patterns within the data, and reinforcing this learning via feedback.
There are a number of different forms of learning as applied to artificial intelligence. The simplest is
learning by trial and error.
For example, a simple computer program for solving mate-in-one chess problems might try moves at
random until mate is found. The program might then store the solution with the position so that, the next
time the computer encountered the same position, it would recall the solution. This simple memorizing
of individual items and procedures—known as rote learning—is relatively easy to implement on a
computer.
Generalization involves applying past experience to analogous new situations. For example, a program
that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a
word such as jump unless the program was previously presented with jumped, whereas a program that is
able to generalize can learn the “add -ed” rule for regular verbs ending in a consonant and so form the
past tense of jump on the basis of experience with similar verbs.
Example: Voice recognition systems like Siri or Alexa learn correct grammar and the skeleton of a
language
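The contrast between rote learning and generalization can be sketched in a few lines of Python; the verb list below is illustrative.

    # A minimal sketch: rote learning stores seen answers; generalization
    # applies a learned rule ("add -ed") to verbs never seen before.
    memorized = {"go": "went", "eat": "ate"}  # rote-learned irregular verbs

    def past_tense(verb):
        if verb in memorized:       # rote learning: recall a stored answer
            return memorized[verb]
        if verb.endswith("e"):      # generalization: apply the learned rule
            return verb + "d"
        return verb + "ed"

    print(past_tense("jump"))  # 'jumped' -- formed by rule, never memorized
    print(past_tense("go"))    # 'went'   -- recalled by rote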

Reasoning and decision making
The second major component of AI is reasoning and decision-making. AI systems can use logical rules,
probabilistic models, and algorithms to draw conclusions and make inferred decisions. When faced with
problems or issues, AI models should use reasoning to generate consistent results.
To reason is to draw inferences appropriate to the situation. Inferences are classified as
either deductive or inductive.
Inductive reasoning is common in science, where data are collected and tentative models are developed
to describe and predict future behavior—until the appearance of anomalous data forces the model to be
revised. Deductive reasoning is common in mathematics and logic, where elaborate structures of
irrefutable theorems are built up from a small set of basic axioms and rules.
There has been considerable success in programming computers to draw inferences. However, true
reasoning involves more than just drawing inferences: it involves drawing inferences relevant to the
solution of the particular problem. This is one of the hardest problems confronting AI.

Example: A writing assistant, like Grammarly, knows when or when not to add commas and other
punctuation marks.

Problem solving
Problem solving in AI is similar to reasoning and decision making. AI systems take in data, manipulate
it and apply it to create a solution that solves a specific problem.

Problem solving, particularly in artificial intelligence, may be characterized as a systematic search
through a range of possible actions in order to reach some predefined goal or solution. Problem-solving
methods divide into special purpose and general purpose. A special-purpose method is tailor-made for a
particular problem and often exploits very specific features of the situation in which the problem is
embedded. In contrast, a general-purpose method is applicable to a wide variety of problems.

One general-purpose technique used in AI is means-end analysis—a step-by-step, or incremental,
reduction of the difference between the current state and the final goal. The program selects actions from
a list of means—in the case of a simple robot, this might consist of PICKUP, PUTDOWN,
MOVEFORWARD, MOVEBACK, MOVELEFT, and MOVERIGHT—until the goal is reached.
Many diverse problems have been solved by artificial intelligence programs.
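A minimal sketch of means-end analysis, using the robot actions named above on a 2-D grid: at each step the program greedily picks the action that most reduces the remaining distance to the goal. The grid world itself is an invented illustration.

    # A minimal sketch of means-end analysis: repeatedly pick the action
    # that most reduces the difference (distance) to the goal state.
    ACTIONS = {"MOVEFORWARD": (0, 1), "MOVEBACK": (0, -1),
               "MOVELEFT": (-1, 0), "MOVERIGHT": (1, 0)}

    def distance(a, b):
        return abs(a[0] - b[0]) + abs(a[1] - b[1])

    def means_end(state, goal):
        plan = []
        while state != goal:
            # Evaluate each action by how close its result is to the goal.
            name, (dx, dy) = min(
                ACTIONS.items(),
                key=lambda kv: distance((state[0] + kv[1][0],
                                         state[1] + kv[1][1]), goal))
            state = (state[0] + dx, state[1] + dy)
            plan.append(name)
        return plan

    print(means_end((0, 0), (2, 1)))
    # ['MOVEFORWARD', 'MOVERIGHT', 'MOVERIGHT']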

Example: A chess program evaluates its opponent's moves and then makes the best decision
based on the game's rules, predicting future moves and outcomes.

Perception
Perception refers to AI utilizing different real or artificial sense organs. The AI system can take in data,
perceive the objects around it, and understand its physical relationship (e.g., distance) to those objects.
Perception often involves image recognition, object detection, image segmentation, and video analysis.
Example: Self-driving cars gather visual data to recognize roads, lanes, and obstacles and then map
these objects.

In perception the environment is scanned by means of various sensory organs, real or artificial, and the
scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by
the fact that an object may appear different depending on the angle from which it is viewed, the
direction and intensity of illumination in the scene, and how much the object contrasts with the
surrounding field. At present, artificial perception is sufficiently advanced to enable optical sensors to
identify individuals and enable autonomous vehicles to drive at moderate speeds on the open road.
Computer Vision
As the name suggests, computer vision is dedicated to analyzing and comprehending visual media,
whether images or videos. It’s the component that enables AI algorithms to accurately and reliably
identify objects that the machine “sees” and react accordingly.

Examples include the following:


 Facebook's automatic photo tagging: Computer Vision is used to identify friends in your photos.
 Tesla's Autopilot system: It is also used to identify and respond to objects on the road.
 Google Lens: Computer Vision identifies objects and provides relevant information.

Language
A language is a system of signs having meaning by convention. In this sense, language need not be
confined to the spoken word. Traffic signs, for example, form a mini-language, it being a matter of
convention that ⚠ means “hazard ahead” in some countries. It is distinctive of languages that linguistic
units possess meaning by convention, and linguistic meaning is very different from what is
called natural meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in
pressure means the valve is malfunctioning.”
An important characteristic of full-fledged human languages—in contrast to birdcalls and traffic signs—
is their productivity. A productive language can formulate an unlimited variety of sentences.
Large language models like ChatGPT can respond fluently in a human language to questions and
statements. Although such models do not actually understand language as humans do but merely select
words that are more probable than others, they have reached the point where their command of a
language is indistinguishable from that of a normal human.

The Basic Elements


One of the simplest and most straightforward definitions of AI was presented by John McCarthy, a
professor of computer science at Stanford University, as “the science and engineering of making
intelligent machines.” The intelligent machines could be in the form of software, hardware, or a
combination of both.
The key elements of AI include:
 Natural language processing (NLP)
 Expert systems
 Robotics
 Intelligent agents
 Machine Learning

Machine learning (ML)
ML is the ability of machines to learn automatically from data using algorithms.

Machine learning has found applications in many fields beyond gaming and image classification, which
include:
 The pharmaceutical company Pfizer used the technique to quickly search millions of
possible compounds in developing the COVID-19 treatment Paxlovid.
 Google uses machine learning to filter out spam from the inboxes of Gmail users (see the
sketch after this list).
 Banks and credit card companies use historical data to train models to detect fraudulent
transactions.
 Deepfakes are AI-generated media produced using two different deep-learning algorithms: one
that creates the best possible replica of a real image or video and another that detects whether the
replica is fake and, if it is, reports on the differences between it and the original. The
41 Technical University of Kenya Contact email: [email protected]
Information and Communications Technology
first algorithm produces a synthetic image and receives feedback on it from the
second algorithm; it then adjusts it to make it appear more real. The process is repeated until the
second algorithm does not detect any false imagery.
Deepfake media portray images that do not exist in reality or events that have never occurred.
Widely circulated deepfakes include:
an image of Pope Francis in a puffer jacket,
an image of former U.S. president Donald Trump in a scuffle with police officers,
a video of Facebook CEO Mark Zuckerberg giving a speech he never gave, and
images of Kenyan politicians in fabricated circumstances.
Such events did not occur in real life.
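A minimal sketch of the spam-filtering idea referenced above, using scikit-learn's naive Bayes classifier; the four training messages are invented, and this is not Gmail's actual system.

    # A minimal sketch of learned spam filtering with scikit-learn:
    # train a naive Bayes model on labeled messages, then classify.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB

    messages = ["win a free prize now", "meeting at noon tomorrow",
                "free money click now", "lunch with the team"]
    labels = ["spam", "ham", "spam", "ham"]  # training labels

    vectorizer = CountVectorizer()           # turn text into word counts
    X = vectorizer.fit_transform(messages)
    model = MultinomialNB().fit(X, labels)

    test = vectorizer.transform(["claim your free prize"])
    print(model.predict(test))  # likely ['spam']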
Applications of Machine Learning
Machine Learning has a wide range of applications like predictive analytics, image recognition, and
even speech recognition. That’s especially true when combined with other AI components, such as
computer vision and NLP.

Examples of Machine Learning


Machine Learning is used in various industries and sectors. Some examples include:
 Google's search algorithms: Google uses ML algorithms to improve search result accuracy and
relevance to users.
 Recommendation engine: Sites like YouTube and Netflix use ML to analyze viewer behavior
and suggest content they might like.
 Self-driving cars: Machine learning is used to predict and respond to different scenarios on the
road.
Natural Language Processing
Natural language processing (NLP) is the aspect of AI that allows computers to understand spoken
words and written text.
NLP is a branch of AI that allows machines to use and understand human language. It is built into
products such as automatic language translators used in multilingual conferences, text-to-speech
translation, speech-to-text translation, and knowledge extraction from text. This technology is used to
scan data in the form of raw language such as handwriting, voice, and images into contextually relevant
structures and relationships that can easily be integrated with other structured data for more efficient
analysis in subsequent processes. Unstructured data are rarely used since they were originally meant
only for use by humans. Hence, there is a need to utilize, understand, and unlock the vast wealth of
valuable information hidden in them.
NLP is arguably the most commonly used AI as it’s intertwined in many of today’s digital assistants,
chatbots, virtual assistants, and spam detection. NLP is also used to generate sentiment analysis, which
analyzes texts and extracts the emotions and attitudes about a product or service.

Applications of NLP
NLP has various applications, including text and audio translation from one language to another,
sentiment analysis of the emotion and meaning behind sentences, and interactive chatbots capable of
understanding and participating in a human conversation.

Examples of NLP
Some examples of NLP being used in various sectors include:
 Siri: Apple's virtual assistant uses NLP to understand and respond to voice and text user
commands.
 Google Translate: NLP automatically and instantaneously translates text from one language to
another.
 Gmail Smart Reply: This feature uses NLP to suggest quick email responses.

Robotics
Intelligent robots are mechanical structures in various shapes that are programmed to perform specific
tasks based on human instructions.
Robotics utilizes AI to develop and design robots or machines capable of performing tasks
autonomously or semi-autonomously. Generally, robotics involves other components of AI technology,
such as NLP, ML, or perception
Depending on the environment of use (land, air, and sea), they are called drones and rovers. In the
petroleum industry, they have been used in innovative and beneficial ways: in production, to connect
different segments of drill pipes during drilling; in underwater welding, to conduct underwater
maintenance and repair tasks; in exploration, to map outcrops for building digital models for geologists;
and in field operations, to inspect remote sites and challenging terrains that are potentially dangerous for
humans to navigate.
Some of the benefits derived from the use of robots in the oil and gas industry include improving safety,
increasing productivity, automating repetitive tasks, and reducing operational costs by diminishing
downtime.

Applications of Robotics
Robotics has countless applications, from manufacturing and healthcare to exploring remote regions and
even search-and-rescue operations using nearly autonomous vehicles.

Examples of Robotics
Robotics is used in various sectors. Examples of robotics include:
 Amazon's warehouse robots: These robots are used to move items around Amazon's warehouses
to fulfill deliveries.
 Da Vinci Surgical System: This robot assists surgeons in performing minimally invasive surgeries that would be difficult or impossible by hand alone.
 NASA's Mars rovers: These robots are used to explore the surface of Mars.

Expert Systems
Expert systems are machines or software applications that provide explanations and advice to users through a set of rules supplied by a human expert. The rules are programmed into software so that the expert's knowledge can be reproduced for nonexperts to solve a range of real problems. Examples are found in fields such as medicine, pharmacy, law, food science, engineering, and maintenance. In the oil and gas industry,
expert systems have been used from exploration through production, from research through operations,
and from training through fault diagnosis.

Applications of Expert Systems


Expert Systems have multiple applications, particularly in the healthcare and financial industries, where
decisions must be made accurately and promptly based on newly acquired information. In medical
diagnosis, expert systems can identify diseases based on symptoms. Meanwhile, they can predict stock
prices and make split-second trading decisions in finance.
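
A minimal sketch of the rule-based idea behind expert systems is shown below, in Python; the symptoms, rules and conclusions are invented for illustration and are not drawn from any real medical system such as MYCIN.

    # Tiny rule-based diagnosis sketch: fire every rule whose
    # conditions are all present among the reported symptoms.
    RULES = [
        ({"fever", "cough", "fatigue"}, "possible influenza"),
        ({"sneezing", "runny nose"}, "possible common cold"),
    ]

    def diagnose(symptoms):
        return [conclusion for conditions, conclusion in RULES
                if conditions <= symptoms]          # subset test

    print(diagnose({"fever", "cough", "fatigue", "headache"}))
    # -> ['possible influenza']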

Examples of Expert Systems


Some examples of Expert Systems in different sectors include:

 MYCIN: This is one of the first expert systems developed by Stanford University to diagnose
bacterial infections.
 Credit card fraud detection systems: These expert systems identify suspicious financial transactions, often combining rule-based reasoning with techniques such as neuro-fuzzy systems.

Intelligent Agents
Multi-agent systems (MAS) are a subfield of AI that builds computational systems capable of making decisions and taking actions autonomously. These systems maintain information about their environment and make decisions based on their perception of the state of the environment, their past experiences, and their objectives. Agents can also interface with other agents to collaborate on common goals. They emulate human social behavior by sharing partial views of a problem, enabling collaboration, and cooperating with other agents to make appropriate and timely decisions to reach desired objectives. Agents have been implemented successfully, mostly in the manufacturing industries, and have proven potential benefits in the petroleum industry.
Uses of MAS include:
 Managing supply chain
 Addressing various production- and maintenance-related tasks
 Processing and managing the distributed nature of the oil and gas business
 Verifying, validating, and securing data streams in complex process pipelines
 Getting insights from data to increase operational efficiency
 Scheduling maintenance
 Preventing theft and fraud

Artificial intelligence (AI) applications


While the specifics vary across different AI techniques, the core principle revolves around data. AI
systems learn and improve through exposure to vast amounts of data, identifying patterns and
relationships that humans may miss.
Artificial intelligence (AI) applications are software programs that use AI techniques to perform
specific tasks. These tasks can range from simple, repetitive tasks to complex, cognitive tasks that
require human-like intelligence.
Artificial intelligence (AI) is transforming the way we live, work, and interact. From our personal
lifestyles through our social engagements to the way we conduct our private and corporate businesses,
AI is altering our methodologies and changing the landscape of end products. From the age-old medical
expert systems and intelligent search engines to intelligent chatbots and predictive models, the
enthusiasm for AI practice is growing rapidly.
AI applications are becoming increasingly common in a wide variety of industries, including
healthcare, finance, retail, and manufacturing. As AI technology continues to develop, we can expect to
see even more innovative and groundbreaking AI applications in the future.

AI in business intelligence
AI is playing an increasingly important role in business intelligence (BI). AI-powered BI tools can help
businesses collect, analyze, and visualize data more efficiently and effectively. This can lead to
improved decision-making, increased productivity, and reduced costs.
Some of the ways that AI is being used in BI include:
Data collection: Collecting data from a variety of sources, including structured data (for example,
databases) and unstructured data (for example, text documents, images, and videos)
Data analysis: To analyze data and identify patterns, trends, and relationships
Data visualization: AI can help create visualizations that make it easier to understand data
Decision-making: Insights and recommendations generated by AI models can help drive data-driven
decision-making for businesses
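
As a small illustration of the analysis and summarization steps above, the sketch below uses Python with the pandas library (assumed installed); the regions and sales figures are invented.

    # Summarising invented sales data for a BI dashboard.
    import pandas as pd

    df = pd.DataFrame({
        "region": ["East", "West", "East", "West"],
        "sales":  [120, 90, 150, 110],
    })
    print(df.groupby("region")["sales"].sum())   # totals per region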

AI in healthcare
AI is also playing an increasingly important role in healthcare. AI-powered tools can help doctors
diagnose diseases, develop new treatments, and provide personalized care to patients. For example:
Disease diagnosis: AI can be used to analyze patient data and identify patterns that may indicate a
disease. This can help doctors diagnose diseases earlier and more accurately.
Treatment development: By analyzing large datasets of patient data, AI can identify new patterns and
relationships that can be used to develop new drugs and therapies.
Personalized care: By analyzing a patient's data, AI can help doctors develop treatment plans that are
tailored to the patient's specific needs.

AI in education
AI could be used in education to personalize learning, improve student engagement, and automate
administrative tasks for schools and other organizations.
Personalized learning: AI can be used to create personalized learning experiences for students. By
tracking each student's progress, AI can identify areas where the student needs additional support and
provide targeted instruction.
Improved student engagement: AI can be used to improve student engagement by providing interactive
and engaging learning experiences. For example, AI-powered applications can provide students with
real-time feedback and support.
Automated administrative tasks: Administrative tasks, such as grading papers and scheduling classes, can be assisted by AI models, helping to free up teachers' time to focus on teaching.

AI in finance
AI can help financial services institutions in five general areas: personalize services and products, create
opportunities, manage risk and fraud, enable transparency and compliance, and automate operations and
reduce costs. For example:
Risk and fraud detection: Detect suspicious, potential money laundering activity faster and more
precisely with AI.
Personalized recommendations: Deliver highly personalized recommendations for financial products
and services, such as investment advice or banking offers, based on customer journeys, peer interactions,
risk preferences, and financial goals.
Document processing: Extract structured and unstructured data from documents and analyze, search and store this data for document-extensive processes, such as loan servicing and investment opportunity discovery.

AI in manufacturing
Some ways that AI may be used in manufacturing include:
Improved efficiency: Automating tasks, such as assembly and inspection
Increased productivity: Optimizing production processes
Improved quality: AI can be used to detect defects and improve quality control

Chapter Five
DATA PROCESSING
Data is a term used to describe base facts about objects/persons, or activities of a transaction/event.
Data capture is the process of identifying the relevant data, recording it and converting it into a suitable form for further processing.
Processing is a range of actions which may be performed on the data to improve its usefulness to the users. These actions include coding, summarizing, calculating, storing, selecting, etc.
Data processing system is a system which transforms data into meaningful information. Thus processing improves the value of the data.
There are two processing levels:
 Primary processing – involves processing of the data itself
 Secondary processing – involves interpretation, application to specific circumstances, judgment and reasoning
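
The sketch below illustrates some of these processing actions (calculating, selecting, summarizing) in Python; the sales records are invented.

    # Raw data: (branch, amount) pairs for individual sales.
    sales = [("Nairobi", 200), ("Mombasa", 150), ("Nairobi", 300)]

    total = sum(amount for _, amount in sales)          # calculating
    nairobi = [s for s in sales if s[0] == "Nairobi"]   # selecting
    by_branch = {}
    for branch, amount in sales:                        # summarizing
        by_branch[branch] = by_branch.get(branch, 0) + amount

    print(total, nairobi, by_branch)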

Categories of Data Processing


There are two categories of data processing.
 Manual data processing.
 Electronic (Automated) data processing.

Manual data processing


The entire process of data collection, filtering, sorting, calculation and other logical operations is done with human intervention, without the use of any electronic device or automation software. It is a low-cost method requiring few or no tools, but it is error-prone, labor-intensive and time-consuming.
A human being processes given data as directed by the instructions in the procedures manuals or
personal experience. The system which accomplishes this processing is called manual information
system.

Electronic (Automated) data processing.


Data is processed with modern technologies using data processing software and programs. A set of instructions is given to the software to process the data and yield output. This method is the most expensive, but it provides the fastest processing speeds with the highest reliability and accuracy of output.
A machine processes data as directed by stored programs. The system is largely self-controlling, with people playing a very small role.

Computer data processing is any process that uses a computer program to enter data and summarise,
analyse or otherwise convert data into usable information.
Because data is most useful when well-presented and actually informative, data-processing systems are
often referred to as information systems.

Information: the word comes from the Latin word informare, which means "to build from" or "to give structure".
Hence information is trends, patterns, tendencies (measurement of central tendency) that users need in
order to perform their jobs.

Information system is the set of devices, procedures and operations designed with the aid of user to
produce desired information and communicate it to the user for decision-making.
This system accepts data, processes it to produce desired information.

Information is data that has been processed in such a way as to be meaningful to the person who
receives it. It provides context for data and enables decision making processes.

Qualities of Information
 Relevance
The information a manager receives from an IS has to relate to the decisions the manager has to make
 Accuracy
A key measure of the effectiveness of an IS is the accuracy and reliability of its information. The
accuracy of the data it uses and the calculations it applies generally determine the effectiveness of the
resulting information. However, not all data needs to be equally accurate.
 Usefulness
The information a manager receives from an IS may be relevant and accurate, but it is only useful if it
helps him with the particular decisions he has to make. The MIS has to make useful information
easily accessible.
 Timeliness
Management has to make decisions about the future of the organization based on data from the present,
even when evaluating trends. The more recent the data, the more these decisions will reflect present
reality and correctly anticipate their effects on the company.
 Completeness
An effective IS presents all the most relevant and useful information for a particular decision. If some
information is not available due to missing data, it highlights the gaps and either displays possible
scenarios or presents possible consequences resulting from the missing data.

Uses of Information
Businesses and other organizations need information for many purposes; a principal use, decision-making, is summarized below.

 Decision-making
i. Strategic information: used to help plan the objectives of the business as a whole and to measure how well those objectives are being achieved. Examples include the profitability of each part of the business, and the size, growth and competitive structure of the markets in which the business operates.
ii. Tactical information: used to decide how the resources of the business should be employed. Examples include information about business productivity (e.g. units produced per employee; staff turnover).
iii. Operational information: used to make sure that specific operational tasks are carried out as planned/intended (i.e. things are done properly).

Users are the essential ingredients, which convert information to action through the knowledge,
understanding and skills, which they bring to bear on the data provided.

It is only at the level of the user that the information system actually provides benefit or value to the
organization. Users will themselves undertake further processing of the information received.

MODES OF PROCESSING
Online Processing
It is also called Interactive processing
Data is automatically fed into the CPU as soon as it becomes available. It is used for continuous processing of data, e.g. barcode scanning.
The processing system responds immediately whenever a change is made.

Transaction Processing
Transaction Processing is similar to interactive processing. Data for each transaction is processed very
shortly after the transaction occurs. A transaction is completely processed before the next transaction.
This may result in a particular transaction having to wait while an earlier one is processed. The delay
will usually be short. An example might be holiday bookings where a second transaction will not be
initiated until the first is completed to avoid the possibility of double booking.

Real-Time and Pseudo Real-Time Processing


Real-time processing is when a computer responds immediately to an event, e.g. a computer that
controls a plane has to respond immediately to changes in air-pressure, wind, speed and so on, an airline
booking system makes the booking immediately so that nobody else can book the same seat.
A computer in a library or a supermarket performs transactions more or less immediately. A delay of a
few seconds is acceptable. This is called pseudo real-time processing.

Batch Processing
Batch processing is where a group of similar transactions is collected over a period of time and processed in a batch, e.g. a payroll system.

 Paper documents are collected into batches (e.g. of 50); they are checked, and control totals/hash totals are calculated and written onto a batch header document.
 The data is keyed offline from the main computer and it is validated by a computer
program. It is stored on a transaction file.
 Data is verified by being entered a second time by a different keyboard operator.
 The transaction file is transferred to the main computer.
 Processing begins at a scheduled time.
 The transaction file may be sorted into the same sequence as the master file to speed up
the processing of data.
 The master file is updated.
 Any required reports are produced.
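
The control-total check in the steps above can be illustrated with a short Python sketch; the amounts and the header total are invented.

    # The total computed from the keyed transactions must match
    # the control total written on the batch header document.
    batch_header_total = 450
    transactions = [120, 200, 130]   # amounts keyed from paper documents

    if sum(transactions) == batch_header_total:
        print("Batch accepted for processing")
    else:
        print("Control total mismatch - batch rejected for re-checking")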

Criteria for Choice of Processing Mode


The following should be considered:
 Whether the information obtained needs to be completely up-to-date at all times
 The scale of the operation - batch processing is well-suited to large volumes of data.
 Cost - real-time systems are generally more expensive.
 Computer Usage - batch systems make use of spare computer capacity because they process at
times when the computers would otherwise not be used.

Advantages and Disadvantages


With an online/interactive system, the data is always up-to-date and there is less need for paperwork.
However, the lack of paperwork causes a problem for auditors. Checking for accuracy can be difficult.

Chapter six
DATABASE SYSTEMS
In the early 1970s, a database was considered an esoteric subject, of interest only to the largest corporations with the largest computers. Today database processing has become an information system standard.
Data is a term used to describe base facts about objects/persons, or activities of a transaction/event.

Information is processed data


Before the database management approach, organizations relied on file processing systems to organize, store, and
process data files. End users criticized file processing because the data is stored in many different files and each
organized in a different way. Each file was specialized to be used with a specific application. File processing was
bulky, costly and inflexible when it came to supplying needed data accurately and promptly. Data redundancy is
an issue with the file processing system because the independent data files produce duplicate data so when
updates were needed each separate file would need to be updated. Another issue is the lack of data integration.
The data is dependent on other data to organize and store it. Lastly, there was no consistency or standardization of the data in a file processing system, which makes maintenance difficult. For these reasons, the database management approach was developed.

Components of a database system


A database system is the physical realization of the database approach; it is much more than a database management system.

Architecture of a database system


Database
A database is based on a data model.
A data model is a set of concepts that can be used to describe the structure of and operations on a
database. By structure of a database we mean the data types, relationships, and constraints that define
the "template" of that database. A data model should provide operations on the database that allow
retrievals and updates including insertions, deletions, and modifications

A database is an integrated collection of a number of record types linked by various specific relationships; it is also called a databank. It is thus an organized collection of logically related data, managed in such a way as to enable the user or application programs to view the complete collection, or a logical subset of the collection, as a unit.
The data in the database is expected to be both integrated and shared, particularly in a multi-user system.
The motivation for databases over files: integration for easy access and update, non-redundancy, and multi-access.

A database management system (DBMS)


Database management system (DBMS) is a generalized software system for manipulating databases. Includes
logical view (schema, sub-schema), physical view (access methods, clustering), data manipulation language, data
definition language, utilities etc.
In other words, it controls how information is stored and accessed. It supports a high-level access language such as SQL to retrieve the data. Most DBMSs are based on the relational data model, although there is also the object-oriented model to consider; the discussion here focuses on the former.

Data Models
The Evolution of Database Modeling
The various data models that came before the relational database model (such as the hierarchical
database model and the network database model) were partial solutions to the never-ending problem of
how to store data and how to do it efficiently. The relational database model is currently the best
solution for both storage and retrieval of data. Examining the relational database model from its roots
can help to understand critical problems the relational database model is used to solve; therefore, it is
essential to understand how the different data models evolved into the relational database model as it
is today.
The evolution of database modeling occurred as each database model improved upon the previous one. The initial solution was virtually no database model at all: the file system (also known as flat files).

Hierarchical model
The term hierarchical model covers a broad spectrum of setups; it often refers to multi-level models in which various levels of related information or data build up a larger structure, and in this respect it is similar to the network model. A hierarchical DBMS links records together like a family tree, such that each record type has only one owner. The hierarchical data model organizes data in a tree structure: there is a hierarchy of parent and child data segments, and this structure implies that a record can have repeating information, generally in the child data segments. Data is held in a series of records, each with a set of field values attached to it, and all the instances of a specific record are collected together as a record type. To create links between these record types, the hierarchical model uses parent-child relationships, with a 1:N mapping between record types. This is done using trees, much as set theory, borrowed from mathematics, is used in the relational model.
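
A minimal sketch of this parent-child (tree) structure in Python is shown below; the record names and values are invented.

    # Each child record has exactly one parent, as in a family tree.
    company = {
        "Sales Dept": {                       # parent (owner) record
            "Order 1001": {"amount": 250},    # child records
            "Order 1002": {"amount": 400},
        },
    }

    for parent, children in company.items():
        for child, fields in children.items():
            print(parent, "->", child, fields)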

Advantages and disadvantages of hierarchical database


Hierarchical databases are fast and conceptually simple; however, they do not support many-to-many relationships and lack referential integrity.

Network model
The network model is a database model conceived as a flexible way of representing objects and their relationships. The popularity of the network data model coincided with the popularity of the hierarchical data model. Some data were more naturally modeled with more than one parent per child, so the network model permitted the modeling of many-to-many relationships in data. A set consists of an owner record type, a set name, and a member record type. A member record type can play that role in more than one set, hence the multi-parent concept is supported. An owner record type can also be a member or owner in another set. Thus the complete network of relationships is represented by several pairwise sets; in each set one record type is the owner and one or more record types are members.
The network model is very similar to the hierarchical model; in fact, the hierarchical model is a subset of the network model.

Advantages and Disadvantages of Network model


It provides very efficient, high-speed retrieval.
The network model can handle one-to-many and many-to-many relationships.

Relational model
A relational database allows the definition of data structures, storage and retrieval operations, and integrity constraints. The relational model is a database model based on first-order predicate logic. It uses the basic concept of a relation, or table: the columns (fields) of the table identify the attributes, such as name and age, while a tuple (row) contains all the data for a single instance of the table, such as one person. In the relational model every tuple must have a unique identification, or key, based on the data. Keys are often used to join data from two or more relations by matching identification values. The relational model also includes the concept of foreign keys, which are primary keys in one relation that are kept in another relation to allow the joining of data. For example, your parents' SSNs are keys for the tuples that represent them, and they appear as foreign keys in the tuple that represents you. Certain fields may be designated as keys, which means that searches for specific values of those fields will use indexing to speed them up. Where fields in two different tables take values from the same set, a join operation can be performed to select related records in the two tables by matching values in those fields.
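
The sketch below demonstrates these ideas with Python's built-in sqlite3 module: two tables linked by a foreign key and queried with a join. The table names, column names and rows are invented for illustration.

    import sqlite3

    con = sqlite3.connect(":memory:")
    con.execute("CREATE TABLE dept (id INTEGER PRIMARY KEY, name TEXT)")
    con.execute("""CREATE TABLE employee (
                       id INTEGER PRIMARY KEY,
                       name TEXT,
                       dept_id INTEGER REFERENCES dept(id))""")  # foreign key
    con.execute("INSERT INTO dept VALUES (1, 'Accounts')")
    con.execute("INSERT INTO employee VALUES (10, 'Achieng', 1)")

    # The join matches employee.dept_id against dept.id.
    for row in con.execute("""SELECT employee.name, dept.name
                              FROM employee JOIN dept
                              ON employee.dept_id = dept.id"""):
        print(row)   # ('Achieng', 'Accounts')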

Advantages
Ease of use and flexibility: different tables from which information has to be linked and extracted can easily be manipulated by operators such as project and join to give information in the desired form. Security control and authorization can also be implemented more easily by moving sensitive attributes in a given table into a separate relation with its own authorization controls.

Object Oriented Model


The object-oriented model is a modeling paradigm mainly used in computer programming. Prior to the rise of object-oriented modeling, the dominant paradigm was procedural programming, which emphasized the use of discrete, reusable code blocks that could stand on their own, take variables, perform a function on them, and return values.
The object-oriented database (OODB) paradigm is the combination of object-oriented programming language (OOPL) systems and persistent systems. The power of the OODB comes from the seamless treatment of both persistent data, as found in databases, and transient data, as found in executing programs. In contrast to a relational DBMS, where a complex data structure must be flattened out to fit into tables or joined together from those tables to form the in-memory structure, object DBMSs have no performance overhead to store or retrieve a web or hierarchy of interrelated objects. The model has advantages such as reuse of code, better structured programs, and an easier transition from analysis to implementation.

TYPES OF DATABASES
Basically there are two types of databases: analytical databases and operational databases.

Analytical Database
An analytic database, also called an analytical database, is a read-only system that stores historical data
on business metrics such as sales performance and inventory levels. Business analysts, corporate
executives and other workers can run queries and reports against an analytic database. An analytical
database system provides access to all of the data collected by an entity in interactive time.
The analytical database system transforms relational database data. An analytic database is specifically designed to support business intelligence (BI) and analytic applications, typically as part of a data warehouse or data mart. This differentiates it from an operational, transactional or OLTP database, which is used for transaction processing, i.e., order entry and other "run the business" applications.
On the web you will often see analytic databases in the form of inventory catalogs, such as Amazon.com's, which usually hold descriptive information about all available products in the inventory. Analytical databases are also called OLAP (online analytical processing) databases.

Operational database
Operational Database is the database-of-record, consisting of system-specific reference data and event
data belonging to a transaction-update system. It may also contain system control data such as
indicators, flags, and counters. The operational database is the source of data for the data warehouse. It
contains detailed data used to run the day-to-day operations of the business. The data continually
changes as updates are made and reflects the current value of the last transaction. An operational database, as the name implies, is a database that is currently and progressively in use, capturing real-time data and supplying data for real-time computations and other analytical processes. For example, an operational database is the one used for taking orders and fulfilling them in a store, whether a traditional store or an online store. Other areas of business that use an operational database include catalog fulfillment systems and other point-of-sale systems.

Data warehousing and data mining


A data warehouse is a type of computer database that is responsible for collecting and storing the
information of a particular organization. A data warehouse is a database with archival, querying and data
exploration tools (i.e. statistical tools) and is used for storing historical and current data of potential
interest to managers throughout the organization.
The data originate in many of the operational areas and are copied into the data warehouse as often as
needed. The data in the warehouse are organized according to company wide standards so that they can
be used for management reporting and analysis. Data warehouses support looking at the data of the organization through many views or directions.

A data warehouse allows managers to look at products by customer, by year, by salesperson, essentially
different slices of the data. Also, a data warehouse is a tool that is constructed to give a specific view of
data that an organization or company can gather during the course of carrying out various processes.
Data warehouses are useful because they can allow a company to give managers and executives crucial
information that will allow them to make better decisions.

Although data warehousing is a promising technology, it can become problematic for companies that fail to follow core principles. In addition to having a proper design, a data warehouse must be properly implemented and maintained.

Data mining
Data mining, also known as "knowledge discovery," refers to computer-assisted tools and techniques for
sifting through and analyzing these vast data stores in order to find trends, patterns, and correlations that
can guide decision making and increase understanding. Data mining covers a wide variety of uses,
from analyzing customer purchases to discovering galaxies. In essence, data mining is the equivalent of
finding gold nuggets in a mountain of data. The monumental task of finding hidden gold depends
heavily upon the power of computers.
In summary, the purpose of data mining is to analyze and understand past trends and predict future trends. By
predicting future trends, business organizations can better position their products and services for
financial gain. Nonprofit organizations have also achieved significant benefits from data mining, such as
in the area of scientific progress. The concept of data mining is simple yet powerful. The simplicity of
the concept is deceiving, however. Traditional methods of analyzing data, involving query-and-report
approaches, cannot handle tasks of such magnitude and complexity.
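
A toy market-basket example gives the flavour of pattern discovery: counting how often pairs of items are bought together. The baskets below are invented, and real data mining uses far larger datasets and more sophisticated algorithms.

    from itertools import combinations
    from collections import Counter

    baskets = [{"bread", "milk"}, {"bread", "butter"}, {"bread", "milk", "butter"}]

    pair_counts = Counter()
    for basket in baskets:
        for pair in combinations(sorted(basket), 2):   # every item pair in a basket
            pair_counts[pair] += 1

    print(pair_counts.most_common(2))   # most frequently co-purchased pairs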

Contribution to Database Development


Global events and competition affect almost all modern businesses, and organizations increasingly face challenges as a result of the ever-changing technologies in the world. The economic and political linkages involving the migration of money, products and people across national boundaries, together with ideas and values, have increased the pace of change, ambiguity, uncertainty and unpredictability in the contemporary business world. With advances in technology such as the use of the internet and of database management systems, the world has increasingly become a global village.

The use of database management systems has boosted the activities of the modern business world. The systems are designed to hold or store large amounts of information. They have also been utilized in learning institutions, where information on every student is stored and can easily be retrieved when required.

For example, if a student is engaged in bad activities, parents can be traced easily because the information regarding that student can easily be retrieved from the details he/she filled in at registration. Another example of the use of database management systems is the booking of tickets by travellers: it gives travellers the opportunity to book in advance, and on the date of departure their records can be retrieved with ease and in less time.
Finally, globalization can be attributed not only to the development of database management systems but also to the development of new technologies within organizations in every country. The technological transformation witnessed alongside alterations in the trading environment has led to a reconsideration of fundamental archival assumptions, thought and methods. The use of spreadsheets for storing and retrieving information suffers from deficiencies such as long retrieval times and limited storage space; a DBMS is a more efficient way of keeping such information because it captures nearly all trading dealings, safeguards complete records and fully documents proceedings or records within the organization.
Chapter seven
INFORMATION SYSTEMS
Defining a system
A system is a set of interacting or interdependent components forming an integrated whole, or a set of elements (often called 'components') whose relationships with one another differ from their relationships to elements outside the set. In a system the different components are connected with each other and are interdependent. Every system is delineated by its spatial and temporal boundaries, surrounded and influenced by its environment, described by its structure and purpose, and expressed in its functioning.

The term system may also refer to a set of rules that governs structure and/or behavior. The term
institution is used to describe the set of rules that govern structure and/or behavior.

INFORMATION SYSTEM
Data consists of the raw facts representing events occurring in the organization before they are
organized into an understandable and useful form for humans.
Information: the word comes from the Latin word informare, which means "to build from" or "to give structure".
Hence information is trends, patterns, tendencies (measurement of central tendency) that users need in
order to perform their jobs.
An Information System can be defined technically as a set of interrelated components that collect (or
retrieve), process, store and distribute information to support decision making and control in an
organization.

Information systems should not be confused with information technology. They exist independent of
each other and irrespective of whether they are implemented well. Information systems use computers
(or Information Technology) as tools for the storing and rapid processing of information leading to
analysis, decision-making and better coordination and control. Hence information technology forms the
basis of modern information systems.
The principal tasks of information systems specialists involve modifying the applications for their
employer’s needs and integrating the applications to create a coherent systems architecture for the firm.
Generally, only smaller applications are developed internally. Certain applications of a more personal
nature may be developed by the end users themselves.

Information systems functions


The information systems function in the organization is composed of three distinct entities.
 The first is a formal organizational unit or function called an information systems department.
 The second consists of information systems specialists such as programmers, systems analysts,
project leaders, and information systems managers. Also, external specialists such as hardware
vendors and manufacturers, software firms and consultants frequently participate in the day-to-
day operations and long term planning of information systems.
 A third element of the information systems package is the technology itself, both hardware and
software.

TYPES OF INFORMATION SYSTEM
In the early days of computing, each time an information system was needed it was 'tailor made' - built
as a one-off solution for a particular problem. However, it soon became apparent that many of the
problems information systems set out to solve shared certain characteristics. Consequently, people
attempted to build a single system that would solve a whole range of similar problems. However,
they soon realized that in order to do this, it was first necessary to be able to define how and where the
information system would be used and why it was needed. It was then that the search for a way to
classify information systems accurately began.

Depending on how you create your classification, you can find almost any number of different types of
information system. However, it is important to remember that different kinds of systems found in
organizations exist to deal with the particular problems and tasks that are found in organizations.

Examples
 Management Information Systems
 Geographical Information Systems etc

Management Information Systems


Since most organizations are hierarchical in their management, the way in which the different classes of information systems are categorized tends to follow the hierarchy. This is often described as "the pyramid model" because the way in which the systems are arranged mirrors the nature of the tasks found at various different levels in the organization.

Thus, while there are several different versions of the pyramid model, the most common is probably a
four level model based on the people who use the systems. Basing the classification on the people who
use the information system means that many of the other characteristics such as the nature of the task
and informational requirements, are taken into account more or less automatically.

Transaction Processing Systems

Transaction Processing Systems are operational-level systems at the bottom of the pyramid. They are usually operated directly by shop-floor workers or front-line staff, and they provide the key data required to support the management of operations. This data is usually obtained through the automated or semi-automated tracking of low-level activities and basic transactions.

Management Information Systems

However, within our pyramid model, Management Information Systems are management-level systems
that are used by middle managers to help ensure the smooth running of the organization in the short to
medium term.

Decision Support Systems

A Decision Support System can be seen as a knowledge-based system, used by senior managers, which facilitates the creation of knowledge and allows its integration into the organization. These systems are
often used to analyze existing structured information and allow managers to project the potential effects
of their decisions into the future.

Executive Information Systems
Executive Information Systems are strategic-level information systems that are found at the top of the
Pyramid. They help executives and senior managers analyze the environment in which the organization
operates, to identify long-term trends, and to plan appropriate courses of action. The information in such
systems is often weakly structured and comes from both internal and external sources. Executive Information Systems are designed to be operated directly by executives without the need for intermediaries, and to be easily tailored to the preferences of the individual using them.
An information system, in this sense, is a set of systems which helps management at different levels to take better decisions by providing managers with the necessary information.

Managers are the key people in an organization who ultimately determine the destiny of the
organization. They set the agenda and goals of the organization, plan for achieving the goals, implement
those plans and monitor the situation regularly to ensure that deviations from the laid-down plan are controlled.
The managers decide on all such issues that have relevance to the goals and objectives of the
organization. The decisions range from routine decisions taken regularly to strategic decisions, which
are sometimes taken once in the lifetime of an organization.
The decisions differ in the following degrees,
 Complexity
 Information requirement for taking the decision
 Relevance
 Effect on the organization
 Degree of structured behavior of the decision-making process.
The different types of decisions require different type of information as without information one cannot
decide.

Geographic information systems


A geographic information system (GIS) is a computer system for capturing, storing, checking, and
displaying data related to positions on Earth's surface. By relating seemingly unrelated data, GIS can
help individuals and organizations better understand spatial patterns and relationships. GIS can show
many different kinds of data on one map, such as streets, buildings, and vegetation.

GIS helps users understand patterns, relationships, and geographic context. The benefits include
improved communication and efficiency as well as better management and decision making.
Geographic Information Systems (GIS) are fundamental tools for learning geography. A GIS is a system (hardware plus a database engine) that is designed to efficiently assemble, store, update, analyze, manipulate, and display geographically referenced information (data identified by their locations). GIS therefore encourages people to think spatially, or geographically. GIS is much more than a container of maps in digital form; it is a spatial decision support system.
A working GIS integrates five key components: hardware, software, data, people, and methods. General-purpose GIS software performs six major tasks: input, manipulation, management, query, analysis, and visualization.

It was Dr. Roger F. Tomlinson who first coined the term geographic information system (GIS). He
created the first computerized geographic information system in the 1960s while working for the
Canadian government—a geographic database still used today by municipalities across Canada for land
planning.
Using GIS to solve problems
GIS works as a tool to help frame an organizational problem. It can help organizations perform various analyses on acquired data and share results tailored to different audiences through maps, reports, charts, and tables, delivered in printed or digital format.

Types of Data in GIS


Vector data
Vector data is used to represent real world features in a GIS. A vector feature can have a geometry type
of point, line or a polygon. Each vector feature has attribute data that describes it. Feature geometry is
described in terms of vertices. Point geometries are made up of a single vertex (X,Y and optionally Z).
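
A minimal sketch of a vector feature as a plain Python structure is shown below: geometry (vertices) plus attribute data. The coordinates and attribute values are invented.

    # A point feature: one vertex (X, Y) plus descriptive attributes.
    school = {
        "geometry": {"type": "Point", "coordinates": (36.82, -1.29)},
        "attributes": {"name": "Riverside Primary", "students": 420},
    }

    print(school["attributes"]["name"], school["geometry"]["coordinates"])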

Raster data

In its simplest form, a raster consists of a matrix of cells (or pixels) organized into rows and columns (or
a grid) where each cell contains a value representing information, such as temperature. Rasters are
digital aerial photographs, imagery from satellites, digital pictures, or even scanned maps.

Characteristics of a Good Information System
An effective IS assembles data available from company operations, external inputs and past activities
into information that shows what the company has achieved in key areas of interest, and what is required
for further progress.
The most important characteristics of an IS are those that give decision-makers confidence that their
actions will have the desired consequences.
Relevance
The information a manager receives from an IS has to relate to the decisions the manager has to make.
An effective MIS takes data that originates in the areas of activity that concern the manager at any given
time, and organizes it into forms that are meaningful for making decisions.
If a manager has to make pricing decisions, for example, an MIS may take sales data from the past five
years, and display sales volume and profit projections for various pricing scenarios.

Accuracy
A key measure of the effectiveness of an IS is the accuracy and reliability of its information. The
accuracy of the data it uses and the calculations it applies generally determine the effectiveness of the
resulting information. However, not all data needs to be equally accurate.

Usefulness
The information a manager receives from an IS may be relevant and accurate, but it is only useful if it
helps him with the particular decisions he has to make. The MIS has to make useful information easily
accessible.

Timeliness
MIS output must be current. Management has to make decisions about the future of the organization
based on data from the present, even when evaluating trends. The more recent the data, the more these
decisions will reflect present reality and correctly anticipate their effects on the company. When the
collection and processing of data delays its availability, the MIS must take into consideration its
potential inaccuracies due to age and present the resulting information accordingly, with possible ranges
of error. Data that is evaluated in a very short time frame can be considered real-time information.
Completeness
An effective MIS presents all the most relevant and useful information for a particular decision. If some
information is not available due to missing data, it highlights the gaps and either displays possible
scenarios or presents possible consequences resulting from the missing data.

Components of information systems

The computer age introduced a new element to businesses, universities, and a multitude of other
organizations: a set of components called the information system, which deals with collecting and
organizing data and information. An information system is described as having five components.

Computer hardware
This is the physical technology that works with information. Hardware can be as small as a smartphone
that fits in a pocket or as large as a supercomputer that fills a building. Hardware also includes the
peripheral devices that work with computers, such as keyboards, external disk drives, and routers. With
the rise of the Internet of things, in which anything from home appliances to cars to clothes will be able
to receive and transmit data, sensors that interact with computers are permeating the human
environment.

Software
The hardware needs to know what to do, and that is the role of software. Software can be divided into
two types: system software and application software. The primary piece of system software is the
operating system, such as Windows or iOS, which manages the hardware’s operation. Application
software is designed for specific tasks, such as handling a spreadsheet, creating a document, or
designing a Web page.

Data Communication Network


This component connects the hardware together to form a network. Connections can be through wires,
such as Ethernet cables or fibre optics, or wireless, such as through Wi-Fi. A network can be designed to
tie together computers in a specific area, such as an office or a school, through a local area network
(LAN). If computers are more dispersed, the network is called a wide area network (WAN). The Internet
itself can be considered a network of networks.

Databases /Data
This component is where the “material” that the other components work with resides. A database is a
place where data is collected and from which it can be retrieved by querying it using one or more
specific criteria. A data warehouse contains all of the data in whatever form that an organization needs.
Databases and data warehouses have assumed even greater importance in information systems with the
emergence of “big data,” a term for the truly massive amounts of data that can be collected and
analyzed.

Human resources and procedures


The final, and possibly most important, component of information systems is the human element: the people who are needed to run the system and the procedures they follow so that the knowledge in the huge databases and data warehouses can be turned into learning that can interpret what has happened in the past and guide future action.

Application of Information Systems


Information Systems is the study of what technology/tools can best meet the information management needs of a particular organization. These tools can be human, process-based or technological.

Electronic Commerce and Electronic Data Interchange


Electronic Commerce(EC)
Electronic commerce may be defined as the entire set of processes that support commercial activities on a network and involve information analysis. These activities include product information and display, events, services, providers, consumers, advertisers, support for transactions, brokering systems for a variety of services and actions (e.g., finding certain products or finding cheaply priced products), security of transactions, user authentication, etc.
Electronic commerce has experienced an explosion due to the convergence of technology developments, the merging of the telecommunications and computing industries, and the business climate.
Electronic commerce involves issues (both technical and non-technical) that are multi-disciplinary in nature. The objectives of e-commerce are several: primarily, they involve increasing the speed and efficiency of business transactions and processes, and improving customer service.

Electronic data interchange
Electronic Data Interchange or EDI is the electronic exchange of business documents from one
company’s computer to another’s computer in nationally standard structured data formats.
Using EDI to exchange business documents eliminates the rekeying of the data, resulting in more
accurate data. In EDI, data necessary for conducting business is transmitted directly into a system
without human intervention. We can transmit and receive EDI documents with companies such as
banks, railroads, customers and suppliers. The companies with which EDI documents are sent and received are referred to as trading partners.
For many years business data has been exchanged electronically on tapes, disks, diskettes and over direct computer-to-computer hookups. Companies realized that replacing the paper used in business transactions with electronic communication saved both money and time.
Customers and suppliers were interested in sending and receiving electronic documents; however, each company had different document needs and different computers and communication media. The effort to maintain different document formats and different communication hookups became time-consuming and costly. Electronic transactions gradually became standardized, although different standards for different industries still exist.

Application systems and information systems


Application systems and information systems differ in purpose and functionality. Application systems directly provide users with specific information content to meet their needs, while information systems are designed to collect, transmit, process, store, and provide information to users.

Not all software is information system software, but an information system is always implemented as software.
For example, a company writing BIOS software or anti-virus software is not dealing with any information system. Information systems let you access data, for example through your company's employee payroll system, employee master file, etc.
