Introduction To Information Technology 21 January 2025
JANUARY 2024
REFERENCE MATERIAL COLLECTIONS
[email protected]
Information and Communications Technology
Chapter one
COMPUTER SYSTEM
A computer system is an electronic machine which accepts data and applies a prescribed set of instructions to produce desired results.
[Figure: Data → computing function → Results]
The electronic machine is referred to as computer hardware, and the set of instructions is referred to as a computer program.
The processing performed on the data is determined by a program; the machine is provided with an appropriate program in order to perform a specified computation, and by changing the program the same machine can perform a wide variety of computations.
Any computation can be expressed as a sequence of instructions selected from a small set (i.e. a
program). The machine executes the instructions, using the provided data, to generate the results.
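As a simple illustration of this idea (a made-up sketch in Python, not part of the original material), the short program below plays the role of the "set of instructions": it takes data as input and applies a prescribed sequence of steps to produce results.

    # A minimal illustration of "data + program = results".
    # The data values and the steps applied to them are arbitrary examples.
    data = [12, 7, 25, 3]              # input data provided to the machine

    # The "program": a prescribed sequence of instructions.
    total = 0
    for value in data:                 # examine each data item in turn
        total = total + value          # accumulate the sum

    average = total / len(data)        # derive a second result from the first

    print("Results:", total, average)  # the desired results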
There are two categories of machines:
- Programmable Machine
- Hardware programmed machine
Programmable Machine
Programmable refers to the ability of a device or system to be programmed or customized to perform
specific tasks or functions. It allows you to write and execute instructions or code to control the behavior
and functionality of the device, making it adaptable and flexible.
"Programmable devices" refer to electronic devices or systems that can be configured or instructed to
perform specific tasks by using a set of instructions known as a program or code.
System
The term system is derived from the Greek word systema, which means an organized relationship among functioning units or components. A system exists because it is designed to achieve one or more objectives.
Therefore, a system is a set of interacting or interdependent components forming an integrated whole, or a set of elements (often called 'components') and relationships between them that are distinct from the relationships of the set or its elements to other elements or sets. In a system the different components are connected with each other and they are interdependent.
A component is either an irreducible part or an aggregate of parts, also called a subsystem. The simple concept of a component is very powerful.
Interdependent components may refer to physical parts or managerial steps known as subsystems of a system. Most systems share common characteristics, including:
- A system has structure: it contains parts (or components) that are directly or indirectly related to each other;
- A system has interconnectivity: the parts and processes are connected by structural and/or behavioral relationships;
- A system has behavior: it exhibits processes that fulfill its function or purpose;
- The term system may also refer to a set of rules that governs structure and/or behavior.
The systems approach is an organized way of dealing with a problem.
The system takes input from outside, processes it, and sends the resulting output back to its
environment. The arrows in the figure show this interaction between the system and the world outside of
it.
The concept of a system is shown below:
For example, just as with an automobile or a stereo system, with proper design, we can repair or
upgrade the system by changing individual components without having to make changes throughout the
entire system.
The components are interrelated; that is, the function of one is somehow tied to the functions of the
others.
A system has a boundary, within which all of its components are contained and which establishes the
limits of a system, separating it from other systems.
Environment: The environment is the 'supersystem' within which an organization operates. It excludes
input, processes and outputs. It is the source of external elements that impinge on the system. All
systems have a boundary and operate within an environment.
EVOLUTION OF COMPUTERS
First Generation of Computer (1937 – 1946):
In 1943 an electronic computer named the Colossus was built for the military. Other developments continued until in 1946 the first general-purpose digital computer, the Electronic Numerical Integrator and Calculator (ENIAC), was built.
Computers of this generation could only perform a single task, and they had no operating system.
Characteristics:
i. These computers were as large as the size of a room.
ii. They used vacuum tubes to perform calculations.
iii. They used internally stored instructions called a program.
iv. They used capacitors to store binary data and information.
v. They used punched cards for input and output of data and information.
vi. They had about one thousand (1,000) circuits per cubic foot.
vii. These computers generated a lot of heat.
Second Generation of Computer:
Characteristics:
i. The computers were still large, but smaller than the first generation of computers.
ii. They used transistors in place of vacuum tubes to perform calculations.
iii. They were produced at a reduced cost compared to the first generation of computers.
iv. They used magnetic tapes for data storage.
v. They used punched cards for input and output of data and information. The use of the keyboard as an input device was also introduced.
vi. These computers still generated a lot of heat, so that air conditioning was needed to maintain a cool temperature.
vii. They had about one thousand circuits per cubic foot.
Third Generation of Computer:
Characteristics:
i. They used integrated circuits, which were used for both data processing and storage. They had about one hundred thousand circuits per cubic foot.
ii. Computers were miniaturized, that is, reduced in size compared to the previous generation.
iii. The keyboard and mouse were used for input, while the monitor was used as the output device.
Examples:
i. IBM System/360
ii. UNIVAC 9000 series.
Fourth Generation of Computer:
Characteristics:
i. They possess a microprocessor, which performs all the tasks of the computer systems in use today.
ii. The size and cost of computers were reduced.
iii. The speed of computers increased.
iv. Very-large-scale integration (VLSI) circuits were used.
v. They have millions of circuits per cubic foot.
Examples:
i. IBM system 3090, IBM RISC6000, IBM RT.
ii. HP 9000.
iii. Apple Computers.
Fifth Generation of Computer:
Characteristics:
i. They consist of extremely large-scale integration.
ii. Parallel processing.
iii. They possess high-speed logic and memory chips.
iv. High performance and micro-miniaturization.
v. The ability of computers to mimic human intelligence, e.g. voice recognition, facial recognition, thumbprint recognition.
vi. Satellite links, virtual reality.
vii. They have billions of circuits per cubic foot.
Examples:
i. Super computers
ii. Robots
iii. Facial recognition systems
iv. Thumbprint (fingerprint) readers
Types of Computers
There are three distinct categories of computing device available to us today: analog, digital and hybrid. Analog and digital computers operate on quite different principles, and hybrid computers combine features of both. Hence there are three categories of computers:
Analog computer system
Digital computer system
Hybrid computer system
Analog computers are used to process analog data. Analog data is continuous in nature rather than discrete or separate. Such data includes temperature, pressure, speed, weight, voltage, depth, etc. These quantities are continuous and can take an infinite variety of values.
Analog computers were the first computers to be developed, and they provided the basis for the development of the modern digital computers.
Analog computers do not require any storage capability because they measure and compare quantities in a single operation. Output from an analog computer is generally in the form of readings on a series of dials (like the speedometer of a car) or a graph on a strip chart.
Analog computers are widely used for certain specialized engineering and scientific applications, for calculation and measurement of analog quantities. They are frequently used to control processes such as those found in an oil refinery, where flow and temperature measurements are important.
The first electronic analog computers were developed in the USA, and they were initially used in missiles, aircraft design and flight control.
Analog computers are in use for some specific applications, like the flight computer in aircraft, ships,
submarines, and some appliances in our daily life such as refrigerator, speedometer, etc.
Examples
The examples of an analog computer are astrolabe, oscilloscope, autopilot, telephone lines, speedometer,
etc.
Digital computer system
The digital computer is a digital system that performs various computational tasks. The
word digital implies that the information in the computer is represented by variables that take a limited
number of discrete values. These values are processed internally by components that can maintain a
limited number of discrete states.
Digital computers use the binary number system, which has two digits: 0 and 1. A binary digit is called
a bit. Information is represented in digital computers in groups of bits. By using various coding
techniques, groups of bits can be made to represent not only binary numbers but also other discrete
symbols, such as decimal digits or letters of the alphabet.
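For example, the short Python snippet below (an illustrative sketch) shows how the same group of bits can stand for a binary number or for a letter of the alphabet, depending on the coding technique used.

    # Groups of bits can represent numbers or other discrete symbols such as letters.
    n = 65
    print(format(n, "08b"))    # the decimal number 65 as an 8-bit pattern: 01000001
    print(chr(n))              # the same value interpreted as a character code: 'A'
    print(ord("A"))            # and back again: the letter 'A' maps to 65
    print(int("01000001", 2))  # converting a string of bits to its decimal value: 65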
2. Mini Computer
Minicomputers are most popular for their quality of multiprocessing. It means that mini computers can
use multiple processors connected to a common memory and the same peripheral devices to perform
different tasks simultaneously. Minicomputers have greater complexity than microcomputers. These are
also known as mid-range computers and are generally used for applications like engineering
computations, data handling, etc.
3. Mainframe Computer
Mainframe computers were first introduced during the early 1930s and were properly functional in the
year 1943. They are known for their exceptional data handling capacity and reliability. This quality is
used in applications where bulk data handling and security are of main concern. For example, in banks,
during the census, etc. Harvard Mark 1 was the first mainframe computer.
Examples
Automated Teller Machine is a perfect example of mainframe computers. Data handling capacity and
reliability are the two most notable traits of mainframe computers. This is why they are employed in
applications where security and flawless operation are of prime concern.
4. Super Computer
Supercomputers are basically used in scientific and research-related applications. They require an entire
room for their set up and operation. Supercomputers employ thousands of processors that work together
to perform trillions of calculations per second. Therefore, they require external cooling pipes to manage
the generated heat. They are fast, accurate, and secure. Some of the prominent organizations that make
use of supercomputers are the National Nuclear Security Administration, NASA, ISRO, etc.
Hybrid computer system
Ultrasound Machine
Ultrasound consists of high-frequency sound waves that a normal human cannot hear directly. These sound waves are very useful in medical technology. An ultrasound machine consists of a transducer probe that transmits the high-frequency sound; this sound travels freely until it strikes an obstacle. When the ultrasound strikes the obstacle, it is reflected back. The reflected signal is picked up by the same transducer probe and is sent for processing to generate the output in image form. Ultrasound devices are neither completely analogue nor completely digital. Therefore, an ultrasound machine can be placed under the category of hybrid computers.
Electrocardiogram Machine
An electrocardiogram or ECG machine is designed to measure heart activity. It makes use of 12-13 sensors that pick up the body's signals and translate them into digital data. This digital data is then processed by the controller, and the output is generated, usually in the form of an electrocardiograph. Body signals are analogue in nature, and the output is generated in both analogue and digital form. Therefore, the ECG machine is an example of a hybrid computer.
Computers in hospitals
Computers play a crucial role in the health and medical sectors. In health care, they enable hospitals to maintain records of millions of patients and to contact them about treatment, appointments, medicine updates, or disease updates.
2. Monitor patients
Computers in hospitals are also used to monitor blood pressure, heart rate, and other critical medical
equipment. The computer monitoring system also collects useful data from patients. This data can be
accessed for future reference or can be further used for studies.
4. Inventory
For a patient's treatment, it is important to know what medicines are in stock. It is crucial to keep the inventory list up to date, because if a doctor prescribes a medicine that is out of stock, the patient's recovery can be slowed down. Inventory management is therefore very important for health clinics and hospitals, and computers enable inventory managers to monitor stock levels.
Chapter two
COMPUTER HARDWARE
This is the physical computer or touchable parts of the computer system. The study of a computer
hardware system covers:
functions,
components and their interconnections
Computer hardware is the physical part of a computer, including its digital circuitry, as distinguished from the computer software that executes within the hardware. The hardware of a computer is infrequently changed, in comparison with software and data, which are "soft" in the sense that they are readily created, modified or erased on the computer.
The main components of computer hardware Include:
Input unit
Processing unit
Output unit
Storage unit
INPUT UNIT
The main function of input hardware is to capture raw data and convert it into computer usable form as
quickly and efficiently as possible.
Keyboard Entry
A computer keyboard is an electromechanical component designed to create special standardized electronic codes when a key is pressed. The codes are transmitted along the cable that connects the keyboard to the computer system.
Direct Entry
Involves non-keyboard input devices. These data capture systems minimize the amount of human activity required to get data into a computer system. The most common ones include card readers, scanning devices, pointing devices, voice input devices, etc.
Card readers
The technology has evolved from the use of punched cards to smart cards. Smart cards are designed to be carried like credit cards. The card is swiped through a special card-reading point-of-sale terminal, and the user then enters a password on the keypad.
Scanning Device
Scanning devices are designed to read data from source documents into computer-usable form. The devices use a light-sensitive mechanism to read data. Examples include barcode readers, optical mark readers, magnetic character readers, fax machines, etc.
Pointing devices
These devices allow the user to identify and select the necessary command or option by moving the cursor to a certain location on the screen or tablet. They are used in menu-driven software. These devices include light pens, the mouse, touch screens, digitizers, etc.
PROCESSING UNIT
It is referred to as the central processing unit (CPU) or simply the microprocessor. A microprocessor is a tiny, enormously powerful, high-speed electronic brain etched on a single silicon semiconductor chip, which contains the basic logic, storage and arithmetic functions of a computer.
It receives and decodes instructions from input devices like keyboards and disks, then sends them over a bus system consisting of microscopic etched conductive "wiring" to be processed by its arithmetic and logic unit. The results are temporarily stored in memory cells and then released.
In 1970, Intel Corporation introduced the first dynamic RAM, which increased IC memory by a factor of four. These early products identified Intel as an innovative young company. However, their next product, the microprocessor, was even more successful, setting in motion an engineering feat that dramatically altered the course of electronics.
The hardware is a programmable machine consisting of a general-purpose arithmetic and logic unit, able to execute a small set of simple/basic functions selectable by control signals, and an instruction interpreter, which reads instruction codes one at a time and generates an appropriate sequence of control signals.
As computer technology evolved from one generation to the next, the size of this unit became smaller and smaller while its processing capacity increased tremendously.
Every time the system became smaller, cheaper and more powerful, new groups of users presented themselves and new areas of application emerged.
Future of microprocessor
The future looks bright for the microprocessor: technology is constantly improving every year, and we will soon see more miniature devices that are powerful, energy efficient and will surely blow your mind. On the other hand, microprocessor hardware improvements are becoming more and more difficult to accomplish as, even Gordon Moore believes, the exponential upward curve in microprocessor hardware advancements "can't continue forever." Because the future winners are far from clear today, it is way too early to predict whether some form o
Each computer's CPU can have different cycles based on different instruction sets, but will be similar to the following cycle:
1. Fetch: the instruction whose address is held in the program counter is read from main memory into the instruction register, and the program counter is incremented.
2. Decode: the control unit examines the instruction to determine the operation to be performed and the operands involved.
3. In the case of a memory instruction (direct or indirect), the execution phase will be in the next clock pulse. If the instruction has an indirect address, the effective address is read from main memory, and any required data is fetched from main memory to be processed and then placed into data registers. If the instruction is direct, nothing is done at this clock pulse. If it is an I/O instruction or a register instruction, the operation is performed (executed) at this clock pulse.
The result generated by the operation is stored in main memory or sent to an output device. Based on the condition of any feedback from the ALU, the program counter may be updated to a different address, from which the next instruction will be fetched.
The cycle is then repeated.
Note in the diagram that the CU directs the CPU through a sequence of different states. The speed with
which the CPU cycles from state to state is governed by a device called the system clock. Clock speed is
often measured in gigahertz (GHz) where a gigahertz is one billion cycles per second. Thus, a 2.9 GHz
processor could execute 2.9 billion cycles in one second.
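The Python sketch below imitates this cycle on a deliberately simplified, imaginary machine (the instruction set and memory contents are invented for illustration and do not describe any real CPU): an instruction is fetched from the location held in the program counter, decoded, and executed, and the cycle repeats until a HALT instruction is reached.

    # A toy fetch-decode-execute loop for an imaginary accumulator machine.
    memory = [
        ("LOAD", 5),     # put the value 5 into the accumulator
        ("ADD", 3),      # add 3 to the accumulator
        ("ADD", 10),     # add 10 to the accumulator
        ("HALT", None),  # stop the machine
    ]

    accumulator = 0
    program_counter = 0

    while True:
        opcode, operand = memory[program_counter]   # fetch the next instruction
        program_counter += 1                        # advance to the following one

        if opcode == "LOAD":                        # decode and execute
            accumulator = operand
        elif opcode == "ADD":
            accumulator += operand
        elif opcode == "HALT":
            break

    print("Result in accumulator:", accumulator)    # prints 18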
Arithmetic Logic Unit
The CPU utilizes two types of memory, RAM and ROM. RAM, or random access memory, stores both
the instructions and data that the CPU acts on to accomplish some task using the operating cycle just
described. RAM is dynamic, meaning that as the task progresses, the contents of RAM change. This
allows RAM to store any results that the task might produce. RAM is volatile, meaning that its contents
are lost if power to the computer is interrupted or turned off.
On the other hand, ROM is neither dynamic nor volatile. Its contents are fixed during the manufacturing
process for the computer and can't be changed during processing. In addition, its contents are not lost
when the power to the computer is interrupted or turned off. Although ROM is not directly involved in
the normal CPU operating cycle described above, it does serve a very important function in that it
contains all the information necessary to start or restart the computer. Starting the computer when the
power is off is called booting the computer. Restarting the computer when it already has power turned
on is called rebooting the computer.
Information is moved between the CPU and memory across a connection called a bus. The amount of
information that can be moved to or from memory at one time is referred to as the bus width.
OUTPUT UNIT
Converts digital signals to human-readable form. These devices fall into two main categories:
Hard copy output.
Soft copy output.
Hard copy
Refers to information that has been recorded on a tangible medium such as paper or microfilm. The
principal hard copy output devices include:
Printer
Microfilm recorders
Graphic plotters.
Printer
Is a device capable of producing characters, symbols and graphics on paper.
There are many ways of classifying printers; the most common is based on the mode of printing.
Impact printer
Forms print images by pressing an inked ribbon against the paper with a hammer-like mechanism. It prints a single-color image, the color of the ribbon.
Examples include: dot matrix printers, line printers, etc.
Non-impact printer
Prints without striking the paper, for example by spraying ink onto it. The ink is contained in a container called a cartridge. There are different sizes and designs of cartridges for different types of printers. The cartridges are either black or color.
Examples include inkjet printers, laser printers, etc.
Microform recorder
Reduces output to between 24 and 48 times smaller and then records it onto a microform.
Microforms are photographically reduced documents on film.
A typical 16mm roll will hold the equivalent of 3,000 A4 pages. One typical microfiche will hold
equivalent of about 98 A4 pages.
Graphic plotters
Are used in printing maps and structural and architectural designs.
Softcopy
Refers to the output displayed on a computer screen.
The principal softcopy output devices are screens and voice output systems.
Computer screen
Is commonly known as a visual display unit (VDU) or monitor.
They fall into two categories.
- Monochrome monitor
- Color monitor.
Voice output
The most common are small loudspeakers commonly fitted in desktop computers.
BACKING STORAGE
Is also known as secondary or auxiliary storage. It holds data and instructions until they are deleted.
The contents of backing storage are not directly accessed by the CPU; they must be copied to main memory first.
Two main components constitute a secondary storage unit.
Storage medium
Read / write device
Magnetic tape
A narrow plastic strip coated with magnetic material, just like the tape in a tape recorder; the data is written to or read from the tape sequentially as it passes the magnetic heads.
Uses
Magnetic tapes are often used to make a copy of hard discs for back-up reasons. This is automatically
done overnight on the KLB network and the tapes are kept in a safe place away from the server.
Advantages
Magnetic Disk
The main category of magnetic disk is the hard disk.
Hard discs
It is the principal storage of a computer system and is always assembled in the system unit. Data is
stored by magnetizing the surface of flat, circular plates called platters which have a surface that can be
magnetized. They constantly rotate at very high speed. A read/write head floats on a cushion of air a
fraction of a millimeter above the surface of the disc. The drive is inside a sealed unit because even a
speck of dust could cause the heads to crash.
Programs and data are held on the disc in blocks formed by tracks and sectors. These are created when the hard disc is first formatted, and this must take place before the disc can be used. Discs are usually supplied pre-formatted.
For a drive to read data from a disc, the read/write head must move in or out to align with the correct
track (the time to do this is called the seek time). Then it must wait until the correct sector approaches
the head.
Uses
The hard disc is usually the main backing storage medium for a typical computer or server. It is used to store:
- The operating system (e.g. Microsoft® Windows)
- Applications software (e.g. word-processor, database, spreadsheet, etc.)
Files such as documents, music, video etc.
A typical home/school microcomputer would have a disc capacity of up to 80 gigabytes.
Advantages
Very fast access to data. Data can be read directly from any part of the hard disc (random access). The
access speed is about 1000 KB per second.
Disadvantages
None really! However, few home users back up the data on their home computer hard drive, so it can be a real disaster when the drive eventually fails.
Optical discs
The data is read by a laser beam reflecting or not reflecting from the disc surface.
Like a floppy disc, a CD-ROM only starts spinning when requested and it has to spin up to the correct
speed each time it is accessed. It is much faster to access than a floppy but it is currently slower than a
hard disc.
Uses
Most software programs are now sold on CD-ROM.
Advantages
CD-ROMs hold large quantities of data (650 MB).
They are relatively tough, as long as the surface does not get too scratched.
Chapter Three
SOFTWARE
Software is a set of logically related programs which controls the computer hardware.
A program is a group of logically related instructions which directs a computer on how to perform very specific processing tasks.
The instructions tell the computer what to do step by step, and the sequence of instructions determines the order in which they are executed.
SYSTEM SOFTWARE
System software is the software that helps the computer system to function and perform all its tasks. It is a set of programs designed to enable the computer to manage its own resources (i.e. hardware and application software). It includes the operating system, which manages the hardware and software resources of the system, as well as the various utility programs that help to maintain and optimize the system.
System software jobs typically involve working with these different components to ensure they function
correctly and efficiently. This can include troubleshooting and resolving issues and developing new
features and enhancements.
There are several different types of system software that we will look at in more detail very shortly:
Operating Systems
Utility programs
Library programs are a compiled collection of subroutines
Translator software (Compiler, Assembler, Interpreter)
Operating system
a collection of programs that make the computer hardware conveniently available to the user and also
hide the complexities of the computer's operation. The Operating System (such as Windows 7 or Linux)
interprets commands issued by application software (e.g. word processor and spreadsheets). The
Operating System is also an interface between the application software and computer. Without the
operating system, the application programs would be unable to communicate with the computer.
Since system software runs at the most basic level of your computer, it is called "low-level" software. It
generates the user interface and allows the operating system to interact with the hardware. Fortunately,
you don't have to worry about what the system software is doing since it just runs in the background. It's
nice to think you are working at a "high-level" anyway
User Interface.
The operating system provides an interface between the user and the computer system.
Job management.
It controls the running of programs: which one gets executed first, and which next.
In a small computer, the operating system responds to interactive commands from the user and loads the requested application program into memory for execution. In larger computers, programs are run in shifts.
Task management
Controls the simultaneous execution of programs; this is referred to as multitasking. Operating systems of this category have the ability to prioritize programs so that one job gets done ahead of another.
In order to provide users at terminals with the fastest response time, batch programs can be put on the lowest priority and interactive programs can be given the highest priority.
Multitasking is accomplished by executing instructions for one function while data is coming into or going out of the computer for another. Large computers are designed to overlap these operations, and data can move simultaneously in and out of the computer through separate channels, with the operating system governing these actions.
Data Management
The operating system keeps track of data on the disk; the application program does not need to know where the data is stored. When a program is ready to accept data, it signals the operating system, which finds the data and delivers it to the program. Conversely, when the program is ready to output results, the operating system transfers the data from the program onto the available space on disk.
Device Management
The operating system controls input to and output from the peripheral devices.
The operating system is responsible for providing control and management of all devices, not just disk drives. When a new type of peripheral is added to the computer, the operating system is updated with a new driver for that device. The driver contains the specific instructions necessary to run the device. The operating system calls the drivers for input and output, and the drivers talk to the hardware.
Single user
Supports one user at a time on a PC.
This category can further be classified into:
- Single user single tasking.
- Single user multitasking.
Utility programs
Are small, powerful programs with a limited capability; they are usually operated by the user to maintain the smooth running of the computer system. Typical uses include file management, diagnosing problems and finding out information about the computer. Notable examples of utility programs include copy, paste, delete, file searching, the disk defragmenter and disk cleanup. There are also other types that can be installed separately from the Operating System.
Library Programs
Library programs are compiled libraries of commonly-used routines. On a Windows system they usually
carry the file extension dll and are often referred to as run-time libraries. The libraries are run-time
because they are called upon by running programs when they are needed. When you program using a
run-time library, you typically add a reference to it either in your code or through the IDE in which you
are programming.
Some library programs are provided within operating systems like Windows or along with development
tools like Visual Studio. For example, it is possible to download and use a library of routines that can be
used with Windows Media Player. This includes things like making playlists, functions and procedures
for accessing and manipulating the music library (which is a binary file) and playback routines.
Using library programs saves time when programming. It also allows the programmer to interact with
proprietary software without having access to its source code.
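As a small illustration (using routines from Python's standard library rather than Windows dll files; the functions shown are just examples), the snippet below calls ready-made library routines instead of re-implementing them:

    # Reusing routines from existing libraries saves programming time.
    import math   # a library of mathematical routines
    import zlib   # a library of compression routines

    print(math.sqrt(144))                        # call a library routine: 12.0
    compressed = zlib.compress(b"hello " * 100)  # compress some repetitive data
    print(len(compressed))                       # far fewer bytes than the original 600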
Language Translators
Whatever language or type of language we use to write our programs, they need to be in machine code in order to be executed by the computer. There are three main categories of translator used:
Assembler
An assembler is a program that translates the mnemonic codes used in assembly language into the bit
patterns that represent machine operations. Assembly language has a one-to-one equivalence with machine code: each assembly statement can be converted into a single machine operation.
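The hedged Python sketch below mimics what an assembler does, using a small made-up instruction set (the mnemonics and opcodes are invented for illustration): each mnemonic is looked up in a table and replaced by the bit pattern of exactly one machine operation.

    # A toy "assembler": translate mnemonics into invented 8-bit machine instructions.
    OPCODES = {"LOAD": 0b0001, "ADD": 0b0010, "STORE": 0b0011, "HALT": 0b1111}

    source = [("LOAD", 6), ("ADD", 7), ("STORE", 8), ("HALT", 0)]  # assembly program

    machine_code = []
    for mnemonic, operand in source:
        # one assembly statement becomes exactly one machine instruction:
        # a 4-bit opcode followed by a 4-bit operand
        instruction = (OPCODES[mnemonic] << 4) | operand
        machine_code.append(format(instruction, "08b"))

    print(machine_code)  # ['00010110', '00100111', '00111000', '11110000']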
Compiler
A compiler turns the source code that you write in a high-level language into object code (machine
code) that can be executed by the computer.
The compiler is a more complex beast than the assembler. It may require several machine operations to
represent a single high-level language statement. As a result, compiling may well be a lengthy process
with very large programs.
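To see why, the sketch below (purely illustrative; the operation names are invented) shows how a single high-level statement such as result = (a + b) * c might be broken down by a compiler into several simple machine-level operations.

    # One high-level statement: result = (a + b) * c
    # A compiler could translate it into a sequence of simple operations:
    compiled = [
        "LOAD  a",       # fetch a from memory
        "LOAD  b",       # fetch b from memory
        "ADD",           # compute a + b
        "LOAD  c",       # fetch c from memory
        "MUL",           # compute (a + b) * c
        "STORE result",  # write the value back to memory
    ]
    for operation in compiled:
        print(operation)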
Interpreter
Interpreters translate the source code at run-time. The interpreter translates statements one-at-a-time as
the program is executed.
Interpreters are often used to execute high-level language programs whilst they are being developed
since this can be quicker than compiling the entire program. The program would be compiled when it is
complete and ready to be released.
Interpreters are also used with high-level scripting languages like PHP, Javascript and many more.
These instructions are not compiled and have to be interpreted either by the browser (in the case of
Javascript) or by interpreters on the server (in the case of PHP).
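The minimal Python sketch below (a toy language invented for illustration, not how PHP or a JavaScript engine actually works) captures the essential behaviour of an interpreter: each source statement is examined and executed one at a time while the program runs.

    # A toy interpreter: translate and execute one statement at a time.
    program = [
        "PRINT hello",
        "SET x 5",
        "PRINT done",
    ]

    variables = {}
    for statement in program:        # statements are handled one-at-a-time
        parts = statement.split()
        if parts[0] == "PRINT":
            print(parts[1])
        elif parts[0] == "SET":
            variables[parts[1]] = int(parts[2])

    print(variables)                 # {'x': 5}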
APPLICATION SOFTWARE
Application software is the software that allows the computer to be applied to a particular problem.
Application software would not normally be applied to the management of the resources of the computer
such as memory, processor time, and input and output. Application software is
used for a particular purpose or application such as order processing, payroll, stock control, games,
creating websites, image editing, word-processing and so on.
Software acquisition
There are many approaches to acquiring application software; the most common classification is based on the mode of acquisition. Thus there are two main categories:
Tailor-made application software
Purchased Application Software
Office Applications
Enable the user to handle office tasks, e.g. word processors, spreadsheets, database managers, etc.
Word processor – Enables the user to create, edit and produce documents. Example: Ms Word.
Spreadsheet – An electronic spreadsheet which enables the user to capture data and develop specialized reports. Example: Ms Excel.
Database Management – Allows the user to store large numbers of records and manipulate them with great flexibility to produce meaningful management reports. Example: Ms Access.
Presentation – Enables the user to make presentations. Example: Ms PowerPoint.
Software Developments
Programming Languages
Programming languages are the primary tools for creating software. The concept of language
generations, sometimes called levels, is closely connected to the advances in technology that brought
about computer generations. So far, five generations of programming languages have been defined. These range from machine-level languages (1GL) to languages necessary for AI & neural networks (5GL). A brief introduction to each of the five generations is given below:
Disadvantage
A translator is needed to translate the symbolic statements of a high-level language into computer-executable machine language.
Fourth Generation Programming Language
With each generation, programming languages have become easier to use and more like natural
languages. However, fourth-generation languages (4GLs) seem to sever connections with the prior
generation because they are basically nonprocedural. In a nonprocedural language, users define only
what they want the computer to do, without supplying all the details of how something is to be done (a short illustration of this contrast follows the lists below). These languages are closer to human languages. General characteristics of 4GL are:
- Closer to human languages
- Portable
- Database supportive
- Simple and requires less effort than 3GL
Advantage
The programming process becomes much easier because the programmer specifies what is required rather than how it is to be done.
Programs are portable.
Disadvantage
A translator is needed to translate the symbolic statements into computer-executable machine language.
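As mentioned above, the contrast between procedural and nonprocedural styles can be illustrated with a small sketch. Python is itself a third-generation language, so this is only an analogy; the data and the selection task are made-up examples.

    data = [4, 17, 8, 23, 15]

    # Procedural (3GL) style: spell out HOW to select the values greater than 10.
    selected = []
    for value in data:
        if value > 10:
            selected.append(value)

    # Nonprocedural (4GL-like) style: state only WHAT result is wanted.
    selected_declarative = [value for value in data if value > 10]

    print(selected, selected_declarative)  # both print [17, 23, 15]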
Object-Oriented Languages
In object-oriented programming, a program is no longer a series of instructions, but a collection of objects. These objects contain both data and instructions, are assigned to classes, and can perform specific tasks. With this approach, programmers can build programs from pre-existing objects and can use features from one program in another. This results in faster development time, reduced maintenance costs, and improved flexibility for future revisions. Some examples of object-oriented languages are C++, Java, and Ada.
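A short Python sketch of the idea (the classes and figures are hypothetical examples, not from the original text): each object bundles data with the instructions that act on that data, and a new class can reuse a pre-existing one.

    # An object contains both data (attributes) and instructions (methods).
    class BankAccount:
        def __init__(self, owner, balance=0):
            self.owner = owner            # data held inside the object
            self.balance = balance

        def deposit(self, amount):        # an instruction that acts on the object's data
            self.balance += amount

    class SavingsAccount(BankAccount):    # build a new class from a pre-existing one
        def add_interest(self, rate):
            self.deposit(self.balance * rate)   # reuse inherited behaviour

    account = SavingsAccount("Amina", 1000)
    account.deposit(500)
    account.add_interest(0.05)
    print(account.owner, account.balance)       # Amina 1575.0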
Fifth Generation Natural Languages
Natural language programming is a subfield of AI that deals with the ability of computers to understand
and process human language. It is an interdisciplinary field that combines linguistics, computer science,
and artificial intelligence.
Languages used for writing programs for Artificial Intelligence, Neural Network, Plasma Computing
etc.
Natural language processing (NLP) is the ability of a computer program to understand human language
as it's spoken and written -- referred to as natural language.
SOFTWARE DEVELOPMENT
Software development is the process of computer programming, documenting, testing, and bug fixing
resulting in a product. In a broader sense, software development includes everything involved from the conception of the desired software through to its final manifestation. Therefore,
software development may include research, new development, prototyping, modification, reuse, re-
engineering, maintenance, or any other activities that result in software products.
Software can be developed for a variety of purposes, the three most common being to meet specific
needs of a specific client/business (the case with custom software), to meet a perceived need of some set
of potential users (the case with commercial and open source software), or for personal use (e.g. a
scientist may write software to automate a task).
Use: Software development was in its infancy, primarily used for scientific and military purposes.
Applications:
Scientific calculations and simulations.
Military and defense systems.
Business data processing.
Applications:
Commercial data processing.
Early database management systems.
Development of operating systems.
Key Points:
Personal Computers: The advent of personal computers brought software development to a broader audience.
Graphical User Interfaces (GUI): Graphical interfaces like Windows and Macintosh OS improved user
experience.
Use: Expansion into home computing, gaming, and word processing.
Applications:
Word processing software (e.g., MS Word).
Early PC games (e.g., Pong and Pac-Man).
Development of GUI-based operating systems.
5. 2001: Apple introduced Mac OS X, combining the Unix-based architecture with user-friendly
interfaces, influencing modern operating systems.
2008: The release of the Apple App Store marked the start of the mobile app era, transforming software
development.
2009: Bitcoin, a decentralized digital currency, introduced blockchain technology, opening up new
possibilities for software applications.
Applications:
Cloud-based storage and computing (e.g., Amazon Web Services).
AI-powered virtual assistants (e.g., Siri and Alexa).
Internet of Things (IoT) applications for smart homes and cities.
Year Wise Evolution of Software Development
A software developer job involves designing, creating, testing, and maintaining software applications.
They may work in various industries, including computer science, engineering, information technology
and business.
Chapter Four
DATA COMMUNICATION
Data communication is the process of transferring data from one place to another or between two
locations. It allows electronic and digital data to move between two networks, no matter where the two
are located geographically.
The fundamental purpose of data communications is to exchange information between user's computers,
terminals and applications programs. In its simplest form data communications takes place between two
devices that are directly connected by some form of point-to-point transmission medium.
Computer network
Computer networking refers to interconnected computing devices that can exchange data and share
resources with each other. These networked devices use a system of rules, called communications
protocols, to transmit information over physical or wireless technologies.
A computer network is built from two basic building blocks: nodes (network devices) and links. The links connect two or more nodes with each other, and the way these links carry information is defined by communication protocols.
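As a small illustration (the device names are hypothetical), a network can be represented in code as a set of nodes and the links between them:

    # Nodes (network devices) and links, represented as an adjacency list.
    network = {
        "laptop":  ["switch"],
        "printer": ["switch"],
        "switch":  ["laptop", "printer", "router"],
        "router":  ["switch", "gateway"],
        "gateway": ["router"],
    }

    # List every link (each pair of directly connected nodes) exactly once.
    links = {tuple(sorted((node, neighbour)))
             for node, neighbours in network.items()
             for neighbour in neighbours}
    for link in sorted(links):
        print(" <-> ".join(link))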
Objectives of Deploying a Computer Network
a) Resource sharing
A network allows data and hardware to be accessible to every pertinent user, across departments,
geographies, and time zones.
b) Resource availability
A network ensures that resources are not present in inaccessible silos and are available from multiple points. It facilitates the sharing of network resources such as data, hardware facilities and software.
c) Performance management
When one or more processors are added to the network, it improves the system’s overall
performance and accommodates this growth.
d) Cost savings
This not only improves performance but also saves money. Since it enables employees to access
information in seconds, networks save operational time, and subsequently, costs. Centralized
network administration also means that fewer investments need to be made for IT support.
e) Increased storage capacity
Network-attached storage devices are a boon for employees who work with high volumes of data.
With businesses seeing record levels of customer data flowing into their systems, the ability to
increase storage capacity is necessary in today’s world.
f) Streamlined collaboration & communication
Networks have a major impact on the day-to-day functioning of a company. Employees can share
files, view each other’s work, sync their calendars, and exchange ideas more effectively.
g) Reduction of errors
Networks reduce errors by ensuring that all involved parties acquire information from a single
source, even if they are viewing it from different locations.
h) Secured remote access
A secure network ensures that users have a safe way of accessing and working on sensitive data,
even when they’re away from the company premises.
1. Network Devices
Network devices or nodes are computing devices that need to be linked in the network. Some network
devices include:
Computers, mobiles, and other consumer devices: These are end devices that users directly and
frequently access. For example, an email originates from the mailing application on a laptop or mobile
phone.
Terminal
Also referred to as a workstation or remote site, it is used for accessing network resources.
Terminals fall into three main categories:
Dumb terminal – used for input and retrieval only, e.g. an ATM.
Smart terminal – used for input, retrieval and limited processing.
Intelligent terminal – used for input, retrieval, limited processing and storage of data.
Servers: These are application or storage servers where the main computation and data storage occur. A server handles computer processing requests from terminals, and it stores the application programs and computer communication programs. Computer systems used as "host" computers vary considerably in terms of size and capability.
Routers: Routing is the process of selecting the network path through which the data
packets traverse. Routers are devices that forward these packets between networks to
ultimately reach the destination. They add efficiency to large networks.
A hub
Is a device into which you can connect all devices on a network so that they can talk to each other. Hubs can add delays to your network.
Switch
Is used to connect all devices on a home network so that they can talk to each other; switches regulate traffic, providing more efficient traffic flow.
A bridge
Is used to allow traffic from one network segment to another. When network segments are combined into a single large network, paths exist between the individual network segments. These paths are called routes, and devices like routers and bridges keep tables which define how to get to a particular computer on the network.
Repeaters and hubs: Repeaters are electronic devices that receive network signals and clean or strengthen them. Hubs are repeaters with multiple ports. A switch is a multi-port bridge; multiple data cables can be plugged into a switch to enable communication with multiple network devices.
Gateways: Gateways are hardware devices that act as ‘gates’ between two distinct
networks. They can be firewalls, routers, or servers.
Modem
Modem accepts digital signals and converts them to analog and vice versa.
To allow digital data to be transmitted over telephone lines, it is converted (i.e. modulated) to analog signals and transmitted; upon reaching the destination, the analog signals are demodulated back into digital signals.
2. Communication Links
Is a facility by which data is transmitted between locations in a computer network. The channel may be one or a combination of the transmission media below.
Wired
- Twisted pair cable
- Coaxial cable
- Fiber optic cable
The already established complex telephone networks permit data to be transmitted over them.
Coaxial cable
Consists of a hollow outer conductor which surrounds a single inner conductor. Both the outer and inner conductors are insulated.
It permits high-speed data transmission with minimal signal distortion.
Fiber optic cable
Advantages
Fiber optic cables have much lower error rates than telephone cables.
They transmit 10,000 times faster than microwave systems.
They are resistant to illegal data theft (tapping).
They are very cheap.
b. WiFi
WiFi has played a critical role in providing high-throughput data transfer in homes and for enterprises
— it’s another well-known IoT wireless technology. It can be quite effective in the right situations,
though it has significant limitations with scalability, coverage, and high power consumption.
The high energy requirements often make WiFi a poor solution for large networks with battery-operated
sensors, such as smart buildings and industrial use. Instead, it’s more effective with devices like smart
home appliances. The latest WiFi technology, WiFi 6, does offer improved bandwidth and speed, though
it’s still behind other available options. And it carries security risks that other options don’t.
c. Zigbee.
Zigbee is a wireless technology specifically designed for low-power, low-data-rate applications. It is the leading wireless standard behind IoT devices like smart home equipment, consumer electronics, healthcare gear, and industrial ...
e. LPWAN (Cat-M1/NB-IoT)
Low power wide area networks (LPWAN) provide long-range communication using small, inexpensive
batteries. This family of technologies is ideal for supporting large-scale IoT networks where a
significant range is required. However, LPWANs can only send small blocks of data at a low rate.
LPWANs are ideally suited for use cases that don’t require time sensitivity or high bandwidth, like a
water meter for example. They can be quite effective for asset tracking in a manufacturing facility,
facility management, and environmental monitoring. Keep in mind that standardization is important to
ensure the network’s security, interoperability, and reliability.
g. NFC
Near-field communication (NFC) is a short-range wireless connectivity technology that uses magnetic
field induction to enable communication between devices when they're touched together or brought
within a few centimeters of each other. This includes authenticating credit cards, enabling physical
access, transferring small files and jumpstarting more capable wireless links.
h. RFID
Radio Frequency Identification (RFID) uses radio frequency signals to track and monitor objects and
assets efficiently and accurately. RFID systems consist of tags, equipped with unique identification
numbers, that can be attached to or embedded in objects like credential badges and wearables for
sporting events, parking tags to hang in automobiles, loyalty cards, and warehousing labels. RFID tags
exchange information wirelessly but do not connect to the internet directly. RFID technology requires a
gateway and cellular connectivity to send data to a cloud platform.
RFID is a wireless technology that uses radio waves to identify and track objects. It consists of a tag with a unique identifier and a reader that captures the tag's signal.
i. Cellular
One of the best-known wireless technologies is cellular, particularly in the consumer mobile market. A
cellular network is a system of radio waves dispersed across land in the shape of cells, with a base
station permanently fixed in each cell. These cells work together to provide greater geographic radio
coverage.
Every base station is connected to the mobile switching centre to create a call and mobility network by
connecting mobile phones to wide area networks.
First Generation (1G): Available in the 1980s, analogue voice communication was the primary feature
of the first generation of cellular networks.
Second Generation (2G): These networks brought text messaging (SMS) and better audio quality,
signaling the shift to digital technologies.
Third Generation (3G): These networks significantly increased data transmission speeds, making
multimedia apps, video calling, and mobile internet access possible.
Fourth Generation (4G): These networks are intended to offer reduced latency, increased support for
multimedia applications, and quicker data transfer rates. For 4G networks, Long-Term Evolution (LTE)
is a standard.
Fifth Generation (5G): The newest generation of cellular technology, 5G networks are intended to offer much higher data transmission speeds, lower latency, and support for a
large number of connected devices. To accomplish these improvements, 5G uses various frequency
bands, including millimeter-wave frequencies.
j. Microwave Link
Microwave means very short wave. The microwave frequency spectrum is 1 GHz to 30 GHz. Lower frequency bands are congested, and there is high demand for point-to-point communication.
A Microwave Link in Electronic Communication performs the same functions as a copper or optic fiber
cable, but in a different manner, by using point-to-point microwave transmission between repeaters.
A Microwave Link in Electronic Communication terminal has a number of similarities to a coaxial cable
terminal. Where a cable system uses a number of coaxial cable pairs, a Microwave Link in Electronic
Communication will use a number of carriers at various frequencies within the bandwidth allocated to
the system. The effect is much the same, and once again a spare carrier is used as a “protection” bearer
in case one of the working bearers fails. Finally, there are interconnections at the terminal to other
microwave or cable systems, local or trunk.
The antennas most frequently used are those with parabolic reflectors. Hoghorn antennas are preferred
for high-density links, since they are broadband and low-noise. They also lend themselves to so-called
frequency reuse, by means of separation of signals through vertical and horizontal polarization.
The towers used for Microwave Link in Electronic Communication range in height up to about 25 m,
depending on the terrain, length of that particular link and location of the tower itself. Such link
repeaters are unattended, and, unlike coaxial cables where direct current is fed down the cable, repeaters
must have their own power supplies. The 200 to 300 W of dc power required by a link is generally
provided by a battery. In turn, the power is replenished by a generator, which may be diesel, wind-
driven or, in some (especially desert) locations, solar. The antennas themselves are mounted near the top
of the tower, a few meters apart in the case of space diversity.
Basically, microwave links are cheaper and have better properties for TV transmission, although coaxial
cable is much less prone to interference.
Microwave technologies can be a very secure form of communication if a signal needs to be transmitted over a short distance.
k. Satellite systems
A Satellite System is defined as a network comprising satellites, system control centers, gateways, and
terminals that enable direct communication channels between satellites and terminals, as well as
connections to land networks through feeder links. The system is managed and monitored by a central
control center, with satellites classified as intelligent or dumb based on their functionality.
Communications satellites are radio-relay stations in space. They serve much the same purpose as the
microwave towers one sees along the highway. The satellites receive radio signals transmitted from the
ground, amplify them, and retransmit them back to the ground. Since the satellites are at high altitude,
they can “see” across much of the earth. This gives them their principal communications advantage: the
ability to span large distances. Ground links, such as microwave relays, are inherently limited in their
ability to cover large distances by the terrain.
Satellite technologies
To date, there are over 1,200 satellites orbiting Earth to support three main categories of systems: (1) to
collect data (including weather data, pictures, etc.), (2) to broadcast location-related information, and (3)
to relay communications. The first category deals with remote sensing that enables large-scale data
collection for monitoring both short- and long-term phenomena, including emergencies and disasters.
Major uses of the collected data include mapping and cartography, weather and environmental
monitoring, and change detection.
Applications
Satellite systems that provide vehicle tracking and dispatching (OMNITRACs) are commercially
successful. Satellite navigation systems (the Global Positioning System or GPS) are very widely used. A
new wireless system for Digital Audio Broadcasting (DAB) has recently been introduced in Europe.
3. Communication protocols
A communications protocol is a set of formal rules describing how to transmit or exchange data,
especially across a network.
These types of protocols use typical rules as well as methods like a common language to interact with
computers or networks to each other. For instance, if a user wants to send an e-mail to another, then the
user will create the e-mail on his personal computer by including the details along with the message and
attachments.
Once the user sends the e-mail, multiple actions immediately take place so that the recipient gets it: the message moves over the network and reaches the recipient. These protocols specify how the message will be enclosed so that it can move over the system, how the receiving computer can check for errors, and so on.
A standardized communications protocol is one that has been codified as a standard. Some common
protocols and standards include WiFi, the internet protocol suite (TCP/IP), the Hypertext Transfer Protocol (HTTP), IEEE 802, etc.
TCP/IP is a conceptual model that standardizes communication in a modern network. It suggests four
functional layers of these communication links:
Network access layer: This layer defines how the data is physically transferred. It includes how
hardware sends data bits through physical wires or fibers.
Internet layer: This layer is responsible for packaging the data into understandable packets and
allowing it to be sent and received.
Transport layer: This layer enables devices to maintain a conversation by ensuring the
connection is valid and stable.
Application layer: This layer defines how high-level applications can access the network to
initiate data transfer.
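The Python sketch below shows the layering in practice (it assumes an internet connection, and the host example.com and port 80 are only illustrative): the application layer composes an HTTP request, the transport layer provides the TCP connection, and the internet and network access layers are handled by the operating system's protocol stack.

    # An application-layer message (HTTP) sent over a transport-layer (TCP) connection.
    import socket

    request = b"GET / HTTP/1.1\r\nHost: example.com\r\nConnection: close\r\n\r\n"

    with socket.create_connection(("example.com", 80)) as conn:  # transport layer: TCP
        conn.sendall(request)                                    # application layer: HTTP
        reply = conn.recv(1024)                                  # first part of the response

    print(reply.decode(errors="replace").splitlines()[0])        # e.g. "HTTP/1.1 200 OK"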
4. Network Security
Network security encompasses all the steps taken to protect the integrity of a computer network and the
data within it. Network security is important because it keeps sensitive data safe from cyber-attacks and
ensures the network is usable and trustworthy. Successful network security strategies employ multiple
security solutions to protect users and organizations from malware and cyber-attacks, like distributed
denial of service.
A network is composed of interconnected devices, such as computers, servers and wireless networks.
Many of these devices are susceptible to potential attackers. Network security involves the use of a
variety of software and hardware tools on a network or as software as a service. Security becomes more
important as networks grow more complex and enterprises rely more on their networks and data to
conduct business. Security methods must evolve as threat actors create new attack methods on these
increasingly complex networks.
Security is critical when unprecedented amounts of data are generated, moved, and processed across
networks.
Regulatory issues
Many governments require businesses to comply with data security regulations that cover aspects of
network security.
5. Software
Software falls into two categories:
- Operating system
- Application software
Operating system
Controls data transmission in the network by receiving data, establishing contact with terminals and
processing any line errors that may occur.
Browser
Browser is an interactive program that permits a user to view information from the World Wide Web
(www).
World Wide Web (WWW)
The World Wide Web (WWW) is a large-scale, on-line repository of information that users can access
using a browser; the browser runs on top of the operating system and retrieves information from the web.
Information ‘Browsing’
End users, often working from PCs, are able to search for and find information of interest.
Newsgroups
Under a facility on the internet called “Usenet”, individuals can gain access to a very wide range of
information topics. The Usenet software receives “postings” of information and transmits new postings
to users who have registered their interest in receiving them. User groups include professional bodies.
Access to computers
Users can easily access any computer within the network as long as its address is known.
Evolution of networking
The evolution of networking has been one of the most transformative technological advancements in
modern history. Today, networking plays an integral role in the way we communicate, conduct business,
and connect. In this section, we will explore the history, evolution, present stage, and future of networking,
and highlight how networks are part of our daily lives.
Networking has come a long way since the early days of communication as shown in the table above. In
the mid-20th century, computer networks were developed, allowing computers to communicate with
each other over long distances.
The advent of the internet in the 1990s brought about a new era of networking. The internet allowed
people to connect on a global scale, and the development of the World Wide Web made it easy to access
information and services from anywhere in the world. The growth of social media platforms, mobile
devices, and cloud computing has further accelerated the evolution of networking.
Present Stage
Today, networking is an essential part of our daily lives. We use networks to communicate with friends
and family, access information and entertainment, conduct business, and even control our homes. Social
media platforms like Facebook, Twitter, and Instagram allow us to connect with people around the
world and share our experiences. Mobile devices like smartphones and tablets have made it easy to stay
connected on the go, while cloud computing has made it possible to access data and services from
anywhere in the world.
Businesses and governments also rely heavily on networking to operate efficiently. Networks allow
businesses to connect with customers, partners, and suppliers around the world, and to collaborate on
projects in real time.
Future
The future of networking is bright, with new technologies promising to bring about even more
transformative changes. One of the most exciting developments is the emergence of 5G networks, which
will offer faster speeds, lower latency, and greater reliability than current networks. This will enable new
applications like autonomous vehicles, virtual and augmented reality, and smart cities.
Other emerging technologies like the Internet of Things (IoT), artificial intelligence (AI), and
blockchain are also poised to revolutionize networking. IoT devices will enable the creation of smart
homes, smart cities, and even smart factories, while AI will help us better manage and analyze the vast
amounts of data generated by these devices. Blockchain technology, on the other hand, will enable
secure and transparent transactions between parties, without the need for intermediaries.
What is intelligence?
All but the simplest human behavior is ascribed to intelligence, while even the most
complicated insect behavior is usually not taken as an indication of intelligence. What is the difference?
Consider the behavior of the digger wasp, Sphex ichneumoneus. When the female wasp returns to her
burrow with food, she first deposits it on the threshold, checks for intruders inside her burrow, and only
then, if the coast is clear, carries her food inside. The real nature of the wasp’s instinctual behavior is
revealed if the food is moved a few inches away from the entrance to her burrow while she is inside: on
emerging, she will repeat the whole procedure as often as the food is displaced. Intelligence—
conspicuously absent in the case of the wasp—must include the ability to adapt to new circumstances.
Psychologists generally characterize human intelligence not by just one trait but by the combination of
many diverse abilities.
Learning
Learning is a crucial component of AI, as it enables AI systems to learn from data and improve
performance without being explicitly programmed by a human. AI technology learns by labeling data,
discovering patterns within the data, and reinforcing this learning via feedback.
There are a number of different forms of learning as applied to artificial intelligence. The simplest is
learning by trial and error.
For example, a simple computer program for solving mate-in-one chess problems might try moves at
random until mate is found. The program might then store the solution with the position so that, the next
time the computer encountered the same position, it would recall the solution. This simple memorizing
of individual items and procedures—known as rote learning—is relatively easy to implement on a
computer.
Generalization involves applying past experience to analogous new situations. For example, a program
that learns the past tense of regular English verbs by rote will not be able to produce the past tense of a
word such as jump unless the program was previously presented with jumped, whereas a program that is
able to generalize can learn the “add -ed” rule for regular verbs ending in a consonant and so form the
past tense of jump on the basis of experience with similar verbs.
Example: Voice recognition systems like Siri or Alexa learn correct grammar and the skeleton of a
language.
Example: A writing assistant, like Grammarly, knows when or when not to add commas and other
punctuation marks.
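The toy Python sketch below is an illustration written for these notes, not part of the original examples: it contrasts rote learning with generalization for the past-tense case described above. The rote learner can only recall verbs it has already memorized, while the generalizing learner applies the "add -ed" rule to new regular verbs.

# Toy sketch contrasting rote learning with generalization ("add -ed" rule).
rote_memory = {"walk": "walked", "talk": "talked"}   # memorized pairs only

def rote_past_tense(verb):
    # Rote learning: can only recall verbs it has already seen.
    return rote_memory.get(verb)          # returns None for unseen verbs

def generalized_past_tense(verb):
    # Generalization: apply the learned "add -ed" rule to a new regular verb.
    return verb + "ed"

print(rote_past_tense("jump"))          # None -- never memorized
print(generalized_past_tense("jump"))   # "jumped" -- rule applied to a new verb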
Problem solving
Problem solving in AI is similar to reasoning and decision making. AI systems take in data, manipulate
it and apply it to create a solution that solves a specific problem.
Example: A chess program interprets its opponent's moves and then makes the best decision it can,
based on the game's rules and on predictions of future moves and outcomes.
Perception
Perception refers to AI utilizing different real or artificial sense organs. The AI system can take in data,
perceive the objects around it, and understand its physical relationship (e.g., distance) to those objects.
Perception often involves image recognition, object detection, image segmentation, and video analysis.
Example: Self-driving cars gather visual data to recognize roads, lanes, and obstacles and then map
these objects.
In perception the environment is scanned by means of various sensory organs, real or artificial, and the
scene is decomposed into separate objects in various spatial relationships. Analysis is complicated by
the fact that an object may appear different depending on the angle from which it is viewed, the
direction and intensity of illumination in the scene, and how much the object contrasts with the
surrounding field. At present, artificial perception is sufficiently advanced to enable optical sensors to
identify individuals and enable autonomous vehicles to drive at moderate speeds on the open road.
Computer Vision
As the name suggests, computer vision is dedicated to analyzing and comprehending visual media,
whether images or videos. It’s the component that enables AI algorithms to accurately and reliably
identify objects that the machine “sees” and react accordingly.
Language
A language is a system of signs having meaning by convention. In this sense, language need not be
confined to the spoken word. Traffic signs, for example, form a mini-language, it being a matter of
convention that ⚠ means “hazard ahead” in some countries. It is distinctive of languages that linguistic
units possess meaning by convention, and linguistic meaning is very different from what is
called natural meaning, exemplified in statements such as “Those clouds mean rain” and “The fall in
pressure means the valve is malfunctioning.”
An important characteristic of full-fledged human languages—in contrast to birdcalls and traffic signs—
is their productivity. A productive language can formulate an unlimited variety of sentences.
Large language models like ChatGPT can respond fluently in a human language to questions and
statements. Although such models do not actually understand language as humans do but merely select
words that are more probable than others, they have reached the point where their command of a
language is indistinguishable from that of a normal human.
Machine learning (ML)
ML is the ability of machines to learn automatically from data using algorithms.
Machine learning has found applications in many fields beyond gaming and image classification,
including the following (a small sketch of the spam-filtering idea appears after the list):
The pharmaceutical company Pfizer used the technique to quickly search millions of
possible compounds in developing the COVID-19 treatment Paxlovid.
Google uses machine learning to filter out spam from the inbox of Gmail users.
Banks and credit card companies use historical data to train models to detect fraudulent
transactions.
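As a small, hedged illustration of the spam-filtering example above, the sketch below trains a naive Bayes classifier on a handful of made-up e-mails using the scikit-learn library (assumed to be installed); a real filter would, of course, be trained on far more data.

# Illustrative sketch of learning from data: a tiny spam/ham classifier.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB

emails = ["win a free prize now", "claim your free money",
          "meeting agenda for monday", "lecture notes attached"]
labels = ["spam", "spam", "ham", "ham"]

vectorizer = CountVectorizer()
X = vectorizer.fit_transform(emails)     # turn each email into word counts
model = MultinomialNB().fit(X, labels)   # learn word patterns for each class

test = vectorizer.transform(["free prize money"])
print(model.predict(test))               # expected: ['spam']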
Deepfakes are AI-generated media produced using two different deep-learning algorithms: one
that creates the best possible replica of a real image or video and another that detects whether the
replica is fake and, if it is, reports on the differences between it and the original. The
first algorithm produces a synthetic image and receives feedback on it from the second algorithm; the
first algorithm then adjusts the image to make it appear more real. The process is repeated until the
second algorithm no longer detects any false imagery.
Deepfake media portray images that do not exist in reality or events that have never occurred.
Widely circulated deepfakes include:
an image of Pope Francis in a puffer jacket,
an image of former U.S. president Donald Trump in a scuffle with police officers,
a fabricated video of a speech by Facebook CEO Mark Zuckerberg, and
images of Kenyan politicians in circumstances that never occurred.
None of these events took place in real life.
Applications of Machine Learning
Machine Learning has a wide range of applications like predictive analytics, image recognition, and
even speech recognition. That’s especially true when combined with other AI components, such as
computer vision and NLP.
Applications of NLP
NLP has various applications, including text and audio translation from one language to another,
sentiment analysis of the emotion and meaning behind sentences, and interactive chatbots capable of
understanding and participating in a human conversation.
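As a deliberately simple illustration of one of these applications, the sketch below performs lexicon-based sentiment analysis using two small, made-up word lists; production NLP systems rely on far richer statistical or neural models.

# Toy lexicon-based sentiment analysis; the word lists are illustrative.
POSITIVE = {"good", "great", "excellent", "happy", "love"}
NEGATIVE = {"bad", "poor", "terrible", "sad", "hate"}

def sentiment(sentence: str) -> str:
    words = sentence.lower().split()
    # Score = number of positive words minus number of negative words.
    score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
    if score > 0:
        return "positive"
    if score < 0:
        return "negative"
    return "neutral"

print(sentiment("The service was excellent and the staff were great"))  # positive
print(sentiment("The delivery was terrible"))                           # negative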
Examples of NLP
Some examples of NLP being used in various sectors include:
Siri: Apple's virtual assistant uses NLP to understand and respond to voice and text user
commands.
Google Translate: NLP automatically and instantaneously translates text from one language to
another.
Gmail Smart Reply: This feature uses NLP to suggest quick email responses.
4. Robotics
Intelligent robots are mechanical structures in various shapes that are programmed to perform specific
tasks based on human instructions.
Robotics utilizes AI to develop and design robots or machines capable of performing tasks
autonomously or semi-autonomously. Generally, robotics involves other components of AI technology,
such as NLP, ML, or perception.
Depending on the environment of use (land, air, or sea), such robots are called drones or rovers. In the
petroleum industry they have been used in innovative and beneficial ways: in production, to connect
different segments of drill pipe during drilling; in underwater welding, to conduct underwater
maintenance and repair tasks; in exploration, to map outcrops for building digital models for geologists;
and in field operations, to inspect remote sites and challenging terrains that are potentially dangerous for
humans to navigate.
Some of the benefits derived from the use of robots in the oil and gas industry include improving safety,
increasing productivity, automating repetitive tasks, and reducing operational costs by diminishing
downtime.
Applications of Robotics
Robotics has countless applications, from manufacturing and healthcare to exploring remote regions and
even search-and-rescue operations using nearly autonomous vehicles.
Examples of Robotics
Robotics is used in various sectors. Examples of robotics include:
Amazon's warehouse robots: These robots are used to move items around Amazon's warehouses
to fulfill deliveries.
Da Vinci Surgical System: This robot performs minimally invasive surgeries that are impossible
with humans alone.
NASA's Mars rovers: These robots are used to explore the surface of Mars.
Expert Systems
Expert systems are machines or software applications that provide explanation and advice to users
through a set of rules provided by an expert. The rules are programmed into software to reproduce the
knowledge for nonexperts to solve a range of actual problems. Examples of this are found in the fields
of medicine, pharmacy, law, food science, engineering, and maintenance. In the oil and gas industry,
expert systems have been used from exploration through production, from research through operations,
and from training through fault diagnosis.
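A minimal, hedged sketch of the expert-system idea is shown below: a handful of made-up "if-then" rules stand in for the knowledge supplied by an expert, and advice is produced whenever all the conditions of a rule are present in the observed facts.

# Minimal rule-based sketch in the spirit of an expert system.
# The rules and facts here are illustrative, not taken from the text.
RULES = [
    ({"fever", "cough"}, "Possible flu: recommend rest and fluids"),
    ({"high_pressure", "valve_noise"}, "Possible valve fault: schedule inspection"),
]

def advise(facts: set) -> list:
    # Fire every rule whose conditions are all present in the observed facts.
    return [advice for conditions, advice in RULES if conditions <= facts]

print(advise({"fever", "cough", "headache"}))
# -> ['Possible flu: recommend rest and fluids']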
Intelligent Agents
Multi-agent systems (MAS) are a subfield of AI that builds computational systems capable of making
decisions and taking actions autonomously. These systems are capable of maintaining information about
their environment and making decisions based on their perception of the state of the environment,
their past experiences, and their objectives. Agents can also interface with other agents to collaborate on
common goals. They emulate human social behavior by sharing partial views of a problem, enabling
collaboration, and cooperating with other agents to make appropriate and timely decisions to reach
desired objectives. Agents have been implemented successfully, mostly in the manufacturing industries,
and are proven to have potential benefits in the petroleum industry.
Uses of MAS include:
Managing supply chain
Addressing various production- and maintenance-related tasks
Processing and managing the distributed nature of the oil and gas business
Verifying, validating, and securing data streams in complex process pipelines
Getting insights from data to increase operational efficiency
Scheduling maintenance
Preventing theft and fraud
AI in business intelligence
AI is playing an increasingly important role in business intelligence (BI). AI-powered BI tools can help
businesses collect, analyze, and visualize data more efficiently and effectively. This can lead to
improved decision-making, increased productivity, and reduced costs.
Some of the ways that AI is being used in BI include:
Data collection: Collecting data from a variety of sources, including structured data (for example,
databases) and unstructured data (for example, text documents, images, and videos)
Data analysis: To analyze data and identify patterns, trends, and relationships
Data visualization: AI can help create visualizations that make it easier to understand data
Decision-making: Insights and recommendations generated by AI models can help drive data-driven
decision-making for businesses
AI in healthcare
AI is also playing an increasingly important role in healthcare. AI-powered tools can help doctors
diagnose diseases, develop new treatments, and provide personalized care to patients. For example:
Disease diagnosis: AI can be used to analyze patient data and identify patterns that may indicate a
disease. This can help doctors diagnose diseases earlier and more accurately.
Treatment development: By analyzing large datasets of patient data, AI can identify new patterns and
relationships that can be used to develop new drugs and therapies.
Personalized care: By analyzing a patient's data, AI can help doctors develop treatment plans that are
tailored to the patient's specific needs.
AI in education
AI could be used in education to personalize learning, improve student engagement, and automate
administrative tasks for schools and other organizations.
Personalized learning: AI can be used to create personalized learning experiences for students. By
tracking each student's progress, AI can identify areas where the student needs additional support and
provide targeted instruction.
Improved student engagement: AI can be used to improve student engagement by providing interactive
and engaging learning experiences. For example, AI-powered applications can provide students with
real-time feedback and support.
Automated administrative tasks: Administrative tasks, such as grading papers and scheduling classes,
can be assisted by AI models, which helps free up teachers' time to focus on teaching.
AI in finance
AI can help financial services institutions in five general areas: personalize services and products, create
opportunities, manage risk and fraud, enable transparency and compliance, and automate operations and
reduce costs. For example:
Risk and fraud detection: Detect suspicious, potential money laundering activity faster and more
precisely with AI.
Personalized recommendations: Deliver highly personalized recommendations for financial products
and services, such as investment advice or banking offers, based on customer journeys, peer interactions,
risk preferences, and financial goals.
Document processing: Extract structured and unstructured data from documents and analyze, search and
store this data for document-intensive processes, such as loan servicing and investment opportunity
discovery.
AI in manufacturing
Some ways that AI may be used in manufacturing include:
Improved efficiency: Automating tasks, such as assembly and inspection
Increased productivity: Optimizing production processes
Improved quality: AI can be used to detect defects and improve quality control
Computer data processing is any process that uses a computer program to enter data and summarise,
analyse or otherwise convert data into usable information.
Because data is most useful when well-presented and actually informative, data-processing systems are
often referred to as information systems.
The word information comes from the Latin word informare, which means “to build from” or “to give
structure”.
Hence information consists of the trends, patterns and tendencies (e.g. measures of central tendency)
that users need in order to perform their jobs.
An information system is the set of devices, procedures and operations designed, with the aid of the
user, to produce desired information and communicate it to the user for decision-making. The system
accepts data and processes it to produce the desired information.
Qualities of Information
Relevance
The information a manager receives from an IS has to relate to the decisions the manager has to make.
Accuracy
A key measure of the effectiveness of an IS is the accuracy and reliability of its information. The
accuracy of the data it uses and the calculations it applies generally determine the effectiveness of the
resulting information. However, not all data needs to be equally accurate.
Usefulness
The information a manager receives from an IS may be relevant and accurate, but it is only useful if it
helps him with the particular decisions he has to make. The MIS has to make useful information
easily accessible.
Timeliness
Management has to make decisions about the future of the organization based on data from the present,
even when evaluating trends. The more recent the data, the more these decisions will reflect present
reality and correctly anticipate their effects on the company.
Completeness
An effective IS presents all the most relevant and useful information for a particular decision. If some
information is not available due to missing data, it highlights the gaps and either displays possible
scenarios or presents possible consequences resulting from the missing data.
Uses of Information
Businesses and other organizations need information for many purposes: we have summarized the five
main uses in the table below.
Decision-making
i. Strategic information: used to help plan the objectives of the business as a whole and to measure
how well those objectives are being achieved. Strategic information includes the profitability of each
part of the business and the size, growth and competitive structure of the markets in which the
business operates.
ii. Tactical information: used to decide how the resources of the business should be employed.
Examples include information about business productivity (e.g. units produced per employee; staff
turnover).
iii. Operational information: used to make sure that specific operational tasks are carried out as
planned/intended (i.e. that things are done properly).
Users are the essential ingredients, which convert information to action through the knowledge,
understanding and skills, which they bring to bear on the data provided.
It is only at the level of the user that the information system actually provides benefit or value to the
organization. Users will themselves undertake further processing of the information received.
Transaction Processing
Transaction Processing is similar to interactive processing. Data for each transaction is processed very
shortly after the transaction occurs. A transaction is completely processed before the next transaction.
This may result in a particular transaction having to wait while an earlier one is processed. The delay
will usually be short. An example might be holiday bookings where a second transaction will not be
initiated until the first is completed to avoid the possibility of double booking.
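The short sketch below illustrates this idea with a made-up booking example: requests are queued and each one is processed completely before the next begins, so the same room can never be booked twice.

# Sketch of transaction processing: one transaction completes before the next.
# Customer names and room numbers are illustrative assumptions.
from queue import Queue

available_rooms = {"R101", "R102"}
bookings = {}
requests = Queue()
for req in [("alice", "R101"), ("bob", "R101"), ("carol", "R102")]:
    requests.put(req)

while not requests.empty():
    customer, room = requests.get()      # the next transaction waits its turn
    if room in available_rooms:          # processed fully before moving on
        available_rooms.remove(room)
        bookings[customer] = room
    else:
        print(f"{customer}: {room} already booked")

print(bookings)   # -> {'alice': 'R101', 'carol': 'R102'}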
Batch Processing
Batch processing is where a group of similar transactions is collected over a period of time and
processed in a batch, e.g. a payroll system. A typical sequence is as follows (a small sketch appears
after the steps):
Paper documents are collected into batches (e.g. of 50); they are checked, and control
totals/hash totals are calculated and written into a batch header document.
The data is keyed offline from the main computer and is validated by a computer
program. It is stored on a transaction file.
Data is verified by being entered a second time by a different keyboard operator.
The transaction file is transferred to the main computer.
Processing begins at a scheduled time.
The transaction file may be sorted into the same sequence as the master file to speed up
the processing of data.
The master file is updated.
Any required reports are produced.
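The hedged sketch below walks through a simplified version of these steps for a made-up payroll batch: control totals are written into a batch header, recomputed during validation, and the (in-memory) master file is then updated in one scheduled run.

# Simplified batch processing with control totals; data layout is assumed.
transactions = [                      # one batch of keyed transactions
    {"employee_id": 101, "hours": 40},
    {"employee_id": 102, "hours": 38},
    {"employee_id": 103, "hours": 42},
]

# Batch header: control totals calculated when the batch is assembled.
batch_header = {"record_count": 3, "hash_total": 101 + 102 + 103}

# Validation: recompute the totals and compare them with the header.
assert len(transactions) == batch_header["record_count"]
assert sum(t["employee_id"] for t in transactions) == batch_header["hash_total"]

# Update the master file in one scheduled run (kept in memory here),
# with the transactions sorted into the same sequence as the master file.
master_file = {101: {"hours_ytd": 0}, 102: {"hours_ytd": 0}, 103: {"hours_ytd": 0}}
for t in sorted(transactions, key=lambda t: t["employee_id"]):
    master_file[t["employee_id"]]["hours_ytd"] += t["hours"]

print(master_file)   # report produced after the batch run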
A database, also called a databank, is an integrated collection of a number of record types linked by
various specific relationships. It is an organized collection of logically related data, managed in such a
way as to enable the user or application programs to view the complete collection, or a logical subset of
the collection, as a unit.
The data in the database is expected to be both integrated and shared, particularly on a multi-user
system.
Motivation for databases over files: integration for easy access and update, non-redundancy, multi-
access.
Data Models
The Evolution of Database Modeling
The various data models that came before the relational database model (such as the hierarchical
database model and the network database model) were partial solutions to the never-ending problem of
how to store data and how to do it efficiently. The relational database model is currently the best
solution for both storage and retrieval of data. Examining the relational database model from its roots
can help to understand critical problems the relational database model is used to solve; therefore, it is
essential to understand how the different data models evolved into the relational database model as it
is today.
The evolution of database modeling occurred as each database model improved upon the previous
one. The initial solution was virtually no database model at all: the file system (also known as flat files).
Hierarchical model
The term hierarchical model covers a broad spectrum of concepts; it often refers to multi-level
arrangements in which related information or data is organized across several levels, and it is closely
related to the network model. As a database model, it describes a database management system that
links records together like a family tree, so that each record type has only one owner (parent). The
hierarchical data model organizes data in a tree structure: there is a hierarchy of parent and child data
segments, which implies that a record can have repeating information, generally in the child data
segments. Data is held in a series of records, each with a set of field values attached to it.
All the instances of a specific record are collected together as a record type. To create links between
these record types, the hierarchical model uses parent-child relationships, giving a 1:N mapping between
record types. This is done using trees, much as set theory, borrowed from mathematics, is used in the
relational model.
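A small illustrative sketch of the tree idea follows; the department, employee and skill records are made up, and each child record has exactly one parent, as the hierarchical model requires.

# Sketch of the hierarchical (tree) model: each child has exactly one parent,
# giving 1:N parent-child relationships. The sample records are assumptions.
hierarchy = {
    "Department: Sales": {
        "Employee: Achieng": ["Skill: Negotiation", "Skill: CRM"],
        "Employee: Kamau":   ["Skill: Reporting"],
    },
    "Department: IT": {
        "Employee: Wanjiru": ["Skill: Networking"],
    },
}

def print_tree(node, indent=0):
    # Walk the tree from each owner record down to its dependent records.
    if isinstance(node, dict):
        for parent, children in node.items():
            print(" " * indent + parent)
            print_tree(children, indent + 2)
    else:
        for leaf in node:
            print(" " * indent + leaf)

print_tree(hierarchy)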
Network model
The network model is a database model conceived as a flexible way of representing objects and their
relationships. The popularity of the network data model coincided with that of the hierarchical data
model. Some data was more naturally modeled with more than one parent per child, so the network
model permitted the modeling of many-to-many relationships in data. A set consists of an owner record
type, a set name and a member record type. A member record type can play that role in more than one
set, so the multi-parent concept is supported.
An owner record type can also be a member or owner in another set. Thus the complete network of
relationships is represented by several pairwise sets; in each set one record type is the owner and one or
more record types are members.
It is very similar to the hierarchical model; in fact, the hierarchical model is a subset of the network model.
Relational model
A relational database allows the definition of data structures, storage and retrieval operations, and
integrity constraints. The relational model is a database model based on first-order predicate logic and is
built around the basic concept of a relation, or table. The columns (fields) of a table identify attributes
such as name or age, while a tuple (row) contains all the data of a single instance of the table, such as
one person. In the relational model every tuple must have a unique identification, or key, based on the
data. Keys are often used to join data from two or more relations based on matching values. The
relational model also includes the concept of foreign keys: primary keys in one relation that are kept in
another relation to allow the joining of data. For example, your parents' SSNs are keys for the tuples that
represent them and foreign keys in the tuple that represents you. Certain fields may also be designated as
keys, which means that searches for specific values of those fields will use indexing to speed them up.
Where fields in two different tables take values from the same set, a join operation can be performed to
select related records in the two tables by matching values in those fields.
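The hedged sketch below illustrates tables, primary and foreign keys, and a join using Python's built-in sqlite3 module; the table names, columns and sample rows are assumptions made for the example.

# Sketch of relations, keys and a join using an in-memory SQLite database.
import sqlite3

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE person (ssn TEXT PRIMARY KEY, name TEXT)")
con.execute("""CREATE TABLE child (
                   id INTEGER PRIMARY KEY,
                   name TEXT,
                   parent_ssn TEXT REFERENCES person(ssn))""")   # foreign key

con.executemany("INSERT INTO person VALUES (?, ?)",
                [("111", "Atieno"), ("222", "Otieno")])
con.executemany("INSERT INTO child (name, parent_ssn) VALUES (?, ?)",
                [("Baraka", "111"), ("Zawadi", "222")])

# Join the two relations on matching key values.
rows = con.execute("""SELECT child.name, person.name
                      FROM child JOIN person ON child.parent_ssn = person.ssn""")
for child_name, parent_name in rows:
    print(child_name, "is a child of", parent_name)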
Advantages
Ease of use and flexibility: the different tables from which information has to be linked and extracted
can easily be manipulated by operators such as project and join to give information in the desired form.
Security control and authorization can also be implemented more easily, by moving sensitive attributes
in a given table into a separate relation with its own authorization controls.
TYPES OF DATABASES
Basically there are two types of databases: analytical databases and operational databases.
Analytical Database
An analytic database, also called an analytical database, is a read-only system that stores historical data
on business metrics such as sales performance and inventory levels. Business analysts, corporate
executives and other workers can run queries and reports against an analytic database. An analytical
database system provides access to all of the data collected by an entity in interactive time.
The analytical database system transforms relational database data. An analytic database is specifically
designed to support business intelligence (BI) and analytic applications, typically as part of a data
warehouse or data mart. This differentiates it from an operational, transactional or OLTP database,
which is used for transaction processing, i.e., order entry and other “run the business” applications.
On the web you will often see analytic databases in the form of inventory catalogs such as Amazon.com's;
such a catalog usually holds descriptive information about all available products in the inventory.
Analytical databases are also called OLAP (online analytical processing) databases.
Operational database
Operational Database is the database-of-record, consisting of system-specific reference data and event
data belonging to a transaction-update system. It may also contain system control data such as
indicators, flags, and counters. The operational database is the source of data for the data warehouse. It
contains detailed data used to run the day-to-day operations of the business. The data continually
changes as updates are made and reflects the current value of the last transaction. An operational
database, as the name implies, is the database that is currently and progressively in use, capturing real-
time data and supplying data for real-time computations and other analytical processes. For example, an
operational database is one used for taking orders and fulfilling them in a store, whether a traditional
store or an online store. Other areas of business that use an operational database include catalog
fulfillment systems and other point-of-sale systems.
Although data warehousing is a promising technology, it can become problematic for companies that
fail to apply its core principles. Finally, in addition to having a proper design, a data warehouse must be
properly maintained and implemented.
Data mining
Data mining, also known as "knowledge discovery," refers to computer-assisted tools and techniques for
sifting through and analyzing these vast data stores in order to find trends, patterns, and correlations that
can guide decision making and increase understanding. Data mining covers a wide variety of uses,
from analyzing customer purchases to discovering galaxies. In essence, data mining is the equivalent of
finding gold nuggets in a mountain of data. The monumental task of finding hidden gold depends
heavily upon the power of computers.
In summary, the purpose of DM is to analyze and understand past trends and predict future trends. By
predicting future trends, business organizations can better position their products and services for
financial gain. Nonprofit organizations have also achieved significant benefits from data mining, such as
in the area of scientific progress. The concept of data mining is simple yet powerful. The simplicity of
the concept is deceiving, however. Traditional methods of analyzing data, involving query-and-report
approaches, cannot handle tasks of such magnitude and complexity.
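As a toy illustration of pattern discovery, the sketch below counts which pairs of products appear together most often in a few made-up shopping baskets, a miniature version of the "market basket" analysis often used to introduce data mining.

# Toy market-basket analysis: find product pairs bought together most often.
# The transaction data is made up for illustration.
from itertools import combinations
from collections import Counter

baskets = [
    {"bread", "milk", "eggs"},
    {"bread", "milk"},
    {"milk", "eggs"},
    {"bread", "milk", "butter"},
]

pair_counts = Counter()
for basket in baskets:
    for pair in combinations(sorted(basket), 2):   # every pair in the basket
        pair_counts[pair] += 1

# The most frequent pairs are the "nuggets" a retailer might act on.
print(pair_counts.most_common(2))
# -> [(('bread', 'milk'), 3), (('eggs', 'milk'), 2)]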
The use of database management systems has boosted activities in the modern business world. These
systems are designed to hold and store large amounts of information. They have also been utilized in
learning institutions, where information on every student is stored and can easily be retrieved when
required. For example, if a student engages in misconduct, the parents can be traced easily because the
information regarding that student can be retrieved from the details filled in at registration. Another
example of database management system use is ticket booking by travellers: it gives travellers the
opportunity to book in advance, and on the date of departure their records can be retrieved quickly and
with ease.
Finally, we can attribute globalization not only to the development of database management systems but
also to the development of new technologies within organizations. Technological transformation,
alongside the changes experienced in the trading environment, has led to a reconsideration of
fundamental archival assumptions, thought and methods. The use of spreadsheets for storing and
retrieving information has many deficiencies, such as long retrieval times and limited storage space,
whereas a DBMS is an efficient way of keeping such information because it captures nearly all trading
dealings, safeguards complete records and fully acknowledges proceedings or records within the
organization.
Chapter seven
INFORMATION SYSTEMS
Defining a system
System is a set of interacting or interdependent components forming an integrated whole or a set of
elements (often called ‘components’) and relationships which are different from relationships of the set
or its elements to other elements or sets. In a system the different components are connected with each
other and they are interdependent. Every system is delineated by its spatial and temporal boundaries,
surrounded and influenced by its environment, described by its structure and purpose and expressed in
its functioning.
The term system may also refer to a set of rules that governs structure and/or behavior. The term
institution is used to describe the set of rules that govern structure and/or behavior.
INFORMATION SYSTEM
Data consists of the raw facts representing events occurring in the organization before they are
organized into an understandable and useful form for humans.
The word information comes from the Latin word informare, which means “to build from” or “to give
structure”.
Hence information consists of the trends, patterns and tendencies (e.g. measures of central tendency)
that users need in order to perform their jobs.
An Information System can be defined technically as a set of interrelated components that collect (or
retrieve), process, store and distribute information to support decision making and control in an
organization.
Information systems should not be confused with information technology. They exist independently of
each other, irrespective of whether they are implemented well. Information systems use computers
(or Information Technology) as tools for the storing and rapid processing of information leading to
analysis, decision-making and better coordination and control. Hence information technology forms the
basis of modern information systems.
The principal tasks of information systems specialists involve modifying the applications for their
employer’s needs and integrating the applications to create a coherent systems architecture for the firm.
Generally, only smaller applications are developed internally. Certain applications of a more personal
nature may be developed by the end users themselves.
Depending on how you create your classification, you can find almost any number of different types of
information system. However, it is important to remember that different kinds of systems found in
organizations exist to deal with the particular problems and tasks that are found in organizations.
Examples
Management Information Systems
Geographical Information Systems, etc.
Thus, while there are several different versions of the pyramid model, the most common is probably a
four level model based on the people who use the systems. Basing the classification on the people who
use the information system means that many of the other characteristics such as the nature of the task
and informational requirements, are taken into account more or less automatically.
Transaction Processing Systems are operational-level systems at the bottom of the pyramid. They are
usually operated directly by shop-floor workers or front-line staff, who provide the key data required to
support the management of operations. This data is usually obtained through the automated or semi-
automated tracking of low-level activities and basic transactions.
However, within our pyramid model, Management Information Systems are management-level systems
that are used by middle managers to help ensure the smooth running of the organization in the short to
medium term.
A Decision Support System can be seen as a knowledge-based system, used by senior managers, which
facilitates the creation of knowledge and allows its integration into the organization. These systems are
often used to analyze existing structured information and allow managers to project the potential effects
of their decisions into the future.
Managers are the key people in an organization who ultimately determine the destiny of the
organization. They set the agenda and goals of the organization, plan for achieving the goals, implement
those plans and monitor the situation regularly to ensure that deviations from the laid-down plan are
controlled.
The managers decide on all such issues that have relevance to the goals and objectives of the
organization. The decisions range from routine decisions taken regularly to strategic decisions, which
are sometimes taken once in the lifetime of an organization.
The decisions differ in the following degrees,
Complexity
Information requirement for taking the decision
Relevance
Effect on the organization
Degree of structured behavior of the decision-making process.
The different types of decisions require different types of information, since without information one
cannot decide.
It was Dr. Roger F. Tomlinson who first coined the term geographic information system (GIS). He
created the first computerized geographic information system in the 1960s while working for the
Canadian government—a geographic database still used today by municipalities across Canada for land
planning.
Using GIS to solve problems
GIS works as a tool to help frame an organizational problem. The tool can help organizations carry out
various analyses on acquired data and share results that can be tailored to different audiences through
maps, reports, charts, and tables, delivered in printed or digital format.
a. Raster data
In its simplest form, a raster consists of a matrix of cells (or pixels) organized into rows and columns (or
a grid) where each cell contains a value representing information, such as temperature. Rasters are
digital aerial photographs, imagery from satellites, digital pictures, or even scanned maps.
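A minimal sketch of the raster idea is shown below: a small grid of made-up temperature values stored as rows and columns of cells, addressed by row and column number.

# Sketch of a raster: a grid of cells, each holding a value such as temperature.
# Plain nested lists are used so no extra libraries are required; values are made up.
temperature_raster = [
    [21.5, 22.0, 22.4],   # row 0
    [21.8, 22.3, 22.9],   # row 1
    [22.1, 22.6, 23.2],   # row 2
]

row, col = 1, 2                       # cell address within the grid
print(temperature_raster[row][col])   # value stored in that cell -> 22.9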
Accuracy
A key measure of the effectiveness of an IS is the accuracy and reliability of its information. The
accuracy of the data it uses and the calculations it applies generally determine the effectiveness of the
resulting information. However, not all data needs to be equally accurate.
Usefulness
The information a manager receives from an IS may be relevant and accurate, but it is only useful if it
helps him with the particular decisions he has to make. The MIS has to make useful information easily
accessible.
Timeliness
MIS output must be current. Management has to make decisions about the future of the organization
based on data from the present, even when evaluating trends. The more recent the data, the more these
decisions will reflect present reality and correctly anticipate their effects on the company. When the
collection and processing of data delays its availability, the MIS must take into consideration its
potential inaccuracies due to age and present the resulting information accordingly, with possible ranges
of error. Data that is evaluated in a very short time frame can be considered real-time information.
Completeness
An effective MIS presents all the most relevant and useful information for a particular decision. If some
information is not available due to missing data, it highlights the gaps and either displays possible
scenarios or presents possible consequences resulting from the missing data.
The computer age introduced a new element to businesses, universities, and a multitude of other
organizations: a set of components called the information system, which deals with collecting and
organizing data and information. An information system is described as having five components.
Computer hardware
This is the physical technology that works with information. Hardware can be as small as a smartphone
that fits in a pocket or as large as a supercomputer that fills a building. Hardware also includes the
peripheral devices that work with computers, such as keyboards, external disk drives, and routers. With
the rise of the Internet of things, in which anything from home appliances to cars to clothes will be able
to receive and transmit data, sensors that interact with computers are permeating the human
environment.
Databases/Data
This component is where the “material” that the other components work with resides. A database is a
place where data is collected and from which it can be retrieved by querying it using one or more
specific criteria. A data warehouse contains all of the data in whatever form that an organization needs.
Databases and data warehouses have assumed even greater importance in information systems with the
emergence of “big data,” a term for the truly massive amounts of data that can be collected and
analyzed.
Not all software is information system software, although an information system is always implemented
as software.
For example, a company writing BIOS software or anti-virus software is not dealing with any
information systems. Information systems let you access data, for example your company's employee
payroll system or employee master file.