Fundamentals of Computer Notes

A computer is an electronic device that processes data according to instructions from software, playing a crucial role in fields such as education, business, healthcare, and entertainment. It consists of hardware components such as the CPU, RAM, storage devices, and input/output devices, all of which work together to perform tasks. Software is divided into application software for user-specific tasks and system software for managing hardware operations.

Unit - 1

Definition:

"A computer is an electronic device that manipulates data and performs tasks according to a set of
instructions called programs."

• Data Manipulation: At its core, a computer receives input (data), processes it, and then outputs
something meaningful. The processing is done based on instructions provided by software, which we
will talk about later.

• Set of Instructions (Programs): These are the software that tells the computer what to do. Without
software, the hardware is just a collection of components doing nothing.

It takes input (data) from users via software and input devices (hardware), processes it using a central
processing unit (CPU), stores information, and produces output (results) to perform various tasks.
Importance of Computers:

"Computers are everywhere! They play a huge role in various fields of life."

• In Education: Facilitating e-learning platforms, simulations, and digital classrooms.

• In Business: Automating processes, managing data, handling finances, and communication.

• In Healthcare: Managing patient records, medical imaging, and diagnostics.

• In Entertainment: Playing games, streaming movies, and creating digital content.

How Do the Different Components Communicate?

Mahima Mangal
Let's discuss some important components of a computer in detail:

• Central Processing Unit (CPU): Often referred to as the "brain" of the computer. It is responsible for executing instructions, performing calculations, and handling tasks that keep the system running efficiently. The CPU processes input data and transforms it into useful information. It consists of the Arithmetic Logic Unit (ALU) and the Control Unit (CU).

• Motherboard: The main circuit board; it connects and allows communication between all computer components.

• Memory (RAM): Random Access Memory stores data temporarily for quick access while the computer is running.

• Storage: Includes Hard Disk Drives (HDD) and Solid-State Drives (SSD), which store data permanently.

• Input Devices: Used to input data into the computer. Examples: keyboard, mouse, scanner.

• Output Devices: Display or output the results of the computer's processing. Examples: printer, speakers.

Hardware
The physical devices that make up the computer are called hardware. The hardware units are responsible for entering, storing and processing the given data and then displaying the output to the users. The basic hardware units of a general-purpose computer are the keyboard, mouse, memory, CPU, monitor and printer. Among these, the keyboard and mouse are used to input data, the memory stores the entered data, the CPU processes it, and the monitor and printer display the processed data to the users.

The CPU is the main component inside the computer responsible for performing various operations and for managing the input and output devices. It includes two components: the Arithmetic Logic Unit (ALU) and the Control Unit (CU). The ALU performs arithmetic operations, such as addition and subtraction, and logic operations, such as AND and OR, on data obtained from memory. The CU controls the activities of the input and output devices; it obtains instructions from memory, decodes them and then executes them, so as to deliver output to the users.
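The fetch-decode-execute cooperation between the CU and the ALU described above can be sketched in Python. This is a toy machine with an invented three-field instruction format, purely for illustration; real CPUs are far more complex:

```python
# Toy CPU sketch: the Control Unit fetches and decodes instructions,
# then dispatches arithmetic/logic work to the ALU.
# The (operation, operand_a, operand_b) instruction format is made up.

def alu(op, a, b):
    """Arithmetic Logic Unit: performs arithmetic and logic operations."""
    operations = {
        "ADD": lambda: a + b,
        "SUB": lambda: a - b,
        "AND": lambda: a & b,
        "OR":  lambda: a | b,
    }
    return operations[op]()

def control_unit(program):
    """Control Unit: fetches each instruction, decodes it, executes it."""
    accumulator = 0
    for instruction in program:       # fetch the next instruction
        op, a, b = instruction        # decode it into opcode and operands
        accumulator = alu(op, a, b)   # execute, delegating to the ALU
    return accumulator

program = [("ADD", 2, 3), ("SUB", 10, 4), ("AND", 6, 3)]
print(control_unit(program))  # prints 2, the result of the last instruction (6 AND 3)
```

The point of the sketch is the division of labour: the loop (CU) never does arithmetic itself, and the `alu` function (ALU) never decides what to run next.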

Types of Hardware:

• Central Processing Unit (CPU) - Executes instructions and performs calculations.

• Memory (RAM) - Temporarily stores data that the CPU needs during operation.

• Storage Devices (HDD/SSD) - Store data permanently, even when the computer is turned off.

• Input Devices - Allow users to interact with the computer (e.g., keyboard, mouse).
• Output Devices - Display or produce results of the computer’s processing (e.g., monitors, printers).

Software
The term software refers to a set of programs and instructions that help the computers in carrying out their
processing. Software is very necessary for the proper functioning of a computer. There are mainly two types
of software, viz. Application Software and System Software:

Application software: Programs designed to perform a specific task for the user are known as application software. Application software is also referred to as end-user programs because its functions are used directly by the user to obtain the desired results. Word processors, database programs, presentation programs and spreadsheets are examples of application software.

System software: Programs designed to control the different operations of the computer are known as system software. It mainly manages the activities of the computer hardware and interacts with application software to perform a particular task. Operating systems, compilers, assemblers, interpreters and device drivers are examples of system software.

Components
Hardware: Introduction
1. Definition of Hardware
o Hardware refers to the physical, tangible components of a computer system —
the parts you can touch (circuit boards, keyboard, mouse, etc.).
o These components perform the basic functions required for computing: input,
processing, storage, and output.
2. Role of Hardware in a Computer System
o Hardware works in conjunction with software: software gives instructions, and
hardware executes them.
o In the context of data flow: first, data is input (via input devices), then the CPU
(processor) and memory process or transform it, then results are given out via
output devices.
o Hardware also includes peripherals, which are auxiliary devices connected to
the computer (e.g., keyboard, disk drives, printers) but are not part of the core
CPU/memory module.
3. Classification of Hardware Components
o Input devices — to feed data and commands into the computer.
o Processing unit — CPU (Arithmetic Logic Unit, Control Unit) handles
computation.
o Memory / Storage units — primary memory (RAM, ROM), secondary storage
(HDD, SSD).
o Output devices — to present the results of computation to the user.

4. Input Devices
• Definition:
An input device is any hardware component that allows data or user commands to enter into the
computer system.
• Classification & Examples:
Based on how they work (mechanical, audio, visual, etc.), input devices can be categorized into
several types. Below are common input devices, their functions, and examples:

• Keyboard: One of the most fundamental input devices; used for typing text, commands and alphanumeric data. Example: standard 101/102-key keyboard with function keys and a numeric keypad.

• Pointing Devices: Allow the user to point, click, or manipulate the cursor on screen. Examples: mouse, trackball, joystick, light pen.

• Visual Input Devices: Capture visual information from the real world and convert it into digital data. Examples: scanner, digital camera, webcam.

• Audio Input Devices: Convert analog audio signals (voice, sound) into digital signals. Example: microphone.

• Game/Specialized Controllers: Used for specialized input, especially in interactive or graphical applications. Examples: joystick, gamepad.

For more details: https://www.geeksforgeeks.org/computer-science-fundamentals/input-and-output-devices/

Output Devices
Definition:
An output device is any hardware component that conveys processed data (information) from
the computer to the user in human-perceivable form.
Classification & Examples:
Output devices can also be categorized into different types depending on how they present
data (visual, audio, hard-copy, etc.). Below are common types and their functions:
• Display / Visual: Show text, graphics, and video on a screen. Examples: monitor (CRT, LCD).

• Printer: Produce a physical (hard-copy) representation of data (text, images) on paper. Examples: impact printers, non-impact printers (laser, inkjet).

• Audio Output: Convert digital sound data into analog signals that can be heard. Examples: speakers, headphones.

• Projection Devices: Display visual output on a larger surface for audiences. Example: projectors.

• Hybrid I/O Devices: Act as both input and output. Examples: touchscreen (displays output and takes touch input), USB drives, modems.

Central Processing Unit

[CPU diagrams omitted]
Memory
Memory is the electronic storage space where a computer keeps the instructions and data it needs to access
quickly. It's the place where information is stored for immediate use. Memory is an important component of
a computer, as without it, the system wouldn’t operate correctly. The computer’s operating system (OS),
hardware, and software all rely on memory to function properly. It acts as a storage unit or device where data
to be processed and the instructions necessary for processing are kept. Both input and output data can be
stored in memory.

Computers are used not only for processing of data for immediate use, but also for storing of large volume
of data for future use. In order to meet these two specific requirements, computers use two types of storage
locations—one, for storing the data that are being currently handled by the CPU and the other, for storing the
results and the data for future use.
The storage location where the data are held temporarily is referred to as the primary memory while the
storage location where the programs and data are stored permanently for future use is referred to as the
secondary memory. The primary memory is generally known as “memory” and the secondary memory as
“storage”. The data and instructions stored in the primary memory can be directly accessed by the CPU using
the data and address buses. However, the information stored in the secondary memory is not directly
accessible to CPU. Firstly, the information has to be transferred to the primary memory using I/O channels
and then, to the CPU.

Computers also use a third type of storage location known as the internal process memory. This memory is
placed either inside the CPU or near the CPU (connected through special fast bus).

Primary memory (also known as main memory) includes two types, namely, Random Access Memory
(RAM) and Read Only Memory (ROM). The data stored in RAM are lost when the power is switched off
and therefore, it is known as volatile memory. However, the data stored in ROM stay permanently even after
the power is switched off and therefore ROM is a non-volatile memory.

Secondary memory (also known as auxiliary memory) includes primarily magnetic disks and magnetic tapes.
These storage devices have much larger storage capacity than the primary memory. Information stored on
such devices remains permanent (until we remove it).

Internal process memory usually includes cache memory and registers both of which store data temporarily
and are accessible directly by the CPU. This memory is placed inside or near the CPU for the fast access of
data.
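The point of placing cache memory inside or near the CPU is speed: repeated accesses to the same data avoid the slower trip to main memory. The idea can be sketched with a minimal cache simulation (the access costs and the 4-entry FIFO cache are invented for illustration, not real hardware figures):

```python
# Minimal memory-hierarchy sketch: a tiny cache in front of slower RAM.
# Cost units are made-up illustrative numbers.

CACHE_SIZE = 4                 # the cache holds only 4 addresses
CACHE_TIME, RAM_TIME = 1, 100  # hypothetical cost per access

cache = {}                     # address -> value

def read(address, ram):
    """Return (value, cost): cheap on a cache hit, expensive on a miss."""
    if address in cache:
        return cache[address], CACHE_TIME      # cache hit
    value = ram[address]                       # cache miss: go to RAM
    if len(cache) >= CACHE_SIZE:
        cache.pop(next(iter(cache)))           # evict the oldest entry (FIFO)
    cache[address] = value                     # keep a copy for next time
    return value, RAM_TIME

ram = {addr: addr * 10 for addr in range(16)}
total = 0
for addr in [1, 2, 1, 1, 2]:   # repeated addresses hit the cache
    _, cost = read(addr, ram)
    total += cost
print(total)  # 2 misses (200) + 3 hits (3) = 203
```

Five reads cost 203 units instead of 500: the three repeated accesses are served from the cache, which is exactly why CPUs keep recently used data close by.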

Types of Computer Memory

In general, computer memory is divided into three types:

• Primary memory

• Secondary memory

• Cache memory

Now we discuss each type of memory one by one in detail:

1. Primary Memory
It is also known as the main memory of the computer system. It is used to store data and programs, or
instructions during computer operations. It uses semiconductor technology and hence is commonly called
semiconductor memory. Primary memory is of two types:

RAM (Random Access Memory):

It is a volatile memory. Volatile memory stores information based on the power supply. If the power supply
fails/ interrupted/stopped, all the data and information on this memory will be lost. RAM is used for booting
up or starting the computer. It temporarily stores programs/data which has to be executed by the processor.
RAM is of two types:

• SRAM (Static RAM): SRAM uses transistors, and its circuits retain their state as long as power is applied. This memory consists of a number of flip-flops, each storing 1 bit. It has a lower access time and is therefore faster.

• DRAM (Dynamic RAM): DRAM uses capacitors and transistors and stores data as a charge on the capacitors. It contains thousands of memory cells. The charge on each capacitor must be refreshed every few milliseconds, so this memory is slower than SRAM.
ROM (Read Only Memory):

It is a non-volatile memory: it retains information even when the power supply fails or is interrupted. ROM is used to store the information needed to operate the system. As its name, read-only memory, suggests, we can only read the programs and data stored on it. It contains electronic fuses that can be programmed with specific information, which is stored in the ROM in binary format. It is also known as permanent memory. ROM is of four types:

• MROM (Masked ROM): The first ROMs were hard-wired devices containing a pre-programmed collection of data or instructions. Masked ROMs are a low-cost type of ROM that works in this way.

• PROM (Programmable Read Only Memory): This read-only memory can be programmed once by the user. The user purchases a blank PROM and uses a PROM programmer to write the required contents into it. Its contents cannot be erased once written.

• EPROM (Erasable Programmable Read Only Memory): EPROM is an extension of PROM whose contents can be erased by exposing the chip to ultraviolet light for nearly 40 minutes.

• EEPROM (Electrically Erasable Programmable Read Only Memory): Here the written contents can be erased electrically. An EEPROM can be erased and reprogrammed up to about 10,000 times. Erasing and programming take very little time, roughly 4-10 ms (milliseconds), and any area of an EEPROM can be selectively wiped and reprogrammed.
2. Secondary Memory

It is also known as auxiliary or backup memory. It is a non-volatile memory used to store large amounts of data. Data stored in secondary memory is permanent, but access is slower than primary memory. The CPU cannot access secondary memory directly: the data is first transferred from the auxiliary memory to the main memory, and then the CPU can access it.

Characteristics of Secondary Memory

• It is a slow memory but reusable.

• It is a reliable and non-volatile memory.


• It is cheaper than primary memory.

• The storage capacity of secondary memory is large.

• A computer system can run without secondary memory.

• In secondary memory, data is stored permanently even when the power is off.

Types of Secondary Memory

1. Magnetic Storage

Magnetic Tapes: Magnetic tape is a long, narrow strip of plastic film with a thin magnetic coating, used for magnetic recording. Bits are recorded on the tape as magnetic patches, called records, that run along several tracks; typically 7 or 9 bits are recorded concurrently. Each track has one read/write head, which allows data to be recorded and read as a sequence of characters. The tape can be stopped, started, moved forwards or backwards, or rewound.

Magnetic Disks: A magnetic disk is a circular metal or plastic plate coated with magnetic material; both sides of the disk are used. Bits are stored on the magnetized surface in locations called tracks, which run in concentric rings. Tracks are typically broken into pieces called sectors.

Hard disks are disks that are permanently attached and cannot be removed by a single user.

2. Optical Disks: Optical storage systems serve the same purpose as magnetic storage systems. However, unlike magnetic storage systems, optical storage systems do not employ magnetism to read and store data: they use laser light as the medium to record and retrieve data. The following are some examples of optical storage systems:

• Compact Disk—Read Only Memory (CD-ROM)
• Digital Video Disc (DVD)
• Compact Disc—Recordable (CD-R)
• Compact Disc—Rewritable (CD-RW)
• Digital Video Disc—Recordable (DVD-R)
• Digital Video Disc—Rewritable (DVD-RW)
Note: The DVD disks have much higher storage capacity than the CD disks.

Like other storage systems, the optical storage systems are non-volatile in nature. Also, the optical storage
systems are more reliable as compared to the magnetic storage systems because they are less prone to
mechanical damage. Unlike magnetic storage systems, which are fully read and write-capable storage
devices, the optical storage devices are either read-only or writable. Among the writable optical storage
devices, those devices that can be used for writing data multiple times are termed as rewritable optical storage
devices. Some examples of read-only optical storage devices are CD-ROM and DVD, while some examples
of writable optical storage devices are CD-R, CD-RW and DVD-R.

3. Flash Memory: Flash memory is a non-volatile secondary memory: it retains data even when no electrical supply is provided.

• Solid-State Drives (SSDs): Use flash memory chips to store data, providing much faster read and
write speeds, greater durability, and lower power consumption compared to HDDs.
• USB Flash Drives: Portable devices that use flash memory, making them convenient for transferring
data between computers.

• Memory Cards: Smaller flash-based storage devices commonly used in cameras, phones, and other
electronics (e.g., SD cards).

Software – Introduction and Types (System and Application Software)

1. Software – Introduction

Software is a set of programs, instructions, and related data that tell the computer what to do and how
to do it.

While hardware represents the physical part of a computer, software is the intangible logical part that
controls and coordinates hardware operations.

Without software, hardware is useless — it cannot perform any task on its own.

Example:
When we type a document in MS Word — the software (MS Word) interprets user commands and instructs
the hardware (keyboard, CPU, monitor, printer) to perform actions.

Characteristics of Software

1. Intangible: Cannot be touched, only used or seen as output.


2. Developed, not manufactured: Created by programming and coding.

3. Easily modified: Can be updated or upgraded without changing hardware.

4. Performs specific tasks: Designed for particular operations like editing, calculating, browsing, etc.

5. Dependent on hardware: Needs hardware to execute instructions.

Relationship between Hardware and Software

• Hardware: The physical parts of the computer; cannot work without software. Examples: keyboard, CPU, printer.

• Software: The set of instructions that control the hardware; cannot function without hardware. Examples: MS Word, Windows OS.

2. Types of Software

Software can be broadly classified into two main types:


1. System Software

2. Application Software

A. System Software

System software is a collection of programs designed to control and manage the computer hardware and
provide a platform for running application software.

It acts as a bridge between hardware and the user/application programs.

Main Functions

• Manages and controls computer resources (CPU, memory, input/output devices).

• Provides an environment for application software to run.


• Handles system operations such as file management, device control, and task scheduling.

Major Types of System Software

1. Operating System (OS)

o Core software that manages all computer activities.

o Controls file handling, memory management, input/output, and user interface.

o Examples: Windows, Linux, macOS, Android.

o Example use: When you open a file or copy data, OS handles all background operations.
2. Device Drivers

o Special programs that allow the operating system to communicate with hardware devices.

o Example: Printer driver helps OS send data to the printer; sound driver helps play audio.

3. Utility Programs (Utilities)

o Supportive tools that perform specific system maintenance tasks.

o Examples: Antivirus software, Disk Cleanup, File Compression tools, Backup programs.

4. Language Translators
o Convert high-level programming language code into machine-readable form.

o Types:

▪ Assembler → Converts Assembly language to Machine code.

▪ Compiler → Converts High-level language to Machine code at once.

▪ Interpreter → Translates one line at a time.

o Examples: GCC compiler (C/C++), Python Interpreter.
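Python itself can illustrate the compiler/interpreter distinction sketched above: the built-in `compile()` translates a whole source string at once (compiler-style), while calling `exec()` on one line at a time mimics an interpreter. This is only an analogy, not how GCC or CPython actually work internally:

```python
source = "x = 2 + 3\ny = x * 4\n"

# Compiler-style: translate the whole program first, then run it.
code_object = compile(source, "<demo>", "exec")  # syntax errors surface here
namespace = {}
exec(code_object, namespace)
print(namespace["y"])  # prints 20

# Interpreter-style: translate and execute one line at a time.
namespace = {}
for line in source.splitlines():
    exec(line, namespace)  # each line is translated, then immediately run
print(namespace["y"])  # prints 20
```

Both routes produce the same result; the difference is when translation happens, which is exactly the compiler-versus-interpreter distinction developed later in these notes.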

Examples of System Software

Category Example Function

Operating System Windows 11 Manages computer operations

Device Driver Printer Driver Connects hardware with OS

Utility WinRAR Compresses files

Translator Java Compiler Converts Java code into bytecode

B. Application Software
Application software refers to programs designed to perform specific tasks for the user.
These are built on top of system software and help users in solving real-world problems.
Functions of Application Software

• Perform specific user-oriented tasks like writing, designing, calculating, browsing, etc.

• Help in automation of business, educational, and personal work.

Types of Application Software

1. General-Purpose Application Software

o Designed for everyday use by many users.

o Examples:

▪ MS Word (Word Processing)


▪ MS Excel (Data Analysis & Spreadsheets)

▪ PowerPoint (Presentation)

▪ Web Browsers (Chrome, Firefox)

2. Customized Application Software

o Developed for specific organizations or users to meet unique needs.

o Examples:

▪ Payroll System for HR department


▪ Hospital Management System

▪ Library Management System


3. Special-Purpose Application Software

o Designed for a single specialized task.

o Examples:

▪ Railway Reservation System

▪ Online Examination Software

▪ Accounting Software (Tally)


4. Open-Source and Freeware Applications

o Open-Source: Source code available freely for modification. (e.g., LibreOffice, GIMP)

o Freeware: Free to use but not editable. (e.g., Adobe Acrobat Reader)

Examples of Application Software

Category Example Use

Word Processing MS Word Creating documents

Spreadsheet MS Excel Data analysis

Presentation PowerPoint Slide shows

Accounting Tally Business finance management

Design AutoCAD Engineering drawings

Database MS Access Data storage and queries

3. Difference between System and Application Software

• Purpose: System software manages computer hardware and system operations; application software performs specific user tasks.

• Dependency: System software runs first and supports other software; application software depends on system software to run.

• User Interaction: System software works mostly in the background; the user interacts directly with application software.

• Examples: System software: Windows, Linux, device drivers. Application software: MS Word, Tally, Chrome.

• Development Complexity: System software is more complex and general-purpose; application software is simpler and task-oriented.

Computer Languages

Definition:

A computer language is a means of communication between humans and computers. It allows humans to write programs that the computer can understand and execute. The language used to communicate instructions to a computer is known as a computer language or programming language. There are many different types of languages available today, and a program can be written in any of them depending upon the task to be performed and the knowledge of the person developing the program. The process of writing instructions using a computer language is known as programming or coding, and the person who writes such instructions is referred to as a programmer.

We know that natural languages such as English, Hindi or Tamil have a set of characters and rules, known as grammar, for framing sentences and statements. Similarly, each computer language has a set of characters and rules, known as its syntax, that must be adhered to by programmers while developing programs.

Low-level languages (Machine Language, Assembly Language) and High-level languages (Java, Python,
C++) are the two primary categories.

• Types of Computer Languages:

o Machine Language: The most basic language, consisting of binary codes (0s and 1s),
directly understood by the hardware.

o Assembly Language: A step above machine language, using mnemonic codes instead of
binary, which is then translated into machine code by an assembler.

o High-level Languages: User-friendly languages like Java, Python and C that allow programmers to write instructions in a more human-readable form. These languages are abstracted from machine-specific details.
• Key Point: High-level languages are platform-independent, meaning they can run on different types of computers with minimal changes to the code, unlike low-level languages, which are hardware-dependent.

Generations of Programming Languages


Programming languages have been developed over the years in a phased manner. Each phase of development has made programming languages more user-friendly, easier to use and more powerful. Each phase of improvement can be referred to as a generation. In terms of their performance, reliability and robustness, programming languages can be grouped into five generations:
1. First generation languages (1GL)
2. Second generation languages (2GL)
3. Third generation languages (3GL)
4. Fourth generation languages (4GL)
5. Fifth generation languages (5GL)

First Generation: Machine Languages


The first-generation programming languages are also called low-level programming languages because they were used to program computer systems at a very low level of abstraction, i.e., at the machine level.

During the 1940s, machine languages were developed to program the computer system. The machine
languages which used binary codes 0s and 1s to represent instructions were regarded as low-level
programming languages. The instructions written in the machine language could be executed directly by the
CPU of the computer system. These languages were hardware dependent languages. Therefore, it was not
possible to run a program developed for one computer system in another computer system. This is because
of the fact that the internal architecture of one computer system may be different from that of another. The
development of programs in machine languages was not an easy task for the programmers. One was required
to have thorough knowledge of the internal architecture of the computer system before developing a program
in machine language.

Second Generation: Assembly Languages


Like the first-generation programming languages, the second-generation programming languages also belong to the category of low-level programming languages. They comprise the assembly languages, which use mnemonics for writing programs. As with machine language, an assembly-language programmer needs knowledge of the CPU registers and the instruction set before developing a program. In assembly language, symbolic names are used to represent the opcode and operand parts of an instruction. For example, to move the contents of the CPU register a1 to another CPU register b1, the following assembly language instruction can be used:

mov b1, a1
The above code shows the use of the symbolic name mov in an assembly language instruction; it instructs the processor to transfer data from one register to another. Using this symbolic name, a value can also be moved into a particular CPU register.

The use of symbolic names made these languages a little more user-friendly than the first-generation programming languages. However, the second-generation languages were still machine dependent, so adequate knowledge of the internal architecture of the computer system was still required when developing programs in them.

Unlike the machine language programs, the programs written in the assembly language cannot be directly
executed by the CPU of the computer system because they are not written in the binary form. As a result,
some mechanism is needed to convert the assembly language programs into the machine understandable
form. A software program called assembler is used to accomplish this purpose. An assembler is a translator
program that converts the assembly language program into the machine language instructions. Figure below
shows the role of an assembler in executing an assembly language program.
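At its core, an assembler is a table lookup: each mnemonic and register name is replaced by its numeric code. A toy sketch of this idea (the opcode and register numbers below are invented for illustration; real instruction sets differ):

```python
# Toy assembler: translates a mnemonic instruction such as "mov b1, a1"
# into numeric machine code. All opcode/register values are hypothetical.

OPCODES   = {"mov": 0x01, "add": 0x02, "sub": 0x03}
REGISTERS = {"a1": 0x0A, "b1": 0x0B}

def assemble(line):
    """Convert one assembly instruction into a list of machine-code bytes."""
    mnemonic, operands = line.split(maxsplit=1)       # "mov", "b1, a1"
    dest, src = [r.strip() for r in operands.split(",")]
    return [OPCODES[mnemonic], REGISTERS[dest], REGISTERS[src]]

print(assemble("mov b1, a1"))  # prints [1, 11, 10]
```

The list of numbers is the binary form the CPU can execute directly; the assembler's whole job is this mnemonic-to-number translation, applied to every line of the program.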

Third Generation: High-level Languages
The third-generation programming languages were designed to overcome the various limitations of the first- and second-generation programming languages. The languages of the third and later generations are considered high-level programming languages because they enable the programmer to concentrate on the logic of the program without worrying about the internal architecture of the computer system. In other words, these languages are machine independent.

The third-generation programming languages are also quite user-friendly because they relieve the programmer from the burden of remembering operation codes and instruction sets while writing a program. Instructions in the third and later generations of languages can be specified in English-like sentences, which are easy for a programmer to comprehend. The programming paradigm employed by most third-generation languages was procedural programming, also known as imperative programming. In the procedural paradigm, a program is divided into a number of procedures, also known as subroutines. Each procedure contains a set of instructions for performing a specific task, and a procedure can be called by other procedures while the program is being executed.
These programs require translator programs for converting them into machine language. There are two types
of translator programs, namely, compiler and interpreter. Figure below shows the translation of a program
developed in the high-level programming language into the machine language program.

A program written in any high-level language can be converted by a compiler or an interpreter into
machine-level instructions. Both translator programs serve the same purpose but differ on one key
point. The compiler translates the whole program into machine language before executing any of the
instructions. If there are any errors, the compiler generates error messages which are displayed on the
screen; all errors must be rectified before the program can be compiled successfully. The interpreter, on
the other hand, executes each statement immediately after translating it into a machine language
instruction. Therefore, the interpreter performs the translation as well as the execution of the
instructions simultaneously. If any error is encountered, execution is halted after the error message is
displayed.
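Python's built-in compile() function can be used to illustrate the compiler side of this difference: compile() translates the whole source before anything runs, so a syntax error anywhere in the program is reported before a single statement executes. The helper function below is purely illustrative.

```python
# Illustration of compile-before-execute behaviour using Python's
# built-in compile(). A syntax error on line 2 is caught even though
# line 1 is valid and nothing has executed yet.
def compiles_cleanly(source):
    """Return True if the whole program translates without syntax errors."""
    try:
        compile(source, "<demo>", "exec")
        return True
    except SyntaxError:
        return False

print(compiles_cleanly("x = 1\ny = 2"))   # True
print(compiles_cleanly("x = 1\ny = ="))   # False: line 2 is malformed
```

An interpreter, by contrast, would have executed line 1 before discovering the error on line 2.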

Language Translators
Concept of Compiler

• Definition: A compiler is a program that translates the entire source code of a high-level
programming language into machine code or intermediate code in one go.

• How it Works:

1. The source code is written in a high-level language.


2. The compiler reads and analyzes the entire source code.

3. It generates machine code or object code (not directly executable; it typically requires
linking).

• Steps Involved in Compilation:

o Lexical Analysis: The source code is broken down into tokens (keywords, variables,
operators).

o Syntax Analysis: The tokens are parsed into a syntax tree to check the syntax (grammar) of
the program.

o Semantic Analysis: Ensures that the program is logically correct, i.e., variables are declared
before use, types are correct, etc.

o Optimization: Improves the code to run more efficiently.

o Code Generation: The final code is translated into machine-level instructions.

• Advantages:

o Fast Execution: Since the entire program is compiled before execution, it typically runs
faster.

o Error Detection: All errors are detected at once during compilation, providing a clear report
of issues.

• Disadvantages:

o Time-consuming: Compilation can take time, especially for large programs.

o No Immediate Feedback: Errors can only be detected after the entire code is compiled.
• Examples of Compiled Languages:
o C, C++

Concept of Interpreter

• Definition: An interpreter is a program that translates a high-level program into machine code line
by line at runtime.

• How it Works:

1. The source code is read line by line.

2. Each line is immediately translated into machine code and executed.

• Steps Involved:
o The interpreter parses the code one statement at a time.
o It translates the statement into an intermediate form or directly into machine code and
executes it immediately.

• Advantages:
o Immediate Feedback: Errors are caught and reported as soon as they occur, making
debugging easier.

o Portability: Since interpreters are available for different platforms, a program written in an
interpreted language can run on multiple platforms.

• Disadvantages:

o Slower Execution: Since translation happens at runtime, interpreted programs tend to be
slower compared to compiled programs.

o No Separate Executable: Interpreted programs don’t generate a machine code file that can
be run independently. The source code must be re-interpreted each time.

• Examples of Interpreted Languages:

o Python, Ruby, JavaScript, PHP.

Concept of Assembler

• Definition: An assembler is a program that converts assembly language code (a low-level,
human-readable code) into machine code (binary instructions).

• How it Works:
1. Assembly language programs consist of mnemonics (human-readable representations of
machine instructions).

2. The assembler translates each mnemonic into its corresponding machine code instruction.
• Important Concepts:
o Assembly Language is specific to a particular processor architecture. For example, x86
assembly language is used for Intel processors.

o Assembler Directives: These are commands in the assembly code that instruct the
assembler on how to organize the code, allocate memory, etc.
o Machine Code: The output is machine-level code that the CPU understands, which is
directly executable.
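As a toy illustration of what an assembler does, the Python sketch below maps mnemonics to opcodes. The instruction set and encodings here are entirely invented; a real assembler targets the instruction set of a specific processor architecture.

```python
# Invented mnemonics and opcodes for illustration only (not a real ISA).
OPCODES = {"LOAD": 0x01, "ADD": 0x02, "STORE": 0x03, "HALT": 0xFF}

def assemble(lines):
    """Translate 'MNEMONIC operand' lines into (opcode, operand) pairs."""
    program = []
    for line in lines:
        parts = line.split()
        mnemonic = parts[0]
        operand = int(parts[1]) if len(parts) > 1 else 0
        program.append((OPCODES[mnemonic], operand))
    return program

machine_code = assemble(["LOAD 10", "ADD 20", "STORE 30", "HALT"])
print(machine_code)  # [(1, 10), (2, 20), (3, 30), (255, 0)]
```

Each mnemonic is looked up and replaced by its numeric opcode, which is the core of the translation an assembler performs.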

Algorithm
An algorithm is a finite sequence of well-defined steps designed to solve a specific problem or
perform a computation.
It acts as a blueprint for programming, helping to design logic before writing actual code.

It can be understood by taking the example of cooking a new recipe. To cook a new recipe, one
reads the instructions and executes the steps one by one, in the given sequence. The result is a
perfectly cooked new dish.

Characteristics of a Good Algorithm


1. Finiteness – Must terminate after a limited number of steps.

2. Definiteness – Each step must be clear and unambiguous.

3. Input – Should have 0 or more inputs.

4. Output – Must produce at least one result.

5. Effectiveness – Each operation must be basic enough to be performed exactly.

6. Generality – Should be applicable to a class of similar problems, not just one.

Algorithm to add 3 numbers and print their sum:


1. START

2. Declare 3 integer variables num1, num2, and num3.

3. Take the three numbers, to be added, as inputs in variables num1, num2, and num3 respectively.

4. Declare an integer variable sum to store the resultant sum of the 3 numbers.

5. Add the 3 numbers and store the result in the variable sum.
6. Print the value of the variable sum

7. END
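The seven steps above can be translated into a short Python sketch; the function wrapper and its name are illustrative choices, not part of the algorithm itself.

```python
def add_three(num1, num2, num3):
    """Steps 2-5: take three numbers and store their sum."""
    total = num1 + num2 + num3   # step 5: add the 3 numbers
    return total                 # step 6: the value to be printed

# Step 3 in an interactive run would use input(); fixed values are used here.
print(add_three(2, 3, 5))  # 10
```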

What is Algorithm complexity and how to find it?


The complexity of an algorithm is measured by the amount of space and time it consumes. The
complexity of an algorithm therefore refers to the measure of the time it will need to execute and get the
expected output, and the space it will need to store all the data (input, temporary data, and output). These
two factors define the efficiency of an algorithm.
The two factors of Algorithm Complexity are:

• Time Factor: Time is measured by counting the number of key operations such as comparisons in
the sorting algorithm.

• Space Factor: Space is measured by counting the maximum memory space required by the
algorithm to run/execute.

Therefore the complexity of an algorithm can be divided into two types:

1. Space Complexity: The space complexity of an algorithm refers to the amount of memory required by
the algorithm to store the variables and get the result. This can be for inputs, temporary operations, or
outputs.

How to calculate Space Complexity?


The space complexity of an algorithm is calculated by determining the following 2 components:

• Fixed Part: This refers to the space that is required by the algorithm. For example, input variables,
output variables, program size, etc.

• Variable Part: This refers to the space that can be different based on the implementation of the
algorithm. For example, temporary variables, dynamic memory allocation, recursion stack space,
etc.
Therefore the space complexity S(P) of any algorithm P is S(P) = C + SP(I), where C is the fixed part
and SP(I) is the variable part of the algorithm, which depends on instance characteristic I.

2. Time Complexity: The time complexity of an algorithm refers to the amount of time required by the
algorithm to execute and get the result. This can be for normal operations, conditional if-else statements,
loop statements, etc.
How to calculate Time Complexity?
The time complexity of an algorithm is also calculated by determining the following 2 components:
• Constant time part: Any instruction that is executed just once comes in this part. For example,
input, output, if-else, switch, arithmetic operations, etc.
• Variable Time Part: Any instruction that is executed more than once, say n times, comes in this
part. For example, loops, recursion, etc.
Therefore Time complexity T(P) of any algorithm P is T(P) = C + TP(I), where C is the constant
time part and TP(I) is the variable part of the algorithm, which depends on the instance
characteristic I.
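The split between the constant part C and the variable part TP(I) can be made concrete by counting key operations for a simple summation loop. The counting scheme below is a toy illustration, not a formal cost model.

```python
def count_operations(n):
    """Count key operations for summing 1..n with a loop."""
    ops = 0
    total = 0            # constant part C: executed once
    ops += 1
    for i in range(1, n + 1):
        total += i       # variable part: executed n times
        ops += 1
    return ops

# For n = 10: C = 1 constant operation + n = 10 loop operations.
print(count_operations(10))  # 11
```

Here T(P) = C + TP(I) with C = 1 and TP(I) = n, so the operation count grows linearly with the instance size n.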

Pseudocode
A Pseudocode is defined as a step-by-step description of an algorithm. Pseudocode does not use any
programming language in its representation; instead, it uses simple English-like text, as it is
intended for human understanding rather than machine reading.

Meaning

Pseudocode is a method of representing algorithms using structured, English-like statements.


It bridges the gap between natural language and programming code.

How to write Pseudocode?

Before writing the pseudocode of any algorithm the following points must be kept in mind.

• Organize the sequence of tasks and write the pseudocode accordingly.

• First, establish the main goal or the aim. Example:


This program will print first N numbers of Fibonacci series.

• Use standard programming structures such as if-else, for, while, and cases the way we use them in
programming. Indent the statements if-else, for, while loops as they are indented in a program, it
helps to comprehend the decision control and execution mechanism. It also improves readability to
a great extent. Example:
IF "1" THEN
    PRINT "I AM CASE 1"

IF "2" THEN
    PRINT "I AM CASE 2"

• Use appropriate naming conventions. Programmers tend to follow what they read: if a
programmer goes through a pseudocode, their code will follow the same names, so the naming
must be simple and distinct.

• Reserved commands or keywords must be represented in capital letters. Example: if you are
writing IF…ELSE statements then make sure IF and ELSE be in capital letters.

• Check whether all the sections of the pseudocode are complete, finite, and clear to understand
and comprehend. Also, make sure it explains everything that is going to happen in the actual code.

• Don't write the pseudocode in a programming language. It is necessary that the pseudocode is
simple and easy to understand even for a layman or client, minimizing the use of technical terms.
Example:

START
INPUT A, B
IF A > B THEN
    PRINT "A is greater"
ELSE
    PRINT "B is greater"
END IF
STOP
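This pseudocode maps almost line for line onto Python. Note that, as in the pseudocode, the case A = B falls into the ELSE branch.

```python
def compare(a, b):
    """Python equivalent of the pseudocode above (A = B falls into ELSE)."""
    if a > b:
        return "A is greater"
    else:
        return "B is greater"

print(compare(7, 3))  # A is greater
print(compare(3, 7))  # B is greater
```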

Advantages of Pseudocode

• Easier to understand and modify than real code.

• Language independent – can be converted into any programming language.


• Helps in logical thinking and error detection before actual coding.

Conditions in Pseudocode (Decision Making)

Used to make logical choices.

➤ IF–THEN–ELSE Structure

IF condition THEN
    statements
ELSE
    statements
END IF

Example:

IF Marks >= 50 THEN
    PRINT "Pass"
ELSE
    PRINT "Fail"
END IF

➤ Nested IF

IF condition1 THEN
    IF condition2 THEN
        statements
    END IF
END IF

➤ CASE / SWITCH

CASE option OF
    1: PRINT "Add"
    2: PRINT "Subtract"
    OTHERWISE PRINT "Invalid"
END CASE
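The CASE structure above corresponds to an if/elif chain in Python; the function wrapper here is an illustrative choice so each branch can return its result.

```python
def choose(option):
    """if/elif equivalent of the CASE...OF pseudocode above."""
    if option == 1:
        return "Add"
    elif option == 2:
        return "Subtract"
    else:                 # OTHERWISE branch
        return "Invalid"

print(choose(1))   # Add
print(choose(99))  # Invalid
```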

Loops in Pseudocode (Iteration)


Used for repetition of steps.

➤ FOR Loop
FOR i = 1 TO 10 DO
    PRINT i
END FOR

➤ WHILE Loop

SET i = 1
WHILE i <= 10 DO
    PRINT i
    i = i + 1
END WHILE
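Both loops map directly onto Python. Here each is wrapped in a small function (an illustrative choice) that collects the values instead of printing them, so the two results can be compared.

```python
def for_loop():
    values = []
    for i in range(1, 11):   # FOR i = 1 TO 10 DO
        values.append(i)
    return values

def while_loop():
    values = []
    i = 1                    # SET i = 1
    while i <= 10:           # WHILE i <= 10 DO
        values.append(i)
        i = i + 1
    return values

print(for_loop() == while_loop())  # True: both produce 1..10
```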

Unit 2
Operating System

An Operating System (OS) is a system software that acts as an intermediary between the user and the
computer hardware.
It provides a user-friendly interface and controls the execution of programs, management of resources,
and coordination of hardware components.
In short:

The Operating System manages all activities of a computer system — from input/output operations to
memory allocation and process control.

Definition

An Operating System is a collection of software programs that control the overall operation of a computer
system and provide an environment in which users can execute programs conveniently and efficiently.

Functions of Operating System

1. Process Management

• Handles creation, scheduling, and termination of processes.


• Allocates CPU time fairly and manages multitasking.

2. Memory Management

• Allocates and deallocates main memory to processes.


• Maintains memory maps and prevents overlap between processes.
3. File Management

• Manages creation, reading, writing, and deletion of files.


• Controls access rights and storage organization.

4. Device Management
• Controls and coordinates I/O devices via device drivers.
• Maintains device queues and handles interrupts.

5. Storage Management
• Manages secondary storage such as hard drives.
• Includes space allocation, file systems, and access control.
6. Security and Protection
• Protects data and resources from unauthorized access.
• Implements authentication, permissions, and encryption.

7. User Interface (UI)

• Provides CLI (Command Line Interface) and GUI (Graphical User Interface) for user
interaction.

Characteristics of Operating System

1. Resource Manager: Efficiently manages hardware and software resources.

2. Concurrency: Allows multiple operations or processes simultaneously.


3. Multitasking: Enables execution of more than one program at a time.

4. Scalability: Supports small as well as large systems.

5. Reliability and Security: Protects system integrity and data.

6. Efficiency: Optimizes system performance and utilization.

Types of Operating Systems

1. Batch Operating System


This was the earliest form of operating system used in mainframe computers during the 1950s–70s. In this
system, users did not interact directly with the computer. Instead, jobs (programs, data) were collected,
grouped into batches, and executed one after another.

A computer operator would load a batch of jobs onto punch cards or magnetic tape and submit them to the
computer. The system processed them sequentially, producing the output once all jobs were done.

Example: IBM’s OS/360 and UNIVAC systems used batch processing.

Drawback: It’s efficient for large repetitive tasks but lacks interactivity — no real-time user control or
response.

2. Multiprogramming

Definition:
The ability of an OS to hold multiple programs in memory at the same time, allowing the CPU to switch
among them when one program is waiting for I/O.

Key idea: Increase CPU utilization.


Only one program executes at a time, others stay ready.

Mechanism:

• When a process waits for I/O, the CPU moves to another ready process.

• This reduces CPU idle time.


Example: Early UNIX, OS/360.

Diagrammatically:
CPU executes → I/O wait → switch to next job → repeat.

Scheduling policies commonly used:

• Non-preemptive multiprogramming (older): Run until wait/exit.

• Preemptive policies (later): allow preemption for fairness or responsiveness (ties into multitasking).

3. Multitasking (Time-Sharing) Operating System

Definition:
An extension of multiprogramming, focusing on interactive systems where multiple tasks (processes)
appear to run simultaneously by rapid switching (time-sharing).

Multitasking is a logical extension of multiprogramming: the ability of an OS to execute
more than one task on a single CPU. These multiple tasks share common resources
(like CPU and memory). In multitasking systems, the CPU executes multiple jobs by switching among
them, typically using a small time quantum; the switches occur so quickly that users feel as if they are
interacting with each executing task at the same time.
Key idea: Provide user responsiveness.

Example: Windows, macOS, Linux desktops.

In simple terms:

• Multiprogramming = background scheduling to keep CPU busy.

• Multitasking = fast switching to let user “feel” multiple programs are running simultaneously.

Example in practice:
Typing a Word document while downloading a file and listening to music — all appear concurrent due to
time-slicing.

Summary:

Multitasking = Time-sharing for interactive, responsive experience; logical evolution of


multiprogramming.

4. Multiprocessing Operating System

• Uses two or more processors (CPUs) to perform tasks simultaneously.

• Increases speed and reliability of the system.

Definition:
A system with two or more CPUs (processors) working together, sharing memory and peripherals (I/O
devices), under a single OS.
Here, true parallelism happens — multiple processes execute literally at the same time.
Key idea: Improve performance and reliability.

Types:

• Symmetric Multiprocessing (SMP): All CPUs share same OS and memory (e.g., Linux SMP
kernel).

• Asymmetric Multiprocessing (AMP): One master CPU controls others.

Example:

• Modern servers with multiple cores (Windows Server, UNIX).

• Dual-core and quad-core CPUs in PCs.

Summary:
Multiprocessing = Physical parallel execution using multiple CPUs.

5. Distributed Operating System

Meaning:
A Distributed OS controls a group of independent computers and makes them appear to the user as a
single system.
Each computer (called a node) has its own processor and memory but is connected via a network.
How it works:

• Tasks are divided into subtasks and distributed across multiple machines.

• The OS coordinates them for execution and combines the results.

• Communication happens through message passing among nodes.

Examples: LOCUS, Amoeba, Google’s Android Cluster OS, Microsoft Azure Fabric.

Advantages:

• Resource Sharing – CPU, files, and devices can be shared.


• Fault Tolerance – If one node fails, others continue running.

• Scalability – Easy to add new machines.

Limitations:

• Depends heavily on network reliability.

• Complex synchronization between systems.

• Security management across multiple nodes is difficult.

6. Real-Time Operating System (RTOS)

Meaning:
An RTOS is designed for applications that need instant and predictable responses.
Used in time-critical systems, where even a small delay can cause failure.

Types:

1. Hard RTOS: Deadlines are strict. Missing them leads to system failure.
Example: Flight control, missile guidance, medical devices.

2. Soft RTOS: Occasional delay acceptable.


Example: Audio-video streaming, online games.

How it works:

• Uses priority-based scheduling.

• Interrupts are handled immediately.

• Minimal delay between input and response (called latency).

Examples: VxWorks, QNX, RTLinux, FreeRTOS.


Advantages:

• Predictable and reliable performance.

• High stability for time-bound operations.

Limitations:

• Handles limited, small tasks.

• Expensive hardware and strict design needed.

7. Network Operating System (NOS)


Meaning:
A Network OS manages and coordinates computers connected in a local or wide area network
(LAN/WAN).
It allows multiple users to share files, printers, and applications.

How it works:

• A central server manages user accounts, data, and network resources.

• Client systems send requests; the server processes and responds.

Examples: Novell NetWare, Windows Server, UNIX/Linux-based NOS.


Advantages:

• Centralized control and easy management.

• Shared access to resources improves efficiency.

• Enhanced security with user-level authentication.

Limitations:

• High dependency on the central server.


• Network failure can affect all connected users.

8. Mobile Operating System

Meaning:
Mobile OS is designed for smartphones, tablets, and handheld devices.
It manages hardware, sensors, touch interface, mobile connectivity, and apps.

Key features:

• Power management to save battery.

• App sandboxing for security.

• Wireless connectivity (Wi-Fi, Bluetooth, cellular).

• Touch-based interface and sensors (GPS, camera).


Examples: Android (Google), iOS (Apple), HarmonyOS (Huawei).

Advantages:

• User-friendly, portable, supports millions of apps.

• Frequent updates and wide app ecosystem.

Limitations:

• Limited multitasking compared to desktop OS.

• Hardware-dependent; less customization in closed OS (like iOS).

Topologies

1. Bus Topology

In a Bus Topology, all computers are connected to a single central communication line called the bus or
backbone cable. When one computer sends data, it travels along the bus and can be received by any other
node on the network. Only the addressed device accepts the message, while others ignore it.
It’s simple and cost-effective — no central switch is needed — which made it very popular in early LANs
using coaxial cable Ethernet (10Base-2 or 10Base-5).
However, if the main cable breaks, the entire network goes down, and as more devices are added,
performance drops due to data collisions.
You can compare it to a single road shared by all vehicles — easy to build, but traffic jams increase as
users grow.
Today, bus topology is mostly replaced by star networks, but the concept still exists in small embedded or
IoT systems where devices share one data line.

2. Star Topology

In a Star Topology, all computers are connected to a single central hub or switch. Each node has an
independent connection, so if one cable fails, only that computer is affected — not the entire network.
Data passes from the sender to the central device, which forwards it to the correct receiver.
This structure is used in almost every modern setup — from college computer labs and office LANs to
Wi-Fi routers at home, where each system connects to the router as the central node.
Star topology is easy to maintain and expand, though it depends heavily on the hub or switch — if that
central device fails, the whole network stops.
Think of it like a railway junction: all routes meet at one central station controlling traffic.
Its efficiency and fault isolation make it the most widely used topology today.

3. Ring Topology
In a Ring Topology, each computer connects to exactly two others, forming a closed loop. Data travels
around the ring, one node at a time, until it reaches its destination.
Older networks like IBM Token Ring used this structure, though it’s rare in modern LANs.
However, the idea thrives in telecom systems. Unidirectional rings, once used by BSNL or MTNL, sent
data in one direction around the ring. Modern networks by Airtel and Jio use bidirectional rings, where
data flows both ways — if one fiber line fails, the signal instantly reroutes the other way, maintaining
communication.
So while ring topology has faded from local networks, it remains vital in telecommunication and
industrial systems, where continuous and redundant connectivity is essential.

4. Mesh Topology

In a Mesh Topology, every node is connected to every other node directly, creating multiple data paths.
This means if one link fails, data can still travel through alternate routes — giving mesh the highest level of
reliability and fault tolerance.
It’s expensive and complex to install because each new device requires many connections, but it’s
unbeatable in critical systems.
You’ll find it in Internet backbone networks, military communication systems, and large data centers
where uptime is non-negotiable.
For instance, ISP routers across cities form a mesh so that if one route fails, traffic automatically reroutes
through another.
You can imagine it as a network of roads where every city connects directly to every other city —
costly but never completely cut off.
5. Tree Topology

A Tree Topology combines features of star and bus structures. It has a hierarchical layout — a root node
at the top, branches connecting to intermediate nodes, and leaves representing end devices.
It’s highly scalable and organized, making it ideal for universities, corporate networks, and large
organizations with multiple departments.
For example, a university network might have a central server in the IT block (root), department-level
switches (branches), and lab computers (leaves).
If one branch fails, only that section is affected, not the entire network.
Think of it like an organizational chart — a structured hierarchy with clear levels of control and
communication.
Tree topology is the backbone of structured cabling systems used in large institutions and enterprise
networks.

6. Hybrid Topology

A Hybrid Topology is a combination of two or more basic topologies — for example, Star-Bus or Star-
Ring.
It is designed to take advantage of the strengths of each topology while minimizing their weaknesses.
Large companies, universities, and data centers often use hybrid structures — for instance, each department
may use a Star topology internally, while departments connect to the main server backbone using a Bus or
Ring layout.
This setup offers flexibility, scalability, and fault tolerance at different layers.
A good analogy is a modern transport system — local roads (star) connect to main highways (bus),
forming a powerful hybrid network.
Hybrid topology is what most real-world enterprise and cloud networks follow today.

Internet of Things

IoT refers to a network of “things” (devices) embedded with sensors, actuators, software — these objects
collect, send, and act on data from their environment.

In simple words, IoT is a system of interrelated devices connected to the internet that transfer data from
one to the other.

Example- A smart Home

When your alarm rings → IOT system can open the window blinds (curtains), Turn off AC, Turn on coffee
machine while also switching on water heater for you.

To ensure this seamless functioning it requires effective communication between devices and processing of
data exchanged.

IOT devices

1. General Devices: Devices that perform functions for the user, such as home appliances
(alarms, coffee machines, ACs). They exchange information with each other via wired or
wireless interfaces such as Ethernet, Wi-Fi, Zigbee, Bluetooth, or GSM.
2. Sensing Devices: Sensors and Actuators. (Sensors capture physical stimuli from
environment like measuring temperature, humidity, light intensity from environment and
convert them into electronic or electric signals. While Actuators take signals from Control
System and convert them into Physical action or motion.)

How the Communication happens. (Mechanism)

When the alarm goes off, two major events occur:

Step 1 — Event Generation (Perception Layer)


• The alarm device detects a trigger (e.g., time = 7:00 AM or sound threshold crossed).
• Its embedded sensor/processor gets activated and sends a signal — could be via Wi-Fi, Bluetooth,
Zigbee, or another short-range protocol.
Step 2 — Data Transmission (Network Layer)

• The alarm communicates this signal to a hub, router, or IoT gateway.

• The gateway routes this event data to either:

o Cloud server (e.g., AWS IoT Core, Google IoT Hub), or

o Local edge controller (if it’s a smart home setup like Apple HomeKit or Zigbee Hub).

Step 3 — Decision Logic (Processing Layer)

• A rule engine or automation script is already defined in software (example: in the IoT platform or
mobile app):
“IF Alarm = ON → THEN Blinds = OPEN.”

• When the alarm event is received, the IoT platform triggers the action command automatically.

Step 4 — Command Execution (Application → Actuation)

• The blinds receive the command “OPEN”.

• Their motor actuator moves accordingly and converts the signal into physical action.

• A feedback loop confirms the action was completed (the blinds' position sensor reports back
"OPENED" to the user's IoT application).
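The four steps above can be sketched as a minimal rule engine in Python. The device names, state values, and rule format here are invented for illustration; real IoT platforms use far richer rule definitions.

```python
# Rule from Step 3: "IF Alarm = ON THEN Blinds = OPEN" (hypothetical devices).
rules = [
    {"if": ("alarm", "ON"), "then": ("blinds", "OPEN")},
]

state = {"alarm": "OFF", "blinds": "CLOSED"}

def handle_event(device, new_value):
    """Step 2: receive an event; Steps 3-4: evaluate rules and actuate."""
    state[device] = new_value
    for rule in rules:
        cond_device, cond_value = rule["if"]
        if state.get(cond_device) == cond_value:
            target, action = rule["then"]
            state[target] = action   # command execution / actuation

handle_event("alarm", "ON")          # Step 1: the alarm fires
print(state)  # {'alarm': 'ON', 'blinds': 'OPEN'}
```

In a real deployment, updating `state` would involve sending a command over the network to the actuator and waiting for its feedback.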

Smart Cities

Any city that collects data, transforms it into information, and uses the latest information to make decisions
in or near real-time to provide better services to citizens, improve operations, and lower cost can be
deemed as a smart city. For example, a smart city might lower congestion on its streets and lower pollution
by optimizing transportation infrastructure and assets. It might provide faster responses to public safety
incidents via real-time capture and analysis of sensor and surveillance data.

1. Smart Health (Wearable to Hospital)


Story:
A citizen wears a smartwatch that tracks heart rate and oxygen.
Flow:

1. Watch measures health data every few seconds.

2. It sends this data (through Wi-Fi or mobile network) → to the cloud health server.

3. The cloud rule engine checks: “Is heart rate too high or oxygen too low?”

4. If yes, an alert is automatically sent → to hospital dashboard or ambulance control.

5. Doctor/paramedic gets patient’s live location and vital signs.

Why Smart:
Because the system detects danger before humans notice — enabling instant medical response.

2. Citizen Connection (Smart Governance)

Story:
A citizen sees a pothole or broken streetlight and reports it on a city app.

Flow:
1. Citizen uploads complaint with photo + GPS location.

2. App sends it → to the City Cloud Portal.

3. The system checks: “Which department should handle it?” (rule-based routing).

4. Task is automatically assigned to Municipal Road Department.

5. When fixed, citizen gets an update — “Work completed!”

Why Smart:
Because the system links citizens → city departments → updates automatically.

3. Smart Waste Management

Story:
Bins in the city have sensors that tell how full they are.
Flow:

1. Sensor inside bin detects the fill level.

2. When the bin reaches 80%, it sends signal → to the cloud waste dashboard.

3. The system checks all bins’ data and designs optimized garbage collection routes (so trucks cover
only full bins).

4. Route is sent to driver’s tablet in the garbage truck.

5. When emptied, bin automatically updates its status to “empty”.

Why Smart:
Because it saves time, fuel, and manpower — no truck drives to half-empty bins.
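Steps 2 and 3 of the flow above (selecting only sufficiently full bins for collection) can be sketched as follows. The bin data and the helper function are hypothetical; the 80% threshold comes from the notes.

```python
# Hypothetical fill-level readings reported by bin sensors.
bins = [
    {"id": "B1", "fill": 85},
    {"id": "B2", "fill": 40},
    {"id": "B3", "fill": 92},
]

def bins_to_collect(bins, threshold=80):
    """Return the IDs of bins whose fill level meets the collection threshold."""
    return [b["id"] for b in bins if b["fill"] >= threshold]

print(bins_to_collect(bins))  # ['B1', 'B3']
```

Route optimization would then run only over the bins this filter returns, which is why half-empty bins are never visited.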

4. Smart Traffic Management

Story:
City cameras and sensors monitor vehicle flow on roads.

Flow:

1. Sensors detect car count and speed.


2. Data goes to traffic control cloud every few seconds.

3. Rules decide: “If traffic jam detected → increase green light time.”

4. If accident or emergency vehicle found → system gives green corridor automatically.

5. Traffic lights adjusted, alerts go to drivers via navigation apps.

Why Smart:
Because the system adapts in real-time — fewer jams, faster emergency movement.

Green Corridor

Suppose an ambulance has to travel from Apollo Hospital, Hyderabad to Osmania Hospital for an organ
transplant.
Under normal traffic conditions, the journey takes around 40 minutes.
But when a “Green Corridor” is activated, this is what happens:

1. The ambulance’s GPS sends a signal to the city control room, notifying that an emergency
vehicle is on its way.

2. The Command Centre (the city’s main traffic control hub) maps out the ambulance’s exact route.

3. The traffic signal network receives an automated command:

o “Keep all signals along this specific route green for 5–10 minutes.”

4. In real time, the police and control room coordinate to ensure there are no obstructions on that
path.

5. The ambulance passes through continuously green signals without stopping anywhere.
Result:
Travel time is reduced from 40 minutes to just 12 minutes.
