21IT403Notes 1
Rajam, AP REV.: 00
(An Autonomous Institution Affiliated to JNTUGV, AP)
Cohesive Teaching – Learning Practices (CTLP)
Class: 4th Sem. – B. Tech | Department: CSE & IT
Course: Operating Systems | Course Code: 21IT403
Prepared by Ms. U. Archana
Lecture Topic Operating-System Overview: Computer-System Organization and Architecture,
Operating-System Structure, Operating-System Operations & Services, System
Calls & its types. Threads: Multi Core Programming, Multithreading Models,
Thread Scheduling algorithms.
Course Outcome(s): CO1, CO2
Program Outcome(s): PO1, PO12
Duration: 50 Min each | Lecture: 1-8 | Unit: I
Pre-requisite(s): Computer, Operating systems
1. Objective
➢ To impart knowledge on the Computer system organization and architecture
➢ Understand the objectives and functions of operating systems
➢ To get the idea on purpose of system call
➢ To discuss the various threads and its models.
2. Intended Learning Outcomes
At the end of this session the students will be able to:
A. Understand the computer system organization and architecture
B. Understand the objectives and functions of operating systems
C. Understand the role of system call
D. Understand the threads concepts and its various models.
3. 2D Mapping of ILOs with Knowledge Dimension and Cognitive Learning Levels of RBT
4. Teaching Methodology
❖ Power Point Presentation, Chalk & Board, Visual Presentation
5. Evocation
6. Deliverables
Lecture-1
Computer Startup
▪ bootstrap program is loaded at power-up or reboot
▪ Typically stored in ROM or EPROM, generally known as firmware
▪ Initializes all aspects of system
▪ Loads operating system kernel and starts execution
Lecture-2
The occurrence of an event is usually signaled by an interrupt from either the hardware or the
software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by
way of the system bus. Software may trigger an interrupt by executing a special operation called a
system call (also called a monitor call). When the CPU is interrupted, it stops what it is doing and
immediately transfers execution to a fixed location. The fixed location usually contains the starting
address where the service routine for the interrupt is located. The interrupt service routine executes; on
completion, the CPU resumes the interrupted computation. Interrupts are an important part of
computer architecture. Each computer design has its own interrupt mechanism, but several functions
are common. The interrupt must transfer control to the appropriate interrupt service routine. The
straightforward method for handling this transfer would be to invoke a generic routine to examine the
interrupt information. The routine, in turn, would call the interrupt-specific handler. However,
interrupts must be handled quickly. Since only a predefined number of interrupts is possible, a table of
pointers to interrupt routines can be used instead to provide the necessary speed. The interrupt routine
is called indirectly through the table, with no intermediate routine needed. Generally, the table of
pointers is stored in low memory (the first hundred or so locations). These locations hold the addresses
of the interrupt service routines for the various devices. This array, or interrupt vector, of addresses is
then indexed by a unique device number, given with the interrupt request, to provide the address of the
interrupt service routine for the interrupting device.
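The table-driven dispatch described above can be sketched in Java. The device numbers, handler actions, and log messages below are illustrative assumptions, not part of any real interrupt mechanism:

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of an interrupt vector: an array of handlers indexed by device
// number. Device numbers and handler actions here are hypothetical.
public class InterruptVector {
    static final List<String> log = new ArrayList<>();

    // The "vector": index = device number, entry = service routine.
    static final Runnable[] vector = {
        () -> log.add("timer tick serviced"),      // device 0: timer
        () -> log.add("keyboard input serviced"),  // device 1: keyboard
        () -> log.add("disk transfer serviced")    // device 2: disk
    };

    // Dispatch indirectly through the table -- no intermediate routine.
    static void interrupt(int deviceNumber) {
        vector[deviceNumber].run();
    }

    public static void main(String[] args) {
        interrupt(2);   // disk raises an interrupt
        interrupt(0);   // timer raises an interrupt
        System.out.println(log);
    }
}
```

Indexing the array by the device number given with the interrupt request is what makes the dispatch fast: no generic examination routine runs in between.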
Storage Structure
The wide variety of storage systems can be organized in a hierarchy according to speed and
cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per
bit generally decreases, whereas the access time generally increases. This trade-off is reasonable; if a
given storage system were both faster and less expensive than another—other properties being the
same—then there would be no reason to use the slower, more expensive memory. In fact, many early
storage devices, including paper tape and core memories, are relegated to museums now that magnetic
tape and semiconductor memory have become faster and cheaper.
The top four levels of memory in Figure 1.4 may be constructed using semiconductor memory.
In addition to differing in speed and cost, the various storage systems are either volatile or nonvolatile.
As mentioned earlier, volatile storage loses its contents when the power to the device is removed. In the
absence of expensive battery and generator backup systems, data must be written to nonvolatile storage
for safekeeping. In the hierarchy shown in Figure 1.4, the storage systems above the solid-state disk are
volatile, whereas those including the solid-state disk and below are nonvolatile. Solid-state disks have
several variants but in general are faster than magnetic disks and are nonvolatile. One type of solid-
state disk stores data in a large DRAM array during normal operation but also contains a hidden
magnetic hard disk and a battery for backup power. If external power is interrupted, this solid-state
disk’s controller copies the data from RAM to the magnetic disk.
When external power is restored, the controller copies the data back into RAM. Another form
of solid-state disk is flash memory, which is popular in cameras and personal digital assistants (PDAs),
in robots, and increasingly for storage on general-purpose computers. Flash memory is slower than
DRAM but needs no power to retain its contents. Another form of nonvolatile storage is NVRAM,
which is DRAM with battery backup power. This memory can be as fast as DRAM and (as long as the
battery lasts) is nonvolatile.
Cache Memory
• Important principle, performed at many levels in a computer (in hardware,
operating system, software)
• Information in use copied from slower to faster storage temporarily
• Faster storage (cache) checked first to determine if information is there
o If it is, information used directly from the cache (fast)
o If not, data copied to cache and used there
• Cache smaller than storage being cached
o Cache management important design problem
o Cache size and replacement policy
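A minimal sketch of this check-the-cache-first behaviour, assuming a hypothetical backing store, a cache capacity of two entries, and least-recently-used replacement (all three are illustrative choices, not from the notes):

```java
import java.util.HashMap;
import java.util.LinkedHashMap;
import java.util.Map;

// Check the small, fast cache first; on a miss, copy the data from the
// slower backing store into the cache and use it from there.
public class CacheDemo {
    static final Map<String, String> backingStore = new HashMap<>();
    static int hits = 0, misses = 0;

    // A tiny cache with a replacement policy: a LinkedHashMap in access
    // order evicts the least-recently-used entry once it holds 2 items.
    static final Map<String, String> cache =
        new LinkedHashMap<>(16, 0.75f, true) {
            protected boolean removeEldestEntry(Map.Entry<String, String> e) {
                return size() > 2;   // replacement policy: LRU, capacity 2
            }
        };

    static String read(String key) {
        String v = cache.get(key);
        if (v != null) { hits++; return v; }   // fast path: found in cache
        misses++;
        v = backingStore.get(key);             // slow path: backing store
        cache.put(key, v);                     // copy into cache for reuse
        return v;
    }

    public static void main(String[] args) {
        backingStore.put("a", "1");
        backingStore.put("b", "2");
        read("a"); read("a"); read("b");
        System.out.println("hits=" + hits + " misses=" + misses);
    }
}
```

The second `read("a")` is served from the cache; the first `read("a")` and `read("b")` must go to the backing store.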
I/O Structure
Interrupt-driven I/O is fine for moving small amounts of data but can produce high overhead
when used for bulk data movement such as disk I/O. To solve this problem, direct memory access
(DMA) is used. After setting up buffers, pointers, and counters for the I/O device, the device
controller transfers an entire block of data directly to or from its own buffer storage to memory, with
no intervention by the CPU. Only one interrupt is generated per block, to tell the device driver that
the operation has completed, rather than the one interrupt per byte generated for low-speed devices.
While the device controller is performing these operations, the CPU is available to accomplish other
work. Some high-end systems use a switch rather than a bus architecture. On these systems, multiple
components can talk to other components concurrently, rather than competing for cycles on a shared
bus. In this case, DMA is even more effective. The following Figure shows the interplay of all
components of a computer system.
Personal computers (PCs) appeared in the 1970s. Operating systems for these computers have
benefited in several ways from the development of operating systems for mainframes. Microcomputers
were immediately able to adopt some of the technology developed for larger operating systems. On the
other hand, the hardware costs for microcomputers are sufficiently low that individuals have sole use
of the computer. However, when other computers and other users can access the files on a PC, file
protection again becomes a necessary feature of the operating system.
Multiprocessor System
Within the past several years, multiprocessor systems (also known as parallel systems or multicore
systems) have begun to dominate the landscape of computing. Such systems have two or more
processors in close communication, sharing the computer bus and sometimes the clock, memory, and
peripheral devices. Multiprocessor systems first appeared prominently in servers and have
since migrated to desktop and laptop systems. Recently, multiple processors have appeared on mobile
devices such as smart phones and tablet computers.
Increased throughput
By increasing the number of processors, we expect to get more work done in less time. The
speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple processors
cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly.
This overhead, plus contention for shared resources, lowers the expected gain from additional
processors. Similarly, N programmers working closely together do not produce N times the amount of
work a single programmer would produce.
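The notes do not give a formula for the speed-up ratio, but Amdahl's law is one standard way to quantify why it is less than N: if a fraction p of the work can be parallelized, speedup(N) = 1 / ((1 - p) + p/N). The value p = 0.9 below is an illustrative assumption:

```java
// Amdahl's law: overhead and the serial fraction of the work keep the
// speed-up with N processors below N.
public class Amdahl {
    static double speedup(double p, int n) {
        return 1.0 / ((1.0 - p) + p / n);
    }

    public static void main(String[] args) {
        double p = 0.9;                       // 90% of the work is parallel
        for (int n : new int[] {2, 4, 10}) {
            System.out.printf("N=%d speedup=%.2f%n", n, speedup(p, n));
        }
        // Even with N=10 processors the speedup is about 5.26, not 10.
    }
}
```

With p = 0.9 and N = 10, speedup = 1 / (0.1 + 0.09) ≈ 5.26, matching the intuition above that ten processors do not make the system ten times faster.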
Multiprocessor systems can cost less than equivalent multiple single-processor systems,
because they can share peripherals, mass storage, and power supplies. If several programs operate on
the same set of data, it is cheaper to store those data on one disk and to have all the processors share
them than to have many computers with local disks and many copies of the data.
Increased reliability
If functions can be distributed properly among several processors, then the failure of one
processor will not halt the system, only slow it down. If we have ten processors and one fails, then each
of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the
entire system runs only 10 percent slower, rather than failing altogether.
Two types:
Timesharing (multitasking) is a logical extension in which the CPU switches jobs so frequently that
users can interact with each job while it is running, creating interactive computing
▪ Response time should be < 1 second
▪ Each user has at least one program executing in memory → process
▪ If several jobs are ready to run at the same time → CPU scheduling
▪ If processes don’t fit in memory, swapping moves them in and out to run
▪ Virtual memory allows execution of processes not completely in memory
Dual-mode operation
Timer
Lecture-4
Operating-System Services
An operating system provides an environment for the execution of programs. It provides certain
services to programs and to the users of those programs. The specific services provided, of course,
differ from one operating system to another, but we can identify common classes. These operating-
system services are provided for the convenience of the programmer, to make the programming task
easier. One set of operating-system services provides functions that are helpful to the user.
User interface:
Almost all operating systems have a user interface (UI). This interface can take several forms.
One is a command-line interface (CLI), which uses text commands and a method for entering them
(say, a program to allow entering and editing of commands). Another is a batch interface, in which
commands and directives to control those commands are entered into files, and those files are executed.
Most commonly, a graphical user interface (GUI) is used. Here, the interface is a window system with
a pointing device to direct I/O, choose from menus, and make selections and a keyboard to enter text.
Form No. AC 04. 00.2016 – GMRIT, Rajam, Andhra Pradesh, GMRIT
Some systems provide two or all three of these variations.
Program execution:
The system must be able to load a program into memory and to run that program. The
program must be able to end its execution, either normally or abnormally (indicating error).
I/O operations:
A running program may require I/O, which may involve a file or an I/O device. For
specific devices, special functions may be desired (such as recording to a CD or DVD drive or
blanking a CRT screen). For efficiency and protection, users usually cannot control I/O devices
directly. Therefore, the operating system must provide a means to do I/O.
File-system manipulation:
The file system is of particular interest. Obviously, programs need to read and write files and
directories. They also need to create and delete them by name, search for a given file, and list file
information. Finally, some programs include permissions management to allow or deny access to files
or directories based on file ownership.
Communications:
There are many circumstances in which one process needs to exchange information with
another process. Such communication may occur between processes that are executing on the same
computer or between processes that are executing on different computer systems tied together by a
computer network. Communications may be implemented via shared memory or through message
passing, in which packets of information are moved between processes by the operating system.
Error detection:
The operating system needs to be constantly aware of possible errors. Errors may occur in the
CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a
parity error on tape, a connection failure on a network, or lack of paper in the printer), and in the user
program (such as an arithmetic overflow, an attempt to access an illegal memory location, or a too-
great use of CPU time). For each type of error, the operating system should take the appropriate action
to ensure correct and consistent computing. Debugging facilities can greatly enhance the user's and
programmer's abilities to use the system efficiently.
Another set of operating-system functions exists not for helping the user but rather for ensuring the
efficient operation of the system itself. Systems with multiple users can gain efficiency by sharing the
computer resources among the users.
Resource allocation:
When there are multiple users or multiple jobs running at the same time, resources must be
allocated to each of them. Many different types of resources are managed by the operating system.
Some (such as CPU cycles, main memory, and file storage) may have special allocation code, whereas
others (such as I/O devices) may have much more general request and release code. For instance, in
determining how best to use the CPU, operating systems have CPU- scheduling routines that take into
account the speed of the CPU, the jobs that must be executed, the number of registers available, and
other factors. There may also be routines to allocate printers, modems, USB storage drives, and other
peripheral devices.
Protection and security:
The owners of information stored in a multiuser or networked computer system may want to
control use of that information. When several separate processes execute concurrently, it should not be
possible for one process to interfere with the others or with the operating system itself. Protection
involves ensuring that all access to system resources is controlled. Security of the system from
outsiders is also important. Such security starts with requiring each user to authenticate himself or
herself to the system, usually by means of a password, to gain access to system resources. It extends to
defending external I/O devices, including modems and network adapters, from invalid access attempts
and to recording all such connections for detection of break-ins. If a system is to be protected and
secure, precautions must be instituted throughout it. A chain is only as strong as its weakest link.
Lecture-5
❑ Often, more information is required than simply identity of desired system call
❑ Exact type and amount of information vary according to OS and call
❑ Three general methods used to pass parameters to the OS
❑ Simplest: pass the parameters in registers
❑ In some cases, may be more parameters than registers
❑ Parameters stored in a block, or table, in memory, and address of block passed as a
parameter in a register
❑ This approach taken by Linux and Solaris
❑ Parameters placed, or pushed, onto the stack by the program and popped off the stack
by the operating system
❑ Block and stack methods do not limit the number or length of parameters being
passed
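The block (table) method above can be simulated in Java: the user program packs the parameters into a table in memory and passes only the table's address in a single "register". The syscall number, memory layout, and addresses below are all hypothetical:

```java
// Simulation of passing system-call parameters via a block in memory.
public class ParamBlock {
    static final int SYS_WRITE = 4;        // hypothetical syscall number
    static int[] memory = new int[64];     // simulated main memory
    static StringBuilder console = new StringBuilder();

    // The "trap": the kernel receives the syscall number plus a single
    // register holding the address of the parameter block.
    static void syscall(int number, int blockAddr) {
        if (number == SYS_WRITE) {
            int start = memory[blockAddr];       // param 1: buffer start
            int len = memory[blockAddr + 1];     // param 2: length
            for (int i = 0; i < len; i++) {
                console.append((char) memory[start + i]);
            }
        }
    }

    public static void main(String[] args) {
        // User program: place the string "OK" at address 10...
        memory[10] = 'O'; memory[11] = 'K';
        // ...and build the parameter block at address 0.
        memory[0] = 10; memory[1] = 2;
        syscall(SYS_WRITE, 0);   // pass only the block address
        System.out.println(console);
    }
}
```

Because only the block's address crosses the user/kernel boundary, this method places no limit on the number or length of parameters, as the notes state.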
Process control
◦ end, abort
◦ load, execute
◦ create process, terminate process
◦ get process attributes, set process attributes
◦ wait for time
◦ wait event, signal event
◦ allocate and free memory
File management
◦ create file, delete file
◦ open, close
◦ read, write, reposition
◦ get file attributes, set file attributes
Device management
◦ request device, release device
◦ read, write, reposition
◦ get device attributes, set device attributes
◦ logically attach or detach devices
Information maintenance
◦ get time or date, set time or date
◦ get system data, set system data
◦ get process, file, or device attributes
◦ set process, file, or device attributes
Communications
◦ create, delete communication connection
◦ send, receive messages
◦ transfer status information
◦ attach or detach remote devices
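From a language such as Java, these categories of system calls are reached through library methods that wrap them. A small sketch using the file-management group (create file, get file attributes, delete file); the temporary-file name prefix is an arbitrary choice:

```java
import java.io.File;
import java.io.IOException;

// Each library call here is a thin wrapper over an underlying
// file-management system call.
public class FileCalls {
    static boolean demo() {
        try {
            File f = File.createTempFile("oscall", ".tmp"); // create file
            boolean existed = f.exists();                   // get file attributes
            boolean deleted = f.delete();                   // delete file
            return existed && deleted;
        } catch (IOException e) {
            return false;                                   // error detection
        }
    }

    public static void main(String[] args) {
        System.out.println(demo() ? "file created and deleted" : "failed");
    }
}
```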
Lecture-6
Tutorial-1
QUESTIONS
Q.1. What are the various tasks or services offered by the Operating System?
ANSWERS
Ans 1:
• Program execution: The operating system loads the contents of a file into memory and
begins its execution. A user level program could not be trusted to properly allocate CPU
time.
• I/O operations: Disks, tapes, serial lines, and other devices must be communicated with
at a very low level. The user needs to only specify the device and the operation to
perform on it, while the system converts the request into device specific or controller
specific commands. User- level programs cannot be trusted to access only devices they
should have access to and to access them only when they are otherwise unused.
• File-system manipulation: There are many details in file creation, deletion, allocation,
and naming that users should not have to perform. Blocks of disk space are used by files
and must be tracked. Deleting a file requires removing the file's name and information and
freeing the allocated blocks. Protections must also be checked to assure proper file
access. User programs could neither ensure adherence to protection methods nor be
trusted to allocate only free blocks and de-allocate blocks on file deletion.
• Communications: Message passing between systems requires message to be turned into
packets of information, sent to the network controller, transmitted across a
communications medium, and reassembled by the destination system. Packet ordering
and data correction must take place. Again, user programs might not coordinate access
to the network device, or they might receive packets destined for other processes.
• Error detection: Error detection occurs at both the hardware and software levels. At the
hardware level, all data transfers must be inspected to ensure that data have not been
corrupted in transit. All data on media must be checked to be sure they have not
changed since they were written to the media. At the software level, media must be checked for
data consistency; for instance, whether the numbers of allocated and unallocated blocks of
storage match the total number on the device. These errors are frequently process-independent,
so there must be a global program that handles all types of errors. Also by having errors
processed by the operating system, processes need not contain code to catch and correct all the
errors possible on a system.
Multicore programming helps to create concurrent systems for deployment on multicore processor and
multiprocessor systems. A multicore processor system is basically a single processor with multiple
execution cores in one chip. It has multiple processors on the motherboard or chip. A Field-
Programmable Gate Array (FPGA) might be included in a multiprocessor system. An FPGA is an
integrated circuit containing an array of programmable logic blocks and a hierarchy of reconfigurable
interconnects. Input data is processed by these blocks to produce outputs. An FPGA can act as a
processor in a multicore or multiprocessor system.
The multicore programming approach has following advantages −
• Multicore and FPGA processing helps to increase the performance of an embedded
system.
• Also helps to achieve scalability, so the system can take advantage of increasing numbers of
cores and FPGA processing power over time.
Concurrent systems that we create using multicore programming have multiple tasks executing in
parallel. This is known as concurrent execution. When multiple parallel tasks are executed by a
processor, it is known as multitasking. A CPU scheduler handles the tasks that execute in parallel. The
CPU implements tasks using operating system threads, so that tasks can execute independently but
still transfer data between them, such as between a data acquisition module and the controller for the
system. Data transfer occurs when there is a data dependency.
❑ Dividing activities
❑ Balance
❑ Data splitting
❑ Data dependency
❑ Testing and debugging
❑ Parallelism implies a system can perform more than one task simultaneously
❑ Concurrency supports more than one task making progress
❑ Single processor / core, scheduler providing concurrency
❑ Types of parallelism
❑ Data parallelism – distributes subsets of the same data across multiple cores,
same operation on each
❑ Task parallelism – distributing threads across cores, each thread performing
unique operation
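The two types of parallelism above can be sketched with Java threads. The eight-element array and the choice of two workers are illustrative assumptions:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

public class ParallelismDemo {
    static final int[] data = {1, 2, 3, 4, 5, 6, 7, 8};

    // Data parallelism: the SAME operation (sum) on different subsets.
    static int dataParallelSum() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> lo = pool.submit(() -> sum(0, 4)); // first half
            Future<Integer> hi = pool.submit(() -> sum(4, 8)); // second half
            return lo.get() + hi.get();
        } catch (Exception e) {
            return -1;
        } finally {
            pool.shutdown();
        }
    }

    // Task parallelism: DIFFERENT operations (sum and max) on the same data.
    static int[] taskParallel() {
        ExecutorService pool = Executors.newFixedThreadPool(2);
        try {
            Future<Integer> total = pool.submit(() -> sum(0, data.length));
            Future<Integer> max = pool.submit(() ->
                java.util.Arrays.stream(data).max().getAsInt());
            return new int[] {total.get(), max.get()};
        } catch (Exception e) {
            return new int[] {-1, -1};
        } finally {
            pool.shutdown();
        }
    }

    static int sum(int from, int to) {
        int s = 0;
        for (int i = from; i < to; i++) s += data[i];
        return s;
    }

    public static void main(String[] args) {
        System.out.println("data-parallel sum = " + dataParallelSum());
        int[] r = taskParallel();
        System.out.println("task-parallel sum = " + r[0] + ", max = " + r[1]);
    }
}
```

In the first method both workers run the same code on different data; in the second, each worker runs a unique operation, matching the two definitions above.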
Multithreading Models
❖ Many-to-One
❖ One-to-One
❖ Many-to-Many
Many-to-One
❖ Two-level Model
▪ Similar to M:M, except that it allows a user thread to be bound to kernel thread
▪ Examples
▪ IRIX
▪ HP-UX
▪ Tru64 UNIX
▪ Solaris 8 and earlier
Thread Scheduling
As mentioned briefly in the previous section, many computer configurations have a single
CPU. Hence, threads run one at a time in such a way as to provide an illusion of concurrency.
Execution of multiple threads on a single CPU in some order is called scheduling. The Java runtime
environment supports a very simple, deterministic scheduling algorithm called fixed-priority
scheduling. This algorithm schedules threads on the basis of their priority relative to other Runnable
threads.
When a thread is created, it inherits its priority from the thread that created it. You also can modify
a thread's priority at any time after its creation by using the setPriority method. Thread priorities are
integers ranging between MIN_PRIORITY and MAX_PRIORITY (constants defined in the Thread
class). The higher the integer, the higher the priority. At any given time, when multiple threads are
ready to be executed, the runtime system chooses for execution the Runnable thread that has the
highest priority. Only when that thread stops, yields, or becomes Not Runnable will a lower-priority
thread start executing. If two threads of the same priority are waiting for the CPU, the scheduler
arbitrarily chooses one of them to run. The chosen thread runs until one of the following conditions is
true:
▪ A higher-priority thread becomes Runnable.
▪ It yields, or its run method exits.
▪ On systems that support time-slicing, its time allotment has expired.
Then the second thread is given a chance to run, and so on, until the interpreter exits. The Java runtime
system's thread scheduling algorithm is also preemptive. If at any time a thread with a higher priority
than all other Runnable threads becomes Runnable, the runtime system chooses the new higher-priority
thread for execution. The new thread is said to preempt the other threads.
First Come First Serve (FCFS) scheduling:
In this scheduling algorithm, the scheduler picks the threads that arrive first in the runnable queue.
For example, if Thread t1 arrives first, then Thread t2, then t3, and at last t4, then the
order in which the threads are processed follows the times of arrival of the threads.
Time-slicing scheduling:
Usually, the First Come First Serve algorithm is non-preemptive, which may lead to indefinite
blocking (also known as starvation). To avoid that, some time-slices are provided to the threads so that
after some time, the running thread has to give up the CPU. Thus, the other waiting threads also get time
to run their job.
For example, suppose each thread is given a time slice of 2 seconds. Then, after 2 seconds, the first
thread leaves the CPU, and the CPU is then captured by Thread2. The same process repeats for the other
threads too.
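Time slicing can be simulated deterministically without real threads: each task needs some units of CPU and receives a 2-unit slice before it must give the CPU back. The task names and burst lengths are illustrative assumptions:

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class RoundRobinSim {
    static final class Task {
        final String name;
        final int remaining;   // CPU units still needed
        Task(String name, int remaining) {
            this.name = name;
            this.remaining = remaining;
        }
    }

    // Run tasks round-robin: each gets at most `slice` units, then goes to
    // the back of the queue if it still has work left.
    static List<String> schedule(List<Task> tasks, int slice) {
        Deque<Task> queue = new ArrayDeque<>(tasks);
        List<String> timeline = new ArrayList<>();
        while (!queue.isEmpty()) {
            Task t = queue.poll();
            int run = Math.min(slice, t.remaining);
            timeline.add(t.name + ":" + run);        // "name:units run"
            int left = t.remaining - run;
            if (left > 0) queue.add(new Task(t.name, left)); // give up CPU
        }
        return timeline;
    }

    public static void main(String[] args) {
        List<String> tl = schedule(
            List.of(new Task("t1", 3), new Task("t2", 2), new Task("t3", 5)), 2);
        System.out.println(tl);
    }
}
```

Every waiting task gets the CPU within one round of the queue, so no task is starved the way it could be under pure FCFS.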
Preemptive-Priority Scheduling:
The name of the scheduling algorithm denotes that the algorithm is related to the priority of the threads.
Lecture-9
Tutorial-2
QUESTIONS
Q.1. Give suitable Examples of Windows and Unix System Calls.
ANSWERS
Example: MS-DOS
• Single tasking
• Shell invoked when system booted
• Simple method to run program
o No process created
• Single memory space
• Loads program into memory, overwriting all but the kernel
• Program exit -> shell reloaded
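For the Windows and UNIX examples the question asks for: from Java, ProcessBuilder.start() is implemented on top of CreateProcess() on Windows and fork()/exec() on UNIX, and Process.waitFor() corresponds to WaitForSingleObject() and wait() respectively. A sketch; the command `java -version` is an illustrative choice:

```java
public class CreateProcessDemo {
    public static void main(String[] args) {
        try {
            ProcessBuilder pb = new ProcessBuilder("java", "-version");
            pb.inheritIO();               // reuse the parent's console
            Process p = pb.start();       // CreateProcess() / fork()+exec()
            int exit = p.waitFor();       // WaitForSingleObject() / wait()
            System.out.println("child exited with status " + exit);
        } catch (Exception e) {
            System.out.println("could not launch child: " + e);
        }
    }
}
```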
7. Keywords
➢ Operating systems
➢ Multiprocessor
➢ Thread
➢ System call
➢ Multithread
8. Sample Questions
Remember
1. Define Operating Systems.
2. Define thread.
3. List the various multithreading models.
4. Define System call.
5. List the various computer system architectures.
6. Write the benefits of multiprogramming.
Understanding
1. Explain the various functions of operating systems.
2. Explain the computer system organization with neat diagram.
3. Explain about various architectures of computer system.
4. Differentiate between thread and multithread with neat diagram.
5. Explain about various multithreading models with diagram.
6. Explain about types of system calls.
9. Stimulating Question
A thread is usually defined as a “lightweight process” because an operating system (OS)
maintains smaller data structures for a thread than for a process. Justify this.
------