
GMR Institute of Technology GMRIT/ADM/F-44

Rajam, AP REV.: 00
(An Autonomous Institution Affiliated to JNTUGV, AP)
Cohesive Teaching – Learning Practices (CTLP)
Class 4th Sem. – B. Tech Department: CSE & IT
Course Operating Systems Course Code 21IT403
Prepared by Ms. U. Archana
Lecture Topic Operating-System Overview: Computer-System Organization and Architecture,
Operating-System Structure, Operating-System Operations & Services, System
Calls & its types. Threads: Multi Core Programming, Multithreading Models,
Thread Scheduling algorithms.
Course Outcome (s) CO1, CO2
Program Outcome (s) PO1, PO12
Duration 50 Min each Lecture 1-8 Unit I
Pre-requisite (s) Computer, Operating systems

1. Objective
➢ To impart knowledge on the Computer system organization and architecture
➢ Understand the objectives and functions of operating systems
➢ To get the idea on purpose of system call
➢ To discuss the various threads and its models.
2. Intended Learning Outcomes
At the end of this session the students will be able to:
A. Understand the computer system organization and architecture
B. Understand the objectives and functions of operating systems
C. Understand the role of system call
D. Understand the threads concepts and its various models.

3. 2D Mapping of ILOs with Knowledge Dimension and Cognitive Learning Levels of RBT

Cognitive Learning Levels (2D)

Knowledge Dimension (1D)   Remember   Understand   Apply   Analyze   Evaluate   Create
Factual                               A, B
Conceptual                            C
Procedural                            D
Meta Cognitive

4. Teaching Methodology
❖ Power Point Presentation, Chalk & Board, Visual Presentation

5. Evocation
6. Deliverables

Lecture-1

Operating Systems Overview


What is an Operating System?
A program that acts as an intermediary between a user of a computer and the computer hardware
Operating system goals:
• Execute user programs and make solving user problems easier
• Make the computer system convenient to use
• Use the computer hardware in an efficient manner
Computer system can be divided into four components:
o Hardware – provides basic computing resources
▪ CPU, memory, I/O devices
o Operating system
▪ Controls and coordinates use of hardware among various applications and users
o Application programs – define the ways in which the system resources are used to solve
the computing problems of the users
▪ Word processors, compilers, web browsers, database systems, video
games
o Users
▪ People, machines, other computers
User View
The user’s view of the computer varies according to the interface being used. Most
computer users sit in front of a PC, consisting of a monitor, keyboard, mouse, and system unit. Such a
system is designed for one user to monopolize its resources. The goal is to maximize the work (or
play) that the user is performing. In this case, the operating system is designed mostly for ease of use,
with some attention paid to performance and none paid to resource utilization— how various hardware
and software resources are shared. Performance is, of course, important to the user; but such systems
are optimized for the single-user experience rather than the requirements of multiple users.
System View
From the computer’s point of view, the operating system is the program most intimately involved with
the hardware. In this context, we can view an operating system as a resource allocator. A computer
system has many resources that may be required to solve a problem: CPU time, memory space, file-
storage space, I/O devices, and so on. The operating system acts as the manager of these resources.
Facing numerous and possibly conflicting requests for resources, the operating system must decide how
to allocate them to specific programs and users so that it can operate the computer system efficiently
and fairly. As we have seen, resource allocation is especially important where many users access the
same mainframe or minicomputer.
What Operating Systems Do
• Depends on the point of view
• Users want convenience, ease of use and good performance
Don’t care about resource utilization
• But shared computer such as mainframe or minicomputer must keep all users happy
• Users of dedicated systems such as workstations have dedicated resources but frequently use
shared resources from servers
• Handheld computers are resource poor, optimized for usability and battery life
• Some computers have little or no user interface, such as embedded computers in devices and
automobiles

Operating System Definition


• OS is a resource allocator
• Manages all resources
• Decides between conflicting requests for efficient and fair resource use
• OS is a control program
• Controls execution of programs to prevent errors and improper use of the computer
• No universally accepted definition
• “Everything a vendor ships when you order an operating system” is a good approximation
• But varies wildly
• “The one program running at all times on the computer” is the kernel.
• Everything else is either
• a system program (ships with the operating system), or
• an application program.

Computer Startup
▪ bootstrap program is loaded at power-up or reboot
▪ Typically stored in ROM or EPROM, generally known as firmware
▪ Initializes all aspects of system
▪ Loads operating system kernel and starts execution

Lecture-2

Computer System Organization and Architecture


Computer System Organization
Computer System Operation
A modern general-purpose computer system consists of one or more CPUs and a number of
device controllers connected through a common bus that provides access to shared memory . Each
device controller is in charge of a specific type of device (for example, disk drives, audio devices, or
video displays). The CPU and the device controllers can execute in parallel, competing for memory
cycles. To ensure orderly access to the shared memory, a memory controller synchronizes access to the
memory. For a computer to start running—for instance, when it is powered up or rebooted—it needs to
have an initial program to run.
This initial program, or bootstrap program, tends to be simple. Typically, it is stored within the
computer hardware in read-only memory (ROM) or electrically erasable programmable read-only
memory (EEPROM), known by the general term firmware. It initializes all aspects of the system, from
CPU registers to device controllers to memory contents. The bootstrap program must know how to load
the operating system and how to start executing that system. To accomplish this goal, the bootstrap
program must locate the operating-system kernel and load it into memory. Once the kernel is loaded
and executing, it can start providing services to the system and its users. Some services are provided
outside of the kernel, by system programs that are loaded into memory at boot time to become system
processes, or system daemons that run the entire time the kernel is running.

The occurrence of an event is usually signaled by an interrupt from either the hardware or the
software. Hardware may trigger an interrupt at any time by sending a signal to the CPU, usually by
way of the system bus. Software may trigger an interrupt by executing a special operation called a
system call (also called a monitor call). When the CPU is interrupted, it stops what it is doing and
immediately transfers execution to a fixed location. The fixed location usually contains the starting
address where the service routine for the interrupt is located. The interrupt service routine executes; on
completion, the CPU resumes the interrupted computation. Interrupts are an important part of
computer architecture. Each computer design has its own interrupt mechanism, but several functions
are common. The interrupt must transfer control to the appropriate interrupt service routine. The
straightforward method for handling this transfer would be to invoke a generic routine to examine the
interrupt information. The routine, in turn, would call the interrupt-specific handler. However,
interrupts must be handled quickly. Since only a predefined number of interrupts is possible, a table of
pointers to interrupt routines can be used instead to provide the necessary speed. The interrupt routine
is called indirectly through the table, with no intermediate routine needed. Generally, the table of
pointers is stored in low memory (the first hundred or so locations). These locations hold the addresses
of the interrupt service routines for the various devices. This array, or interrupt vector, of addresses is
then indexed by a unique device number, given with the interrupt request, to provide the address of the
interrupt service routine for the interrupting device.
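To make the table-of-pointers idea concrete, here is a minimal user-space sketch in C. The device numbers, handler names, and the dispatch() function are illustrative assumptions, not any real kernel's API; a real interrupt vector is populated by firmware and hardware, not by main().

#include <stdio.h>

#define NUM_INTERRUPTS 256          /* only a predefined number of interrupts */

typedef void (*isr_t)(void);        /* an interrupt service routine */

static void keyboard_isr(void) { printf("keyboard ISR\n"); }
static void disk_isr(void)     { printf("disk ISR\n"); }

/* The interrupt vector: a table of pointers to service routines,
   indexed by the device number supplied with the interrupt request. */
static isr_t interrupt_vector[NUM_INTERRUPTS];

static void dispatch(int device_number) {
    /* Called indirectly through the table -- no intermediate routine needed. */
    if (interrupt_vector[device_number] != NULL)
        interrupt_vector[device_number]();
}

int main(void) {
    interrupt_vector[1]  = keyboard_isr;   /* illustrative device numbers */
    interrupt_vector[14] = disk_isr;
    dispatch(14);                          /* simulate a disk interrupt */
    return 0;
}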
Storage Structure
The wide variety of storage systems can be organized in a hierarchy according to speed and
cost. The higher levels are expensive, but they are fast. As we move down the hierarchy, the cost per
bit generally decreases, whereas the access time generally increases. This trade-off is reasonable; if a
given storage system were both faster and less expensive than another—other properties being the
same—then there would be no reason to use the slower, more expensive memory. In fact, many early
storage devices, including paper tape and core memories, are relegated to museums now that magnetic
tape and semiconductor memory have become faster and cheaper.

The top four levels of memory in Figure 1.4 may be constructed using semiconductor memory.
In addition to differing in speed and cost, the various storage systems are either volatile or nonvolatile.
As mentioned earlier, volatile storage loses its contents when the power to the device is removed. In the
absence of expensive battery and generator backup systems, data must be written to nonvolatile storage
for safekeeping. In the hierarchy shown in Figure 1.4, the storage systems above the solid-state disk are
volatile, whereas those including the solid-state disk and below are nonvolatile. Solid-state disks have
several variants but in general are faster than magnetic disks and are nonvolatile. One type of solid-
state disk stores data in a large DRAM array during normal operation but also contains a hidden
magnetic hard disk and a battery for backup power. If external power is interrupted, this solid-state
disk’s controller copies the data from RAM to the magnetic disk.

When external power is restored, the controller copies the data back into RAM. Another form
of solid-state disk is flash memory, which is popular in cameras and personal digital assistants (PDAs),
in robots, and increasingly for storage on general-purpose computers. Flash memory is slower than
DRAM but needs no power to retain its contents. Another form of nonvolatile storage is NVRAM,
which is DRAM with battery backup power. This memory can be as fast as DRAM and (as long as the
battery lasts) is nonvolatile.
Cache Memory
• Important principle, performed at many levels in a computer (in hardware,
operating system, software)
• Information in use copied from slower to faster storage temporarily
• Faster storage (cache) checked first to determine if information is there
o If it is, information used directly from the cache (fast)
o If not, data copied to cache and used there
• Cache smaller than storage being cached
o Cache management important design problem
o Cache size and replacement policy
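A minimal sketch in C of the check-the-faster-storage-first principle just described, written as a direct-mapped software cache. The backing store slow_read() and the key-to-slot mapping are illustrative assumptions; real hardware caches operate on memory addresses and cache lines.

#include <stdio.h>
#include <stdbool.h>

#define CACHE_SLOTS 8                    /* cache is smaller than the storage being cached */

struct slot { bool valid; int key; int value; };
static struct slot cache[CACHE_SLOTS];

/* Stand-in for the slower backing storage (illustrative). */
static int slow_read(int key) { return key * 10; }

static int cached_read(int key) {
    struct slot *s = &cache[key % CACHE_SLOTS];   /* direct-mapped placement */
    if (s->valid && s->key == key)
        return s->value;                 /* hit: use data directly from the cache (fast) */
    s->valid = true;                     /* miss: copy data to the cache and use it there */
    s->key = key;
    s->value = slow_read(key);
    return s->value;
}

int main(void) {
    printf("%d\n", cached_read(3));      /* miss: fetched from slow storage */
    printf("%d\n", cached_read(3));      /* hit: served from the cache */
    return 0;
}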

I/O Structure
Interrupt-driven I/O is fine for moving small amounts of data but can produce high overhead
when used for bulk data movement such as disk I/O. To solve this problem, direct memory access
(DMA) is used. After setting up buffers, pointers, and counters for the I/O device, the device
controller transfers an entire block of data directly to or from its own buffer storage to memory, with
no intervention by the CPU. Only one interrupt is generated per block, to tell the device driver that
the operation has completed, rather than the one interrupt per byte generated for low-speed devices.
While the device controller is performing these operations, the CPU is available to accomplish other
work. Some high-end systems use a switch rather than a bus architecture. On these systems, multiple
components can talk to other components concurrently, rather than competing for cycles on a shared
bus. In this case, DMA is even more effective. The following Figure shows the interplay of all
components of a computer system.



Computer System Architecture

Single-Processor Systems

Personal computers (PCs) appeared in the 1970s. Operating systems for these computers have
benefited in several ways from the development of operating systems for mainframes. Microcomputers
were immediately able to adopt some of the technology developed for larger operating systems. On the
other hand, the hardware costs for microcomputers are sufficiently low that individuals can have sole use
of the computer; however, when other computers and other users can access the files on a PC, file
protection again becomes a necessary feature of the operating system.

Multiprocessor System

Within the past several years, multiprocessor systems (also known as parallel systems or multicore
systems) have begun to dominate the landscape of computing. Such systems have two or more
processors in close communication, sharing the computer bus and sometimes the clock, memory, and
peripheral devices. Multiprocessor systems first appeared prominently in servers and have
since migrated to desktop and laptop systems. Recently, multiple processors have appeared on mobile
devices such as smart phones and tablet computers.

Multiprocessor systems have three main advantages:

Increased throughput
By increasing the number of processors, we expect to get more work done in less time. The
speed-up ratio with N processors is not N, however; rather, it is less than N. When multiple processors
cooperate on a task, a certain amount of overhead is incurred in keeping all the parts working correctly.
This overhead, plus contention for shared resources, lowers the expected gain from additional
processors. Similarly, N programmers working closely together do not produce N times the amount of
work a single programmer would produce.



Economy of scale

Multiprocessor systems can cost less than equivalent multiple single-processor systems,
because they can share peripherals, mass storage, and power supplies. If several programs operate on
the same set of data, it is cheaper to store those data on one disk and to have all the processors share
them than to have many computers with local disks and many copies of the data.

Increased reliability

If functions can be distributed properly among several processors, then the failure of one
processor will not halt the system, only slow it down. If we have ten processors and one fails, then each
of the remaining nine processors can pick up a share of the work of the failed processor. Thus, the
entire system runs only 10 percent slower, rather than failing altogether.

Two types:

1. Asymmetric Multiprocessing – each processor is assigned a specific task.


2. Symmetric Multiprocessing – each processor performs all tasks
Clustered systems

▪ Like multiprocessor systems, but multiple systems working together


▪ Usually sharing storage via a storage-area network (SAN)
▪ Provides a high-availability service which survives failures
▪ Asymmetric clustering has one machine in hot-standby mode
▪ Symmetric clustering has multiple nodes running applications,
monitoring each other
▪ Some clusters are for high-performance computing (HPC)
▪ Applications must be written to use parallelization
▪ Some have distributed lock manager (DLM) to avoid conflicting operations



Lecture-3

Operating System Structure & Operations

Operating System Structure

Multiprogramming (Batch system) needed for efficiency


▪ Single user cannot keep CPU and I/O devices busy at all times
▪ Multiprogramming organizes jobs (code and data) so CPU always has one to
execute
▪ A subset of total jobs in system is kept in memory
▪ One job selected and run via job scheduling
▪ When it has to wait (for I/O for example), OS switches to another job

Timesharing (multitasking) is a logical extension in which CPU switches jobs so frequently that users
can interact with each job while it is running, creating interactive computing
▪ Response time should be < 1 second
▪ Each user has at least one program executing in memory → process
▪ If several jobs ready to run at the same time → CPU scheduling
▪ If processes don’t fit in memory, swapping moves them in and out to run
▪ Virtual memory allows execution of processes not completely in memory

Operating System Operations

▪ Interrupt driven (hardware and software)


▪ Hardware interrupt by one of the devices
▪ Software interrupt (exception or trap):
▪ Software error (e.g., division by zero)
▪ Request for operating system service
▪ Other process problems include infinite loop, processes modifying each other
or the operating system

Dual-mode operation

▪ Dual-mode operation allows OS to protect itself and other system components


▪ User mode and kernel mode
▪ Mode bit provided by hardware
▪ Provides ability to distinguish when system is running user code or kernel code
▪ Some instructions designated as privileged, only executable in kernel mode
▪ System call changes mode to kernel, return from call resets it to user
▪ Increasingly CPUs support multi-mode operations
▪ i.e. virtual machine manager (VMM) mode for guest VMs
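A minimal POSIX illustration of the mode switch: the write() library call below traps into the kernel, which runs the privileged I/O path in kernel mode and then resets the mode bit to user on return. The mode bit itself is invisible to the program; this sketch only shows the crossing point.

#include <unistd.h>

int main(void) {
    const char msg[] = "entering the kernel via a system call\n";
    /* write() wraps the write system call:
       user mode -> kernel mode -> user mode */
    write(STDOUT_FILENO, msg, sizeof msg - 1);
    return 0;
}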

Timer

▪ Timer to prevent infinite loop / process hogging resources


▪ Timer is set to interrupt the computer after some time period
▪ Keep a counter that is decremented by the physical clock.
▪ Operating system sets the counter (privileged instruction)
▪ When the counter reaches zero, generate an interrupt
▪ Set up before scheduling process to regain control or terminate program that
exceeds allotted time
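The POSIX alarm() call gives a user-level feel for this mechanism: request an interrupt after a time period, then regain control in a handler. This is a sketch of the idea only, assuming a POSIX system; the kernel's own timer setup is privileged and happens in hardware.

#include <signal.h>
#include <stdio.h>
#include <unistd.h>

static volatile sig_atomic_t expired = 0;

static void on_alarm(int sig) {
    (void)sig;
    expired = 1;            /* "counter reached zero": take back control */
}

int main(void) {
    signal(SIGALRM, on_alarm);
    alarm(2);               /* set the timer to interrupt after 2 seconds */
    while (!expired)
        ;                   /* stands in for a program that never yields */
    printf("allotted time exceeded, regaining control\n");
    return 0;
}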

Lecture-4

Operating-System Services

An operating system provides an environment for the execution of programs. It provides certain
services to programs and to the users of those programs. The specific services provided, of course,
differ from one operating system to another, but we can identify common classes. These operating-
system services are provided for the convenience of the programmer, to make the programming task
easier. One set of operating-system services provides functions that are helpful to the user.

User interface:

Almost all operating systems have a user interface (UI). This interface can take several forms.
One is a command-line interface (CLI), which uses text commands and a method for entering them
(say, a program to allow entering and editing of commands). Another is a batch interface, in which
commands and directives to control those commands are entered into files, and those files are executed.
Most commonly, a graphical user interface (GUI) is used. Here, the interface is a window system with
a pointing device to direct I/O, choose from menus, and make selections and a keyboard to enter text.
Some systems provide two or all three of these variations.

Program execution:

The system must be able to load a program into memory and to run that program. The
program must be able to end its execution, either normally or abnormally (indicating error).

I/O operations:

A running program may require I/O, which may involve a file or an I/O device. For
specific devices, special functions may be desired (such as recording to a CD or DVD drive or
blanking a CRT screen). For efficiency and protection, users usually cannot control I/O devices
directly. Therefore, the operating system must provide a means to do I/O.

File-system manipulation:

The file system is of particular interest. Obviously, programs need to read and write files and
directories. They also need to create and delete them by name, search for a given file, and list file
information. Finally, some programs include permissions management to allow or deny access to files
or directories based on file ownership.

Communications:

There are many circumstances in which one process needs to exchange information with
another process. Such communication may occur between processes that are executing on the same
computer or between processes that are executing on different computer systems tied together by a
computer network. Communications may be implemented via shared memory or through message
passing, in which packets of information are moved between processes by the operating system.
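As a small message-passing illustration, the POSIX sketch below moves a packet of bytes between two processes through a pipe, with the operating system doing the copying.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1)                     /* fd[0] = read end, fd[1] = write end */
        return 1;
    if (fork() == 0) {                      /* child: the sender */
        close(fd[0]);
        const char msg[] = "hello via the OS";
        write(fd[1], msg, sizeof msg);      /* message handed to the kernel */
        _exit(0);
    }
    close(fd[1]);                           /* parent: the receiver */
    char buf[64];
    ssize_t n = read(fd[0], buf, sizeof buf);
    printf("received %zd bytes: %s\n", n, buf);
    wait(NULL);
    return 0;
}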

Error detection:
The operating system needs to be constantly aware of possible errors. Errors may occur in the
CPU and memory hardware (such as a memory error or a power failure), in I/O devices (such as a
parity error on tape, a connection failure on a network, or lack of paper in the printer), and in the user
program (such as an arithmetic overflow, an attempt to access an illegal memory location, or a too-
great use of CPU time). For each type of error, the operating system should take the appropriate action
to ensure correct and consistent computing. Debugging facilities can greatly enhance the user's and
programmer's abilities to use the system efficiently.

Another set of operating-system functions exists not for helping the user but rather for ensuring the
efficient operation of the system itself. Systems with multiple users can gain efficiency by sharing the
computer resources among the users.

Resource allocation:

When there are multiple users or multiple jobs running at the same time, resources must be
allocated to each of them. Many different types of resources are managed by the operating system.
Some (such as CPU cycles, main memory, and file storage) may have special allocation code, whereas
others (such as I/O devices) may have much more general request and release code. For instance, in
determining how best to use the CPU, operating systems have CPU- scheduling routines that take into
account the speed of the CPU, the jobs that must be executed, the number of registers available, and
other factors. There may also be routines to allocate printers, modems, USB storage drives, and other
peripheral devices.

Accounting:



We want to keep track of which users use how much and what kinds of computer resources.
This record keeping may be used for accounting (so that users can be billed) or simply for accumulating
usage statistics. Usage statistics may be a valuable tool for researchers who wish to reconfigure the
system to improve computing services.

Protection and security:

The owners of information stored in a multiuser or networked computer system may want to
control use of that information. When several separate processes execute concurrently, it should not be
possible for one process to interfere with the others or with the operating system itself. Protection
involves ensuring that all access to system resources is controlled. Security of the system from
outsiders is also important. Such security starts with requiring each user to authenticate himself or
herself to the system, usually by means of a password, to gain access to system resources. It extends to
defending external I/O devices, including modems and network adapters, from invalid access attempts
and to recording all such connections for detection of break-ins. If a system is to be protected and
secure, precautions must be instituted throughout it. A chain is only as strong as its weakest link.

Lecture-5

System Calls & Types of System Calls


System Calls

• Programming interface to the services provided by the OS


• Typically written in a high-level language (C or C++)
• Mostly accessed by programs via a high-level Application Programming Interface (API)
rather than direct system call use
• Three most common APIs are Win32 API for Windows, POSIX API for POSIX-based systems
(including virtually all versions of UNIX, Linux, and Mac OS X), and Java API for the Java
virtual machine (JVM)

❑ Typically, a number associated with each system call


❑ System-call interface maintains a table indexed according to these numbers
❑ The system call interface invokes the intended system call in OS kernel and returns status
of the system call and any return values
❑ The caller need know nothing about how the system call is implemented
❑ Just needs to obey API and understand what OS will do as a result call
❑ Most details of OS interface hidden from programmer by API
❑ Managed by run-time support library (set of functions built into libraries
included with compiler)

❑ Often, more information is required than simply identity of desired system call
❑ Exact type and amount of information vary according to OS and call
❑ Three general methods used to pass parameters to the OS
❑ Simplest: pass the parameters in registers
❑ In some cases, may be more parameters than registers
❑ Parameters stored in a block, or table, in memory, and address of block passed as a
parameter in a register
❑ This approach taken by Linux and Solaris
❑ Parameters placed, or pushed, onto the stack by the program and popped off the stack
by the operating system
❑ Block and stack methods do not limit the number or length of parameters being
passed
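The block (table) method above can be sketched in C as follows. The names struct io_params and sys_entry() are hypothetical, for illustration only; the point is that however many parameters there are, only one register-sized value (the block's address) needs to cross into the kernel.

#include <stdio.h>

struct io_params {            /* hypothetical parameter block stored in memory */
    int   fd;
    void *buffer;
    int   count;
};

/* Hypothetical kernel entry point: a real kernel would index its
   system-call table with call_number and read the parameters out of
   the block whose address arrived in a register. */
static int sys_entry(int call_number, struct io_params *block) {
    printf("call %d: fd=%d count=%d\n", call_number, block->fd, block->count);
    return 0;
}

int main(void) {
    char buf[128];
    struct io_params p = { 1, buf, sizeof buf };
    sys_entry(4, &p);         /* one register-sized value carries everything */
    return 0;
}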

Types of System Calls

Process control
◦ end, abort
◦ load, execute
◦ create process, terminate process
◦ get process attributes, set process attributes
◦ wait for time
◦ wait event, signal event
◦ allocate and free memory
File management
◦ create file, delete file
◦ open, close
◦ read, write, reposition
◦ get file attributes, set file attributes
Device management
◦ request device, release device
◦ read, write, reposition
◦ get device attributes, set device attributes
◦ logically attach or detach devices
Information maintenance
◦ get time or date, set time or date
◦ get system data, set system data
◦ get process, file, or device attributes
◦ set process, file, or device attributes
Communications
◦ create, delete communication connection
◦ send, receive messages
◦ transfer status information
◦ attach or detach remote devices
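A short POSIX sketch tying several of the process-control calls listed above together: fork() creates a process, execlp() loads and executes a program, waitpid() waits for it, and returning from main() ends the caller.

#include <stdio.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    pid_t pid = fork();                    /* create process */
    if (pid == 0) {
        execlp("echo", "echo", "child running", (char *)NULL); /* load, execute */
        _exit(1);                          /* reached only if exec fails */
    }
    waitpid(pid, NULL, 0);                 /* wait for the child to end */
    printf("child terminated\n");
    return 0;                              /* end */
}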

Lecture-6

Tutorial-1

QUESTIONS
Q.1. What are the various tasks or services offered by the Operating System?

Q.2. How does a modern computer work? Draw a dual-core design.

ANSWERS

Ans 1:
• Program execution: The operating system loads the contents of a file into memory and
begins its execution. A user level program could not be trusted to properly allocate CPU
time.
• I/O operations: Disks, tapes, serial lines, and other devices must be communicated with
at a very low level. The user needs to only specify the device and the operation to
perform on it, while the system converts the request into device specific or controller
specific commands. User- level programs cannot be trusted to access only devices they
should have access to and to access them only when they are otherwise unused.
• File-system manipulation: There are many details in file creation, deletion, allocation,
and naming that users should not have to perform. Blocks of disk space are used by files
and must be tracked. Deleting a file requires removing the name and file information and
freeing the allocated blocks. Protections must also be checked to assure proper file
access. User programs could neither ensure adherence to protection methods nor be
trusted to allocate only free blocks and de-allocate blocks on file deletion.
• Communications: Message passing between systems requires messages to be turned into
packets of information, sent to the network controller, transmitted across a
communications medium, and reassembled by the destination system. Packet ordering
and data correction must take place. Again user programs might not coordinate access
to the network device, or they might receive packets destined for other processes.
• Error detection: Error detection occurs at both the hardware and software levels. At the
hardware level, all data transfers must be inspected to ensure that data have not been
corrupted in transit. All data on media must be checked to be sure they have not
changed since they were written to the media. At the software level, media must be checked for
data consistency; for instance, whether the numbers of allocated and unallocated blocks of
storage match the total number on the device. These errors are frequently process independent,
so there must be a global program that handles all types of errors. Also by having errors
processed by the operating system, processes need not contain code to catch and correct all the
errors possible on a system.

Ans.2. Dual-Core Design



Lecture-7

Multi Core Programming, Multithreading Models

Multi Core Programming

Multicore programming helps to create concurrent systems for deployment on multicore processor and
multiprocessor systems. A multicore processor is basically a single processor with multiple execution
cores on one chip, whereas a multiprocessor system has multiple processors on the motherboard. A
Field-Programmable Gate Array (FPGA) might also be included in a multiprocessor system. An FPGA is
an integrated circuit containing an array of programmable logic blocks and a hierarchy of reconfigurable
interconnects; input data is processed by the logic blocks to produce outputs. The processing element can
thus be a processor in a multicore or multiprocessor system, or an FPGA.
The multicore programming approach has the following advantages:
• Multicore and FPGA processing helps to increase the performance of an embedded
system.
• Also helps to achieve scalability, so the system can take advantage of increasing numbers of
cores and FPGA processing power over time.
Concurrent systems that we create using multicore programming have multiple tasks executing in
parallel. This is known as concurrent execution. When multiple parallel tasks are executed by a
processor, it is known as multitasking. A CPU scheduler handles the tasks that execute in parallel. The
CPU implements tasks using operating system threads, so that tasks can execute independently but can
still transfer data between them, such as between a data acquisition module and the controller for the
system. Data transfer occurs when there is a data dependency.

Multicore or multiprocessor systems put pressure on programmers; challenges include:

❑ Dividing activities
❑ Balance
❑ Data splitting
❑ Data dependency
❑ Testing and debugging
❑ Parallelism implies a system can perform more than one task simultaneously
❑ Concurrency supports more than one task making progress
❑ Single processor / core, scheduler providing concurrency
❑ Types of parallelism
❑ Data parallelism – distributes subsets of the same data across multiple cores,
same operation on each
❑ Task parallelism – distributing threads across cores, each thread performing
unique operation
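A minimal pthreads sketch of data parallelism (build with -pthread): the same operation, summing, is applied to different subsets of the data by different threads. Task parallelism would instead give each thread a different operation. The data values and thread count are illustrative.

#include <pthread.h>
#include <stdio.h>

#define N 8
static int data[N] = {1, 2, 3, 4, 5, 6, 7, 8};
static long partial[2];

static void *sum_half(void *arg) {
    long id = (long)arg;                     /* 0 = first half, 1 = second half */
    for (int i = id * N / 2; i < (id + 1) * N / 2; i++)
        partial[id] += data[i];              /* same operation, different subset */
    return NULL;
}

int main(void) {
    pthread_t t[2];
    for (long id = 0; id < 2; id++)
        pthread_create(&t[id], NULL, sum_half, (void *)id);
    for (int id = 0; id < 2; id++)
        pthread_join(t[id], NULL);
    printf("total = %ld\n", partial[0] + partial[1]);
    return 0;
}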

Concurrent execution on single-core system:



Parallelism on a multi-core system:

Multithreading Models

❖ Many-to-One
❖ One-to-One
❖ Many-to-Many
Many-to-One



✓ Many user-level threads mapped to single kernel thread
✓ One thread blocking causes all to block
✓ Multiple threads may not run in parallel on multicore system because only one may be in
kernel at a time
✓ Few systems currently use this model
✓ Examples:
✓ Solaris Green Threads
✓ GNU Portable Threads

❖ One-to-One

✓ Each user-level thread maps to kernel thread


✓ Creating a user-level thread creates a kernel thread
✓ More concurrency than many-to-one
✓ Number of threads per process sometimes restricted due to overhead
✓ Examples
✓ Windows
✓ Linux
✓ Solaris 9 and later
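A Linux-specific sketch of the one-to-one model (build with -pthread): each pthread_create() call produces a distinct kernel thread, visible as a distinct kernel thread ID. The use of syscall(SYS_gettid) is a Linux-only assumption.

#define _GNU_SOURCE
#include <pthread.h>
#include <stdio.h>
#include <unistd.h>
#include <sys/syscall.h>

static void *report(void *arg) {
    (void)arg;
    /* Each user-level thread is backed by its own kernel thread. */
    printf("user thread backed by kernel thread %ld\n",
           (long)syscall(SYS_gettid));
    return NULL;
}

int main(void) {
    pthread_t a, b;
    pthread_create(&a, NULL, report, NULL);
    pthread_create(&b, NULL, report, NULL);
    pthread_join(a, NULL);
    pthread_join(b, NULL);
    return 0;
}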



❖ Many-to-Many

▪ Allows many user level threads to be mapped to many kernel threads


▪ Allows the operating system to create a sufficient number of kernel threads
▪ Solaris prior to version 9
▪ Windows with the ThreadFiber package

❖ Two Level Model

▪ Similar to M:M, except that it allows a user thread to be bound to a kernel thread
▪ Examples
▪ IRIX
▪ HP-UX
▪ Tru64 UNIX
▪ Solaris 8 and earlier



Lecture-8

Thread Scheduling Algorithms

Thread Scheduling

As mentioned briefly in the previous section, many computer configurations have a single
CPU. Hence, threads run one at a time in such a way as to provide an illusion of concurrency.
Execution of multiple threads on a single CPU in some order is called scheduling. The Java runtime
environment supports a very simple, deterministic scheduling algorithm called fixed-priority
scheduling. This algorithm schedules threads on the basis of their priority relative to other Runnable
threads.

When a thread is created, it inherits its priority from the thread that created it. You also can modify
a thread's priority at any time after its creation by using the setPriority method. Thread priorities are
integers ranging between MIN_PRIORITY and MAX_PRIORITY (constants defined in the Thread
class). The higher the integer, the higher the priority. At any given time, when multiple threads are
ready to be executed, the runtime system chooses for execution the Runnable thread that has the
highest priority. Only when that thread stops, yields, or becomes Not Runnable will a lower-priority
thread start executing. If two threads of the same priority are waiting for the CPU, the scheduler
arbitrarily chooses one of them to run. The chosen thread runs until one of the following conditions is
true:

• A higher priority thread becomes runnable.


• It yields, or its run method exits.
• On systems that support time-slicing, its time allotment has expired.

Then the second thread is given a chance to run, and so on, until the interpreter exits. The Java runtime
system's thread scheduling algorithm is also preemptive. If at any time a thread with a higher priority
than all other Runnable threads becomes Runnable, the runtime system chooses the new higher-priority
thread for execution. The new thread is said to preempt the other threads.
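The text's example above is Java's fixed-priority API (setPriority, MIN_PRIORITY, MAX_PRIORITY). The C sketch below shows the analogous POSIX mechanism instead: a valid priority range and a preemptive, priority-based policy (SCHED_FIFO). Actually raising priorities this way normally requires elevated privileges, so treat it as illustrative only.

#include <pthread.h>
#include <sched.h>
#include <stdio.h>

int main(void) {
    /* The valid range plays the role of MIN_PRIORITY..MAX_PRIORITY. */
    printf("SCHED_FIFO priorities: %d..%d\n",
           sched_get_priority_min(SCHED_FIFO),
           sched_get_priority_max(SCHED_FIFO));

    pthread_attr_t attr;
    struct sched_param param = { .sched_priority = 10 };
    pthread_attr_init(&attr);
    pthread_attr_setinheritsched(&attr, PTHREAD_EXPLICIT_SCHED);
    pthread_attr_setschedpolicy(&attr, SCHED_FIFO);   /* preemptive, priority-based */
    pthread_attr_setschedparam(&attr, &param);
    /* A thread created with attr would preempt lower-priority SCHED_FIFO
       threads, much as a higher-priority Java thread preempts Runnable ones. */
    pthread_attr_destroy(&attr);
    return 0;
}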

Thread Scheduler Algorithms

First Come First Serve Scheduling:

In this scheduling algorithm, the scheduler picks the threads that arrive first in the runnable queue.

Observe the following table:

Threads Time of Arrival


t1 0
t2 1
t3 2
t4 3

In the above table, we can see that Thread t1 has arrived first, then Thread t2, then t3, and at last t4, and
the order in which the threads will be processed is according to the time of arrival of threads.



Hence, Thread t1 will be processed first, and Thread t4 will be processed last.

Time-slicing scheduling:

Usually, the First Come First Serve algorithm is non-preemptive, which is bad as it may lead to infinite
blocking (also known as starvation). To avoid that, some time-slices are provided to the threads so that
after some time, the running thread has to give up the CPU. Thus, the other waiting threads also get time
to run their job.

Suppose each thread is given a time slice of 2 seconds. After 2 seconds, the first thread leaves the
CPU, and the CPU is then taken by Thread2. The same process repeats for the other threads too.

Preemptive-Priority Scheduling:

The name of the scheduling algorithm denotes that the algorithm is related to the priority of the threads.



Suppose there are multiple threads available in the runnable state. The thread scheduler picks that thread
that has the highest priority. Since the algorithm is preemptive, time slices are also
provided to the threads to avoid starvation. Thus, after some time, even if the highest priority thread has
not completed its job, it has to release the CPU because of preemption.

Lecture-9

Tutorial-2

QUESTIONS
Q.1. Give suitable Examples of Windows and Unix System Calls.

ANSWERS

Ans.1. Examples of Windows and Unix System Calls

Example: MS-DOS
• Single tasking
• Shell invoked when system booted
• Simple method to run
program o No process
created
• Single memory space
• Loads program into memory, overwriting all but the kernel
• Program exit -> shell reloaded

a) At system startup        b) Running a program

7. Keywords
➢ Operating systems
➢ Multiprocessor
➢ Thread
➢ System call
➢ Multithread

8. Sample Questions

Remember
1. Define Operating Systems.
2. Define thread.
3. List the various multithreading models.
4. Define System call.
5. List the various computer system architectures.
6. Write the benefits of multiprogramming.
Understanding
1. Explain the various functions of operating systems.
2. Explain the computer system organization with neat diagram.
3. Explain about various architectures of computer system.
4. Differentiate between thread and multithread with neat diagram.
5. Explain about various multithreading models with diagram.
6. Explain about types of system calls.

9. Stimulating Question

A thread is usually defined as a “light weight process” because an operating system (OS)
maintains smaller data structures for a thread than for a process. Justify this.



10. Mindmap

11. Student Summary


At the end of this session, the facilitator (Teacher) shall randomly pick a few students to
summarize the deliverables.

12. Reading Materials


• Operating System Concepts, Abraham Silberschatz, Greg Gagne, Peter B. Galvin, 9th
Edition, Wiley, 2016.
• Operating Systems, Harvey M. Deitel, Paul J. Deitel, David R. Choffnes, 3rd Edition,
PPH, 2004.
References
• Operating Systems: Internals and Design Principles, William Stallings, 7th Edition,
Pearson PPH, 2013.
• Operating systems: A Concept based Approach, D. M. Dhamdhere, 2nd Edition, TMH,
2006.
• Operating System: A Design Approach, Crowley, 1st Edition, TMH, 2001.
• Modern Operating Systems, Andrew S Tanenbaum, 3rd Edition, PHI, 2009.

13. Scope for Mini Project

------

