
Operating Systems

Lecture #9: Concurrent Processes

Written by David Goodwin,
based on the lecture series of Dr. Dayou Li
and the book Understanding Operating Systems, 4th ed.,
by I. M. Flynn and A. McIver McHoes (2006)

Department of Computer Science and Technology,
University of Bedfordshire.

Operating Systems, 2013

15th April 2013


Outline

1 Introduction
2 Configurations
3 Programming
4 Threads

Introduction

What is parallel processing?

- Parallel processing (also called multiprocessing) is the situation in which two or more processors operate in unison
  - i.e. two or more CPUs execute instructions simultaneously
  - each CPU can have a RUNNING process at the same time
- The Process Manager must coordinate each processor
- The Process Manager must synchronise the interaction among the CPUs
- Goals: enhance throughput and increase computing power

What is parallel processing?

- Example: an Information Retrieval System
  - Processor 1
    - accepts a query
    - checks it for errors
    - passes the request to Processor 2
  - Processor 2
    - searches the database for the required information
  - Processor 3
    - retrieves the data from the database (if it is kept off-line in secondary storage)
    - the data is placed where Processor 2 can get it
  - Processor 2
    - passes the information to Processor 4
  - Processor 4
    - routes the response back to the originator of the request

Benefits

- Increased reliability
  - more than one CPU
  - if one processor fails, the others can absorb the load
  - the failing processor must inform the other processors
  - the OS must restructure its resource allocation strategies
- Faster processing
  - instructions can be processed two or more at a time
  - allocate a CPU to each job
  - allocate a CPU to each working set
  - subdivide individual instructions, called concurrent programming
- Challenges:
  - How to connect the processors into configurations?
  - How to orchestrate their interaction?

Configurations

Master/Slave configuration

- The Master/Slave configuration is asymmetric
- Essentially a single-processor system with additional “slaves”
- The master processor is responsible for managing the entire system
  - it maintains the status of all processes, performs storage management, schedules work for the slave processors, and executes all control programs
- Well suited for environments with front-end interactive users and back-end batch jobs

Master/Slave configuration

- Advantage:
  - Simplicity
- Disadvantages:
  - Reliability is no higher than for a single-processor system (if the master fails, the whole system fails)
  - Poor use of resources (if the master is busy, a slave must wait until the master becomes free before it can be assigned more work)
  - Increases the number of interrupts (slaves must interrupt the master every time they need OS intervention, e.g. I/O requests), creating long queues at the master processor

Loosely Coupled configuration

- A Loosely Coupled system features several complete computing systems
  - each has its own memory, I/O devices, CPU, and OS
  - each processor controls its own resources
  - each processor can communicate and cooperate with the others
- A job is assigned to one processor and remains there until it is finished
- Job scheduling is based on several requirements and policies (e.g. new jobs may be assigned to the processor with the lightest load)

Loosely Coupled configuration

- Advantage:
  - Not prone to catastrophic system failures (when one processor fails, the others can continue to work independently)
- Disadvantage:
  - Difficult to detect when a processor has failed

Symmetric configuration

- The Symmetric configuration is best implemented if the processors are all of the same type
- Processor scheduling is decentralised
- A single copy of the OS and a global table listing each process and its status (stored in a common area of memory)
- Each processor uses the same scheduling algorithm
- When interrupted, a processor updates the process list and finds another process to run (processors are kept busy)
- Any given job can be executed on several processors
- This presents a need for process synchronisation

Symmetric configuration

- Advantages:
  - Reliable
  - Uses resources effectively
  - Balances loads well
  - Can degrade gracefully in the event of a failure
- Disadvantage:
  - Processors must be well synchronised to avoid problems of races and deadlocks (see the mutex sketch below)

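The slide above names races as the core synchronisation problem. As a minimal, non-authoritative illustration, the C sketch below uses POSIX threads and a mutex to protect a shared counter; the choice of pthreads and the variable names are assumptions made only for this example, not something prescribed by the lecture.

    /* Sketch: protecting a shared counter with a mutex (POSIX threads).
     * Without the lock, the two threads race on `counter` and some
     * increments can be lost; with it, the final value is always 200000. */
    #include <pthread.h>
    #include <stdio.h>

    static long counter = 0;
    static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

    static void *worker(void *arg)
    {
        (void)arg;
        for (int i = 0; i < 100000; i++) {
            pthread_mutex_lock(&lock);    /* enter critical section */
            counter++;                    /* shared update */
            pthread_mutex_unlock(&lock);  /* leave critical section */
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t t1, t2;
        pthread_create(&t1, NULL, worker, NULL);
        pthread_create(&t2, NULL, worker, NULL);
        pthread_join(t1, NULL);
        pthread_join(t2, NULL);
        printf("counter = %ld\n", counter);
        return 0;
    }

Compiled with `cc -pthread`, the locked version always prints 200000; removing the lock makes the result unpredictable, which is exactly the kind of race the configuration must guard against.
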
Programming

Concurrent Programming Applications

- Multiprocessing can also refer to one job using several processors
- This requires a programming language and a computer system that can support it, called a concurrent processing system
- Most programming languages are serial: instructions are executed one at a time
- To resolve an arithmetic expression, every operation is done in sequence

Concurrent Programming Applications

- Example: A = 3 * B * C + 4 / (D + E) ** (F - G)
- Resolved serially, one operation at a time (written out as straight-line code in the sketch below):

    step  Operation      Result
    1     (F - G)        Store difference in T1
    2     (D + E)        Store sum in T2
    3     (T2) ** (T1)   Store power in T1
    4     4 / (T1)       Store quotient in T2
    5     3 * B          Store product in T1
    6     (T1) * C       Store product in T1
    7     (T1) + (T2)    Store sum in A

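A possible rendering of the seven serial steps as C, using made-up values for B to G (the lecture does not fix any), just to show that each statement must wait for the one before it:

    /* Sketch: the serial 7-step evaluation from the table above,
     * written as straight-line C with example operand values. */
    #include <math.h>
    #include <stdio.h>

    int main(void)
    {
        double B = 2, C = 5, D = 1, E = 3, F = 4, G = 2;
        double T1, T2, A;

        T1 = F - G;        /* step 1: difference */
        T2 = D + E;        /* step 2: sum        */
        T1 = pow(T2, T1);  /* step 3: power      */
        T2 = 4 / T1;       /* step 4: quotient   */
        T1 = 3 * B;        /* step 5: product    */
        T1 = T1 * C;       /* step 6: product    */
        A  = T1 + T2;      /* step 7: sum        */

        printf("A = %f\n", A);
        return 0;
    }
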
Concurrent Programming Applications

- Arithmetic expressions can be processed differently if we use a language that allows concurrent processing
- Define COBEGIN and COEND to indicate to the compiler which instructions can be processed concurrently:

    COBEGIN
      T1 = 3 * B
      T2 = D + E
      T3 = F - G
    COEND
    COBEGIN
      T4 = T1 * C
      T5 = T2 ** T3
    COEND
    A = T4 + 4 / T5

    step  proc.  Operation      Result
    1     1      3 * B          Store product in T1
          2      (D + E)        Store sum in T2
          3      (F - G)        Store difference in T3
    2     1      (T1) * C       Store product in T4
          2      (T2) ** (T3)   Store power in T5
    3     1      4 / (T5)       Store quotient in T1
    4     1      (T4) + (T1)    Store sum in A

- The same expression is now resolved in 4 steps instead of 7 (a threaded emulation of the COBEGIN blocks is sketched below)

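COBEGIN/COEND is a language construct rather than a library call, so the C sketch below only emulates it with POSIX threads: every statement inside a COBEGIN/COEND pair gets its own thread, and the joins play the role of COEND. The use of pthreads, the helper function names and the operand values are all assumptions made for the illustration.

    /* Sketch: emulating COBEGIN ... COEND with POSIX threads.
     * T1 = 3*B, T2 = D+E and T3 = F-G are independent, so they can
     * run concurrently; joining the threads corresponds to COEND. */
    #include <pthread.h>
    #include <math.h>
    #include <stdio.h>

    static double B = 2, C = 5, D = 1, E = 3, F = 4, G = 2;
    static double T1, T2, T3, T4, T5, A;

    static void *calc_t1(void *p) { (void)p; T1 = 3 * B;       return NULL; }
    static void *calc_t2(void *p) { (void)p; T2 = D + E;       return NULL; }
    static void *calc_t3(void *p) { (void)p; T3 = F - G;       return NULL; }
    static void *calc_t4(void *p) { (void)p; T4 = T1 * C;      return NULL; }
    static void *calc_t5(void *p) { (void)p; T5 = pow(T2, T3); return NULL; }

    int main(void)
    {
        pthread_t a, b, c;

        /* COBEGIN: T1, T2, T3 computed in parallel */
        pthread_create(&a, NULL, calc_t1, NULL);
        pthread_create(&b, NULL, calc_t2, NULL);
        pthread_create(&c, NULL, calc_t3, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);
        pthread_join(c, NULL);   /* COEND */

        /* COBEGIN: T4, T5 computed in parallel */
        pthread_create(&a, NULL, calc_t4, NULL);
        pthread_create(&b, NULL, calc_t5, NULL);
        pthread_join(a, NULL);
        pthread_join(b, NULL);   /* COEND */

        A = T4 + 4 / T5;         /* final serial step */
        printf("A = %f\n", A);
        return 0;
    }
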
Concurrent Programming Applications

- Increased computational speed, but:
  - increased complexity of the programming language
  - increased complexity of the hardware (machinery and communication among machines)
- The programmer must explicitly state which instructions are to be executed in parallel, called explicit parallelism
- Solution: automatic detection by the compiler of instructions that can be performed in parallel, called implicit parallelism

Case 1: Array Operations

- To perform an array operation within a loop in three steps, the instruction might say:

    for (j = 1; j <= 3; j++)
        a(j) = b(j) + c(j);

- If we use three processors, the instruction can be performed in a single step:
  - processor #1 performs: a(1) = b(1) + c(1)
  - processor #2 performs: a(2) = b(2) + c(2)
  - processor #3 performs: a(3) = b(3) + c(3)
- A threaded C version of this idea is sketched below

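One way to realise the three-processor version is to give each element its own thread, as in the C sketch below; pthreads, the array names and the element values are assumptions for illustration, and whether each thread really lands on a separate processor is left to the OS scheduler.

    /* Sketch: one thread per element for a[j] = b[j] + c[j].
     * With three processors available, the three additions can be
     * performed in a single parallel step. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 3

    static double a[N], b[N] = {1, 2, 3}, c[N] = {10, 20, 30};

    static void *add_element(void *arg)
    {
        int j = *(int *)arg;       /* index handled by this thread */
        a[j] = b[j] + c[j];
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[N];
        int idx[N];

        for (int j = 0; j < N; j++) {
            idx[j] = j;
            pthread_create(&tid[j], NULL, add_element, &idx[j]);
        }
        for (int j = 0; j < N; j++)
            pthread_join(tid[j], NULL);

        for (int j = 0; j < N; j++)
            printf("a[%d] = %g\n", j, a[j]);
        return 0;
    }
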
Case 2: Matrix Multiplication

- To perform C = A * B, where A and B represent two matrices:

        [1 2 3]        [1 2 3]
    A = [4 5 6] ,  B = [4 5 6]
        [7 8 9]        [7 8 9]

- Each element of a row of A is multiplied by the corresponding element of a column of B, and the products are summed to form one element of C
- Serially, the answer can be computed in 45 steps (5 steps per element × 9 elements)
- With three processors multiplying in parallel, the answer takes only 27 steps (3 steps per element × 9 elements)
- A row-per-thread version is sketched below

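As an illustration only, the sketch below computes C = A * B with one POSIX thread per row of the result, so the three rows are produced in parallel; the 3x3 values follow the slide, and the row-per-thread split is just one possible way of dividing the work.

    /* Sketch: C = A * B with one thread per row of C.
     * Each thread computes the dot products for its own row, so the
     * threads never write to the same element and need no locking. */
    #include <pthread.h>
    #include <stdio.h>

    #define N 3

    static int A[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    static int B[N][N] = {{1, 2, 3}, {4, 5, 6}, {7, 8, 9}};
    static int C[N][N];

    static void *row_worker(void *arg)
    {
        int i = *(int *)arg;                 /* row assigned to this thread */
        for (int j = 0; j < N; j++) {
            C[i][j] = 0;
            for (int k = 0; k < N; k++)
                C[i][j] += A[i][k] * B[k][j];
        }
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[N];
        int rows[N];

        for (int i = 0; i < N; i++) {
            rows[i] = i;
            pthread_create(&tid[i], NULL, row_worker, &rows[i]);
        }
        for (int i = 0; i < N; i++)
            pthread_join(tid[i], NULL);

        for (int i = 0; i < N; i++)
            printf("%3d %3d %3d\n", C[i][0], C[i][1], C[i][2]);
        return 0;
    }
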
Threads

Threads & Concurrent Programming

- So far we have considered cooperation and synchronisation of traditional processes (known as heavyweight processes):
  - they require space in main memory, where they reside during execution
  - they may require other resources such as data files or I/O devices
  - they pass through several states: ready, running, waiting, delayed, blocked
  - this incurs overhead from swapping between main memory and secondary storage
- To minimise this overhead, implement the use of threads
  - a thread is a smaller unit within a process that can be scheduled and executed (uses the CPU)
  - each thread has its own processor registers, program counter, stack and status, but shares the data area and the resources allocated to its process (see the sketch below)

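A minimal sketch of that sharing in C with POSIX threads (one possible thread API; the lecture itself is API-neutral): the results array lives in the process's shared data area and is visible to both threads, while each thread's local variable lives on its own private stack. The names and values are invented for the example.

    /* Sketch: two threads inside one process sharing the data area. */
    #include <pthread.h>
    #include <stdio.h>

    static int results[2];            /* shared data area of the process */

    static void *work(void *arg)
    {
        int me = *(int *)arg;         /* private copy on this thread's stack */
        results[me] = me * 10;        /* each thread writes its own slot */
        return NULL;
    }

    int main(void)
    {
        pthread_t t[2];
        int ids[2] = {0, 1};

        for (int i = 0; i < 2; i++)
            pthread_create(&t[i], NULL, work, &ids[i]);
        for (int i = 0; i < 2; i++)
            pthread_join(t[i], NULL);

        printf("results: %d %d\n", results[0], results[1]);
        return 0;
    }
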
Thread States

- The same operations are performed on both traditional processes and threads.
- The OS must be able to support (several of these are illustrated in the sketch below):
  - Creating new threads
  - Setting up a thread so it is ready to execute
  - Delaying, or putting to sleep, threads for a specific amount of time
  - Blocking, or suspending, threads that are waiting for I/O to complete
  - Setting threads to a wait state until a specific event has occurred
  - Scheduling threads for execution
  - Synchronising thread execution using semaphores, events, or condition variables
  - Terminating a thread and releasing its resources
- This is done by the OS keeping track of critical information for each thread

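As a non-authoritative mapping of some of those operations onto a concrete API, the C sketch below uses POSIX threads: pthread_create for creation, sleep for delaying, a condition variable for waiting on a specific event, and pthread_join for blocking until a thread terminates. The event flag and function names are made up for the example.

    /* Sketch: creating, delaying, waiting for an event, and
     * terminating a thread with POSIX threads. */
    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    static pthread_mutex_t m = PTHREAD_MUTEX_INITIALIZER;
    static pthread_cond_t  event = PTHREAD_COND_INITIALIZER;
    static int event_happened = 0;

    static void *waiter(void *arg)
    {
        (void)arg;
        pthread_mutex_lock(&m);
        while (!event_happened)            /* wait state until a specific event */
            pthread_cond_wait(&event, &m);
        pthread_mutex_unlock(&m);
        printf("waiter: event received\n");
        return NULL;                       /* thread terminates */
    }

    int main(void)
    {
        pthread_t t;
        pthread_create(&t, NULL, waiter, NULL);   /* create a new thread */

        sleep(1);                                 /* delay the main thread */

        pthread_mutex_lock(&m);
        event_happened = 1;
        pthread_cond_signal(&event);              /* signal the event */
        pthread_mutex_unlock(&m);

        pthread_join(t, NULL);                    /* block until the thread finishes */
        return 0;
    }
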
Thread Control Block

- Just as processes are represented by Process Control Blocks (PCBs), threads are represented by Thread Control Blocks (TCBs), which typically contain (a hypothetical C struct with these fields is sketched below):
  - Thread ID: a unique identifier assigned by the OS
  - Thread State: changes as the thread progresses through its execution
  - CPU information: how far the thread has executed, which instruction is being performed, what data is being used
  - Thread Priority: used by the Thread Scheduler to determine which thread should be selected from the ready queue
  - Pointer: to the process that created the thread
  - Pointers: to other threads created by this thread

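The fields listed above might be captured in a structure along the lines of the C sketch below; this is a hypothetical layout for illustration only, not the TCB definition of any particular operating system, and the field names and sizes are invented.

    /* Sketch: a hypothetical Thread Control Block mirroring the fields
     * on the slide. Real kernels use richer, architecture-specific layouts. */
    #include <stdint.h>

    typedef enum {
        THREAD_READY, THREAD_RUNNING, THREAD_WAITING,
        THREAD_BLOCKED, THREAD_DELAYED, THREAD_FINISHED
    } thread_state_t;

    struct thread_control_block {
        uint32_t       thread_id;        /* unique identifier assigned by the OS */
        thread_state_t state;            /* changes as the thread progresses */
        uintptr_t      program_counter;  /* CPU information: next instruction */
        uintptr_t      registers[16];    /* CPU information: saved register set */
        void          *stack_pointer;    /* top of this thread's private stack */
        int            priority;         /* used by the thread scheduler */
        struct process_control_block *owner;    /* the process that created the thread */
        struct thread_control_block  *children; /* threads created by this thread */
        struct thread_control_block  *sibling;  /* next thread in the owner's list */
    };
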
