Concurrent Systems
• A system in which:
– Multiple tasks can be executed at the same time
– The tasks may be duplicates of each other, or
distinct tasks
– The overall time to perform the series of tasks is
reduced
Sequential Processing
• Traditional algorithms are mostly sequential:
– They have one “thread” of execution
– One step follows another in sequence
– One processor is all that is needed to run the
algorithm
A Non-sequential Example
• Consider a house with a burglar alarm system.
• The system continually monitors:
– The front door
– The back door
– The sliding glass door
– The door to the deck
– The kitchen windows
– The living room windows
– The bedroom windows
• The burglar alarm is watching all of
these “at once” (at the same time).
Another Non-sequential Example
• Your car has an onboard digital dashboard that
simultaneously:
– Calculates how fast you’re going and displays it on
the speedometer
– Checks your oil level
– Checks your fuel level and calculates consumption
– Monitors the heat of the engine and turns on a
light if it is too hot
– Monitors your alternator to make sure it is
charging your battery
Advantages of Concurrency
• Concurrent processes can reduce duplication
in code.
• The overall runtime of the algorithm can be
significantly reduced.
• More real-world problems can be solved than
with sequential algorithms alone.
• Redundancy can make systems more reliable.
Disadvantages of Concurrency
• Runtime is not always reduced, so careful
planning is required
• Concurrent algorithms can be more complex
than sequential algorithms
• Shared data can be corrupted
• Communication between tasks is needed
Achieving Concurrency
• Many computers today have more than one
processor (multiprocessor machines)
[Figure: two CPUs (CPU 1, CPU 2) connected to a shared memory over a bus]
Achieving Concurrency
• Concurrency can also be achieved on a
computer with only one processor:
– The computer “juggles” jobs, swapping its
attention to each in turn
– “Time slicing” allows many users to get CPU
resources
– Tasks may be suspended while they wait for
something, such as device I/O
[Figure: a single CPU juggling tasks 1–3; suspended tasks shown sleeping]
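The single-processor "juggling" above can be sketched with Python threads (an illustration added here, not part of the original slides; the task names and 0.1 s wait are invented). While one thread is suspended waiting on simulated I/O, the scheduler hands the CPU to another, so the waits overlap:

```python
import threading
import time

def task(name, results):
    # Simulate a task that mostly waits (e.g., for device I/O).
    # While it is suspended, the CPU is handed to another task.
    time.sleep(0.1)
    results.append(name)

results = []
threads = [threading.Thread(target=task, args=(f"task{i}", results))
           for i in (1, 2, 3)]

start = time.perf_counter()
for t in threads:
    t.start()
for t in threads:
    t.join()
elapsed = time.perf_counter() - start

# The three 0.1 s waits overlap, so the total is close to 0.1 s, not 0.3 s.
print(sorted(results), elapsed < 0.25)
```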
Concurrency vs. Parallelism
• Concurrency is the execution of multiple
tasks at the same time, regardless of the
number of processors.
• Parallelism is the execution of the same task
on multiple processors.
Types of Concurrent Systems
• Multiprogramming
• Multiprocessing
• Multitasking
• Distributed Systems
Multiprogramming
• Share a single CPU among many users or
tasks.
• May have a time-shared algorithm or a
priority algorithm for determining which
task to run next
• Give the illusion of simultaneous processing
through rapid swapping of tasks
(interleaving).
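The interleaving idea can be pictured with a toy round-robin scheduler (a hypothetical Python sketch, not from the slides): tasks are modeled as generators, and each turn of the loop gives one task a single "time slice" before rotating it to the back of the queue.

```python
from collections import deque

def run_round_robin(tasks):
    """Interleave (name, generator) tasks, one time slice per turn."""
    ready = deque(tasks)
    trace = []
    while ready:
        name, gen = ready.popleft()
        try:
            step = next(gen)           # run one time slice of this task
            trace.append((name, step))
            ready.append((name, gen))  # rotate to the back of the queue
        except StopIteration:
            pass                       # task finished; drop it
    return trace

def job(steps):
    for s in range(steps):
        yield s

trace = run_round_robin([("A", job(2)), ("B", job(2))])
print(trace)  # [('A', 0), ('B', 0), ('A', 1), ('B', 1)]
```

The trace shows the illusion of simultaneity: neither task runs to completion before the other starts.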
Multiprogramming
[Figure: one CPU shared between User 1 and User 2, both resident in memory]
Multiprogramming
[Figure: graph of tasks/users vs. CPUs — several tasks sharing a single CPU]
Multiprocessing
• Executes multiple tasks at the same
time
• Uses multiple processors to accomplish
the tasks
• Each processor may also timeshare
among several tasks
• Has a shared memory that is used by all
the tasks
Multiprocessing
[Figure: three CPUs sharing one memory, running User 1: Task1, User 1: Task2 and User 2: Task1]
Multiprocessing
[Figure: graph of tasks/users vs. CPUs — tasks spread over several CPUs with shared memory]
Multitasking
• A single user can have multiple tasks
running at the same time.
• Can be done with one or more processors.
• Used to be rare, available only on expensive
multiprocessing systems, but now most
modern operating systems can do it.
Multitasking
[Figure: one CPU running User 1's Task1, Task2 and Task3 from memory]
Multitasking
[Figure: graph of a single user's tasks vs. CPUs]
Parallelism
• Using multiple processors to solve a single
task.
• Involves:
– Breaking the task into meaningful pieces
– Doing the work on many processors
– Coordinating and putting the pieces
back together.
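The three steps above map directly onto a process pool, sketched here in Python (an added illustration; the sum-of-squares task is an invented example): the input is broken into pieces, the pool does the work on several processors, and the results are combined.

```python
from multiprocessing import Pool

def square(x):
    return x * x

def parallel_sum_of_squares(numbers, workers=4):
    # 1. Break the task into pieces (one piece per number).
    # 2. Do the work on many processors (the pool maps pieces to workers).
    # 3. Put the pieces back together (sum the partial results).
    with Pool(workers) as pool:
        return sum(pool.map(square, numbers))

if __name__ == "__main__":
    print(parallel_sum_of_squares(range(10)))  # 285
```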
Parallelism
[Figure: multiple CPUs with memory connected by a network interface]
Parallelism
[Figure: graph of tasks vs. CPUs — one task spread across four CPUs]
Pipeline Processing
• Repeating a sequence of operations or
pieces of a task.
• Allocating each piece to a separate
processor and chaining them together
produces a pipeline, completing tasks
faster.
[Figure: input flowing through chained stages A → B → C → D to output]
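A pipeline like the A → B → C → D chain above can be sketched with threads connected by queues (a Python illustration added here; the three stage functions are invented). Each stage takes an item from its input queue, processes it, and passes it to the next stage, so different items are in different stages at once:

```python
import queue
import threading

def stage(fn, q_in, q_out):
    # Each stage repeatedly takes an item, processes it, passes it on.
    while True:
        item = q_in.get()
        if item is None:        # sentinel: shut down, tell the next stage
            q_out.put(None)
            return
        q_out.put(fn(item))

# A three-stage pipeline chained by queues.
qs = [queue.Queue() for _ in range(4)]
stages = [lambda x: x + 1, lambda x: x * 2, lambda x: x - 3]
threads = [threading.Thread(target=stage, args=(f, qs[i], qs[i + 1]))
           for i, f in enumerate(stages)]
for t in threads:
    t.start()

for item in [1, 2, 3]:
    qs[0].put(item)         # feed the input end
qs[0].put(None)

results = []
while (out := qs[-1].get()) is not None:
    results.append(out)     # collect from the output end
print(results)  # [1, 3, 5]
```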
Example
• Suppose you have a choice between a separate
washer and dryer, each with a 30-minute cycle, or
• A combined washer/dryer unit with a one-hour
cycle
• The correct answer depends on how much
work you have to do.
One Load
[Figure: wash then dry, with transfer overhead between them, vs. a single combo cycle — roughly a tie]
Three Loads
[Figure: three wash/dry pairs overlapping in a pipeline vs. three back-to-back combo cycles — the pipeline finishes sooner]
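The trade-off can be checked with a little arithmetic. With the separate machines the dryer lags the washer by one cycle, so N loads take 30 × (N + 1) minutes (ignoring transfer overhead), while the combo takes 60 × N minutes. A small Python check (illustrative, not part of the slides):

```python
def pipeline_minutes(loads, cycle=30):
    # Washer and dryer overlap; the last load must still wash then dry,
    # so the total is (loads + 1) cycles.
    return cycle * (loads + 1)

def combo_minutes(loads, cycle=60):
    # The combo handles one load at a time, start to finish.
    return cycle * loads

print(pipeline_minutes(1), combo_minutes(1))  # 60 60  -> a tie for one load
print(pipeline_minutes(3), combo_minutes(3))  # 120 180 -> the pipeline wins
```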
Examples of Pipelined Tasks
• Automobile manufacturing
• Instruction processing within a computer
[Figure: four pipeline stages A–D each processing instructions 1–5 in overlapping time steps 0–7]
Task Queues
• A supervisor processor maintains a queue of tasks
to be performed in shared memory.
• Each processor queries the queue, dequeues the
next task and performs it.
• Task execution may involve adding more tasks to
the task queue.
[Figure: a supervisor and task queue in shared memory serving processors P1 … Pn]
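The task-queue pattern above can be sketched with worker threads and a shared queue (an added Python illustration; the "split a number until it reaches 1" task is invented). Note how performing a task may enqueue further tasks, exactly as described:

```python
import queue
import threading

task_queue = queue.Queue()
results = []
lock = threading.Lock()

def worker():
    # Each processor repeatedly dequeues the next task and performs it.
    while True:
        n = task_queue.get()
        if n is None:                # sentinel: no more work
            break
        if n > 1:
            # Performing a task may add more tasks to the queue.
            task_queue.put(n // 2)
            task_queue.put(n - n // 2)
        else:
            with lock:
                results.append(n)
        task_queue.task_done()

workers = [threading.Thread(target=worker) for _ in range(3)]
for w in workers:
    w.start()

task_queue.put(8)         # the supervisor seeds the queue
task_queue.join()         # wait until every task (and subtask) is done
for _ in workers:
    task_queue.put(None)  # shut the workers down
for w in workers:
    w.join()

print(sum(results), len(results))  # 8 8
```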
Distributed Systems
• Multiple computers working together with no
central program “in charge.”
[Figure: ATMs at Buford, Perimeter, Student Ctr and North Ave connected to a central bank]
DISTRIBUTED SYSTEMS
Definition
• A distributed system is a system consisting of a collection of
autonomous machines connected by communication
networks and equipped with software systems designed to
produce an integrated and consistent computing
environment.
• Distributed systems enable people to cooperate and coordinate
their activities more effectively and efficiently.
Distributed Systems
• Advantages:
– No bottlenecks from sharing processors
– No central point of failure
– Processing can be localized for efficiency
• Disadvantages:
– Complexity
– Communication overhead
– Distributed control
Motivations for Distributed Systems
• Resource sharing
• Openness
• Concurrency
• Scalability
• Fault-tolerance
• Transparency
Motivations contd.
• Resource sharing. In a distributed system, the resources - hardware, software
and data can be easily shared among users. For example, a printer can be shared
among a group of users.
• Openness. The openness of distributed systems is achieved by specifying the
key software interface of the system and making it available to software
developers so that the system can be extended in many ways.
• Concurrency. The processing concurrency can be achieved by sending requests
to multiple machines connected by networks at the same time.
• Scalability. A distributed system running on a collection of a small number of
machines can be easily extended to a large number of machines to increase the
processing power.
• Fault-tolerance. Machines connected by networks can be seen as redundant
resources; a software system can be installed on multiple machines so that, in
the face of hardware faults or software failures, those faults or failures can be
detected and tolerated by the other machines.
Motivations Contd.
• Transparency. Distributed systems can provide many forms of
transparency such as:
1. Location transparency, which allows local and remote
information to be accessed in a unified way;
2. Failure transparency, which enables the masking of
failures automatically; and
3. Replication transparency, which allows duplicating
software/data on multiple machines invisibly.
Network Architecture
• In order to utilize the full potential of computer networks,
international standards were created to ensure that any
system could communicate with any other system anywhere
in the world.
• This gives rise to Open System Interconnection (OSI) reference
model or OSI Architecture
• It is a seven-layer model for inter-process communication.
• Its architecture comprises the application, presentation,
session, transport, network, data link and physical layers
The OSI Architecture
TCP/IP reference model
• It is a four-layer architecture: application,
transport, Internet, and network interface
• The current Internet based on ARPANET uses
this architecture
• In this model, the network interface (or access)
layer relies on the data link and physical layers
of the network, and the application layer
corresponds to application and presentation
layers of the OSI model, since there is no
session layer in the TCP/IP model.
TCP/IP reference model
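In the TCP/IP model, an application only defines the content of its messages; the transport and internet layers (TCP over IP) are supplied by the operating system. A minimal loopback sketch in Python illustrates this split (the "reply in uppercase" rule is an invented application-layer protocol, and port 0 asks the OS for any free port):

```python
import socket
import threading

def serve(srv):
    # Application layer: our (invented) protocol replies in uppercase.
    conn, _ = srv.accept()
    with conn:
        data = conn.recv(1024)
        conn.sendall(data.upper())

# Transport and internet layers (TCP over IP) come from the OS;
# only the application-layer behaviour above is ours.
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))          # port 0: let the OS pick a free port
srv.listen(1)
port = srv.getsockname()[1]
t = threading.Thread(target=serve, args=(srv,))
t.start()

with socket.create_connection(("127.0.0.1", port)) as cli:
    cli.sendall(b"hello")
    reply = cli.recv(1024)
t.join()
srv.close()
print(reply)  # b'HELLO'
```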
Network Fault Tolerance
• The term network fault tolerance refers to how
resilient the network is against the failure of a
component.
• Network reliability refers to the reliability of the
overall network to provide communication in the
event of failure of a component or components in
the network.
Characteristics of Networks that Make Them
Vulnerable
• Unreliable communications;
• Unreliable resources (computers, storage,
software, etc.);
• Highly heterogeneous environment;
• Potentially very large amount of resources:
scalability;
• Potentially highly variable number of
resources.
Network Reliability Issues
• Communication network reliability depends on the
sustainability of both hardware and software
• Network reliability nowadays encompasses more than what
was traditionally addressed through network availability
• Basic techniques used in dealing with network failures
include: retry (retransmission), complemented retry with
correction, replication (e.g., dual bus), coding, special
protocols (single handshake, double handshake, etc.), timing
checks, rerouting, and retransmission with shift (intelligent
retry), etc.
Protocols
• Generally speaking, a protocol is an agreement between
the communication parties on how communication is to
proceed.
• Network software is arranged in a hierarchy of layers.
Each layer presents an interface to the layers above it
that extends the properties of the underlying
communication system. One layer on one machine
carries on a conversation with the same layer on
another machine.
• The rules and conventions used in this conversation are
collectively known as the protocol of this layer.
Protocols Contd.
• The definition of a protocol has two important
parts:
a. A specification of the sequence of messages
that must be exchanged; and
b. A specification of the format of the data in
the messages.
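These two parts can be made concrete with a toy protocol (an illustrative Python sketch; the header layout is an assumption, not a real standard). Part (b) is the byte format: a 1-byte message type and a 4-byte big-endian payload length. Part (a) is the sequence: a REQUEST must be answered by a REPLY.

```python
import struct

# Part (b): message format -- a 1-byte type code and a 4-byte
# big-endian payload length, followed by the payload itself.
HEADER = struct.Struct("!BI")
REQUEST, REPLY = 1, 2

def encode(msg_type, payload):
    return HEADER.pack(msg_type, len(payload)) + payload

def decode(data):
    msg_type, length = HEADER.unpack_from(data)
    return msg_type, data[HEADER.size:HEADER.size + length]

# Part (a): message sequence -- a REQUEST is answered by a REPLY
# carrying the same payload.
wire = encode(REQUEST, b"ping")
msg_type, payload = decode(wire)
assert msg_type == REQUEST
wire = encode(REPLY, payload)
print(decode(wire))  # (2, b'ping')
```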
Protocols Contd.
• A protocol is implemented by a pair of
software modules located in the sending and
receiving computers.
• Each network layer has one or more protocols
corresponding to it so that it can provide a
service to the layer above it and extend the
service provided by the layer below it.
• Hence, these protocols are arranged in a
hierarchy of layers as well.
Quality of Service (QoS)
• Quality of Service (QoS) is a somewhat vague term
referring to the technologies that classify network
traffic and then ensure that some of that traffic
receives special handling.
• The special handling may include attempts to provide
improved error rates, lower network transit time
(latency), and decreased latency variation (jitter).
• It may also include promises of high availability, which
is a combination of mean (average) time between
failures (MTBF) and mean time to repair (MTTR).
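The availability promise combines MTBF and MTTR as the fraction of time the system is up: A = MTBF / (MTBF + MTTR). A quick Python check (the 1000-hour/1-hour figures are invented for illustration):

```python
def availability(mtbf_hours, mttr_hours):
    # Fraction of time the system is up: MTBF / (MTBF + MTTR).
    return mtbf_hours / (mtbf_hours + mttr_hours)

# A component that fails every 1000 hours and takes 1 hour to repair:
a = availability(1000, 1)
print(round(a * 100, 2))  # 99.9 -> "three nines" availability
```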
Software for Distributed Computing
• Traditional Client-Server Model:
The client-server model has been a dominant model for distributed
computing since the 1980s.
The development of this model has been sparked by research and the
development of operating systems that could support users working at
their personal computers connected by a local area network.
The issue was how to access, use and share a resource located on another
computer, e.g., a file or printer, in a transparent manner.
In the 1980s computers were controlled by monolithic kernel based
operating systems, where all services, including file and naming services,
were part of that huge piece of software.
In order to access a remote service, the whole operating system must be
located and accessed within the computer providing the service.
• There was a need to take, from a kernel-based
operating system, that part of the software which provides a
desired service, and embody it in a new software entity. This
entity is called a server.
• Thus, each user process, a client, can access and use that
server, subject to possessing access rights and a compatible
interface.
• Hence, the idea of the client-server model is to build at least
that part of an operating system which is responsible for
providing services to users as a set of cooperating server
processes.
Web-Based Distributed Computing Models
• To meet the requirements of quick development of
the Internet, distributed computing may need to shift
its environment from LAN to the Internet.
• At the execution level, distributed computing
/applications may rely on the following parts:
• Processes: A typical operating system on a
computer host can run several processes at once.
• While it is running, a process has access to the
resources of the computer (CPU time, I/O device and
communication ports) through the operating system.
Web-Based Distributed Computing Models contd.
• Threads: Every process has at least one thread of control.
Some OS support the creation of multiple threads of control
within a single process.
• Each thread in a process can run independently from the
other threads.
• The threads may share some memory, such as the heap or stacks.
Usually the threads need synchronization.
• Distributed Objects: Distributed Object technologies are best
typified by the Object Management Group’s Common Object
Request Broker Architecture (CORBA), and Microsoft’s
Distributed Component Object Model (DCOM)
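The need for thread synchronization mentioned above can be shown with a classic shared-counter sketch in Python (an added illustration; the counts are arbitrary). Without the lock, interleaved read-modify-write updates could lose increments:

```python
import threading

counter = 0
lock = threading.Lock()

def increment(n):
    global counter
    for _ in range(n):
        with lock:        # without this lock, updates could interleave
            counter += 1  # and some increments would be lost

threads = [threading.Thread(target=increment, args=(10_000,))
           for _ in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)  # 40000
```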
Web-Based Distributed Computing Models contd.
• Agents: An agent could be defined as a program that
assists users and acts on their behalf.
• From the perspective of end - to - end users, agents are
active entities that obligate the following mandatory
behavior rules:
• R1:Work to meet designer’s specifications;
• R2: Autonomous: has control over its own actions
provided this does not violate R1.
• R3: Reactive: senses changes in requirements and
environment, being able to act according to those changes
provided this does not violate R1.
Web-Based Distributed Computing Models contd.
• From the perspective of systems, an agent may
possess any of the following orthogonal
properties:
• Communication: able to communicate with
other agents.
• Mobility: can travel from one host to another.
• Reliability: able to tolerate a fault when one
occurs.
• Security: appear to be trustful to the end user.
Web-based Client-Server Computing
• Web-based client-server computing systems could be
categorized into four types: the proxy computing,
code shipping, remote computing and agent-based
computing models.
Web-based Client-Server Computing contd.
• The proxy computing (PC) model is typically used in Web-based
scientific computing. According to this model a client sends data and
programs to a server over the Web and requests the server to perform
certain computing.
• The server receives the request, performs the computing using the
programs and data supplied by the client and returns the result back to
the client.
• Typically, the server is a powerful high-performance computer or it has
some special system programs (such as special mathematical and
engineering libraries) that are necessary for the computation. The client is
mainly used for interfacing with users.
Web-based Client-Server Computing Contd.
• The code shipping (CS) model is a popular Web-based
client-server computing model.
• A typical example is the downloading and execution of Java
applets on Web browsers, such as Netscape Communicator
and Internet Explorer.
• According to this model, a client makes a request to a
server, the server then ships the program (e.g., the Java
applets) over the Web to the client and the client executes
the program (possibly) using some local data.
• The server acts as the repository of programs and the
client performs the computation and interfaces with users.
Web-based Client-Server Computing Contd.
• The remote computing (RC) model is typically used in Web-
based scientific computing and database applications
[Sandewall 1996].
• According to this model, the client sends data over the Web to
the server and the server performs the computing using
programs residing in the server. After the completion of the
computation, the server sends the result back to the client.
• Typically the server is a high-performance computing server
equipped with the necessary computing programs and/or
databases. The client is responsible for interfacing with users.
• The NetSolve system [Casanova and Dongarra 1997] uses this model.
Web-based Client-Server Computing Contd.
• The agent-based computing (AC) model is a three-tier model.
According to this model, the client sends either data or data and
programs over the Web to the agent.
• The agent then processes the data using its own programs or using
the received programs.
• After the completion of the processing, the agent will either send the
result back to the client if the result is complete, or send the
data/program/intermediate result to the server for further processing. In
the latter case, the server will perform the job and return the result
back to the client directly (or via the agent).
• Nowadays, more and more Web-based applications have shifted to
the AC model [Chang and Scott 1996] [Ciancarini et al 1996].
The Agent-Based Computing Models
• The basic agent-based computing model has many
extensions and variations.
• However, there are two areas of distinction among
these models, which highlight their adaptability and
extensibility: one is whether the interactions among
components are preconfigured (hard-wired) and the
other is where the control for using components or
services lies (e.g., requester/client, provider/server,
mediator etc.).
The Agent-Based Computing Models contd:
Conversational Agent Model
• Conversational agent technologies model communication and cooperation
among autonomous entities through message exchange based on speech
act theory. The best-known foundation technology for developing
such systems is the Knowledge Query and Manipulation Language (KQML)
[Link], which is often used in conjunction with
the Knowledge Interchange Format (KIF) [Link].
• In these systems, service access control also lies with a client, which
requests a service from a service broker or name server, and then initiates
peer-to-peer communication with the provider at an address provided by
the broker.
• Although language-enriched interchanges occur, conversational agents
suffer from the same restriction as distributed objects in that the
interactions among components are hard-coded in the requester, thus
making services inflexible and difficult to reuse and extend.
Summary
• The key purposes of distributed systems can
be represented by: resource sharing,
openness, concurrency, scalability, fault-
tolerance and transparency.
• Distributed computing systems comprise three
fundamental components: computers, networks, and
software (operating systems and application software).
Issues Leading to the Client-Server Model
• Amalgamating computers and networks into one
single computing system and providing appropriate
system software has led to:
– the possibility of sharing information and peripheral
resources;
– improved performance of a computing system and of
individual users through parallel execution of
programs, load balancing and sharing, and replication of
programs and data;
– enhanced availability and increased reliability.
Challenges of distributed Systems
• Amalgamation of computing systems has generated some
serious challenges and problems, such as:
– How to synthesise a model of distributed computing to be used
in the development of both application and system software.
– How to develop ways to hide distribution of resources and build
relevant services upon them.
– The development of distributed computing systems is
complicated by the lack of a central clock and centrally available
data to manage the whole system.
– Amalgamating computers and networks into one single
computing system generates a need to deal with the problems of
resource protection, communication security and authentication.
Challenges of distributed Systems contd.
• The synthesis of a distributed computing model has
been influenced by a need to deal with the issues caused
by distribution, such as:
– locating data, programs and peripheral resources;
– accessing remote data, programs and peripheral resources;
– supporting cooperation and competition between programs
executing on different computers;
– coordinating distributed programs executing on different
computers;
– maintaining the consistency of replicated data and programs;
– detecting and recovering from failures;
– protecting data and programs stored and in transit; and
– authenticating users.
The Client-Server Model in a Distributed Computing System
• A distributed computing system is a set of application
and system programs, and data dispersed across a
number of independent personal computers connected
by a communication network.
• In order to provide requested services to users the
system and relevant application programs must be
executed.
• Because services are provided as a result of executing
programs on a number of computers with data stored
on one or more locations, the whole computing activity
is called distributed computing.
MODELLING FOR DISTRIBUTED NETWORK SYSTEMS: THE CLIENT SERVER MODEL
Figure 2.1: The basic client-server model
• A more detailed client-server model has three
components:
• Service: A service is a software entity that runs on one or
more machines. It provides an abstraction of a set of
well-defined operations in response to applications’
requests.
• Server: A server is an instance of a particular service
running on a single machine.
• Client: A client is a software entity that exploits services
provided by servers. A client can but does not have to
interface directly with a human user.
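The three components above can be sketched as plain Python classes (purely illustrative; these classes and the "add" operation are invented for this example, not taken from any library):

```python
class Service:
    """A set of well-defined operations offered to applications."""
    def __init__(self):
        self.operations = {}

    def register(self, name, fn):
        self.operations[name] = fn

class Server:
    """An instance of a particular service running on one machine."""
    def __init__(self, service):
        self.service = service

    def handle(self, request):
        op, args = request
        return self.service.operations[op](*args)

class Client:
    """A software entity that exploits services provided by servers."""
    def __init__(self, server):
        self.server = server

    def call(self, op, *args):
        return self.server.handle((op, args))

# One service, one server instance, one client using it.
calc_service = Service()
calc_service.register("add", lambda a, b: a + b)
server = Server(calc_service)
client = Client(server)
print(client.call("add", 2, 3))  # 5
```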