CS 537 Notes, Section #4: Independent
and Cooperating Processes
Independent process: one that is independent of the rest of the universe.
Its state is not shared in any way by any other process.
Deterministic: input state alone determines results.
Reproducible.
Can stop and restart with no bad effects (only time varies). Example: program that
sums the integers from 1 to i (input).
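A minimal sketch of such a program in Java (the class name and the use of a command-line argument are illustrative):

    // Deterministic: the input i alone determines the result.
    // Reproducible: can be stopped and rerun with no ill effects.
    public class Sum {
        public static void main(String[] args) {
            int i = Integer.parseInt(args[0]);   // the input state
            long total = 0;
            for (int k = 1; k <= i; k++) {
                total += k;
            }
            System.out.println(total);           // same i, same output, every time
        }
    }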
There are many different ways in which a collection of independent processes might be
executed on a processor:
Uniprogramming: a single process is run to completion before anything else can
be run on the processor.
Multiprogramming: share one processor among several processes. If no shared
state, then order of dispatching is irrelevant.
Multiprocessing: if multiprogramming works, then it should also be ok to run
processes in parallel on separate processors.
o A given process runs on only one processor at a time.
o A process may run on different processors at different times (move state,
assume processors are identical).
o Cannot distinguish multiprocessing from multiprogramming at a very fine
grain.
How often are processes completely independent of the rest of the universe?
Cooperating processes:
Machine must model the social structures of the people who use it. People
cooperate, so the machine must support that cooperation. Cooperation means
shared state, e.g. a single file system.
Cooperating processes are those that share state. (May or may not actually be
"cooperating")
Behavior is nondeterministic: depends on relative execution sequence and cannot
be predicted a priori.
Behavior is irreproducible.
Example: one process writes "ABC", another writes "CBA". Can get different
outputs, cannot tell what came from which. E.g. which process output the first "C" in
"ABCCBA"? Note the subtle state sharing that occurs here via the terminal. Not
just anything can happen, though. For example, "AABBCC" cannot occur, because
each process's own characters must still appear in its own order.
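A minimal Java sketch of this example, using threads rather than separate processes (the shared state, the output stream, is the same):

    // Each thread prints its own characters in order, but the
    // interleaving between the two threads is unpredictable:
    // "ABCCBA", "ACBBCA", "CABCBA", ... are all possible.
    // "AABBCC" is not: each thread's own order must be preserved.
    public class Interleave {
        static void put(String s) {
            for (char c : s.toCharArray()) {
                System.out.print(c);   // shared state: the terminal
            }
        }
        public static void main(String[] args) throws InterruptedException {
            Thread t1 = new Thread(() -> put("ABC"));
            Thread t2 = new Thread(() -> put("CBA"));
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println();
        }
    }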
When discussing concurrent processes, multiprogramming is as dangerous as
multiprocessing unless you have tight control over the multiprogramming. Also bear in
mind that smart I/O devices are as bad as cooperating processes (they share the memory).
Why permit processes to cooperate?
Want to share resources:
o One computer, many users.
o One file of checking account records, many tellers. What would happen if
each teller had a separate copy of the records? The same money could be
withdrawn many times.
Want to do things faster:
o Read next block while processing current one.
o Divide job into sub-jobs, execute in parallel.
Want to construct systems in modular fashion. (e.g. tbl | eqn | troff)
Reading: Section 2.3.1 in Tanenbaum talks about similar stuff, but uses terms a little
differently.
The basic assumption for cooperating-process systems is that the order of some operations is
irrelevant; certain operations are completely independent of certain other operations.
Only a few orderings matter:
Example: A = 1; B = 2; has the same result as B = 2; A = 1;
Another example: A = B+1; B = 2*B cannot be reordered: with B initially 2, the
order shown gives A = 3 and B = 4, while the reverse order gives B = 4 and A = 5.
Race conditions: Suppose A=1 and A=2 are executed in parallel. We do not know what will
happen; it depends on which one goes fastest. What if they happen at EXACTLY the same
time? We cannot tell anything without more information; on some hardware we could even
end up with A=3!
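The classic illustration of a race is a lost update. A hedged Java sketch (the counter and loop counts are illustrative, and the exact result varies from run to run):

    // count++ is really three steps: read, add, write. Two threads
    // can read the same old value and each write back old+1, so
    // increments are lost and the final value is unpredictable.
    public class Race {
        static int count = 0;   // shared, unsynchronized state
        public static void main(String[] args) throws InterruptedException {
            Runnable bump = () -> {
                for (int i = 0; i < 100_000; i++) count++;
            };
            Thread t1 = new Thread(bump), t2 = new Thread(bump);
            t1.start(); t2.start();
            t1.join(); t2.join();
            System.out.println(count);   // often well under 200000
        }
    }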
Atomic operations: Before we can say ANYTHING about parallel processes, we must
know that some operation is atomic, i.e. that it either happens in its entirety without
interruption, or not at all. It cannot be interrupted in the middle. E.g. suppose that println is
atomic -- what happens in the println("ABC"); println("BCA") example? (Only the two
whole-line orderings are possible, rather than the many character-by-character interleavings.)
References and assignments are atomic in almost all systems. A=B will always
get a good value for B, and will always set a good value for A (not necessarily true
for arrays, records, or even floating-point numbers; see the sketch after this list).
In uniprocessor systems, anything between interrupts is atomic.
If you do not have an atomic operation, you cannot make one. Fortunately, the
hardware folks give us atomic ops.
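On the "not necessarily true for arrays, records, or even floating-point numbers" point: in Java, for instance, the language specification permits writes to a non-volatile long to be split into two 32-bit halves. A sketch that tries to catch such a torn value (whether it ever fires depends on the JVM and hardware; many 64-bit JVMs never tear):

    // JLS 17.7: writes to non-volatile long/double need not be atomic.
    // The writer alternates between all-bits-clear and all-bits-set;
    // a torn read mixes the two halves and matches neither value.
    public class Tear {
        static long x = 0L;   // NOT volatile, so tearing is permitted
        public static void main(String[] args) {
            Thread writer = new Thread(() -> {
                while (true) { x = 0L; x = -1L; }
            });
            writer.setDaemon(true);   // let the JVM exit when main returns
            writer.start();
            for (int i = 0; i < 100_000_000; i++) {
                long v = x;           // may observe half of each write
                if (v != 0L && v != -1L) {
                    System.out.println("torn read: " + Long.toHexString(v));
                    return;
                }
            }
            System.out.println("no tearing observed on this JVM");
        }
    }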
In fact, if there is true concurrency, it is very hard to make a perfect atomic operation;
most of the time we settle for things that only work "most" of the time.
If you have any atomic operation, you can use it to generate higher-level
constructs and make parallel programs work correctly. This is the approach we
will take in this class.
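For example, a single hardware atomic operation such as compare-and-swap is enough to build a lock. A minimal Java sketch using AtomicBoolean.compareAndSet:

    import java.util.concurrent.atomic.AtomicBoolean;

    // A spin lock built from one atomic operation. Wrapping the
    // count++ from the race example above in lock()/unlock() makes
    // its result deterministic again.
    public class SpinLock {
        private final AtomicBoolean held = new AtomicBoolean(false);

        public void lock() {
            // Atomically flip false -> true; spin until we win the race.
            while (!held.compareAndSet(false, true)) {
                Thread.onSpinWait();   // scheduling hint (Java 9+)
            }
        }

        public void unlock() {
            held.set(false);
        }
    }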
Copyright © 2001 Barton P. Miller