
Operating Systems: Centralized and Distributed
Table of contents
1) Centralized Operating Systems
1.1) Concept
1.2) Process Management
1.3) Memory Management
1.4) Device Management
1.5) File Management
2) Distributed Operating Systems
2.1) Concept
2.2) Advantages and Disadvantages
2.3) Design Aspects
2.3.1) Transparency
2.3.2) Flexibility
2.3.3) Reliability
2.3.4) Performance
2.3.5) Scalability
2.4) Communication in distributed systems
2.5) Synchronization in distributed systems
2.6) Processes and processors
2.7) Distributed file systems
2.8) Distributed shared memory
2.9) Distributed Systems Models
2.9.1) Processor Groups
2.9.2) Client-Server
3) Comparison between Centralized and Distributed Operating Systems
4) Examples
4.1) Examples of Centralized Operating Systems
4.1.1) DOS
4.1.2) UNIX
4.1.3) Mac OS
4.2) Examples of Distributed Operating Systems
4.2.1) Amoeba
4.2.2) Mach
4.2.3) Chorus
4.2.4) DCE
Conclusions

1) Centralized Operating Systems


1.1) Concept
A simple definition of a centralized operating system is one that uses the resources of a single computer, that is, its memory, CPU, disk, and peripherals.
As for the hardware, it is usually a costly, high-powered computer with alphanumeric terminals connected directly to it. It often takes the form of a desktop computer, typically with a large monitor, a keyboard, a mouse, and a case housing the processing unit and the other components.
This type of operating system can be found in corporate environments, where there may be multi-user support. Companies, especially older ones, use a powerful mainframe to provide computing capacity to many terminals, or they may deploy numerous minicomputers for the employees who need them in their work. One of the first models of interconnected computers was the centralized model, in which all of the organization's processing took place on a single computer, usually a mainframe, while users worked from simple personal computers.

The problem with this model is that when the processing load increased, the mainframe hardware had to be replaced, which is more expensive than adding client personal computers or servers to extend capacity. The other problem that arose was modern graphical user interfaces, which could significantly increase network traffic and consequently cause the network to collapse.

Another environment where centralized operating systems are found is scientific computing. These systems aim at the efficient execution of applications on supercomputers, computers with computational capabilities far beyond those of commonly available desktop machines. Such machines are typically used for calculations that involve a very large number of complex operations and many factors.

In a home environment, one typically finds a single computer. It runs a centralized operating system because the computer stands alone and does not need to work in parallel with any other computer, since it is not connected to one. Normally these computers have one or two powerful, expensive processors that meet the user's computing needs.

In these systems there is occasional use of the network, for example for file transfers or remote logins. Currently almost all (if not all) operating systems allow file transfer. One can connect to a machine on the same network and access the documents it is willing to share, or vice versa. But this is not a truly transparent transfer, since the user is aware of accessing files stored on a disk different from the one that belongs to their own computer. It is also possible to connect remotely to another computer, as in the case of remote assistance, but these are utilities or add-on functions that a centralized operating system makes possible, not the main objective the system was designed for.
The centralized systems we have today are very well known; it is enough to start with those installed on our own computers, such as Windows, Linux, Mac OS, Unix, etc.
1.2) Process Management
Regarding process management, we can cover three things: communication between processes, synchronization, and scheduling.

To execute a process, memory is allocated to it and it runs on the system's (normally) single processor. This is simpler than in a distributed system, because the local processor will always be chosen; the issue is therefore making optimal use of the processor one has, rather than searching for which processor should execute a process in order to exploit computing opportunities.

Process management in a centralized operating system deals with the mechanisms and
policies for sharing or distributing a processor among various user processes.

Processes can communicate with each other through shared memory, whether shared variables or buffers, or through the tools provided by IPC (interprocess communication) routines. IPC provides a mechanism that allows processes to communicate and synchronize with each other, usually through a low-level message-passing system offered by the underlying network.
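As an illustration (not part of the original text), the following Python sketch shows both styles on a single machine: a shared variable and a message-passing pipe between two processes. The process roles and the message payload are hypothetical.

# Minimal IPC sketch: shared memory vs. message passing (illustrative only).
from multiprocessing import Process, Value, Pipe

def worker(counter, conn):
    # Shared-memory style: increment a variable both processes can see.
    with counter.get_lock():
        counter.value += 1
    # Message-passing style: send a reply through the pipe.
    conn.send("hello from the child process")
    conn.close()

if __name__ == "__main__":
    counter = Value("i", 0)            # shared integer, initially 0
    parent_conn, child_conn = Pipe()   # two-way message channel
    p = Process(target=worker, args=(counter, child_conn))
    p.start()
    print(parent_conn.recv())          # message passing
    p.join()
    print("shared counter =", counter.value)  # shared memory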

There are several models of process synchronization, such as mutual exclusion, semaphores, and message passing. In a centralized operating system it is possible to use shared memory to facilitate synchronization.
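For instance, a counting semaphore that limits concurrent access to a resource can be sketched with Python's threading module (a hypothetical illustration; the thread names and the two-permit limit are arbitrary choices, not from the source):

# Semaphore sketch: at most 2 threads hold the resource at once (illustrative).
import threading, time

slots = threading.Semaphore(2)   # counting semaphore with 2 permits

def use_resource(name):
    with slots:                  # acquire; blocks while both permits are taken
        print(name, "entered the critical section")
        time.sleep(0.1)          # simulate some work
    print(name, "left")

threads = [threading.Thread(target=use_resource, args=(f"t{i}",)) for i in range(4)]
for t in threads: t.start()
for t in threads: t.join()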

The operating system must decide which process to execute among those that are ready to run. The scheduler is the part of the operating system that makes this decision, following a scheduling algorithm. The goals are to improve fairness, efficiency, and response time, to minimize overall processing time, and to maximize the number of jobs processed. The problem is that the behavior of processes is unique and unpredictable. Different scheduling algorithms can be used.
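As one concrete example of such an algorithm (an illustrative sketch, not prescribed by the text), round-robin scheduling gives each ready process a fixed quantum of CPU time in turn; the process names and burst times below are hypothetical:

# Round-robin scheduling simulation (hypothetical processes and times).
from collections import deque

def round_robin(bursts, quantum):
    """bursts maps process name -> remaining CPU time; returns the run order."""
    ready = deque(bursts.items())
    order = []
    while ready:
        name, remaining = ready.popleft()
        run = min(quantum, remaining)
        order.append((name, run))
        if remaining - run > 0:            # not finished: back of the queue
            ready.append((name, remaining - run))
    return order

print(round_robin({"A": 5, "B": 3, "C": 1}, quantum=2))
# [('A', 2), ('B', 2), ('C', 1), ('A', 2), ('B', 1), ('A', 1)]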

1.3) Memory Management

Memory management involves memory allocation, logical-to-physical mapping, virtual memory, and protection mechanisms. In a centralized system, only the memory available on the computer where the system is installed needs to be managed.

This is the part of the system responsible for allocating memory to processes; it tries to distribute memory efficiently so as to fit as many processes as possible. Memory allocation is the process of assigning memory for specific purposes, whether at compile time or at runtime. If done at compile time it is static, if done at runtime it is dynamic, and variables local to a group of statements are called automatic. Generally, memory is divided into two partitions, one for the resident OS and another for user processes. Memory allocated to a process is exclusive to it; that is, it is logically separate from that of any other process in the system.
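To make the idea of allocation concrete, here is a small first-fit allocator simulation over a list of free blocks (a hypothetical sketch; the source does not prescribe any particular placement policy, and the block sizes are invented):

# First-fit allocation sketch over a free list (hypothetical sizes).
free_blocks = [(0, 100), (200, 50), (300, 400)]   # (start address, size)

def first_fit(free, request):
    # Return (address, updated free list), or (None, free) if nothing fits.
    for i, (start, size) in enumerate(free):
        if size >= request:
            rest = [] if size == request else [(start + request, size - request)]
            return start, free[:i] + rest + free[i + 1:]
    return None, free

addr, free_blocks = first_fit(free_blocks, 120)
print(addr, free_blocks)   # 300 [(0, 100), (200, 50), (420, 280)]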

Virtual memory management involves making programs believe there is a large main memory, exploiting locality of access through the memory hierarchy. Furthermore, memory management must meet certain protection requirements, such as ensuring that the code of one process does not reference the memory locations of other processes without permission.

1.4) Device Management

Regarding device handling, we can discuss device drivers, buffering, and spooling. In a centralized system, device management is in charge only of the devices that belong to a single computer.

Device management has to do with drivers and controllers. Drivers are programs that handle the details that depend on specific devices. Controllers are the electronic elements of the I/O unit; they provide an abstraction of what the device does, because one sends a function to the controller and it is responsible for making the device perform the necessary actions.

In a centralized system, the allocation of available resources is managed, as we already know, by the operating system. It also applies the techniques of buffering and spooling. To decouple the operating speeds of the devices from the other elements of the system, and thereby increase performance, it is common to use intermediate storage, or buffering, for both input and output. Spooling consists of keeping a buffer that stores data to be sent to a device that does not allow interleaved operations from different sources.
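Buffering of this kind is usually implemented as a bounded producer/consumer queue. The following sketch (illustrative only, using Python's thread-safe queue) decouples a fast producer from a slow "device"; the block names and delays are hypothetical:

# Bounded buffer between a producer and a slow device (illustrative).
import queue, threading, time

buf = queue.Queue(maxsize=4)    # bounded buffer: producer blocks when full

def producer():
    for i in range(8):
        buf.put(f"block {i}")   # returns immediately unless the buffer is full
    buf.put(None)               # sentinel: no more data

def slow_device():
    while (item := buf.get()) is not None:
        time.sleep(0.05)        # simulate a slow peripheral
        print("wrote", item)

t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=slow_device)
t1.start(); t2.start(); t1.join(); t2.join()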

1.5) File Management

File management must consider file access, file sharing, concurrency control, and data replication. It is the part of the centralized operating system responsible for providing users and applications with services for the use, access, and access control of both files and directories.

The basic objectives of file management are: to ensure that the information in a file is valid, to optimize access to files, to provide I/O support for a wide variety of storage devices, to provide the data requested by the user, to minimize or eliminate potential data loss, to provide a standard set of I/O routines, and to provide I/O support for multiple users.
The basic functions that must be covered are creating a file, deleting a file, and opening and closing a file.
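These four basic operations map directly onto an operating system's file API; in Python, for example (the file name is hypothetical):

# Create, write, read, and delete a file (hypothetical path "notes.txt").
import os

f = open("notes.txt", "w")      # create the file and open it for writing
f.write("hello\n")
f.close()                       # close

with open("notes.txt") as f:    # open for reading (closed automatically)
    print(f.read())

os.remove("notes.txt")          # delete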

2) Distributed Operating Systems

2.1) Concept
According to Tanenbaum, a distributed system is "a collection of independent computers that appear to the users of the system as a single computer."

From this we can understand that the machines are autonomous and that users always perceive the system as a single computer. A distributed system is characterized by behaving toward the user as a single machine: the user does not know which processor is executing their processes or where their files reside.

Let us take an example. First, consider a network of workstations in a department of a company. In addition to each workstation, there could be a pool of processors in the machine room that are not assigned to specific users but are used dynamically as needed. The system might have a single file system, with all files accessible from all machines in the same way and with the same path names. Furthermore, when a user types a command, the system could find the best place to execute it, perhaps on the user's own workstation, on an idle workstation belonging to someone else, or on one of the unassigned processors in the machine room. If the system as a whole looks and acts like a classic single-processor time-sharing system, it can be considered a distributed system.

Another example is that of a huge bank with hundreds of branches around the world. Each branch has a master computer to store local accounts and handle local transactions. In addition, each computer can communicate with the other branches and with a central computer at headquarters. If transactions can be made regardless of where the customer or the account is located, and users do not notice any difference between this system and the old centralized one it replaced, it too can be considered a distributed system.

Network operating systems consist of loosely coupled software on loosely coupled hardware. If it were not for the shared file system, users would see the system as several separate computers, each of which can run its own operating system and do as it pleases. The next step in the evolution is strongly coupled software on loosely coupled hardware: this is the emergence of distributed systems. The goal is to create the illusion in users' minds that the entire network of computers is a single time-sharing system, rather than a collection of distinct machines.
There must be a global interprocess communication mechanism, so that any process can communicate with any other, as well as a global protection scheme. Process management must be the same everywhere: the way processes are created, destroyed, and stopped must not vary from one machine to another. The file system must also look the same everywhere. As a logical consequence of having the same system-call interface everywhere, it is normal for identical kernels to run on all the CPUs of the system. This facilitates the coordination of global activities; for example, when a process starts, all the kernels must cooperate in finding the best place to execute it. Distributed systems rely on reliable, efficient, fast transmission systems that make it possible to integrate equipment from different manufacturers.

2.2) Advantages and Disadvantages

Among the advantages we have:
Economy. In the case of CPUs, for example, one may have to pay double to get the same CPU with only a slightly higher speed, so it is more convenient to use a large number of cheap CPUs gathered in one system. Distributed systems thus have the potential for a much better price/performance ratio than a centralized system.

In addition, certain applications are inherently distributed. A supermarket chain could have many stores, each of which receives goods locally, makes local sales, and makes local decisions about which vegetables are old or rotten and what should be done with them. It therefore makes sense to keep the inventory of each store on a local computer while also keeping it centrally at the company offices; after all, most queries and updates would be made locally. However, from time to time the central administration might want to determine, for instance, how many turnips it has at a given moment. One way to achieve this is to make the entire system look like a single computer to the application programs, while implementing it in a decentralized way with one computer per store, as already described. This would then be a commercial distributed system.

Another advantage over centralized systems is reliability. By distributing the workload over many machines, the failure of one circuit brings down at most one machine, and the rest remain intact. There is also incremental growth: it is not necessary to buy an extremely expensive new mainframe when the company needs more computing power.

Among the disadvantages we have:

- The worst problem is the software: which programming languages to use, which applications are suitable.
- The problem of communication networks. Messages can be lost, which requires special software to manage them, and the network can become overloaded. If the network saturates, it must be replaced or a second one added; in either case this is a great cost.
- The fact that data can be shared can be a double-edged sword, since people can also access data they are not supposed to see. Security is frequently a problem.

2.3) Design Aspects

2.3.1) Transparency
This may be the most important aspect. It consists of making everyone believe that the collection of machines is simply an old-fashioned single-processor time-sharing system.

Transparency can be achieved at two different levels. The easiest is to hide the distribution from users; for them, the only unusual thing is that the system's performance changes noticeably. At a lower level, the system can also be transparent to programs: the system-call interface can be designed so that the existence of multiple processors is not visible.

There are several types of transparency:

- Location transparency: users cannot tell where resources are located.
- Migration transparency: resources can move at will without changing their names.
- Replication transparency: users cannot tell how many copies exist.
- Concurrency transparency: multiple users can share resources automatically.
- Parallelism transparency: activities can occur in parallel without the users' knowledge.

2.3.2) Flexibility
It is important that the system be flexible; together with transparency, this is essential. It is good to keep options open. There are two schools of thought on the structure of distributed systems. One school maintains that each machine must run a traditional kernel that provides most of the services. The other holds that the kernel should provide as little as possible, with the bulk of the operating system services obtained from user-level servers. These two models are known as the monolithic kernel and the microkernel.

2.3.3) Reliability
The idea is that if one machine fails, another takes over its work. Current distributed systems rely on a number of specific servers for the whole to function. Availability refers to the fraction of time during which the system can be used; it can be improved through a design that does not require the simultaneous operation of a substantial number of critical components. Another tool is redundancy: key pieces of hardware and software can be duplicated, so that if one of them fails, the others can fill the gap. Another aspect is fault tolerance. Suppose a server fails and then suddenly restarts: if the server kept tables with important information about ongoing activities, the least that will happen is that recovery will be difficult.

2.3.4) Performance
Building a transparent, flexible, and reliable distributed system is useless if the system is slow. Various performance metrics can be used: response time is one, but so are throughput, system utilization, and the amount of network capacity consumed.
The performance issue is complicated by communication, an essential factor in a distributed system. Sending a message and receiving a response on a LAN is slow, so the number of messages must be minimized. The difficulty with this strategy is that the best way to improve performance is to have many activities running in parallel on different processors, but this requires sending many messages.

One possibility is to pay attention to the grain size of computations. In general, jobs that involve a large number of small computations, particularly ones that interact heavily with each other, can cause problems in distributed systems with slow communication.

2.3.5) Scalability
Most distributed systems are designed to work with a few hundred CPUs. A distributed system may start with a manageable number of users, but over time it may grow into a system with tens of millions of users. The question is whether the methods being developed today can scale to such large systems.

What is clear is that centralized components, tables, and algorithms should be avoided: it is not a good idea to have a single mail server for 50 million users. In addition, such a server would not handle failures well.

2.4) Communication in distributed systems

The main difference between a distributed system and a single-processor system is interprocess communication. In a single-processor system, most interprocess communication implicitly assumes the existence of shared memory: one process writes to a shared buffer and another process reads from it. In a distributed system there is no such shared memory, so the entire nature of interprocess communication must be rethought from scratch.

For relatively slow wide-area distributed systems, connection-oriented layered protocols such as OSI and TCP/IP are used, since the main problem to be solved is the reliable transport of bits over poor physical lines. For LAN-based systems, layered protocols are used very little. Instead, a much simpler model is generally adopted, in which the client sends a message to the server and the server sends a response back to the client. Remote procedure call (RPC) is also widely used: with it, a client process running on one machine calls a procedure that runs on another machine.
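The request/response flavor of RPC can be sketched with Python's standard xmlrpc library (an illustrative example; the host, the port, and the add procedure are hypothetical choices, not part of the source):

# Minimal RPC sketch with the standard library (hypothetical service).
# --- server side ---
from xmlrpc.server import SimpleXMLRPCServer

def add(a, b):
    return a + b               # the remote procedure

server = SimpleXMLRPCServer(("localhost", 8000), allow_none=True)
server.register_function(add)
server.serve_forever()         # handle client calls until killed

# --- client side (run in another process) ---
# from xmlrpc.client import ServerProxy
# proxy = ServerProxy("http://localhost:8000")
# print(proxy.add(2, 3))       # looks like a local call; executes on the server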
2.5) Synchronization in distributed systems
It is important how processes cooperate and synchronize with each other, for example how to implement critical regions or allocate resources in a distributed system.

In single-CPU systems, problems involving critical regions, mutual exclusion, and synchronization are generally solved with methods such as semaphores and monitors. But these are not suitable for distributed systems, since they rely on the existence of shared memory, so other techniques are needed. Most distributed operating systems have a thread package.
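One simple alternative (a sketch of one possible technique, not one the text prescribes) is a centralized coordinator: in Python's multiprocessing, a manager process can hold a lock and grant it to other processes over a connection, so mutual exclusion works without any shared memory:

# Mutual exclusion without shared memory: a manager (coordinator) process
# holds the lock and grants it over IPC (illustrative sketch).
from multiprocessing import Manager, Process

def critical(name, lock):
    with lock:   # the acquire request travels to the manager process
        print(name, "is in the critical section")

if __name__ == "__main__":
    with Manager() as mgr:
        lock = mgr.Lock()      # proxy to a lock living in the manager process
        ps = [Process(target=critical, args=(f"p{i}", lock)) for i in range(3)]
        for p in ps: p.start()
        for p in ps: p.join()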

Process synchronization in distributed systems is more complex than in centralized ones, because information and processing are kept on different nodes. A distributed system must maintain partial, consistent views of all cooperating processes and computations. Such views can be provided by synchronization mechanisms. Synchronization does not have to be exact; it is enough for it to be approximately the same on all computers. What matters, though, is the way the time on a particular clock is updated.
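For example, a node can adjust its clock Cristian-style, taking a server's reported time and compensating for half the measured round-trip delay (an illustrative sketch; get_server_time stands in for a real remote query and is hypothetical):

# Cristian-style clock estimate (get_server_time is a hypothetical stand-in).
import time

def get_server_time():
    return time.time()               # pretend this asks a remote time server

def estimate_reference_time():
    t0 = time.time()
    server_time = get_server_time()  # one request/response round trip
    t1 = time.time()
    # Assume the reply took roughly half the round trip to arrive.
    return server_time + (t1 - t0) / 2

print("estimated reference time:", estimate_reference_time())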

2.6) Processes and processors

Two processor organization models are commonly used: the workstation model and the processor-pool model. In the first, each user has their own workstation and can sometimes run processes on idle workstations. In the second, all the computing capacity is a shared resource: processors are assigned dynamically to users as needed and returned to the pool when the work is finished.

Given a collection of processors, an algorithm is needed to assign processes to processors. Such algorithms can be deterministic or heuristic, centralized or distributed, optimal or suboptimal, local or global, initiated by the sender or by the receiver.

Although processes are generally scheduled independently, performance can be improved through co-scheduling, which ensures that processes that need to communicate execute at the same time. Regarding scheduling in distributed systems, we can say that in general each processor does its own local scheduling without worrying about what the other processors do. There are also several co-scheduling algorithms, which take the communication patterns between processes into account during scheduling to ensure that all members of a group run at the same time.

2.7) Distributed file systems

A fundamental component is the file system; it is the heart of any distributed system. The task of the file system is to store programs and data and to have them available when needed.

The file service is the specification of the services that the file system offers to its clients. A file server is a process that runs on some machine and helps implement the file service. There can be one or several file servers, but clients should not know how many there are, or their location or function. All they know is that they call the procedures specified in the file service and the work gets done somewhere. Ideally, it looks like a normal single-processor file system. There can be several file servers, for example one for UNIX and one for MS-DOS.

There are several models of file-sharing semantics, such as UNIX semantics, session semantics, immutable files, and transaction semantics. UNIX semantics is intuitive and familiar to most programmers, but its implementation is expensive. Session semantics is less deterministic but more efficient. Immutable files are unfamiliar to most people, and transactions tend to be redundant. NFS illustrates how to build a distributed file system.

2.8) Distributed shared memory

There are two types of systems with multiple processors: multiprocessors and multicomputers. In a multiprocessor, several CPUs share a common main memory. In a multicomputer, each CPU has its own private memory; nothing is shared.

Many processors cannot be used with a single shared memory because this creates a bottleneck. In multicomputers, since there is no shared memory, message passing must be used, which makes input/output the central abstraction. Message passing brings with it several delicate issues, such as flow control, message loss, buffer usage, and blocking. Although several solutions have been proposed, message-passing programming is still difficult.

Remote procedure calls were also proposed: to use a remote service, a process simply calls the appropriate library procedure, which packages the operation code and parameters into a message, sends it over the network, and waits for the answer.

Three general techniques are used to provide shared memory. The first simulates the multiprocessor memory model, giving each process its own paged linear memory; pages move from one machine to another as needed. The second uses ordinary programming languages with annotated shared variables. The third is based on high-level programming models, such as tuples and objects.
In a distributed operating system, memory is physically private but logically shared. That is, a computer runs programs in its own memory, but if it needs more memory it uses the available resources of another computer on the network that is equipped and prepared to share its memory.

2.9) Models of Distributed Systems

There are different topologies on which distributed systems are based. They are as follows:
2.9.1) Processor Groups
In this model, when a session is opened the system assigns a processor to the user; it can do so based on load, proximity, or any other parameter set by the designer. A simpler and less ambitious method is to have the user choose the node to connect to, but this loses transparency and can lead to strong imbalances in the system.
The topology is based on several computers connected through a communication network, with users connecting through their personal computers. It is a step in the evolution of distributed systems, but it lacks a transparent and global file system in which the notion of individual nodes disappears.
2.9.2) Client-Server
In this model there are clients, which give users access to services, and servers, which hold information, services, etc. A machine can be a client for one service and a server for another.

It is the best-known and most widely adopted model today. There is a set of server processes, each acting as a resource manager for a collection of resources of a given type, and a collection of client processes, each carrying out a task that requires access to some shared hardware or software resources. Resource managers may in turn need to access shared resources managed by other processes, so some processes are both clients and servers. In the client-server model, all shared resources are held and managed by the server processes.
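The essential request/response loop of the model can be sketched with plain sockets (an illustrative example; the port and the echo behavior are hypothetical):

# Client-server request/response sketch over TCP (hypothetical echo service).
import socket

# --- server side ---
srv = socket.socket()
srv.bind(("localhost", 9000))
srv.listen()
conn, _ = srv.accept()               # wait for one client
request = conn.recv(1024)            # receive the request
conn.sendall(b"echo: " + request)    # manage the "resource" and reply
conn.close(); srv.close()

# --- client side (run in another process) ---
# cli = socket.socket()
# cli.connect(("localhost", 9000))
# cli.sendall(b"hello")
# print(cli.recv(1024))              # b'echo: hello'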

The client-server model gives us an effective, general-purpose approach to the sharing of information and resources in distributed systems. The model can be implemented in a wide variety of software and hardware environments; the computers that run the client and server programs can be of many types, and there is no need to distinguish between them.

The client-server model has been extended and is used in current systems with services that manage many different types of shared resources. But it is not possible for all the resources in a distributed system to be managed and shared in this way; some types of resources must remain local to each computer for the sake of efficiency, such as RAM, the processor, and the local network interface. These key resources are managed separately by the operating system of each computer.
4) Examples
4.1) Examples of Centralized Operating Systems
4.1.1) DOS
MS-DOS was developed to operate individual desktop computers for a single user. When the personal computer market exploded in the 1980s, it was the standard operating system for millions of machines.
It is one of the simplest operating systems to understand. In many ways it is an example of the earliest operating systems, because it manages jobs in sequence from a single user. Its advantages are its straightforward operation and commands that are clear to the user. It has two disadvantages. The first is its lack of flexibility and its limited ability to meet the needs of programmers and expert users. The second comes from its roots: it was written for one family of microprocessors. While those microprocessors dominated the personal computer market, so did MS-DOS; but with the advance of new chips, DOS lost its market advantage to more advanced operating systems.

Its standard input and output support includes a keyboard, monitor, printer, and secondary storage unit. At the lowest level of MS-DOS is the BIOS, which interfaces with the various input and output devices such as printers, keyboards, and monitors. The system has low hardware requirements and little hardware support. It works with text-mode commands made up of hard-to-remember orders, and only one program can run at a time. Later versions added the ability to increase hard disk space, an antivirus, a new version of the data backup and recovery program (which for years had not been improved), the ability to exchange data between computers through a cable, better use of RAM, and other interesting options. DOS is neither multi-user nor multitasking: it cannot work with more than one user or more than one process at a time.
4.1.2) UNIX
Unix is an interactive system designed to handle multiple processes and multiple users at the same time. It was the first operating system written in the C programming language. It was designed by programmers and for programmers, to be used in an environment where most users have relatively advanced knowledge and are engaged in software projects that tend to be complex. Unix has extensive facilities for working collaboratively and sharing information in controlled ways.

It is a universal system, valid for all kinds of computers, large and small. For Unix, everything is a file. It allows file names up to 255 characters long. It has some characteristics of a distributed system, since there is a single name for every file available on the network, independent of the machine; more specifically, the name does not include the name of the machine on which the file is stored.

The kernel is a program written almost entirely in C, with the exception of part of the interrupt handling, which is written in the assembly language of the processor on which it runs.

The kernel makes the computer operate under Unix and allows users to share all resources efficiently. Each user can interact with the operating system through the command interpreter (shell) of their choice, notable among which are the Bourne Shell, the C Shell, the Korn Shell, and the Bourne Again Shell.

Unix uses and manages memory very efficiently, serving processes from free memory. It gives each process exactly the amount of memory it needs, from a few kilobytes to several megabytes. When memory runs out, it uses the swap area, which is virtual memory; this allows a program larger than the total RAM of a Unix server to be executed.

It has the ability to interconnect processes and allows communication between them. Unix uses a hierarchical file system, with protection facilities for files, accounts, and processes.

4.1.3) Mac OS
MacOS is short for Macintosh Operating System.
The original Mac OS was the first operating system with a graphical user interface to achieve success. The Macintosh team included Bill Atkinson, Jef Raskin, and Andy Hertzfeld.

Apple deliberately downplayed the existence of the operating system in the first years of the Macintosh, to help make the machine seem friendlier to the user and to distance it from other systems such as MS-DOS, which were a technical challenge. Apple wanted the Macintosh to be seen as a system that worked as soon as it was turned on. The graphical interface developed by Apple is called Aqua.
This operating system has a kernel from the Unix family; more specifically, it derives from NeXTSTEP, an operating system whose kernel contained code from the Mach kernel and from BSD. Mac OS can also run X11 as a window system, a feature it shares with other Unix systems.
The architecture for which it was originally designed was the PowerPC, that is, RISC-type computers developed by IBM, Motorola, and Apple. In other words, Apple made a specific operating system for hardware that it also helped develop, which gave great stability and efficiency to the system as a whole.
Starting in 2006, Apple began migrating to Intel processors using so-called universal binaries, that is, applications that contain the binary code for both platforms so that they execute transparently on either.
There are several versions: 10.1 Puma, 10.2 Jaguar, 10.3 Panther, 10.4 Tiger, 10.5 Leopard, and 10.6 Snow Leopard, each of which has incorporated improvements.

4.2) Examples of Distributed Operating Systems

4.2.1) Amoeba
Amoeba is a distributed system that makes a collection of CPUs and I/O equipment behave like a single computer. It also provides facilities for parallel programming, if desired.

It originated at the Vrije Universiteit in Amsterdam, Holland, in 1981 as a research project in distributed and parallel computing. It was originally designed by Tanenbaum and three doctoral students, and by 1983 a prototype was operational. The system evolved over several years, acquiring features such as partial UNIX emulation, group communication, and a new low-level protocol.

To avoid the enormous task of writing large amounts of application software, a UNIX emulation package was added. The main goal of the project was to build a transparent distributed operating system: to an ordinary user, it works just like a traditional system.

Amoeba has no concept of a home machine. When a user logs in, they log into the system as a whole, not into a specific machine. The initial shell runs on some arbitrary machine, and in general each command runs on a different machine from the shell: the system looks for the machine with the lowest load to execute the new command. Amoeba is highly location-transparent; all resources belong to the system as a whole and are controlled by it.

Amoeba also seeks to provide a test bed for distributed and parallel programming. Users interested in experimenting with distributed and parallel algorithms, languages, tools, and applications are supported by making the underlying parallelism available to those who want to take advantage of it.

It was designed with two hypotheses about the hardware: systems will have a very large number of CPUs, and each CPU will have tens of megabytes of memory. It is based on the processor-pool model, which consists of several CPUs, each with its own local memory and network connection. Shared memory is not needed, but if present it is used to optimize message transfer by copying from one memory to another instead of sending messages over the network.

When a user types a command, the operating system dynamically assigns one or more processors to execute it. When the command finishes, the processes and resources return to the pool to wait for the next command. The user accesses the system through a terminal, which may also be a personal computer or workstation. The idea is to provide cheap terminals and concentrate the computing cycles in a common pool so as to use them more efficiently.

Amoeba consists of two fundamental parts: a microkernel that runs on each processor, and a collection of servers that provide most of the functionality of a traditional operating system. Three communication mechanisms are supported: simple RPC and FLIP for point-to-point communication, and reliable group communication for communication among groups of parties. The Amoeba file system consists of three servers: the file server for storage, the directory server for assigning names, and the replication server for file replicas.

4.2.2) Mach
This distributed system is based on a microkernel. It is an operating systems design project started at Carnegie Mellon University in 1985. Mach was made compatible with UNIX, in the hope of being able to use the huge amount of software available for UNIX. The first version appeared in 1986.

The objectives of the project have varied over time. They can be summarized as:
- Provide a foundation for the construction of other operating systems
- Support a large, sparse address space
- Allow transparent access to network resources
- Exploit parallelism both in the system and in applications
- Make Mach portable to a larger collection of machines

The idea is to explore multiprocessors and distributed systems while being able to emulate existing systems such as UNIX, MS-DOS, and the Macintosh.

The Mach microkernel was designed to emulate UNIX and other operating systems. This is done through a software layer that runs outside the kernel, in user space. Each emulator consists of a part that is present in the address space of the application programs, plus one or more servers that run independently of the application programs. Multiple emulators can run at the same time.

The Mach kernel, like other microkernels, provides process management, memory management, communication, and I/O services. Files, directories, and other traditional operating system functions are handled in user space. The idea of the kernel is to provide the mechanisms necessary for the system to work, but to leave policy to processes at the user level.

Mach is based on the concepts of processes, threads, ports, and messages. Mach has a very elaborate virtual memory system, with memory objects that can be associated with or disassociated from address spaces, backed by external user-level memory managers. In this way, files can be written to or read from directly.

4.2.3) Chorus
Chorus emerged at the French research institute INRIA in 1980 as a research project in distributed systems. Since then, four versions have appeared, numbered 0 through 3.

Version 0 aimed to model distributed applications as collections of actors, in essence structured processes, each of which alternated between executing an atomic transaction and executing a communication step. Version 1 focused on multiprocessor research. It was written in compiled Pascal rather than interpreted Pascal and was distributed to a dozen universities and companies. Version 2 was a fundamental rewrite in C; it was designed so that system calls were compatible with UNIX at the source-code level. Version 3 marked the transition from a research system to a commercial product. It introduced remote procedure calls as the usual communication model.
To make it a viable product, the UNIX emulation capability was extended so that UNIX programs would run without being recompiled.
The goals of Chorus are:

- High-performance UNIX emulation
- Use on distributed systems
- Real-time applications
- Integration of object-oriented programming into Chorus

Chorus is structured in layers. At the bottom is the microkernel, which provides minimal management of names, processes, threads, memory, and communication. These services are accessed through calls to the microkernel, and the processes in the upper layers provide the rest of the operating system.
Above the microkernel, but also operating in kernel mode, are the kernel processes. These are loaded and removed dynamically while the system runs, and they provide a way to extend the functionality of the microkernel without permanently increasing its size and complexity.

The microkernel and the subsystems together provide three additional constructs: capabilities, protection identifiers, and segments. The first two are used to name and protect subsystem resources. The third is the basis for memory allocation, both within a running process and on disk.

4.2.4) DCE
DCE takes a different point of view. DCE stands for Distributed Computing Environment. Unlike the previous systems, this environment is built on top of existing operating systems.
The environment provided by DCE consists of a large number of tools and services, plus an infrastructure for them to operate in. The tools and services have been chosen so that they work together in an integrated way and make it easier to develop distributed applications. For example, DCE provides a mechanism to synchronize the clocks of different machines and obtain a global notion of time.

DCE runs on many different types of computers, operating systems, and networks. In this way, application developers can easily produce software that is portable across various platforms, amortizing development costs and increasing the size of the potential market.
The underlying programming model throughout DCE is the client/server model. User processes act as clients to access the remote services provided by server processes. DCE supports two facilities that are used intensively, both within DCE itself and in user programs: threads and RPC. Threads allow multiple flows of control to exist within one process. RPC is the basic communication mechanism within DCE; it allows a client process to call a procedure on a remote machine.

DCE supports four main services to which clients have access: time, directory, security, and file services. The directory service stores the names and locations of all kinds of resources and allows clients to look them up. The security service allows clients and servers to authenticate each other and perform authenticated RPC. The file system provides a namespace for all the files in the system.

Conclusions
After completing this work on centralized and distributed operating systems, the following conclusions have been reached:

- The difference between these two types of operating systems lies in the resources each one manages. Centralized operating systems use the resources of a single computer, while distributed operating systems manage the resources of multiple computers.

- Centralized operating systems appeared first, and distributed ones are a few steps further along in the evolution of operating systems. There is a great deal of experience with centralized systems, but there are still issues in distributed systems that require more research.

- The operating systems we usually use at home or at university are centralized, with the ability to connect to the network, since in these environments we are only interested in using the machine in front of us. In other places, such as companies or other universities, there may be interest in putting the computers under a distributed system to provide, for example, transparent use of a large disk or processing capacity.

- The implementation of a distributed operating system is much more complex: having to manage several machines changes the algorithms, the way of scheduling, the way processes communicate, and the synchronization, since in this new environment it is not possible to have complete information about every node, so a feasible way of coping with this situation must be found.

- True parallelism is achieved with distributed operating systems, since they can execute multiple processes on different nodes at the same time, which improves overall system performance. On a single-processor system parallelism was an illusion: the multiprogramming technique only gives the impression that several processes run at the same time. Of course, parallelism can also be achieved with the new multi-core processors.

- While the goal of a centralized operating system is good management of the resources available on the machine, the objective of a distributed system is to create transparency for the user, so that they see the whole set of computers as one and do not notice where the boundary lies between their machine and any other node.

- Distributed systems have several advantages over centralized ones, such as economy, speed, reliability, flexibility, communication, and incremental growth.

Bibliography

Cabello, Díaz and Martínez. (1997). Operating Systems, Theory and Practice. Madrid: Díaz de Santos.
Flynn, Ida and McIver, Ann. (2001). Operating Systems (3rd ed.). Mexico: Thomson.
Rojo, Oscar. (2003). Introduction to Distributed Systems. Retrieved June 6, 2009, from http://www.augcyl.org/
Distributed Operating Systems.
Morera Pascual, Juan and Perez-Campanero, Juan. (2002). Concepts of Operating Systems. Madrid: Pontifical University of Comillas.
