Centralized and Distributed Operating Systems
Table of Contents
1) Centralized Operating Systems
1.1) Concept
1.2) Process Management
1.3) Memory Management
1.4) Device Management
1.5) File Management
2) Distributed Operating Systems
2.1) Concept
2.2) Advantages and Disadvantages
2.3) Design Aspects
2.3.1) Transparency
2.3.2) Flexibility
2.3.3) Reliability
2.3.4) Performance
2.3.5) Scalability
2.4) Communication in Distributed Systems
2.5) Synchronization in Distributed Systems
2.6) Processes and Processors
2.7) Distributed File Systems
2.8) Distributed Shared Memory
2.9) Distributed System Models
2.9.1) Processor Pool
2.9.2) Client-Server
3) Comparison between Centralized and Distributed Operating Systems
4) Examples
4.1) Examples of Centralized Operating Systems
4.1.1) DOS
4.1.2) UNIX
4.1.3) Mac OS
4.2) Examples of Distributed Operating Systems
4.2.1) Amoeba
4.2.2) Mach
4.2.3) Chorus
4.2.4) DCE
Conclusions
1) Centralized Operating Systems
1.1) Concept
The problems with this model are that, when the processing load increased, the mainframe hardware had to be replaced, which is more expensive than adding more client personal computers or servers to increase capacity. The other problem arose with modern graphical user interfaces, which could cause a significant increase in traffic on the network medium and consequently collapse it.
Another environment where centrally architected operating systems are found is in scientific settings. These aim at the efficient execution of applications and the use of supercomputers, computers with computational capabilities far superior to those of commonly available desktop machines. This type of machine is usually used for calculations that involve a large number of complex operations and many factors.
In these systems, the network is used only occasionally, for example for file transfers or remote logins. Currently, almost all (if not all) operating systems allow file transfer: one can connect to a machine on the same network and access the documents it is willing to share, at the user's request, or vice versa. But it is not a truly transparent transfer, since users are aware that they are accessing files stored on a disk different from the one belonging to, that is, forming part of, their own computer. It is also possible to connect remotely to another computer, as in the case of remote assistance, but these are above all utilities or added functions that the centralized operating system allows, not what was sought as the main objective of the system when it was designed.
The centralized systems we have today are very well known; it is enough to start with those installed on our own computers, such as Windows, Linux, Mac OS, Unix, etc.
1.2) Process Management
Regarding process management, we can cover three things: communication between processes, synchronization, and scheduling.
Process management in a centralized operating system deals with the mechanisms and policies for sharing or distributing a processor among various user processes.
Processes can communicate with each other through shared memory, whether shared variables or buffers, or through the tools provided by the IPC (Interprocess Communication) routines. IPC provides a mechanism that allows processes to communicate and synchronize with each other, usually through a low-level message-passing system offered by the underlying network.
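As a concrete illustration of one common IPC mechanism on a UNIX-like centralized system, here is a minimal sketch (assuming a POSIX environment) in which a parent and child process exchange a message through a pipe:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

int main(void) {
    int fd[2];
    if (pipe(fd) == -1) return 1;   /* fd[0]: read end, fd[1]: write end */

    pid_t pid = fork();
    if (pid == 0) {                 /* child: receives the message */
        close(fd[1]);
        char buf[64];
        ssize_t n = read(fd[0], buf, sizeof(buf) - 1);
        if (n > 0) { buf[n] = '\0'; printf("child received: %s\n", buf); }
        close(fd[0]);
    } else {                        /* parent: sends the message */
        close(fd[0]);
        const char *msg = "hello via IPC";
        write(fd[1], msg, strlen(msg));
        close(fd[1]);
        wait(NULL);                 /* reap the child */
    }
    return 0;
}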
The operating system must decide which process to run among those that are ready for execution. The scheduler is the part of the operating system that makes this decision, following a scheduling algorithm. The goals are to improve fairness, efficiency, and response time, to minimize overall processing time, and to maximize the number of jobs processed. The problem is that the behavior of each process is unique and unpredictable. Different scheduling algorithms can be used.
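One classic scheduling algorithm is round robin, in which each ready process receives a fixed time quantum in turn. The following toy simulation, with invented burst times and quantum, shows the idea:

#include <stdio.h>

/* A minimal round-robin scheduling simulation: each process gets a
 * fixed quantum; unfinished processes keep their turn in the cycle. */
#define QUANTUM 3

int main(void) {
    int burst[] = {7, 4, 9};            /* remaining CPU time per process */
    int n = sizeof(burst) / sizeof(burst[0]);
    int remaining = n, clock = 0;

    while (remaining > 0) {
        for (int i = 0; i < n; i++) {
            if (burst[i] <= 0) continue;
            int slice = burst[i] < QUANTUM ? burst[i] : QUANTUM;
            burst[i] -= slice;
            clock += slice;
            printf("t=%2d: ran P%d for %d units%s\n", clock, i, slice,
                   burst[i] == 0 ? " (done)" : "");
            if (burst[i] == 0) remaining--;
        }
    }
    return 0;
}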
1.3) Memory Management
This is the part of the system responsible for allocating memory to processes; it tries to distribute memory efficiently so as to fit as many processes as possible. Memory allocation is the act of assigning memory for specific purposes, whether at compile time or at runtime. If it happens at compile time it is static; if at runtime, it is dynamic; and variables local to a group of statements are called automatic. Generally, memory is divided into two partitions, one for the resident OS and another for user processes. Memory allocated to a process is exclusive to that process; that is, it is logically separated from that of any other process in the system.
Virtual memory management involves making programs believe that there is a large main memory, and it exploits locality of access through the memory hierarchy. In addition, memory management must satisfy certain protection requirements, such as that the code of one process must not reference the memory locations of other processes without permission.
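To make the virtual-to-physical mapping concrete, here is a toy sketch of address translation through a single-level page table; the page size, table size, and mappings are invented for illustration and do not come from any real system:

#include <stdio.h>
#include <stdint.h>

#define PAGE_SIZE 4096
#define NUM_PAGES 16

/* page -> frame; -1 means the page is not mapped (a page fault). */
int page_table[NUM_PAGES] = { 5, 2, 7, -1, -1, -1, -1, -1,
                              -1, -1, -1, -1, -1, -1, -1, -1 };

long translate(uint32_t vaddr) {
    uint32_t page   = vaddr / PAGE_SIZE;
    uint32_t offset = vaddr % PAGE_SIZE;
    if (page >= NUM_PAGES || page_table[page] < 0)
        return -1;                          /* page fault */
    return (long)page_table[page] * PAGE_SIZE + offset;
}

int main(void) {
    printf("vaddr 0x1234 -> paddr %ld\n", translate(0x1234)); /* page 1, mapped */
    printf("vaddr 0x9000 -> paddr %ld\n", translate(0x9000)); /* page 9, fault */
    return 0;
}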
1.5) File Management
When managing files, one must consider file access, file sharing, concurrency control, and data replication. This is the part of the centralized operating system responsible for providing users and applications with services for the use, access, and access control of both files and directories.
The basic objectives of file management are: to guarantee that the information in a file is valid, to optimize access to files, to provide I/O support for a wide variety of storage devices, to supply the data requested by the user, to minimize or eliminate potential data loss, to provide a standard set of I/O routines, and to provide I/O support for multiple users.
The basic functions that must be covered are creating a file, deleting a file, and opening and closing a file.
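As a small illustration of those basic functions, using only the C standard library, the following sketch creates a file, writes to it, reads it back, and deletes it:

#include <stdio.h>

int main(void) {
    FILE *f = fopen("example.txt", "w");    /* create/open for writing */
    if (!f) { perror("fopen"); return 1; }
    fputs("hello, file management\n", f);
    fclose(f);

    char line[128];
    f = fopen("example.txt", "r");          /* reopen for reading */
    if (!f) { perror("fopen"); return 1; }
    if (fgets(line, sizeof(line), f))
        printf("read back: %s", line);
    fclose(f);

    remove("example.txt");                  /* delete the file */
    return 0;
}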
2) Distributed Operating Systems
2.1) Concept
From this we can understand that the machines are autonomous and that users always perceive the system as if it were a single computer. A distributed system is characterized by behaving, in front of the user, as a single machine: the user does not know which processor is executing their processes or where their files reside.
Another example is that of a huge bank with hundreds of branches around the world. Each office has a master computer to store the local accounts and handle the local transactions. In addition, each computer can communicate with the other branches and with a central computer at headquarters. If transactions can be made regardless of where the customer or the account is located, and users notice no difference between this system and the old centralized one it replaced, it too could be considered a distributed system.
2.3) Design Aspects
2.3.1) Transparency
Transparency can be achieved at two different levels. The easiest is to hide the distribution from the users; for them, the only unusual thing they may notice is a change in the system's performance. At a lower level it is also possible for the system to be transparent to programs: the system-call interface can be designed so that the existence of multiple processors is not visible.
2.3.2) Flexibility
It is important that the system be flexible; together with transparency, flexibility is essential. It is good to keep the options open. There are two schools of thought regarding the structure of distributed systems. One school maintains that each machine must run a traditional kernel that provides most of the services. The other holds that the kernel should provide as little as possible, and that the bulk of the operating-system services should come from user-level servers. These two models are known as the monolithic kernel and the microkernel.
2.3.3) Reliability
The idea is that if one machine fails, another takes over its work. In current distributed systems there are a number of specific servers that must all work for the whole to function. Availability refers to the fraction of time during which the system can be used; it can be improved through a design that does not require the simultaneous operation of a substantial number of critical components. Another tool is redundancy: key pieces of hardware and software can be duplicated, so that if one of them fails, the others can fill the gap. Another aspect is fault tolerance. Suppose a server fails and then suddenly restarts: if the server keeps tables with important information about ongoing activities, the least that will happen is that recovery will be difficult.
2.3.4) Performance
Building a transparent, flexible, and reliable distributed system will be useless if the system is slow. Various performance metrics can be used: response time is one, but so are throughput, system utilization, and the amount of network capacity consumed.
The performance issue is complicated by communication, an essential factor in a distributed system. Sending a message and receiving a response over a LAN is slow, so it is necessary to minimize the number of messages. The difficulty with this strategy is that the best way to improve performance is to have many activities running in parallel on different processors, but this requires sending many messages.
One possibility is to pay attention to the grain size of the computations. In general, jobs that involve a large number of small computations, particularly ones that interact heavily with others, can cause problems in distributed systems with slow communication.
2.3.5) Scalability
Most distributed systems are designed to work with a few hundred CPUs. A distributed system may start with a manageable number of users, but in time there may be distributed systems with tens of millions of users. The question is whether the methods being developed today can scale to such large systems.
What is clear is that centralized components, tables, and algorithms should be avoided, since it is not a good idea to have a single mail server for 50 million users. Moreover, such a server would not handle failures well.
2.4) Communication in Distributed Systems
For relatively slow wide-area distributed systems, connection-oriented layered protocols such as OSI and TCP/IP are used, since the main problem to be solved is the reliable transport of bits over poor physical lines. For LAN-based systems, layered protocols are used very little; instead, a much simpler model is generally adopted, in which the client sends a message to the server and the server sends a response back to the client. Remote procedure call (RPC) is also widely used: with it, a client process running on one machine calls a procedure running on another machine.
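The essence of RPC is that a stub turns a local call into a message and back. The following in-process toy sketch mimics that flow; all names are invented, and a real RPC system would carry the message over the network:

#include <stdio.h>
#include <string.h>

/* A toy, in-process sketch of the RPC idea: the client stub packs the
 * call into a message, a "server" dispatcher unpacks it and invokes the
 * real procedure, and the reply travels back the same way. */

struct message { char op[8]; int a, b, result; };

static int add(int a, int b) { return a + b; }      /* the remote procedure */

static void server_dispatch(struct message *m) {    /* server side */
    if (strcmp(m->op, "add") == 0)
        m->result = add(m->a, m->b);
}

static int rpc_add(int a, int b) {                  /* client stub */
    struct message m = { "add", a, b, 0 };          /* marshal arguments */
    server_dispatch(&m);                            /* "send" and "receive" */
    return m.result;                                /* unmarshal the reply */
}

int main(void) {
    printf("rpc_add(2, 3) = %d\n", rpc_add(2, 3));
    return 0;
}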
2.5) Synchronization in Distributed Systems
It is important how processes cooperate and synchronize with each other, for example how critical regions are implemented or how resources are allocated in a distributed system. In single-CPU systems, problems related to critical regions, mutual exclusion, and synchronization are generally solved with methods such as semaphores and monitors. But these are not suitable for distributed systems, since they rely on the existence of shared memory, so other techniques are needed.
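For contrast, here is the shared-memory style of mutual exclusion that works on a single machine but does not carry over to distributed systems: a minimal POSIX-threads sketch (compile with -pthread) in which two threads update a counter inside a critical region:

#include <stdio.h>
#include <pthread.h>

static long counter = 0;
static pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER;

static void *worker(void *arg) {
    (void)arg;
    for (int i = 0; i < 100000; i++) {
        pthread_mutex_lock(&lock);      /* enter critical region */
        counter++;
        pthread_mutex_unlock(&lock);    /* leave critical region */
    }
    return NULL;
}

int main(void) {
    pthread_t t1, t2;
    pthread_create(&t1, NULL, worker, NULL);
    pthread_create(&t2, NULL, worker, NULL);
    pthread_join(t1, NULL);
    pthread_join(t2, NULL);
    /* the mutex relies on shared memory, which distributed systems lack */
    printf("counter = %ld (expected 200000)\n", counter);
    return 0;
}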
2.6) Processes and Processors
Most distributed operating systems have a thread package. Although processes are generally scheduled independently, performance can be improved through co-scheduling, which ensures that processes that need to communicate execute at the same time. About scheduling in distributed systems, we can say that in general each processor does its own local scheduling without worrying about what the other processors do. There are also several co-scheduling algorithms, which take the communication patterns between processes into account during scheduling, to ensure that all members of a group run at the same time.
2.7) Distributed File Systems
The file service is the specification of the services that the file system offers to its clients. The file server is a process that runs on some machine and helps implement the file service. There may be one or several file servers, but clients should not know how many there are, nor their location or function. All they know is that they call the procedures specified in the file service and the work gets done somewhere. Ideally, it looks like a normal single-processor file system. There may be several file servers, for example one for UNIX and one for MS-DOS.
There are several models of semantics, such as UNIX semantics, session semantics, immutable files, and transaction semantics. UNIX semantics is intuitive and familiar to most programmers, but expensive to implement. Session semantics is less deterministic but more efficient. Immutable files are not familiar to most people, and transactions tend to be redundant. NFS illustrates how to build a distributed file system.
2.8) Distributed Shared Memory
Many processors cannot be used with a single shared memory, because it creates a bottleneck. In the case of multicomputers, since there is no shared memory, message passing must be used, which makes input/output the central abstraction. Message passing brings with it several delicate aspects, such as flow control, message loss, buffer usage, and blocking. Although several solutions have been proposed, programming with message passing is still difficult.
Three general techniques are used to provide shared memory. The first simulates the multiprocessor memory model, giving each process its own paged linear memory; pages move from one machine to another as needed. The second uses ordinary programming languages with annotated shared variables. The third is based on high-level programming models, such as tuples and objects.
In a distributed operating system, memory becomes physically private but logically shared. That is, a computer runs programs in its own memory, but if it needs more memory it will use the available resources of another computer on the network that is equipped and prepared to share its memory.
2.9.2) Client-Server
This is the best-known and most widely adopted model today. There is a set of server processes, each acting as a resource manager for a collection of resources of a given type, and a collection of client processes, each carrying out a task that requires access to some shared hardware or software resources. Resource managers may in turn need to access shared resources managed by other processes, so some processes are both clients and servers. In the client-server model, all shared resources are maintained and managed by the server processes.
The client-server model gives us an effective, general-purpose approach to sharing information and resources in distributed systems. The model can be implemented in a wide variety of software and hardware environments, and the computers that run the client and server programs can be of many types, with no need to distinguish between them.
The client-server model has been extended and used in current systems with services that manage many different types of shared resources. But it is not possible for all the resources that exist in a distributed system to be managed and shared in this way; some types of resources must remain local to each computer for the sake of efficiency, such as RAM, the processor, and the local network interface. These key resources are managed separately by the operating system of each computer.
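A minimal sketch of the request/response pattern just described, assuming a POSIX system: the parent process plays the server (the resource manager), the forked child plays the client, and the port number is arbitrary:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <netinet/in.h>
#include <arpa/inet.h>
#include <sys/socket.h>
#include <sys/wait.h>

#define PORT 50007

int main(void) {
    int srv = socket(AF_INET, SOCK_STREAM, 0);
    struct sockaddr_in addr = {0};
    addr.sin_family = AF_INET;
    addr.sin_port = htons(PORT);
    addr.sin_addr.s_addr = htonl(INADDR_LOOPBACK);

    int on = 1;
    setsockopt(srv, SOL_SOCKET, SO_REUSEADDR, &on, sizeof(on));
    bind(srv, (struct sockaddr *)&addr, sizeof(addr));
    listen(srv, 1);

    if (fork() == 0) {                       /* client process */
        int c = socket(AF_INET, SOCK_STREAM, 0);
        connect(c, (struct sockaddr *)&addr, sizeof(addr));
        write(c, "request", 7);              /* send a request */
        char reply[32];
        ssize_t n = read(c, reply, sizeof(reply) - 1);
        if (n > 0) { reply[n] = '\0'; printf("client got: %s\n", reply); }
        close(c);
        _exit(0);
    }

    int conn = accept(srv, NULL, NULL);      /* server handles one client */
    char req[32];
    ssize_t n = read(conn, req, sizeof(req) - 1);
    if (n > 0) write(conn, "response", 8);   /* send the reply */
    close(conn);
    close(srv);
    wait(NULL);
    return 0;
}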
4) Examples
4.1) Examples of Centralized Operating Systems
4.1.1) DOS
MS-DOS was developed to operate individual desktop computers for a single user. When the personal computer market exploded in the 1980s, it was the standard operating system for millions of machines.
It is one of the simplest operating systems to understand. In many ways it is an example of the first operating systems, because it manages jobs in sequence for a single user. Its advantages are its basic operation and its commands, which are clear to the user. It has two disadvantages. The first is its lack of flexibility and its limited ability to meet the needs of programmers and expert users. The second comes from its roots: it was written for one family of microprocessors, and while those microprocessors dominated the personal computer market, so did MS-DOS. But with the advance of new chips, DOS lost its market advantage relative to more advanced operating systems.
Its standard input and output support includes a keyboard, monitor, printer, and secondary storage unit. At the lowest level of MS-DOS is the BIOS, which interfaces with the various input and output devices such as printers, keyboards, and monitors. The system has low hardware requirements and little hardware support. It works with text-mode commands made up of hard-to-remember orders, and only one program can be run at any moment. Later versions added the ability to increase hard-disk space, an antivirus, a new version of the data backup and recovery program (which for years had not been improved), the ability to exchange data between computers through a cable, optimized use of RAM, and other interesting options. DOS is neither multi-user nor multitasking: it cannot work with more than one user or more than one process at a time.
4.1.2) UNIX
Unix is an interactive system designed to handle multiple processes and multiple users at the same time. It was the first operating system written in the C programming language. It was designed by programmers and for programmers, to be used in an environment in which most users have relatively advanced knowledge and are dedicated to software projects that tend to be complex. Unix has extensive facilities for working collaboratively and sharing information in controlled ways.
It is a universal system, valid for all kinds of computers, large and small. For Unix, everything is a file. It allows file names up to 255 characters long. It has some characteristics of a distributed system, since there is a unique name for every file available on the network, and that name is independent of the machine; more specifically, the name does not include the name of the machine on which the file is stored.
The kernel is a program written almost entirely in the C language, with the exception of part of the interrupt handling, which is expressed in the assembly language of the processor on which it operates.
The kernel makes the computer operate under Unix and allows users to share all resources efficiently. Each user can interact with the operating system through the command interpreter (shell) of their choice, among which the Bourne Shell, the C Shell, the Korn Shell, and the Bourne Again Shell stand out.
Unix uses and manages memory very efficiently. It serves processes from free memory, giving each process exactly the amount of memory it needs, from a few kilobytes to several megabytes. When memory runs out, it uses the swap area, which is virtual memory. This allows a program larger than the total RAM of a Unix server to be executed.
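To illustrate what a command interpreter fundamentally does, here is a toy shell sketch for a POSIX system; unlike the real shells named above, it only runs commands without arguments:

#include <stdio.h>
#include <string.h>
#include <unistd.h>
#include <sys/wait.h>

/* A toy command interpreter: read a command, fork a child, exec it,
 * and wait. Real shells add parsing, pipes, job control, and more. */
int main(void) {
    char line[256];
    while (printf("toysh> "), fgets(line, sizeof(line), stdin)) {
        line[strcspn(line, "\n")] = '\0';        /* strip the newline */
        if (strcmp(line, "exit") == 0) break;
        if (line[0] == '\0') continue;

        pid_t pid = fork();
        if (pid == 0) {                          /* child: run the command */
            execlp(line, line, (char *)NULL);    /* no arguments, for brevity */
            perror("exec");
            _exit(127);
        }
        waitpid(pid, NULL, 0);                   /* parent: wait for it */
    }
    return 0;
}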
4.1.3) Mac OS
MacOS is short for Macintosh Operating System.
The original Mac OS was the first operating system with a graphical user interface to achieve success. The Macintosh team included Bill Atkinson, Jef Raskin, and Andy Hertzfeld.
4.2) Examples of Distributed Operating Systems
4.2.1) Amoeba
Amoeba does not have the concept of a home machine. When users log in, they log in to the system as a whole, not to a specific machine. The initial shell runs on some arbitrary machine, and subsequent commands generally do not run on the same machine as the shell; the system looks for the machine with the lowest load to execute each new command. Amoeba is very transparent regarding location: all resources belong to the system as a whole and are controlled by it.
It was designed under two assumptions about the hardware: systems will have a very large number of CPUs, and each CPU will have tens of megabytes of memory. It is based on the processor-pool model, which consists of several CPUs, each with its own local memory and network connection. Shared memory is not needed, but if present it is used to optimize message transfer by copying from one memory to another instead of sending messages over the network.
When a user types a command, the operating system dynamically assigns one or more processors to execute that command. When the command finishes, the processes and resources return to the pool to wait for the next command. The user accesses the system through a terminal, which can also be a personal computer or workstation. The idea is to provide cheap terminals and concentrate the computing cycles in a common pool so as to use them more efficiently.
4.2.2) Mach
This distributed system is based on a microkernel. It is an operating-system design project started at Carnegie Mellon University in 1985. Mach was made compatible with UNIX in the hope of being able to use the huge amount of software available for UNIX. The first version appeared in 1986.
The objectives of this project have varied over time. They can be summarized as:
- Provide a base for the construction of other operating systems
- Support a large, sparse address space
- Allow transparent access to network resources
- Exploit parallelism in both the system and the applications
- Make Mach portable to a larger collection of machines
The idea is to explore multiprocessors and distributed systems while being able to emulate existing systems such as UNIX, MS-DOS, and the Macintosh.
The Mach microkernel was designed to emulate UNIX and other operating systems. This is carried out through a software layer that runs outside the kernel, in user space. Each emulator consists of a part that is present in the address space of the application programs, plus one or more servers that run independently of the application programs. Multiple emulators can run at the same time.
Mach is based on the concepts of processes, threads, ports, and messages. It has a very elaborate virtual memory system, with memory objects that can be associated with or disassociated from address spaces, backed by external user-level memory managers. In this way, files can be read and written directly.
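To give a feel for port-style communication, the following sketch uses a POSIX message queue (on Linux, compile with -lrt) as a stand-in for a Mach port; it is not Mach's actual API, only an illustration of send/receive semantics:

#include <stdio.h>
#include <fcntl.h>
#include <mqueue.h>     /* POSIX message queues, not Mach ports */

int main(void) {
    struct mq_attr attr = { .mq_maxmsg = 4, .mq_msgsize = 64 };
    mqd_t port = mq_open("/demo_port", O_CREAT | O_RDWR, 0600, &attr);
    if (port == (mqd_t)-1) { perror("mq_open"); return 1; }

    mq_send(port, "hello via port", 15, 0);     /* send a message */

    char buf[64];
    ssize_t n = mq_receive(port, buf, sizeof(buf), NULL);
    if (n > 0) printf("received: %s\n", buf);

    mq_close(port);
    mq_unlink("/demo_port");                    /* remove the queue */
    return 0;
}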
4.2.3) Chorus
Chorus emerged at the French research institute INRIA in 1980 as a research project in distributed systems. Since then, four versions have appeared, numbered 0 through 3.
Among its goals is support for real-time applications.
The microkernel and the subsystems together provide three additional constructs: capabilities, protection identifiers, and segments. The first two are used to name and protect subsystem resources; the third is the basis for memory allocation, both within a running process and on disk.
4.2.4) DCE
DCE represents a different point of view. DCE stands for Distributed Computing Environment. Unlike the previous systems, this environment is built on top of existing operating systems.
The environment provided by DCE consists of a large number of tools and services, plus an infrastructure for them to operate in. The tools and services have been chosen so that they work together in an integrated way and facilitate the development of distributed applications. For example, DCE provides a mechanism to synchronize the clocks of different machines and obtain a global notion of time.
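As an illustration of the basic idea behind synchronizing a clock over a network (this is not DCE's actual algorithm or API), a client can ask a time server for the time and compensate for the message delay by assuming it is half the measured round-trip time:

#include <stdio.h>

/* Estimate the current time from a server's reply, compensating for
 * transmission delay with half the round-trip time. */
double sync_clock(double local_send, double local_recv, double server_time) {
    double round_trip = local_recv - local_send;
    return server_time + round_trip / 2.0;   /* estimated current time */
}

int main(void) {
    /* illustrative numbers: request sent at t=100.0, reply received at
     * t=100.4, and the server stamped the reply with its time 103.1 */
    double t = sync_clock(100.0, 100.4, 103.1);
    printf("adjusted local clock to %.2f\n", t);  /* prints 103.30 */
    return 0;
}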
DCE runs on many different types of computers, operating systems, and networks. In this way, application developers can easily produce software that is portable across various platforms, amortizing development costs and increasing the size of the potential market.
The underlying programming model throughout DCE is the client/server model. User processes act as clients to access the remote services provided by server processes. DCE supports two facilities that are used intensively, both within DCE itself and in user programs: threads and RPC. Threads allow multiple flows of control to exist within a process. RPC is the basic communication mechanism within DCE; it allows a client process to call a procedure on a remote machine.
DCE supports four main services to which clients have access: the time, directory, security, and file services. The directory service stores the names and locations of all types of resources and allows clients to search for them. The security service allows clients and servers to authenticate each other and perform authenticated RPC. The file system provides a single namespace for all the files in the system.
Conclusions
After completing this work on centralized and distributed operating systems, the following conclusions have been reached:
- The difference between these two types of operating systems lies in the resources each one manages: centralized operating systems use the resources of a single computer, while distributed operating systems manage the resources of multiple computers.