Organization of Parallel Processing

The document outlines three primary architectures for parallel processing: Symmetric Multiprocessing (SMP), Massively Parallel Processing (MPP), and Non-Uniform Memory Architecture (NUMA). SMP involves multiple processors sharing a single operating system and resources, while MPP consists of independent computers working together to solve problems. NUMA combines features of both SMP and MPP to enhance performance and coordination among nodes, making it suitable for data warehousing.

What are the architectures of Parallel Processing?
There are three basic parallel processing hardware architectures in the server
market: symmetric multiprocessing (SMP), massively parallel processing (MPP),
and non-uniform memory architecture (NUMA).
Symmetric Multiprocessing (SMP)
The SMP architecture is a single machine with multiple processors, all managed
by one operating system and all accessing the same disk and memory. An
SMP machine with 8 to 32 processors, a parallel database, large memory (two or
more gigabytes), good disks, and a good design should perform well with a medium-
sized warehouse.
The database needs to be able to run its processes in parallel, and the data
warehouse processes need to be designed to take advantage of parallel
capabilities. The processors can access shared resources (memory and disk)
rapidly, but the access path to those resources, the backplane, can become a
bottleneck as the system scales.
Since the SMP machine is a single entity, it also has the weakness of being a single
point of failure in the warehouse. To overcome these problems, hardware
companies have come up with techniques that allow several SMP machines to be
linked to each other, or clustered.
In a cluster, each node is an SMP machine that runs its own operating system, but the
cluster includes connections and control software to allow the machines to share
disks and provide fail-over backup. In this case, if one machine fails, others in the
cluster can temporarily take over its processing load. Of course, this benefit comes
at a cost—clustering is extremely complex and can be difficult to manage. The
database technology needed to span clusters is improving.
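As a toy illustration of the shared-memory SMP model described above (not from the original text), the sketch below uses Python threads as stand-in "processors": every worker reads the same in-memory table directly, and nothing is copied between them. The names TABLE and scan_chunk are illustrative.

```python
from concurrent.futures import ThreadPoolExecutor

TABLE = list(range(1_000_000))  # one table held in shared memory

def scan_chunk(lo, hi):
    # every "processor" (thread) reads the same list; nothing is copied
    return sum(TABLE[lo:hi])

n_workers = 4
step = len(TABLE) // n_workers
with ThreadPoolExecutor(max_workers=n_workers) as ex:
    partials = ex.map(scan_chunk,
                      range(0, len(TABLE), step),
                      range(step, len(TABLE) + 1, step))
total = sum(partials)
print(total)
```

Note that in CPython the global interpreter lock limits true CPU parallelism for pure-Python work; the point of the sketch is the access pattern, in which all workers see one shared copy of the data.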
Massively Parallel Processing (MPP)
MPP systems are a collection of relatively independent computers, each with its
own operating system, memory, and disk, all coordinated by passing messages back
and forth. The strength of MPP is the ability to connect hundreds of machine
nodes and apply them to a problem using a brute-force approach.
For example, if you need to do a full-table scan of a large table, spreading that table
across a 100-node MPP system and letting each node scan its 1/100th of the table
should be relatively fast. It’s the computer equivalent of “many hands make light
work.”

Non-Uniform Memory Architecture (NUMA)


NUMA is a hybrid of SMP and MPP that attempts to merge the shared-disk
flexibility of SMP with the parallel speed of MPP. This architecture is a
relatively recent innovation, and it can be viable for data warehousing in the
long run.
NUMA is conceptually similar to the idea of clustering SMP machines, but with
tighter connections, more bandwidth, and greater coordination among nodes. If you
can segment your warehouse into relatively independent usage groups and place
each group on its own node, the NUMA architecture may be effective for you.
The term ‘parallel processing’ is used to represent a large class of
techniques that perform simultaneous data-processing operations for the
purpose of increasing the computational speed of a computer system. A
parallel processing system is capable of concurrent data processing to
achieve faster execution times.
As an example, the next instruction can be read from memory while another
instruction is being executed in the ALU. The system can have two or more
ALUs and be able to execute two or more instructions at the same time. In
addition, two or more processors can be used: computer processing capacity
increases with parallel processing, and with it, the cost of the system
increases. However, technological development has reduced hardware costs to
the point where parallel processing methods are economically feasible.
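The fetch/execute overlap described above can be sketched as a two-stage pipeline, here modeled with two Python threads and a one-slot queue between them (a toy software model, not real hardware; the "instructions" are just tuples invented for this sketch):

```python
import queue
import threading

program = [("add", 2, 3), ("mul", 4, 5), ("add", 1, 1)]
fetched = queue.Queue(maxsize=1)   # one-slot buffer between the two stages
results = []

def fetch_stage():
    for instr in program:          # read the next instruction "from memory"
        fetched.put(instr)
    fetched.put(None)              # end-of-program marker

def execute_stage():
    ops = {"add": lambda a, b: a + b, "mul": lambda a, b: a * b}
    # executes one instruction while the fetch stage reads the next
    while (instr := fetched.get()) is not None:
        op, a, b = instr
        results.append(ops[op](a, b))

t1 = threading.Thread(target=fetch_stage)
t2 = threading.Thread(target=execute_stage)
t1.start(); t2.start()
t1.join(); t2.join()
print(results)  # [5, 20, 2]
```

The one-slot queue is what makes this a pipeline rather than a batch: fetching instruction N+1 proceeds concurrently with executing instruction N.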
Parallel processing can occur at multiple levels of complexity. At the lowest
level, parallel and serial operations are distinguished by the type of
registers used: shift registers work one bit at a time in serial fashion,
while parallel registers work with all bits of the word simultaneously. At
higher levels of complexity, parallel processing derives from having a
plurality of functional units that perform separate or similar operations
simultaneously. Parallel processing is achieved by distributing the data
among several functional units.
As an example, arithmetic, shift, and logic operations can be divided among
three units, and the operands routed to each unit under the supervision of a
control unit.
One possible method of dividing the execution unit into eight functional units
operating in parallel is shown in the figure. Depending on the operation
specified by the instruction, the operands in the registers are transferred to
the unit associated with that operation. The operation performed by each
functional unit is denoted in each block of the diagram. Arithmetic operations
on integer numbers are performed by the adder and the integer multiplier.
Floating-point operations can be divided among three circuits operating in
parallel. Logic, shift, and increment operations are performed concurrently on
different data. All units are independent of each other, so one number can be
shifted while another is being incremented. Generally, a multifunctional
organization is associated with a complex control unit that coordinates all
the activities among the several components.
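A rough software analogy of this multi-unit organization, with a dispatch table acting as the control unit and a thread pool standing in for the independent functional units (the unit names and sample instructions are illustrative only):

```python
from concurrent.futures import ThreadPoolExecutor

# each entry models one functional unit from the figure's description
functional_units = {
    "add":   lambda a, b: a + b,   # integer adder
    "mul":   lambda a, b: a * b,   # integer multiplier
    "shift": lambda a, b: a << b,  # shifter
    "logic": lambda a, b: a & b,   # logic unit
    "incr":  lambda a, _: a + 1,   # incrementer
}

instructions = [("add", 7, 8), ("shift", 1, 4), ("incr", 9, 0), ("logic", 6, 3)]

def dispatch(instr):
    # the "control unit" routes the operands to the matching unit
    op, a, b = instr
    return functional_units[op](a, b)

# independent operations on different data proceed concurrently
with ThreadPoolExecutor(max_workers=len(instructions)) as ex:
    outputs = list(ex.map(dispatch, instructions))
print(outputs)  # [15, 16, 10, 2]
```

Because each instruction here uses a different unit and different data, none of them has to wait on another, which is the point of the multi-unit design.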
The main advantage of parallel processing is that it provides better utilization
of system resources by increasing resource multiplicity, which increases overall
system throughput.
