
Distributed Computer Systems Revision Sheet 2025

1. Define distributed computing systems and mention their basic components?


A DPS or DCS is: a group of independent computers connected via a network, providing the view of being a single system.
Examples of DCS:
1. Internet & WWW
2. ATM machines
3. DBMS
4. Mobiles
A DCS is:
1. A group of autonomous hosts; each host executes components and runs distributed applications.
2. Hosts are geographically separated (over a LAN, WAN, etc.).
3. Hosts are connected via a network.
4. The network is used to transfer messages and mail, and to execute applications.
Advantages:
1. Resource sharing
2. Openness
3. Scalability
4. Concurrency
5. Fault tolerance
Disadvantages:
1. Complexity
2. Manageability
3. Security
4. Unpredictability
2. Compare between centralized and distributed processing systems?

Advantages of Distribution:
1. Technological changes
2. User needs
3. Modularity and simpler software
4. Flexibility and extension
5. Availability and integrity
6. Performance
7. Local control

Advantages of Centralization:
1. Economy of scale
2. More powerful capabilities
3. Operating costs
4. Staff satisfaction
5. Local programming and development
6. Communication subsystem
7. Stretching the state of the art

3. Explain the main motivation for developing distributed computer systems?


1. Development of PCs
2. Development of powerful microprocessors, microcomputers, and minicomputers
3. Development of high-speed networks
4. The need for multimedia for efficient decision making

4. Define DCS computing resources and draw and explain an example of resource sharing.

System resources include: CPU power, CPU working memory, system software, application software, databases, printing, backup, and the network.
The motivation for distributed systems stems from the need to share resources: hardware (disks, laser printers, etc.), software (programs), and data (files, databases, and other data objects). Resource sharing is the ability to use any hardware, software, or data anywhere in the system.
A resource manager controls access, provides a naming scheme, and controls concurrency. A resource-sharing model (e.g. client/server or object-based) describes how resources are provided, how they are used, and how provider and user interact with each other.
Resources in a distributed system are physically encapsulated within computers and can only be accessed from other computers by means of communication. For effective sharing, each resource must be managed by a program that offers a communication interface enabling the resource to be accessed and updated reliably and consistently.
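The resource-manager role described above (access control, a naming scheme, and concurrency control) can be sketched in a few lines. The class and method names below are illustrative assumptions, not from the source:

```python
# A minimal sketch (assumed names) of a resource manager that offers a
# naming scheme, access control, and concurrency control for shared resources.
import threading

class ResourceManager:
    """Manages named resources; serializes concurrent updates with a lock."""
    def __init__(self):
        self._resources = {}             # naming scheme: name -> value
        self._acl = {}                   # access control: name -> allowed users
        self._lock = threading.Lock()    # concurrency control

    def register(self, name, value, allowed_users):
        with self._lock:
            self._resources[name] = value
            self._acl[name] = set(allowed_users)

    def _check(self, user, name):
        if user not in self._acl.get(name, ()):
            raise PermissionError(f"{user} may not access {name}")

    def read(self, user, name):
        with self._lock:
            self._check(user, name)
            return self._resources[name]

    def update(self, user, name, value):
        with self._lock:
            self._check(user, name)
            self._resources[name] = value
```

In a real DCS the manager would sit behind a communication interface (e.g. RPC) rather than being called in-process.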

5. Draw examples of centralized, decentralized, and distributed systems and compare their advantages and disadvantages.
Centralized system
Advantages:
• Easy to physically secure.
• Smooth and elegant personal experience.
• Dedicated resources.
• More cost-efficient for small systems up to a certain limit.
• Quick updates are possible.
• Easy detachment of a node from the system.
Disadvantages:
• Highly dependent on the network connectivity.
• No graceful degradation of the system.
• Less possibility of data backup.
• Difficult server maintenance.

Decentralized system
Advantages:
• Minimal problem of performance bottlenecks occurring.
• High availability.
• More autonomy and control over resources.
Disadvantages:
• Difficult to achieve global big tasks.
• Difficult to know which node failed.
• Difficult to know which node responded.

Distributed system
Advantages:
• Lower latency than a centralized system.
• Concurrency of components.
• Lack of a global clock.
• Independent failure of components.
• Scaling.
Disadvantages:
• Difficult to achieve synchronization.
• The conventional way of logging events by the absolute time at which they occur is not possible here.

6. Explain what is meant by (distribution) transparency, and give examples of different types of transparency.
Transparency hides implementation details from users.
Four main types:
1. Distribution transparency
2. Transaction transparency
3. Performance transparency
4. DBMS transparency
Distribution Transparency is the phenomenon by which distribution aspects in a system
are hidden from users and applications.
Examples include access transparency, location transparency, migration transparency,
relocation transparency, replication transparency, concurrency transparency, failure
transparency, and persistence transparency.

7. What is the role of middleware in a distributed system?


Middleware is a class of software technologies designed to
help manage the complexity and heterogeneity inherent in
distributed systems.
It is defined as a layer of software above the operating system
but below the application program, that provides a common
programming abstraction across a distributed system.

Middleware services:
• Facilities for inter-application communication.
• Security services.
• Accounting services.
• Masking of and recovery from failures.

8. Draw and explain the Enslow model for distributed processing systems?

Enslow's theory: a fully distributed system is:

Fully distributed in control +
Fully distributed in data +
Fully distributed in hardware.

9. Utilize the Cambridge Distributed Processing System as an example to illustrate the concept of security, email, and database research applications in distributed computer systems.
The Cambridge Distributed Computer System:
The system was designed to provide the services usually
associated with a large conventional 'time-sharing' system,
except that it would employ small, self-contained PCs and
terminals for user access, and mid-size servers for those
shared facilities (e.g. disks, printers, large memory and
processing power).
The physical system components:
1. 50 heterogeneous machines.
2. Terminals connected via concentrators.
3. A 1 km Cambridge fiber-optic ring for communication, which provides a train of fixed-size message slots endlessly circulating around the ring.

10. Differentiate between using the PC as a diskless PC, a thin client, a normal PC workstation, and a thick client?

1. Diskless PC: No local storage; relies on network-based storage and booting.


2. Thin Client: Minimal local resources; relies on a server for processing and applications.
3. Normal PC Workstation: Standard personal computer with local storage and processing.
4. Thick Client: Powerful local resources with independent processing capabilities; may still
connect to servers for additional resources.

11. Define the meaning of A4 (Authentication, Authorization, Accounting, and Auditing)?

1. Authentication: Verifies the identity of a user or system using methods like passwords, biometrics, or tokens.
2. Authorization: Determines the permissions and access levels of an authenticated user to
resources and actions.
3. Accounting: Tracks and logs user activities and system operations for monitoring and
compliance.
4. Auditing: Reviews and analyzes logs to ensure policy adherence, detect breaches, and
enhance security.
12. Use a drawing and explain the functions of the Bootstrap Protocol (BOOTP) server for booting a diskless PC?

The Bootstrap Protocol (BOOTP) is a client/server protocol that configures a diskless computer or a computer that is booted for the first time. BOOTP provides: an IP address, a net mask, the address of a default router, and the address of a name server.
The BOOTP server can be on the same network as the BOOTP client or on a different network.
The BOOTP protocol allows a diskless PC to receive an IP address and boot configuration from a BOOTP server. It encapsulates requests within UDP packets, enabling a diskless machine to boot over the network.
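As an illustration of the fixed-format request a BOOTP client broadcasts, the sketch below packs the RFC 951 field layout into a 300-byte message. The helper name is made up, and actually sending the packet over UDP (client port 68 to server port 67) is omitted:

```python
# Illustrative sketch of a BOOTP BOOTREQUEST message, field layout per
# RFC 951: op, htype, hlen, hops, xid, secs, unused, four IP addresses,
# chaddr, sname, boot file name, and the vendor-specific area.
import struct, os

def build_bootrequest(mac: bytes) -> bytes:
    op, htype, hlen, hops = 1, 1, 6, 0          # 1 = BOOTREQUEST, Ethernet
    xid = int.from_bytes(os.urandom(4), "big")  # random transaction id
    secs, unused = 0, 0
    zero_ip = bytes(4)                          # client does not know its IP yet
    return struct.pack(
        "!BBBBIHH4s4s4s4s16s64s128s64s",
        op, htype, hlen, hops, xid, secs, unused,
        zero_ip, zero_ip, zero_ip, zero_ip,     # ciaddr, yiaddr, siaddr, giaddr
        mac.ljust(16, b"\x00"),                 # chaddr: client hardware address
        bytes(64), bytes(128), bytes(64),       # sname, boot file, vendor area
    )

packet = build_bootrequest(b"\xaa\xbb\xcc\xdd\xee\xff")  # 300-byte message
```

The server's BOOTREPLY reuses the same layout with `op = 2` and the offered address filled into `yiaddr`.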

13. Use a drawing and explain the functions of the KDC (Key Distribution Center) authentication server?

Key Distribution Center (KDC) Authentication Server:
The KDC issues secure keys to authenticated users and services, enabling encrypted communication within the DCS. This ensures that only authorized entities access sensitive resources.

14. Use a drawing and explain the functions of the Dynamic Host Configuration Protocol (DHCP)?

DHCP dynamically assigns IP addresses and other network configurations to devices on a network:
• The client sends a request to the DHCP server.
• The server assigns an IP address from its pool.
• The configuration is leased to the client temporarily.
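The lease cycle above can be sketched from the server's side. The pool class and its behaviour are illustrative assumptions, not a real DHCP server:

```python
# Minimal sketch of DHCP-style leasing: addresses come from a pool, are
# leased to a client MAC for a limited time, and are reclaimed on expiry.
import time

class DhcpPool:
    def __init__(self, addresses, lease_seconds):
        self._free = list(addresses)
        self._leases = {}                 # mac -> (ip, expiry time)
        self._ttl = lease_seconds

    def request(self, mac, now=None):
        now = time.time() if now is None else now
        for m, (ip, expiry) in list(self._leases.items()):
            if expiry <= now:             # reclaim expired leases first
                del self._leases[m]
                self._free.append(ip)
        if mac in self._leases:           # renewal keeps the same address
            ip, _ = self._leases[mac]
        else:
            ip = self._free.pop(0)
        self._leases[mac] = (ip, now + self._ttl)
        return ip
```

Real DHCP adds the four-message Discover/Offer/Request/Ack exchange around this allocation step.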

15. Explain the functions of the Network Management System?

Network management is the procedure of maintaining and organizing the active and passive network elements. It supports the services needed to maintain network elements and to monitor and manage network performance.
It recognizes faults, investigates, troubleshoots, and handles configuration management and OS changes to fulfill all the user requirements. It allows computers in a network to communicate with each other, controls networks, and allows troubleshooting or performance enhancements.
SNMP includes the following functions: alarm monitoring, configuration management, fault management, and security management.

16. Utilize the underground monitoring and control in coal mines as an example of sensor data sharing.

The main characteristics of such a system are that it would be very large (hundreds of computer stations and sensors), physically distributed (>= 1 km), and real-time.

17. Draw and explain the sensor networking: organizing a sensor network database, while storing and processing data (a) only at the operator's site or (b) only at the sensors.

Sensed quantities include:
• Temperature, humidity, (vehicular) movement, lightning condition, pressure, soil make-up, noise levels
• Presence or absence of certain kinds of objects
• Mechanical stress levels on attached objects
• Current characteristics such as the speed, direction, and size of an object

18. Utilize the Bank of America as an example of a fully redundant distributed system.

All transactions were logged for overnight processing and database update.
They avoided many of the difficult problems of consistency in replicated databases in an environment of concurrent, real-time access and update.
The Bank of America system is implemented using IBM general-purpose systems to support transaction processing between autonomous computer systems.
The DBMS is IMS (Information Management System), which coordinates transactions dealing with database enquiries and updates, and integrates with SNA (Systems Network Architecture), IBM's products for inter-program communication.

19. Define virtualization, its importance, and explain its main types?

Virtualization is the ability to run multiple OSs on a physical system and virtually share the hardware, software, storage, backup, printing, and network resources.
Virtualization is a technique for separating a service from the underlying physical delivery of that service. It is the process of creating a virtual version of something, such as computer hardware.
With the help of virtualization, multiple operating systems and applications can run on the same machine and the same hardware at the same time, increasing the utilization and flexibility of the hardware.
Virtualization is one of the main cost-effective, hardware-reducing, and energy-saving techniques used by cloud providers.
Main Types of Virtualization:
1. Application Virtualization: Enables remote access to applications stored on a server, allowing them to run on a local device via the internet without installation.
2. Network Virtualization: Creates multiple virtual networks over a physical network, enabling isolated, flexible, and efficient network management.
3. Desktop Virtualization: Stores the user's OS remotely on a server, allowing virtual desktop access from any device, enhancing mobility and management.
4. Storage Virtualization: Pools physical storage from multiple devices into a single virtual storage system managed centrally for efficient use.
5. Server Virtualization: Divides a physical server into multiple virtual servers, each running independently, to enhance performance and reduce costs.
6. Data Virtualization: Integrates data from various sources into a unified virtual view, enabling remote access through cloud services without concern for technical details.

20. Explain the difference between horizontal, vertical, and functional data processing
systems?

Vertical Topology:
There are three levels of processing: at the host, at the remote site, and in intelligent terminals.
1. Transactions enter and leave at the lowest levels.
2. Processing not possible at the intelligent terminal is passed to the remote-site processor.
3. Processing not possible at the remote site is passed to the host or primary site.
4. Each level may support its own database, which may or may not be shared with other levels.

Horizontal Topology:
Horizontal topology is characterized by peer (equal)
relationships among nodes.
Every node in the network can communicate with
every other node without having to consult a central
node.

Functional Topology:
This topology involves the separation of processing
functions into separate nodes.
Typical functional separations are:
1. Database processing,
2. Backup processing,
3. Application processing,
4. Communication processing, and
5. Transaction processing.

21. Draw and explain the basic components of distributed system architecture?

1. Presentation Logic (PM/PL): responsible for formatting and presenting data on the user's screen (or other output device) and managing user input from the keyboard (or other input device).
2. Business Logic (BL): handles data processing logic (validation and identification of processing errors), business rules logic, and data management logic (identifies the data necessary for processing the transaction or query).
3. Data Management/Storage (DM): responsible for data storage and retrieval from the physical storage devices – DBMS activities occur here.
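The three layers can be illustrated with a toy price-query transaction. All names, prices, and the bulk-discount rule below are invented for the example; in a real system each layer could run on a different machine:

```python
# PL formats input/output, BL validates and applies a business rule,
# DM retrieves data from an in-memory stand-in for a database.

PRICES = {"widget": 4.0}                    # DM: data storage stand-in

def data_layer(item):                       # Data Management/Storage (DM)
    return PRICES[item]

def business_layer(item, qty):              # Business Logic (BL)
    if qty <= 0:
        raise ValueError("quantity must be positive")   # validation
    total = data_layer(item) * qty
    return total * (0.9 if qty >= 10 else 1.0)          # bulk-discount rule

def presentation_layer(raw_item, raw_qty):  # Presentation Logic (PM/PL)
    total = business_layer(raw_item.strip().lower(), int(raw_qty))
    return f"Total: ${total:.2f}"           # format for the user's screen
```

Note that the PL never touches storage directly: every request flows PL -> BL -> DM and back.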

22. Compare between centralized, Partitioned, and replicated database processing
systems, mention the main problem that faces the development of distributed data
processing systems?

The main problems that face the development of distributed data processing systems:
1) Data Recovery: Solution – transaction management must track each transaction to ensure its correct termination.
2) Concurrency Control: Solution – control the access of several transactions to the same data.
3) Deadlock/Lockup Handling: Solution – prevent transactions from waiting for each other.
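One standard way to realise the deadlock-handling point above is to acquire the locks a transaction needs in a single global order, so no circular wait can form. This is a sketch with invented lock names, not the source's own scheme:

```python
# Deadlock prevention by lock ordering: every transaction acquires the locks
# of the data items it needs in sorted-name order, ruling out circular waits.
import threading

locks = {"accounts": threading.Lock(), "audit_log": threading.Lock()}

def run_transaction(needed_items, work):
    ordered = sorted(needed_items)          # global order prevents circular wait
    for name in ordered:
        locks[name].acquire()
    try:
        return work()
    finally:
        for name in reversed(ordered):      # release in reverse order
            locks[name].release()
```

Two transactions that name the same items in different orders still lock them in the same order, so neither can hold one lock while waiting forever for the other.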
23. What are the different tiers in 3-tier architecture? Discuss the functions of these tiers?

To improve performance, the three-tier architecture adds another server layer, either a middleware server or an application server.
The three-tier layers are:
1. PC clients
2. A back-end database server
3. Either a middleware or an application server
Although the three-tier architecture addresses the performance degradation of the two-tier architecture, it does not address division-of-processing concerns.
The additional server software can reside on a separate computer. Alternatively, the additional server software can be distributed between the database server and the PC clients.


24. Define a data center and mention its Key characteristics?
A data center is a "hardened" facility dedicated to providing uninterrupted service to business-critical data processing operations.
A data center (sometimes called a server farm) is a centralized repository for the storage, management, and dissemination of data and information.
Key characteristics of a data center include:
1. Power and Cooling Systems: Redundant power sources, backup generators, and
advanced cooling systems to prevent overheating.
2. Security: Physical and cyber security measures to protect data and systems from
unauthorized access.
3. Redundancy and Reliability: Built-in redundancy for high availability and minimal
downtime.
4. Scalability: Ability to expand capacity as data and computing needs grow.
5. Network Connectivity: High-speed networking for fast data transfer within the data
center and to external networks.

25. What are the requirements of a modern data center?


Key requirements of classical data centers:
1. Availability: All data center elements should be designed to ensure accessibility. The
inability of users to access data can have a negative impact on a business.
2. Performance: All the elements of the data center should provide optimal performance
and service all processing requests at high speed.
3. Scalability / Flexibility: Data center operations should be able to allocate additional
processing capabilities or storage space on demand, without interrupting business
operations.
4. Security / Data integrity: It is important to establish policies, procedures, and proper
integration of the data center elements to prevent unauthorized access to information.
5. Manageability: Manageability can be achieved through automation and reduction of
manual intervention in common tasks.

26. Mention examples of Data Centers components?

Data centers consist of several essential components that support their functionality, security, and efficiency. Key examples include:

1. Servers: The core computing units, where data processing and applications are hosted.
2. Storage Systems: Devices such as hard drives, SSDs, or storage arrays used to store large volumes of data securely.
3. Networking Equipment: Routers, switches, firewalls, and load balancers that enable connectivity and secure data flow.

4. Power Systems: Includes uninterruptible power supplies (UPS), backup generators,
power distribution units (PDUs), and power cables, ensuring continuous power supply.
5. Cooling and HVAC Systems: Air conditioning, fans, and other cooling mechanisms to
maintain optimal temperatures and prevent equipment from overheating.
6. Racks and Cabinets: Physical frameworks for mounting and organizing servers and
other hardware.
7. Cabling: Fiber optic and Ethernet cables for internal and external data transfer.
8. Monitoring and Management Tools: Software and hardware to monitor system
performance, energy usage, and physical conditions (e.g., temperature, humidity).
9. Security Systems: Physical security elements like surveillance cameras, access control (biometric scanners, key cards), and fire suppression systems.
10. Environmental Controls: Systems that monitor and adjust environmental factors like temperature, humidity, and airflow.
These components work together to ensure data center performance, scalability, and security

27. Define Cloud Computing and explain Cloud Computing types?


Cloud Computing is a type of computing that delivers convenient, on-demand, pay-as-you-go access for multiple customers to a shared pool of configurable computing resources.
The cloud is a pool of virtualized computer resources.

28. Define cloud computing systems, their basic layered architecture, and their basic
service models?

SaaS: Deliver software applications over the Internet, on demand.

PaaS: Get an on-demand environment for the development, testing, and management of software applications, servers, storage, network, OS, databases, etc.

IaaS: Rent IT infrastructure – servers and virtual machines (VMs), storage, network, firewall, and security.

29. What is the importance of virtualization of data centers?

Virtualization: refers to the act of creating a virtual (rather than actual) version of
something, including (but not limited to) a virtual computer hardware platform, operating
system (OS), storage device, or computer network resources.
1. Hypervisor/Virtual Machine Monitor
2. Host Operating System
3. Guest Operating System
4. Every Virtual Machine is given a set of virtual hardware.
5. Involves many software and hardware architectural modifications (memory management, CPU management).

30. Define computer clusters and mention its advantages and basic types?

A computer cluster is a group of linked computers working together so closely that in many respects they form a single computer.
A cluster is a type of parallel and distributed computing system which consists of a collection of interconnected computing resources working together as a single integrated computing resource (CPU power, CPU working memory, systems software, applications software, database, backup, printing, and network).
Advantages of cluster computing:
1. High Performance: Enhances computational power for complex tasks.
2. Scalability: Easily adds nodes to handle growing workloads.
3. Fault Tolerance: Ensures system reliability and minimizes downtime.
4. Cost-Effective: Uses commodity hardware to achieve powerful performance.
5. Resource Sharing: Allows efficient utilization of computing resources.

Types of Clustered computers:
1. High Availability Cluster
2. Load Balancing Cluster
3. High Performance Computing (HPC) Cluster
4. Fault Tolerant Clusters

31. Explain how does a computer cluster work?

A cluster typically has one or two head nodes and more computing nodes. The head node is where users log in, compile code, assign tasks, coordinate jobs, and monitor traffic across all nodes.
The computing nodes handle the performance computing. They execute tasks, follow instructions, and function collectively as a powerful single system.
Tasks automatically move from the head node to the computing nodes, and scheduling tools can help with workload scheduling.
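The head-node/compute-node division above can be sketched as a small dispatcher: the head node queues submitted jobs and hands each one to the next idle compute node. Node and job names are made up for illustration:

```python
# Sketch of a cluster head node: it queues jobs and dispatches each to the
# next idle compute node, recording assignments for monitoring.
from collections import deque

class HeadNode:
    def __init__(self, compute_nodes):
        self._idle = deque(compute_nodes)
        self._queue = deque()
        self.assignments = []            # (job, node) pairs, for monitoring

    def submit(self, job):
        self._queue.append(job)
        self._dispatch()

    def node_finished(self, node):       # a compute node reports back as idle
        self._idle.append(node)
        self._dispatch()

    def _dispatch(self):
        while self._queue and self._idle:
            job, node = self._queue.popleft(), self._idle.popleft()
            self.assignments.append((job, node))
```

Production workload managers (e.g. Slurm) add priorities, resource matching, and fault handling on top of this basic queue-and-dispatch loop.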

32. What are the basic computer cluster types?

• High-Performance Computing (HPC) Clusters: Used for tasks requiring immense computational power, such as scientific simulations, big data analytics, and AI training, enabling faster processing and efficiency.
• High-Availability (HA) Clusters: Designed to ensure continuous service by seamlessly transferring workloads to other nodes during failures, preventing downtime for critical applications.
• Load-Balancing Clusters: Optimize performance by evenly distributing traffic and workloads across multiple nodes, ensuring scalability and preventing server overload during peak demand.

33. Explain High performance (HP) clusters

High Performance (HP) Clusters enhance performance by splitting tasks across multiple nodes, making them ideal for scientific computing, large-scale problems, and time-sensitive solutions.
They act as compute farms, managing resources, queuing jobs, and executing them efficiently. Users can run numerous jobs simultaneously or sequentially, depending on resource availability.
These clusters are commonly used for repetitive computations with varying parameters, often requiring high-speed file system access. Enterprises now adopt large-scale, shared HPC clusters for broader accessibility and optimized throughput.

34. Explain Load-balancing clusters

Load-balancing Clusters distribute user requests across multiple nodes running the same programs or hosting the same content, ensuring optimal performance and fault tolerance. If a node fails, requests are seamlessly redirected to other nodes, preventing downtime. Implementations can be software-based, using algorithms like round-robin or server affinity, or hardware-based, with integrated load-balancing devices. These clusters are widely used in web hosting to handle high traffic efficiently and maintain reliable service.
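A software round-robin balancer with failover, as described above, might look like the following sketch (node names invented): requests cycle through the nodes, and a node marked down is skipped so its traffic is redirected to the survivors.

```python
# Sketch of software round-robin load balancing with failover: unhealthy
# nodes are skipped, so their requests are redirected to remaining nodes.
import itertools

class RoundRobinBalancer:
    def __init__(self, nodes):
        self._cycle = itertools.cycle(nodes)
        self._down = set()
        self._count = len(nodes)

    def mark_down(self, node):
        self._down.add(node)

    def route(self, request):
        for _ in range(self._count):      # try each node at most once
            node = next(self._cycle)
            if node not in self._down:
                return node, request
        raise RuntimeError("no healthy nodes")
```

Server affinity would add a mapping from a client identifier to a preferred node on top of this rotation.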

35. Explain High Availability (HA) Clusters?

High Availability (HA) Clusters ensure continuous availability of services by automatically failing over applications to another node if one fails. Nodes can be taken offline for maintenance without disrupting service, though performance may temporarily decrease. These clusters are ideal for mission-critical applications like databases, mail, and web servers.
The high-availability and load-balancing cluster technologies can be combined to increase the reliability, availability, and scalability of application and data resources that are widely deployed for web, mail, news, or FTP services.

36. What are the functions of the Presentation Tier, Application Tier, and Data and
Database Tier?
1. Presentation Logic (PM/PL) Tier: responsible for formatting and presenting data on the user's screen (or other output device) and managing user input from the keyboard (or other input device).
2. (Application) Business Logic (BL) Tier: handles data processing logic (validation and identification of processing errors), business rules logic, and data management logic (identifies the data necessary for processing the transaction or query).
3. Data Management/Storage (DM) Tier: responsible for data storage and retrieval from the physical storage devices – DBMS activities occur here.

37. Explain the 2-tier and 3-tier architectures for data centers and compare between them?

Two-Tier Architecture: A two-tier architecture includes a line of portal web and mail servers that provide customers with a web-based interface, and a back-end line of servers or databases that hold data and process the requests. Either the two tiers are within a DMZ, or the back-end database is protected by another firewall. Two-tier architecture consists of a server farm and back-end databases.

Three-Tier Architecture: In the three-tier architecture, the first line consists of a server farm that presents portal web pages to customers and accepts requests. The farm is usually clustered and redundant, to enable it to handle a heavy load of connections and also balance that load between servers. The back-end tier is basically the same as in the two-tier setup, with database(s) or host systems; this is where sensitive customer information is held and maintained. The middle tier, absent in the two-tier setup, provides the most interesting functionality: in many cases, this is where the business logic lives and the actual processing of data and requests happens.

38. Differentiate between the different types of Storage media?

                       DAS                 NAS                  SAN
Storage Type           sectors             shared files         blocks
Data Transmission      IDE/SCSI            TCP/IP, Ethernet     Fibre Channel
Access Mode            clients or servers  clients or servers   servers
Capacity (bytes)       10^9                10^9 - 10^12         more than 10^12
Complexity             Easy                Moderate             Difficult
Mgmt Cost (per GB)     High                Moderate             Low

39. Explain the different types of storage technologies used with modern application and database servers? And compare between the NAS and SAN?

See the comparison in the previous question (38).

40. Write a short account on DAS (Direct Attached Storage), NAS Network Associated
Storage, and Storage Area Network SAN?

DAS (Direct Attached Storage):
DAS connects directly to a server via internal or adjacent external drives, offering low startup costs and simplicity for small-to-medium businesses. Storage is dedicated to a specific server, managed individually, and connected using technologies like PCI-based RAID controllers and SAS. While cost-effective, DAS has limited expansion capabilities, short cable ranges, and potential single points of failure due to reliance on the server's RAID controller.

NAS (Network Attached Storage):
NAS uses a dedicated server or appliance to share storage across multiple clients via an existing Ethernet network, making it cost-effective for small-to-medium businesses. It employs file-level transfers using protocols like NFS and CIFS and can implement storage as iSCSI targets. NAS is easy to deploy, leverages existing infrastructure, and supports multiple operating systems, but it may have lower performance compared to block-level storage solutions like SAN.

SAN (Storage Area Network):
SANs are high-performance, scalable storage networks designed for medium-to-large businesses, requiring dedicated infrastructure like SAN switches, fiber cables, and HBAs. They provide block-level storage with centralized management, redundancy, and high availability. SANs are ideal for environments needing high throughput and shared storage for multiple servers, but they come with high initial costs and greater management complexity.

41. What is the Difference between SAN and NAS?

The basic difference between SAN and NAS: SAN is fabric-based and NAS is Ethernet-based.
SAN (Storage Area Network): accesses data at the block level and presents space to the host in the form of a disk.
NAS (Network Attached Storage): accesses data at the file level and presents space to the host in the form of a shared network folder.

42. What Are the Advantages of Raid?

1. Large Storage: RAID arrays combine multiple disks, providing more storage space than a
single drive. Additional drives can be added for convenient scalability.
2. Fault Tolerance: Most RAID levels offer data redundancy through parity, ensuring system
resilience against drive failures. While not a complete substitute for backups, RAID
enhances reliability.
3. Continuous System Running: RAID allows systems to keep running even if a drive fails,
providing users time to replace the faulty disk without immediate downtime.
4. Parity Check: Modern RAID systems include parity checks to detect potential issues,
helping prevent system crashes and allowing proactive maintenance.
5. Fast Speed: RAID enhances performance by enabling simultaneous data read/write operations, significantly improving data transmission rates compared to single drives.

43. Draw a diagram of RAID 10 (striping and mirroring) and explain its applications?

1. High-Performance Requirements: RAID 10 combines the performance benefits of RAID 0 (striping) with the redundancy of RAID 1 (mirroring). This makes it ideal for applications where both speed and fault tolerance are crucial.
2. Data Redundancy: With mirroring, RAID 10
ensures that every piece of data is stored on
two drives, providing robust data redundancy.
This makes it suitable for critical systems that
cannot afford data loss.
3. Database Servers: Database servers need high read and write speeds to handle
numerous simultaneous transactions, as well as data redundancy to prevent data loss in the
event of hardware failure. RAID 10 ful lls both requirements effectively.
4. Financial Systems: Financial applications demand not only fast access to data but also
high reliability to ensure the integrity of transaction records. RAID 10 offers the performance
and fault tolerance needed in such environments.
5. Video Editing and Streaming: Video editing and streaming require high throughput to
handle large video files efficiently and continuously. RAID 10's performance and
redundancy ensure smooth operation and protection against drive failures.
6. Virtualization Environments: Virtualized environments often involve high I/O workloads
and demand a balance of performance and redundancy. RAID 10 supports this by providing
fast disk access and minimizing downtime risks, making it ideal for hosting virtual machines.

44. Draw a diagram of RAID 5 and explain its applications?

RAID 5: This level is based on block-level striping with parity.


• The parity information is striped across each drive,
allowing the array to function even if one drive were to fail.
• The array's architecture allows read and write operations
to span multiple drives.
• This results in performance that is usually better than that
of a single drive, but not as high as that of a RAID 0 array.
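RAID 5's single-failure tolerance comes from XOR parity: the parity block is the XOR of the data blocks, so any one lost block can be recomputed from the survivors. A minimal Python sketch (the block contents are illustrative):

```python
from functools import reduce

def parity(blocks):
    """XOR equal-sized data blocks byte-by-byte to form the parity block."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

def rebuild(surviving, parity_block):
    """Recover a lost block: XOR the surviving blocks with the parity."""
    return parity(surviving + [parity_block])

d1, d2, d3 = b"AAAA", b"BBBB", b"CCCC"   # data striped across three drives
p = parity([d1, d2, d3])                 # parity stored on a fourth drive

# If the drive holding d2 fails, its contents are recomputed:
assert rebuild([d1, d3], p) == d2
```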

45. Explain the components of Intelligent storage system and mention its advantage?

An intelligent storage system consists of four key components: Front-end, Cache,
Back-end, and Physical Disks.
• An I/O request received from the compute system at the
front-end port is processed through cache and the back
end to enable storage and retrieval of data from the
physical disk.
• A read request can be serviced directly from cache if the
requested data is found in cache.
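The read path above can be sketched in Python, with plain dicts standing in for the cache and the back-end/physical disks:

```python
def read(block_id, cache, back_end):
    """Serve a read from cache on a hit; on a miss, fetch the block
    through the back end from the physical disks and cache it."""
    if block_id in cache:            # cache hit: no disk access needed
        return cache[block_id]
    data = back_end[block_id]        # cache miss: go to the physical disk
    cache[block_id] = data           # populate cache for future reads
    return data

disks = {7: b"payload"}
cache = {}
read(7, cache, disks)                # miss: fetched from disk, now cached
assert 7 in cache
assert read(7, cache, disks) == b"payload"   # hit: served from cache
```

The advantage this models is reduced latency: repeated reads of hot data never touch the mechanical disks.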

46. Explain the advantages of fault-tolerant systems?

Fault Tolerance: is defined as the ability of the system to function properly even in the
presence of any failure.
Fault tolerance in DCS is the capability to continue operating smoothly despite failures or
errors in one or more of its components.
Fault tolerance is crucial for maintaining system reliability, availability, and consistency.
Additional benefits of improving fault tolerance across your organization may include:
• Increased reliability: By reducing the likelihood and potential impact of system failures, fault
tolerance boosts the reliability of your assets.
• Reduced downtime: Automated fault detection and recovery systems ensure that backup
resources can be used to reduce unexpected downtime and minimize its direct and indirect
costs.
• More secure data: Fault-tolerant systems can eliminate the risk of critical data loss or
corruption by storing crucial information in backup locations and responding in the event of
data breaches or hardware failures.
• Enhanced performance: Ensuring workloads are distributed for maximum efficiency,
fault-tolerant systems can reduce bottlenecks to improve overall system performance.
• Fault-tolerant systems contribute to a more resilient organization and play an important role
in business continuity as a whole.

47. Define: Failure, Error, and Fault?

• Fault: A defect or flaw in the system's design, code, or hardware.
• Error: An incorrect state within the system caused by a fault during execution.
• Failure: A deviation of the system from expected behavior, observable by the user.

48. Define: Reliability (T), availability, Mean time between failure (MTBF), and Mean
Time to Repair (MTTR)?

Reliability: Reliability is defined as the property where the system can work continuously
without any failure for the duration (0, T) under normal operating conditions.
Availability: Availability of a module is the percentage of time the system is operational.
For a hardware/software module it can be obtained from:
Availability = MTBF / (MTBF + MTTR)

MTTR: Mean Time to Repair (MTTR), is the time taken to repair a failed hardware module.
In an operational system, repair generally means replacing the hardware module.
MTBF: Mean Time Between Failures (MTBF), as the name suggests, is the average time
between failure of hardware modules. It is the average time a manufacturer estimates before a
failure occurs in a hardware module.

49. Consider a static webpage during an observation window of 24 hours. The service
sustains 3 periods of downtime: the first outage takes 15 minutes to repair, the second
lasts 30 min, and the third 1 hr. Calculate the MTTR, MTBF, and the availability of the
site.
Observation Period = 24h = 1440 min
DownTimes: D1= 15min, D2 = 30min, D3 = 1h = 60min
Total Down Time = 15 + 30 + 60 = 105 min

MTTR = Down Time / No. of Failures
     = 105 / 3 = 35 min
MTBF = (Observation Period - Down Time) / No. of Failures
= (1440-105)/3 = 445min
Availability = MTBF / (MTBF + MTTR)
= 445/(445+35) = 0.927 = 92.7%
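The worked example above can be checked with a short Python helper (the function name is illustrative):

```python
def service_stats(observation_min, downtimes_min):
    """MTTR, MTBF, and availability over one observation window."""
    down = sum(downtimes_min)
    n = len(downtimes_min)
    mttr = down / n                          # mean time to repair
    mtbf = (observation_min - down) / n      # mean time between failures
    return mttr, mtbf, mtbf / (mtbf + mttr)

mttr, mtbf, avail = service_stats(24 * 60, [15, 30, 60])
print(mttr, mtbf, round(avail, 3))  # 35.0 445.0 0.927
```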

50. Calculate the resultant probability of failure (F) and
failure-free operation (R) for a combined series-
parallel system. Assume that the components are
independent. The failure probabilities of individual
elements are: F1 = 0.08, F2 = 0.30, F3 = 0.20, and F4 =
0.10.

R1 = 1 - F1 = 1 - 0.08 = 0.92    R3 = 1 - F3 = 1 - 0.20 = 0.80
R2 = 1 - F2 = 1 - 0.30 = 0.70    R4 = 1 - F4 = 1 - 0.10 = 0.90

R2, R3 are connected in series —> Rx = R2 x R3 = 0.70 x 0.80 = 0.56
Rx, R4 are connected in parallel —> Ry = 1 - [(1 - Rx) x (1 - R4)]
                                       = 1 - [(1 - 0.56) x (1 - 0.90)] = 0.956
Ry, R1 are connected in series —> Rs = Ry x R1 = 0.956 x 0.92 = 0.87952

Probability of failure-free operation (R) = 0.87952
Probability of failure (F) = 1 - R = 1 - 0.87952 = 0.12048
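The series/parallel reductions above can be expressed as two small Python helpers; the nested call mirrors the network topology of the exercise:

```python
def series(*r):
    """All components must work: multiply reliabilities."""
    p = 1.0
    for x in r:
        p *= x
    return p

def parallel(*r):
    """At least one branch must work: 1 minus the product of failure probabilities."""
    q = 1.0
    for x in r:
        q *= 1 - x
    return 1 - q

R1, R2, R3, R4 = 0.92, 0.70, 0.80, 0.90
Rs = series(R1, parallel(series(R2, R3), R4))
print(round(Rs, 5), round(1 - Rs, 5))  # 0.87952 0.12048
```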

51. Explain why distributed systems are not secured? What are the effects of insecurity in
distributed systems?
Main reasons for distributed systems insecurity:
1. Networks (especially WAN/Internet networks) are not secured. Packets can be intercepted
and modified at the network layer.
2. Server and client components are not secured.
3. Client (IP destination) components may not be what they claim to be (IP spoofing).
4. Calling (IP source) components may not be what they claim to be (IP spoofing).
Effects of insecurity:
1. Confidential data may be stolen.
2. Data may be altered.
3. Loss of confidence in information systems: the above effects may reduce confidence
in computerized systems.
4. Claims for damages or loss of data: legal developments may allow claims if data on a
computer has not been guarded according to best practice.
5. Loss of data privacy: data legally stored on a computer may well be private to the person
concerned.

52. Define Cyber space and the different effects of cyber threats?
Cyberspace: is a domain characterized by the use of electronic devices and the
electromagnetic spectrum to store, modify, and exchange data via networked systems and
associated physical infrastructures.
Effects of Cyber Threats:
• Personal Impact: Cybersecurity issues now affect every individual who uses a
computer; millions of people worldwide are victims of cyber-crimes.
• Business Vulnerability: Every business today depends on information and is
vulnerable to one or more types of cyber attack (even those without an online presence).
• National and Military Implications: Often called the next Cold War; cyber operations
are becoming increasingly integrated into active conflicts.
• Cyber Espionage: Governments themselves engage in cyber trespassing to keep an eye on
other persons, networks, or countries for politically, economically, or socially motivated reasons.

53. Define Cyber security and mention its importance?


Cybersecurity is the practice of protecting internet-connected systems, including
hardware, software, and data, from cyberattacks. It encompasses technologies, processes,
and practices designed to safeguard networks, computers, and programs from
unauthorized access, damage, or attacks.
Importance of Cybersecurity:
Cybersecurity is crucial in today's digital world because:
• Financial Impact: Cyberattacks can cause significant financial losses for businesses.
• Reputational Damage: Data breaches can harm the reputation of individuals and
organizations.
• Increasing Sophistication: Cybercriminals are adopting more advanced and
destructive attack techniques.
• Regulatory Compliance: Organizations must comply with regulations to protect
personal data.
• Business Continuity: Proper cybersecurity measures ensure businesses can respond
effectively and recover from cyber incidents.
54. Explain Cyber security CIA Confidentiality, Integrity, and Availability objectives?
• Confidentiality: Ensures data is accessible only to authorized parties, preventing
unauthorized access or disclosure.
Measures: Data encryption, two-factor authentication, biometric verification, and security
tokens.
• Integrity: Protects data from being altered by unauthorized parties, maintaining its accuracy
and reliability.
Measures: Cryptographic checksums, file permissions, power backups, and data backups.
• Availability: Ensures information is accessible to authorized users when needed.
Measures: Data backups, firewalls, redundant systems, and backup power supplies.
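As an example of an integrity measure, a cryptographic checksum can be computed with Python's standard hashlib; the record text below is made up for illustration:

```python
import hashlib

def checksum(data: bytes) -> str:
    """SHA-256 digest of a record, stored alongside it as an integrity check."""
    return hashlib.sha256(data).hexdigest()

record = b"transfer 100 to account 42"      # illustrative record
stored = checksum(record)

assert checksum(record) == stored            # unmodified: check passes
assert checksum(b"transfer 900 to account 42") != stored  # tampering detected
```

Any change to the record, however small, produces a different digest, so silent alteration by an unauthorized party is detectable.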

55. Define Defense in Depth security strategy and mention some of the system security
techniques?

Defense in Depth Strategy: Defense-in-Depth is a layered security approach using multiple
redundant measures to protect physical, technical, and administrative aspects of a network. It
ensures security even if one layer fails.
Security Techniques:
• Physical Controls: Locked doors, security guards.
• Technical Controls: Firewalls, antivirus, IDS/IPS.
• Administrative Controls: Policies, user training.
• Access Measures: VPN, biometrics, multi-factor authentication.
• Workstation Defenses: Antivirus, anti-spam tools.
• Data Protection: Encryption, secure backups.
• Monitoring: Logging, audits, vulnerability scanners.
This strategy ensures robust system security through layered defenses.

********************** G O O D L U C K IN FINAL EXAM ***********************
