Unit-5 CloudComputing Complete Notes

UNIT-V MANAGING THE CLOUD:

Managing IaaS

Infrastructure as a Service (IaaS) is a cloud computing model that provides virtualized computing
resources over the internet. Managing IaaS involves handling the infrastructure that includes virtual
machines (VMs), storage, networks, and other fundamental resources.

In this section, two IaaS systems, namely

1. HP CloudSystem Matrix

2. Amazon EC2

are used as examples to explain the management of IaaS.

1. HP CloudSystem Matrix is an enterprise-level cloud platform designed to simplify the deployment
and management of private and hybrid cloud environments. It enables organizations to build
Infrastructure as a Service (IaaS) solutions by integrating hardware, virtualization, and
management software.

IaaS Administrator of CloudSystem Matrix

Some of the available administrative capabilities include:

1. Resource Provisioning & Management:


o Admins are responsible for provisioning compute, storage, and network resources
through the Matrix Operating Environment.
2. Automation & Orchestration:
o Using Cloud Maps, administrators automate the deployment of complex
applications. They also manage cloud bursting to scale workloads into external
clouds when internal resources are maxed out, ensuring that demand surges are
handled smoothly.
3. Monitoring & Optimization:
o The Matrix dashboard allows IaaS administrators to monitor the health of the
entire cloud infrastructure in real-time. Performance monitoring and capacity
planning are integral to maintaining efficient operations.
4. Security & Compliance:
o Admins manage security protocols using HP tools like TippingPoint for intrusion
detection and security policies.
5. Self-Service Enablement:
o Administrators configure and maintain self-service portals that allow end-users or
departments to request resources and manage their own infrastructure, while
enforcing governance and usage policies through chargeback mechanisms.
6. Hybrid Cloud Management:
o IaaS admins manage both on-premises private clouds and hybrid environments.
They handle tasks such as migrating workloads between private and public clouds
and optimizing resource use across different platforms.

Self-service monitoring in a cloud environment, like HP CloudSystem Matrix, enables users to


independently monitor and manage their allocated resources without relying on the IaaS
administrator for constant oversight. This feature gives end-users control over their workloads while
still maintaining governance and policies set by the administrators.

Some of the available capabilities include:


Dashboard Access: Users are provided with a portal where they can view real-time data on resource
usage, including CPU, memory, storage, and network bandwidth. This allows them to keep track of
their consumption and ensure they stay within allocated limits.

Alerts and Notifications: The system can be configured to send alerts to users when they approach
usage thresholds, allowing proactive management of resources. This feature reduces downtime and
prevents unexpected costs.

Performance Metrics: Self-service monitoring provides detailed performance data, such as


application response times, database query times, and other key performance indicators. This
helps users optimize the performance of their workloads.

Cost Tracking: Users can also monitor the cost associated with their resource usage through
chargeback or showback mechanisms, providing transparency and encouraging responsible usage
of cloud resources.

Provisioning and Resource Management: In some systems, users can request additional resources
or scale back unused ones directly through the self-service portal, helping maintain efficient
resource utilization.
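The alerting behavior described above can be sketched in a few lines. The function below is a hypothetical illustration, not a CloudSystem Matrix API: the resource names, quota values, and the 80% warning threshold are all assumptions made for the example.

```python
def check_thresholds(usage, quotas, warn_ratio=0.8):
    """Return alert messages for resources at or above warn_ratio of quota."""
    alerts = []
    for resource, used in usage.items():
        quota = quotas.get(resource)
        if quota and used / quota >= warn_ratio:
            alerts.append(f"{resource}: {used}/{quota} ({used / quota:.0%} of quota)")
    return alerts

# A user at 9 of 10 CPU cores crosses the 80% threshold; the others do not.
alerts = check_thresholds(
    usage={"cpu_cores": 9, "memory_gb": 40, "storage_gb": 150},
    quotas={"cpu_cores": 10, "memory_gb": 64, "storage_gb": 200},
)
```

A real portal would feed such alerts into email or SMS notifications rather than returning them to the caller.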

2. Amazon EC2: Monitoring with Amazon CloudWatch

Amazon CloudWatch is a comprehensive monitoring and observability service designed to help you
manage resources and applications running on AWS, including EC2 (Elastic Compute Cloud)
instances.

It provides real-time metrics, logs, alarms, and dashboards to track performance, optimize
operations, and react to changes in your EC2 infrastructure.

Amazon CloudWatch provides monitoring for Amazon EC2 instances, Amazon EBS
volumes, Elastic Load Balancers, and RDS database instances.

It provides access to a wide range of metrics such as CPU utilization, disk reads and writes, network
traffic, latency, and client-request count.

It also provides statistics such as minimum, maximum, sum, and average of various parameters.

Use of Amazon CloudWatch provides customers with visibility into resource utilization,
operational performance, and overall demand patterns for their instances.
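As a concrete illustration of those statistics, the sketch below computes CloudWatch-style aggregates over a list of metric samples. It is a local simulation of the arithmetic, not a call to the CloudWatch API.

```python
def summarize(datapoints):
    """Compute CloudWatch-style statistics for one metric over a period."""
    return {
        "Minimum": min(datapoints),
        "Maximum": max(datapoints),
        "Sum": sum(datapoints),
        "Average": sum(datapoints) / len(datapoints),
        "SampleCount": len(datapoints),
    }

cpu_utilization = [12.0, 55.0, 43.0, 90.0]  # percent, one sample per period
stats = summarize(cpu_utilization)          # Average 50.0, Maximum 90.0
```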

MANAGING PAAS

Similar to an IaaS system, a PaaS system needs to maintain SLAs and to provide appropriate runtime
administration features. In this section, Windows Azure will be used as an example to explain the
management of a typical PaaS system.

Management of Windows Azure

Service Level Agreements (SLAs) define the guaranteed availability, performance, and uptime that
Microsoft commits to providing for its cloud services. These SLAs ensure customers receive a certain
level of service, and if Azure fails to meet these commitments, customers may be eligible for service
credits.

 Availability Guarantee: Azure guarantees a specific percentage of service availability,
usually expressed as a percentage of uptime over a monthly billing cycle.
 Service Credits: If Azure fails to meet the agreed SLA, customers can claim service credits as
compensation. These credits are calculated based on the amount of downtime beyond the
SLA.
 99.9% Availability: Commonly provided for standard services (e.g., App Service,
Virtual Machines with a single instance).
 99.99% Availability: Provided for services configured with high-availability setups (e.g.,
Virtual Machines spread across multiple Availability Zones, Azure SQL Database Premium
Tier).
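The uptime-to-credit relationship can be made concrete with a small calculation. The credit tiers below (10%, 25%, 100%) follow the general pattern of many Azure SLAs but are illustrative; the exact thresholds and credits vary per service.

```python
def monthly_uptime_pct(downtime_minutes, minutes_in_month=30 * 24 * 60):
    """Uptime percentage over a 30-day billing month."""
    return 100.0 * (minutes_in_month - downtime_minutes) / minutes_in_month

def service_credit(uptime_pct):
    """Illustrative credit tiers modeled on common Azure SLAs."""
    if uptime_pct < 95.0:
        return 100
    if uptime_pct < 99.0:
        return 25
    if uptime_pct < 99.9:
        return 10
    return 0

# 90 minutes of downtime in a month gives ~99.79% uptime -> 10% credit.
uptime = monthly_uptime_pct(90)
credit = service_credit(uptime)
```

Note that a 99.9% SLA allows only about 43 minutes of downtime per month, which is why high-availability configurations are needed to reach 99.99%.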

Managing applications in Microsoft Azure involves ensuring high availability, monitoring


performance, and handling administrative tasks effectively. Below is an overview of the key aspects:

1. Availability

 High Availability (HA) in Azure is achieved by distributing applications across multiple Azure
Regions and Availability Zones. This minimizes downtime due to hardware failure, natural
disasters, or network issues.
 Load Balancing: Azure offers load balancers and application gateways to distribute traffic
evenly across resources, ensuring better fault tolerance and optimized performance.
 Auto-scaling: Azure's auto-scaling feature helps maintain application availability by
automatically adjusting resources based on real-time demand.
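A proportional scaling rule of the kind auto-scalers apply can be sketched as follows. The 60% CPU target and the instance bounds are assumed values for illustration, not Azure defaults.

```python
import math

def desired_instances(current, avg_cpu_pct, target_pct=60.0, min_n=2, max_n=10):
    """Size the pool so that average CPU moves toward the target utilization."""
    wanted = math.ceil(current * avg_cpu_pct / target_pct)
    return max(min_n, min(max_n, wanted))

# Four instances at 90% average CPU -> scale out to six instances.
n = desired_instances(current=4, avg_cpu_pct=90.0)
```

Clamping between a minimum and maximum keeps the rule from scaling to zero under idle load or running away under a traffic spike.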
2. Monitoring

 Azure Monitor: A comprehensive tool for monitoring applications, infrastructure, and


networks. It collects and analyzes telemetry data (metrics, logs, and traces) to give insights
into performance.
o Application Insights: A feature of Azure Monitor specifically for applications. It tracks
performance issues, exceptions, and user behaviors, helping developers optimize
their apps.
o Alerts: Custom alerts can be set based on resource thresholds or failures, notifying
admins via email, SMS, or integrations with incident management systems.
 Azure Log Analytics: Allows administrators to query and analyze logs for deeper insights
into the health of the infrastructure.
3. Common Administrative Functions

 Resource Management: Admins use the Azure Resource Manager (ARM) to organize
resources into groups, apply policies, and manage access controls. ARM templates help
automate and standardize resource deployments.
 Role-Based Access Control (RBAC): Azure provides RBAC to control who can manage or
access specific resources in an application. This helps ensure security and proper
governance.
 Backup and Disaster Recovery: Azure Backup and Azure Site Recovery provide tools for
backing up data and ensuring business continuity in case of outages. These tools automate
data replication and disaster recovery procedures.
 Cost Management: Azure provides cost analysis tools to monitor spending, optimize
resource usage, and prevent budget overruns through alerts and policies.
These features ensure that applications running in Azure are scalable, highly available, secure, and
cost-efficient. They allow administrators to efficiently manage the lifecycle of cloud applications while
maintaining performance and reliability.

MANAGING SAAS
The monitoring of Salesforce.com is used as an example of how SaaS environments are managed.

Two example solutions follow, those provided by NetCharts and Nimsoft, respectively. These
solutions help to identify when Salesforce encounters slowdowns or outages and allow businesses
to react in time.

Monitoring Force.com: Netcharts

NetCharts is an application that provides useful performance information in a well-integrated


manner for Salesforce.com.

It provides an up-to-date dashboard view of the key performance indicators (KPIs).

Dashboards can be shared across Salesforce users within an organization.

The dashboard provides powerful analytics, helping users make optimal decisions and increase
operational efficiency.

Key relationships and anomalies can be identified, and business trends can be predicted as well.

Figure: NetCharts application monitoring Salesforce.com.

Monitoring Force.com: Nimsoft

In the context of Salesforce's Force.com platform, Nimsoft could be used for monitoring the
performance and availability of cloud services, including Salesforce applications.

Here’s how Nimsoft (Nimsoft Monitor) can help monitor Force.com:

1. Application Performance Monitoring (APM): Nimsoft can track the performance of


Salesforce applications, detecting slow response times or downtime. This helps
administrators ensure that Salesforce is running optimally.
2. Infrastructure Monitoring: It monitors underlying infrastructure such as servers, networks,
and databases that could impact Force.com performance.
3. Synthetic Transactions: Nimsoft can simulate user actions in Salesforce to monitor the
application’s responsiveness. This helps identify issues before real users are affected.
4. Dashboards & Alerts: It provides real-time dashboards and alerting mechanisms.
Administrators can get insights into metrics like API call usage, data storage limits, and more
within Salesforce.
5. Service Level Agreements (SLA) Monitoring: Nimsoft helps track SLA compliance by
monitoring the availability and performance of Force.com services.
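The SLA-monitoring idea behind synthetic transactions can be sketched simply: run periodic probes, count how many succeed, and compare the resulting availability against the SLA target. This is a hypothetical illustration, not Nimsoft's actual data model.

```python
def sla_compliance(probe_results, sla_target_pct=99.9):
    """probe_results: latency in ms for each synthetic transaction, or None
    if the probe failed. Returns (availability_pct, compliant)."""
    ok = sum(1 for r in probe_results if r is not None)
    availability = 100.0 * ok / len(probe_results)
    return availability, availability >= sla_target_pct

# 2 failed probes out of 1000 -> 99.8% availability, below a 99.9% SLA.
probes = [120.0] * 998 + [None, None]
availability, compliant = sla_compliance(probes)
```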
OTHER CLOUD-SCALE MANAGEMENT SYSTEMS

1. HP Cloud Assure is a cloud management offering delivered via HP Software-as-a-Service, and is a
suite of solutions for assessing the security, performance, and availability of cloud services. It is a
turnkey solution applicable to IaaS, PaaS, and SaaS.

• Security is assessed by performing security risk assessment, common security policy definitions,
automated security tests, centralized permissions control, and web access to security information.
Further assessment is done by scanning networks, operating systems and web applications and
performing automated penetration testing.

• Performance is assessed by testing for bandwidth, connectivity, scalability, and end-user


experience. HP Cloud Assure offers a comprehensive performance testing service to make sure the
cloud providers meet end-user bandwidth and connectivity requirements and that the cloud
applications scale to support peak usage.

• Availability is assessed by testing and monitoring web-based application business processes and
identifying and analysing performance issues and trends. HP Cloud Assure monitors the cloud-
based applications, isolates problems, and identifies root causes with ability to drill-down into
specific application components.

2.RightScale

RightScale provides automated solutions for cloud management, and has support for managing
interactions with multiple cloud infrastructures. Figure gives the key modules of RightScale, which
are briefly described here:

• Cloud Management Environment provides the Management Dashboard, similar to the ones
provided by Amazon CloudWatch and Matrix. It is also the place where the administrator can get
access to the server templates and other deployment information.

• Cloud-Ready Server Templates provide pre-packaged solutions based on best practices for
common application scenarios to speed up deployments on the cloud.

• Adaptable Automation Engine adapts resource allocation as required by system demand and
system failures. Tools are provided for managing multi-server deployments over the entire lifecycle.

• Multi-Cloud Engine interacts with the cloud infrastructure APIs of multiple cloud providers. This
eliminates lock-in to any single cloud vendor and allows deployment across multiple clouds,
including the ability to move applications from one cloud to another.

3. Compuware offers a cloud monitoring suite of products that directly measure the performance
experienced by end users and customers.

It allows for detecting and diagnosing performance problems experienced by a cloud application,
prioritizing the problems in terms of business impact, and helping resolution of the problem.

For detecting performance problems, Compuware offers the following methods:

• Real-user monitoring: data can be collected from access devices of actual users to detect
performance problems in the cloud application

• Synthetic monitoring: performance problems are detected by running synthetic transactions
against the cloud application from Compuware’s network of servers that reside on the Internet
backbone.

Server Virtualization

Server virtualization is a technology that allows multiple virtual instances (called virtual machines
or VMs) of operating systems to run on a single physical server. This process abstracts the physical
hardware of a server and creates virtual environments, each running its own OS and applications
independently.

This results in better resource utilization, reduced costs, and simplified management of IT
infrastructure.

Software virtualization can be broadly categorized into system virtualization and process
virtualization. Here’s a detailed explanation of each category:

1. System Virtualization

System virtualization allows multiple operating systems to run concurrently on a single physical
machine by abstracting the hardware resources. This category is mainly concerned with virtualizing
the entire hardware system, enabling the creation of multiple virtual machines (VMs) that can
operate independently
2. Process Virtualization

Process virtualization, on the other hand, focuses on virtualizing specific applications or processes
rather than the entire operating system. This approach is often associated with containerization,
where applications run in isolated environments (containers) that share the same OS kernel.

Hypervisor-based virtualization is a core technology that allows multiple virtual machines (VMs) to
run on a single physical server by utilizing a software layer known as a hypervisor or virtual
machine monitor (VMM). This technology abstracts the underlying hardware and enables the
efficient allocation and management of resources among different VMs.

Key Components of Hypervisor-Based Virtualization

1. Hypervisor:
o The hypervisor is the software that creates and manages virtual machines by
interfacing between the physical hardware and the VMs.
o It allocates CPU, memory, storage, and networking resources to each VM.
o There are three main types of hypervisors:
Type 1 Hypervisor (Bare-Metal Hypervisor):

o Runs directly on the physical hardware without a host operating system.


o Provides better performance, scalability, and resource utilization as it has direct
access to the hardware.
o Examples:
 VMware ESXi
 Microsoft Hyper-V
 Xen
 KVM (Kernel-based Virtual Machine)
Type 2 Hypervisor (Hosted Hypervisor):

o Runs on top of a host operating system. The hypervisor relies on the host OS for
resource management, which can lead to slightly reduced performance compared to
Type 1.
o Examples:
 VMware Workstation
 Oracle VirtualBox
 Parallels Desktop

Type 3 Hypervisor (Hybrid Hypervisor):

o A hybrid hypervisor combines features of both Type 1 (bare-metal) and Type 2
(hosted) hypervisors, allowing for more flexibility and use-case adaptability. The
goal is to benefit from the high performance and resource efficiency of Type 1
hypervisors while also leveraging the ease of use and flexibility of Type 2
hypervisors.

There are different techniques used for hypervisor-based virtualization. Trap and emulate
virtualization is a basic technique used from the days of the earliest hypervisors.

Trap and Emulate Virtualization is a fundamental technique used in hypervisor-based virtualization,


allowing guest operating systems (OS) to run on a hypervisor while maintaining compatibility and
control over hardware resources. Here’s a detailed overview of how this technique works, its
advantages, disadvantages, and practical applications.
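The trap-and-emulate control flow can be modeled in a few lines. This is a toy illustration with made-up opcode names, not a real instruction set: guest code runs deprivileged, privileged instructions trap into the hypervisor, and the hypervisor emulates their effect on the VM's virtual state before resuming the guest.

```python
PRIVILEGED = {"OUT", "HLT"}  # hypothetical privileged opcodes

def emulate(op, vstate):
    """Hypervisor trap handler: emulate one privileged instruction."""
    vstate["traps"] += 1
    if op == "HLT":
        vstate["halted"] = True
    vstate["log"].append(f"emulated:{op}")

def run_guest(instructions, vstate):
    for op in instructions:
        if op in PRIVILEGED:
            emulate(op, vstate)       # instruction traps into the hypervisor
        else:
            vstate["log"].append(op)  # executes directly on the CPU

vstate = {"traps": 0, "halted": False, "log": []}
run_guest(["MOV", "ADD", "OUT", "HLT"], vstate)
```

Unprivileged instructions run at native speed; only the trapped ones pay the cost of a context switch into the hypervisor, which is why trap-heavy workloads virtualize poorly under this technique.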

Two Popular Hypervisors

Virtualization is a complex technology involving techniques for virtualizing CPU, memory, and I/O. In
the following subsections, two well-known hypervisors are examined:

1. VMware

2. XenServer

VMware and XenServer are two leading virtualization platforms that offer robust solutions for server
virtualization. While both provide similar core functionality, they have distinct differences in terms of
architecture, features, and usability. Below is a detailed comparison of VMware and XenServer.

1. Overview

VMware

 Type: VMware is a proprietary virtualization platform known for its enterprise-level
(business-level) capabilities. It offers a wide range of products.
 Target Audience: Primarily targeted towards large enterprises, offering advanced features
such as high availability, distributed resource scheduling, and robust management tools.

XenServer

 Type: Citrix XenServer is an open-source hypervisor (Type 1) based on the Xen Project, with
commercial support offered by Citrix. It is known for its flexibility and open-source roots.
 Target Audience: Geared towards organizations seeking cost-effective, open-source
virtualization solutions with solid performance in small to medium-sized data centers.

2. Core Hypervisor Technology

VMware
 VMware ESXi: A Type 1 hypervisor running directly on physical hardware, offering strong
isolation, resource management, and security features.
 Proprietary: VMware ESXi is a closed-source solution but offers extensive support,
integration, and compatibility with various enterprise products.
XenServer

 Xen Hypervisor: XenServer uses the Xen hypervisor, a Type 1 hypervisor. Xen originated from
the open-source Xen Project, making it highly customizable and suitable for different
environments.
 Open-Source: While Citrix offers a commercial version of XenServer, its open-source nature
appeals to users looking for cost-effective or flexible virtualization solutions.
3. Management and User Interface

VMware

 vCenter: VMware offers vCenter Server for centralized management of multiple ESXi hosts
and VMs. vCenter provides a rich GUI and web client, allowing administrators to manage
virtual infrastructure efficiently.
 Automation Tools: VMware provides automation capabilities through VMware vSphere
PowerCLI, VMware Orchestrator, and vRealize Suite for advanced infrastructure
management.
XenServer

 XenCenter: XenServer provides XenCenter, a Windows-based graphical management tool. It


is simpler than VMware's vCenter but still offers basic VM creation, management, and
monitoring features.
 Open-Source Tools: Administrators can also manage XenServer using third-party open-
source tools like Xen Orchestra, which enhances its usability for small to medium-sized
deployments.
4. Advanced Features

VMware

1. vMotion: Allows for live migration of VMs between hosts without downtime.
2. Distributed Resource Scheduler (DRS): Balances workloads across hosts for optimal
performance.
3. Fault Tolerance (FT): Provides continuous availability with no data loss in case of a hardware
failure.
4. Snapshots and Cloning: Comprehensive support for snapshot management and VM cloning.
5. Storage vMotion: Migrates VMs across different datastores without downtime.
XenServer

1. XenMotion: Similar to VMware's vMotion, it enables live migration of VMs without


downtime.
2. Dynamic Memory Control: Automatically adjusts VM memory allocation based on
workload requirements.
3. Cross-Platform Support: Supports integration with both Linux and Windows environments
for increased flexibility.
4. Snapshots: Offers snapshot and cloning functionality, though with fewer capabilities
compared to VMware.

5. Performance and Scalability


VMware

 Scalability: VMware is known for its ability to scale across large environments with
thousands of virtual machines. It can manage a high volume of resources through
distributed infrastructure.
 Performance: VMware consistently offers excellent performance, especially in
environments requiring high uptime and resource optimization.
XenServer

 Scalability: XenServer scales well but is more suited to small and medium-sized
environments. It may not handle the extreme scaling of VMware's infrastructure as
effectively.
 Performance: XenServer performs well, especially in CPU and memory virtualization, but
may lack some advanced optimization features offered by VMware.

6. Licensing and Cost

VMware

 Licensing: VMware follows a proprietary licensing model, which can be relatively expensive
for large deployments. It offers different tiers, including vSphere Standard, Enterprise, and
Enterprise Plus, each with varying features.
XenServer

 Licensing: Citrix XenServer offers both a free, open-source version and a paid enterprise
edition. The open-source version provides many of the features available in commercial
hypervisors, making it a cost-effective solution for organizations on a budget.

7. Support and Community

VMware

 Commercial Support: VMware provides extensive support packages, including 24/7 technical
support, training, and professional services.
 Community: VMware has a large, active community with a wealth of resources, tutorials,
and third-party tools to support its users.
XenServer

 Support Options: Citrix offers paid support for XenServer, including troubleshooting,
patches, and updates. The open-source community provides additional support through
forums and contributions.
 Community: XenServer benefits from the backing of the open-source Xen Project
community, offering flexibility and community-driven development.

Storage Virtualization:

Storage virtualization in cloud computing is a technology that abstracts physical storage resources
into a virtualized environment. It enables the aggregation of multiple physical storage devices,
making them appear as a single, unified storage resource, which can be managed more efficiently
and flexibly. This process enhances the management of storage in cloud environments, allowing
cloud providers to offer scalable and dynamic storage solutions to users.

Broadly, there are two categories of storage virtualizations: file level and block level.
File-level and block-level virtualization are two types of storage virtualization, each offering different
methods for abstracting and managing storage resources. Here’s a comparison between the two:

1. File-Level Virtualization:

File-level virtualization operates at the file system layer, abstracting the physical storage so that users
and applications interact with a logical file system instead of directly with physical storage devices.
This is commonly used in network-attached storage (NAS) systems.

Uses:

Network Attached Storage (NAS): In NAS, multiple users and applications can access the same files
over a network using standard protocols like NFS or SMB/CIFS.

How it Works:

 Virtualizes files, presenting them as a unified namespace, regardless of their physical location
on different storage devices.
 Redirects file requests from users to the appropriate physical storage location.
 It allows dynamic movement of files between storage systems without disrupting access to
them.
2. Block-Level Virtualization:

Block-level virtualization abstracts storage at the block level. It presents a logical block storage
interface to users, while physical blocks may reside on different storage devices. This is common in
storage area networks (SANs).

Uses:

Storage Area Networks (SAN): In SAN environments, block-level storage is provided to servers and
applications. The servers treat the storage as if it were directly attached, even though it’s spread
across many physical devices.

How it Works:

 Data is broken down into blocks, and each block is stored independently across different
storage devices.
 A virtualized block interface allows operating systems to interact with storage without being
aware of the physical storage arrangement.
 It is typically used for applications that require raw block storage, such as databases or virtual
machines.
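The logical-to-physical translation at the heart of block-level virtualization can be sketched with a simple round-robin striping scheme. This is a deliberate simplification: real systems use mapping tables, variable-size extents, and redundancy, but the core idea of hiding the physical layout behind a flat logical device is the same.

```python
def map_block(logical_block, num_devices):
    """Round-robin striping: logical block N -> (device index, physical block).
    The OS sees one flat block device; this layer spreads blocks across many."""
    return logical_block % num_devices, logical_block // num_devices

# Logical block 7 on a 4-device stripe lands on device 3, physical block 1.
device, physical = map_block(7, 4)
```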
Lustre File System

Lustre is a high-performance distributed file system commonly used in large-scale computing


environments, such as supercomputers and data centers, where there is a need for massive parallel
access to data. In Lustre, the architecture features a centralized metadata management system,
much like other distributed file systems, but it has some unique characteristics that make it highly
suitable for environments that require scalability, high throughput, and low latency.

The architecture of Lustre includes the following three main functional components:

Metadata Server (MDS): The MDS manages the metadata (such as file names, directory structures,
permissions, ownership, etc.) for the file system. It handles file creation, deletion, and open requests.
Object Storage Servers (OSS): The OSSs are responsible for storing the actual file data. They manage
the Object Storage Targets (OSTs), which store the file contents as objects. Files are split into chunks
or stripes and distributed across multiple OSTs to allow parallel access and redundancy.

Lustre Client Nodes: Client nodes interact with both the MDS and OSSs. When a client needs to
access a file, it first contacts the MDS to retrieve the metadata (e.g., where the file’s data blocks are
stored), and then it communicates directly with the OSSs to read or write the actual data.

Metadata Target (MDT): The MDT is the storage device that holds the metadata managed by the
MDS; an MDS may manage one or more MDTs.
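The striping of a file across OSTs can be sketched as follows. The stripe size and OST names are illustrative values; in real Lustre the layout is recorded by the MDS and handed to clients, which then read the stripes from the OSSs in parallel.

```python
def stripe_layout(file_size, stripe_size, osts):
    """Cut a file into stripe-size chunks assigned round-robin over OSTs."""
    layout, offset, i = [], 0, 0
    while offset < file_size:
        length = min(stripe_size, file_size - offset)
        layout.append({"ost": osts[i % len(osts)], "offset": offset, "len": length})
        offset += length
        i += 1
    return layout

# A 10 MB file with 4 MB stripes over three OSTs -> chunks of 4, 4, and 2 MB,
# readable from the three servers in parallel.
layout = stripe_layout(10_000_000, 4_000_000, ["OST0", "OST1", "OST2"])
```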

The Gluster File System

The Gluster File System (GlusterFS) is an open-source, scalable, distributed file system designed to
handle large amounts of data and provide high availability and performance. Unlike systems like
Lustre, which have centralized metadata servers, GlusterFS is completely decentralized and does not
rely on a central metadata server. This key architectural difference makes GlusterFS highly flexible,
fault-tolerant, and easy to scale.

1.Architecture :

GlusterFS is based on a modular, stackable design. It aggregates storage from multiple servers,
allowing them to be accessed as a single global namespace. The system is highly customizable,
allowing different configurations depending on the use case.

Decentralized Metadata: In GlusterFS, there is no dedicated metadata server. Instead, each file's
metadata is distributed alongside the data itself. This eliminates single points of failure and
bottlenecks that centralized metadata servers can create.

Brick: A brick is the basic unit of storage in GlusterFS. It is simply a directory on a server that
GlusterFS uses to store data. Multiple bricks can be aggregated to form a larger volume.

Volume: A volume is a logical collection of bricks. Clients interact with volumes to read and write
data. Volumes can be configured in various ways depending on the use case (e.g., distributed,
replicated, striped, or a combination of these).
Translator: GlusterFS uses a component called a translator, which is a modular piece of code that
implements file system operations. Translators can be stacked to add functionality such as
replication, striping, or caching.

2. Key Components:

Gluster Daemon (Glusterd): This daemon manages the storage resources and handles operations like
volume creation, brick management, and overall configuration.

Clients: GlusterFS clients mount the Gluster volume using the FUSE (Filesystem in Userspace) module

Servers: Storage servers in GlusterFS manage the bricks and serve data to the clients. Each server in
the system contributes storage and bandwidth to the global volume.
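The decentralized placement idea can be sketched by hashing the file path to pick a brick. The real GlusterFS elastic hashing algorithm assigns hash ranges per directory via extended attributes, so this captures only the core intuition: every client computes the same answer, so no metadata server lookup is needed.

```python
import hashlib

def brick_for(path, bricks):
    """Locate a file by hashing its path; any client gets the same brick
    without consulting a central metadata server."""
    h = int(hashlib.md5(path.encode()).hexdigest(), 16)
    return bricks[h % len(bricks)]

bricks = ["server1:/export/brick1", "server2:/export/brick1",
          "server3:/export/brick1"]
chosen = brick_for("/vol/data/report.csv", bricks)
```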

Block Virtualization:

Block virtualization in cloud computing refers to abstracting physical storage resources and
presenting them as virtualized block storage devices to users or applications. This process allows
cloud service providers to manage storage in a flexible, scalable, and efficient manner, while users
can provision and manage virtual block storage without needing to know the underlying hardware
specifics.

Block-level storage virtualization can be categorized into three main levels: Host-based, Network-
based, and Array-based. These levels represent different approaches to abstracting and managing
block storage in virtualized environments, each with unique benefits and use cases.

1. Host-Based Virtualization: This type of block-level virtualization occurs directly on the host or
server where the operating system or applications are running.

Key Components:

Logical Volume Managers (LVMs): Host-based tools such as LVMs on Linux or Volume Managers on
Windows manage physical storage devices and create virtual block devices.
Hypervisors: Virtualization platforms like VMware, Hyper-V, or KVM allow virtual machines to access
virtualized block storage that abstracts the underlying physical storage on the host.

2. Network-Based Virtualization (Storage Area Network - SAN):

This level of virtualization occurs within a network where storage resources are centralized and
accessed by multiple servers. Storage resources are presented as block devices over the network.

3. Array-based virtualization

Array-based virtualization is integrated directly into the storage array itself. The virtualization
capabilities are built into the storage hardware, allowing for efficient management of physical
storage resources.

Grid computing

Grid computing is a form of distributed computing where multiple computers, often located in
different locations, work together to solve a complex computational problem. These computers, or
"nodes," are connected by a network (typically the internet) and contribute their processing power,
storage, or other resources to complete tasks as if they were part of a single supercomputer. The
idea is to harness idle or underutilized resources from multiple systems to tackle large-scale
problems more efficiently.

A grid refers to a network of geographically dispersed, interconnected computing resources.

Three Fundamental Characteristics of a Grid

1. Coordinated resource sharing

2. Problem solving in dynamic environments

3. Problem solving in multi-institutional virtual organizations

Grid architecture is the framework that defines the components, layers, and interactions in a grid
computing system. It allows distributed resources such as computing power, storage, and
applications to be shared and coordinated across geographically dispersed locations to solve large-
scale problems. The architecture of grid computing consists of several layers and components, each
serving a specific function to ensure efficient resource sharing and task execution.

First of all, grid computing defines the notion of a virtual organization to enable flexible, coordinated,
secure resource sharing among participating entities. A virtual organization (VO) is basically a
dynamic collection of individuals and institutions. A VO forms a basic unit for enabling access to
shared resources, with specific resource-sharing policies applicable for users from a particular VO
(Figure 9.13). The key technical problem addressed by grid technologies is to enable resource sharing
among mutually distrustful participants of a VO and to enable them to solve a common task.
Key Components of Grid Architecture:

Fabric Layer: The fabric layer provides access to the physical resources within the grid, such as
processors, storage, networks, and sensors.

Connectivity Layer: The connectivity layer manages the communication protocols necessary for the
interaction between distributed resources and users within the grid.

Resource Layer: This layer is responsible for managing the individual resources within the grid, such
as processing units, storage devices, and software services.

Collective Layer: The collective layer coordinates multiple resources to handle specific grid-wide
operations like scheduling, load balancing, and data replication.

Application Layer: The application layer is the interface through which users interact with the grid,
submitting tasks and accessing results.

Other Cloud-Related Technologies

Distributed Computing System

A distributed computing system is a model in which multiple computers or systems work together to
achieve a common goal by sharing resources, data, and tasks across a network. These systems are
physically dispersed, often geographically, but they communicate and collaborate to solve large-scale
problems. Unlike centralized systems where all computing happens on a single machine, distributed
systems divide the workload across several independent nodes (machines), ensuring that tasks are
completed faster, more reliably, and with better fault tolerance.

Utility computing

Utility computing is a model of computing in which computing resources (such as processing power,
storage, and networking) are provided to users on demand, similar to how utilities like electricity,
water, and gas are supplied. In this model, computing resources are offered as a metered service,
where users pay only for the resources they consume, without the need to own or manage the
infrastructure themselves.
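The metered, pay-per-use model can be illustrated with a tiny billing calculation. The meter names and per-unit rates below are invented for the example.

```python
RATES = {"cpu_hours": 0.05, "storage_gb_months": 0.02, "egress_gb": 0.09}

def monthly_bill(usage):
    """Bill = metered usage x per-unit rate, like an electricity meter."""
    return round(sum(qty * RATES[item] for item, qty in usage.items()), 2)

# One always-on instance (720 h), 100 GB stored, 50 GB transferred out:
# 720*0.05 + 100*0.02 + 50*0.09 = 42.50
bill = monthly_bill({"cpu_hours": 720, "storage_gb_months": 100, "egress_gb": 50})
```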

Autonomic Computing

Autonomic Computing is a computing model that aims to create self-managing computing systems
capable of operating with minimal human intervention. The goal is to make systems that can
configure, optimize, protect, and heal themselves autonomously. This concept is inspired by the
human body's autonomic nervous system, which regulates essential functions like heart rate and
breathing without conscious thought.
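Autonomic systems are often described in terms of the MAPE loop (Monitor, Analyze, Plan, Execute). The sketch below is a toy self-optimizing loop; the latency model (latency shrinking in proportion to replica count) is an invented simplification for illustration.

```python
def mape_step(state, goal_latency_ms=200.0):
    latency = state["latency_ms"]          # Monitor: read telemetry
    if latency > goal_latency_ms:          # Analyze: compare with the goal
        state["replicas"] += 1             # Plan + Execute: scale out
        # toy model: latency shrinks in proportion to the replica count
        state["latency_ms"] = latency * (state["replicas"] - 1) / state["replicas"]
        return "scaled_out"
    return "healthy"                       # no corrective action needed

state = {"latency_ms": 400.0, "replicas": 2}
action = mape_step(state)   # 400 ms > 200 ms goal -> add a replica
```

Running the loop repeatedly drives the system back toward its goal without operator intervention, which is the defining property of self-managing systems.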

Application Service Providers

Application Service Providers (ASPs) are businesses that deliver software applications and related
services to customers over a network, typically the internet. Instead of purchasing and installing
software on local machines, customers can access these applications on-demand through the
provider’s infrastructure. This model was an early precursor to today’s cloud-based Software as a
Service (SaaS) offerings.
