
Cloud Computing IMR Questions and Answers

Q 1).Explain Characteristics of cloud computing as per NIST


Ans :-
The National Institute of Standards and Technology (NIST) in the United States has defined cloud
computing with a set of characteristics, which are commonly referred to as the "essential
characteristics." These characteristics help provide a clear understanding of what constitutes cloud
computing. According to NIST, cloud computing exhibits the following key characteristics:

1. On-Demand Self-Service: - Cloud resources can be provisioned and managed without requiring
human intervention from the service provider. Users can unilaterally provision computing capabilities,
such as server time and network storage, as needed.

2. Broad Network Access: - Cloud services are available over the network and can be accessed
through standard mechanisms and platforms. This means that cloud services are accessible over the
internet using a variety of devices, such as laptops, smartphones, and tablets.

3. Resource Pooling: - Cloud providers pool computing resources to serve multiple customers.
These resources are dynamically assigned and reassigned based on demand. Customers typically have
no control over, or knowledge of, the exact location of the resources, but they may specify certain
parameters like location or type of service.

4. Rapid Elasticity: - Cloud resources can be rapidly and elastically provisioned to quickly scale up
or down based on demand. This ensures that users have access to the computing resources they need
when they need them and can scale down when demand decreases.

5. Measured Service: - Cloud computing resources are metered, and usage can be monitored,
controlled, and reported. This characteristic enables providers to measure and charge users for their
actual usage of resources, promoting transparency and efficiency.

6. Service Models: - Beyond the five essential characteristics, the NIST definition also identifies three primary service models:

- Infrastructure as a Service (IaaS): Provides virtualized computing resources over the internet.

- Platform as a Service (PaaS): Offers a platform allowing customers to develop, run, and
manage applications without dealing with the complexity of infrastructure.

- Software as a Service (SaaS): Delivers software applications over the internet, eliminating the
need for users to install, maintain, and manage the software locally.

7. Deployment Models:- The NIST definition also lists four deployment models:

- Public Cloud: Resources are owned and operated by a third-party cloud service provider and are
made available to the general public.

- Private Cloud: Cloud infrastructure is operated solely for a single organization, and it may be
managed by the organization or a third party.

- Community Cloud: Cloud infrastructure is shared by several organizations with common concerns,
such as security or compliance requirements.

- Hybrid Cloud: Combines elements of two or more of the other models, allowing data and
applications to be shared between them.

These characteristics provide a comprehensive framework for understanding the nature and features
of cloud computing services. Organizations and users can leverage these characteristics to assess,
adopt, and manage cloud-based solutions effectively.

Q 2).What are the layers/Types of cloud Services?


Ans:-
Cloud services are typically categorized into three main layers or types, known as service models.
These service models represent different levels of abstraction and responsibility for users. The three
primary types of cloud services are:


1. Infrastructure as a Service (IaaS):

- In IaaS, the cloud provider delivers virtualized computing resources over the internet. These
resources can include virtual machines, storage, and networking. IaaS provides the foundational
infrastructure that allows users to deploy and run their applications. Users have control over the
operating systems, applications, and some network components, but the provider is responsible for
managing the underlying infrastructure, such as data centers, servers, and storage.

Examples of IaaS providers include Amazon Web Services (AWS) Elastic Compute Cloud (EC2),
Microsoft Azure Virtual Machines, and Google Cloud Compute Engine.

2. Platform as a Service (PaaS):

- PaaS offers a more abstracted layer than IaaS, providing a platform that allows users to develop,
run, and manage applications without the complexities of managing the underlying infrastructure.
With PaaS, users focus on building and deploying applications, while the provider takes care of the
underlying infrastructure, runtime, and middleware. This allows for faster development and
deployment cycles.

Examples of PaaS offerings include Google App Engine, Microsoft Azure App Service, and Heroku.

3. Software as a Service (SaaS):

- SaaS delivers software applications over the internet, eliminating the need for users to install,
maintain, and manage the software locally. Users access the software through a web browser, and the
provider handles all aspects of software maintenance, including updates, patches, and security. SaaS is
typically designed for end-users, and it covers a wide range of applications, from productivity tools to
business applications.

Examples of SaaS applications include Salesforce, Google Workspace (formerly G Suite), and
Microsoft 365.

These layers represent a spectrum of abstraction, with IaaS providing more control and customization
at the infrastructure level, PaaS abstracting away infrastructure details to focus on application
development, and SaaS delivering fully functional applications to end-users without requiring them to
manage any part of the underlying infrastructure. Organizations can choose the appropriate service
model based on their specific needs and the level of control they require over the computing
environment.

Q 3). Write a short note on the Global Cloud Marketplace?


Ans:-
A cloud marketplace is an online storefront where customers can purchase software and services that
easily integrate with or are built on the cloud provider’s offerings. It also offers cloud-native
applications that customers can purchase and manage on the platform. Major software and cloud
service providers, including Red Hat, Amazon Web Services (AWS), Google Cloud Platform (GCP),
and Microsoft Azure, have cloud marketplaces.

Why use a cloud marketplace?

Cloud marketplaces offer several benefits for the enterprise. One of the biggest advantages of using a
cloud marketplace is committed spend. This is a predetermined annual spend with the service provider
that is often negotiated with a discount. According to Tackle, nearly half (43%) of buyers say taking
advantage of their committed spend with cloud providers is their top reason for purchasing through a
cloud marketplace, up from 20% in 2020.

Organizations can look at the services available in the provider’s marketplace and use the allocated
funds to make purchases that will best integrate with their hybrid cloud strategies.


These marketplaces can help simplify the procurement process. All purchases are made through a
single cloud vendor, rather than routing thousands of products through different vendors for approval.

Cloud marketplaces also provide a single source of billing and invoicing, which can be especially
helpful for those looking to centralize budgets. The cloud marketplace provider bills the customer and
is then billed by the third-party vendor for use of the product. This minimizes administration around
procurement, saving time and professional resources that can be assigned to other high-value
workloads.

Q 4). Explain the Components of Market-Oriented Cloud Architecture?


Ans:-
Cloud Computing provides low-cost, virtually unlimited computing and storage services for the computing world.
A market-oriented cloud is a business model for providing customers with economical computing services
priced according to demand. Market-oriented clouds focus primarily on business-class users.

1. Users/Brokers:

Users or brokers send service requests to the Data Center and Cloud from anywhere in the
world for processing.

2. SLA Resource Allocator:

The SLA Resource Allocator serves as a liaison between the Data Center/Cloud service provider and
external users/brokers. To support SLA-oriented resource management, the following systems must
interact:

-Service Request Examiner and Admission Control: When a service request is first submitted, the
Service Request Examiner and Admission Control mechanism evaluates the request against its QoS
criteria before deciding whether to accept or refuse it. As a result, it prevents resource overload, in
which numerous service requests cannot be served properly due to a lack of available resources. It
also requires up-to-date status information on resource availability from the VM Monitor and on
workload processing from the Service Request Monitor to make appropriate resource allocation
choices. It then distributes requests to VMs and determines resource entitlements for the assigned VMs.

-Pricing: The Pricing mechanism determines how service requests are billed. Requests, for
example, might be charged depending on submission time (peak/off-peak), price rates
(fixed/variable), or resource availability (supply/demand). Pricing serves as a foundation for
regulating the supply and demand of computing resources inside the Data Center and aids in the
proper prioritization of resource allocations. (A small pricing sketch appears after this list of
sub-components.)

-Accounting: The Accounting mechanism keeps track of the actual utilization of resources by
requests so that the final cost can be calculated and charged to the consumer. Furthermore, the
Service Request Examiner and Admission Control mechanism may use the stored historical usage
data to optimize resource allocation choices.

-VM Monitor: The VM Monitor mechanism monitors the availability of VMs as well as resource
entitlements.

- Dispatcher: The Dispatcher mechanism starts executing accepted service requests on their assigned VMs.

- Service Request Monitor: The Service Request Monitor mechanism monitors the status of service
requests as they are being executed.
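To make the peak/off-peak and supply/demand idea concrete, here is a minimal Python sketch of how
such a Pricing mechanism could compute a charge. The rates, peak hours, and surge rule are invented
for illustration and are not part of the architecture described above.

    # Hypothetical pricing sketch; all rates and thresholds are invented.
    PEAK_HOURS = range(9, 18)   # 09:00-17:59 counted as peak submission time
    BASE_RATE = 0.05            # assumed price per CPU-hour

    def price_request(cpu_hours, submit_hour, utilization):
        """Bill a request by submission time (peak/off-peak) and supply/demand."""
        rate = BASE_RATE
        if submit_hour in PEAK_HOURS:         # peak/off-peak pricing
            rate *= 1.5
        if utilization > 0.8:                 # scarce supply raises the price
            rate *= 1.0 + (utilization - 0.8)
        return cpu_hours * rate

    print(price_request(cpu_hours=10, submit_hour=14, utilization=0.9))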

3. VMs:

Multiple VMs may be started and terminated dynamically on a single physical system to satisfy
accepted service requests, enabling maximum flexibility to design multiple partitions of resources on
the same physical system to match particular service request needs. Furthermore, because each VM is
separated from one another on the same physical computer, many VMs can execute applications
based on various operating system environments on the same physical computer at the same time.

4. Physical Machines:

The Data Center is made up of several computer servers that supply resources to satisfy service
demands.

Q 5).Explain Advantages And Disadvantages Of cloud computing


Ans
Advantages:

1) Back-up and restore data: Once data is stored in the cloud, it is easy to back up and restore it
using the cloud.

2) Improved collaboration: Cloud applications improve collaboration by allowing groups of people
to quickly and easily share information in the cloud via shared storage.

3) Excellent accessibility: The cloud allows us to quickly and easily access stored information
anywhere in the world, at any time, using an internet connection. This increases organizational
productivity and efficiency by ensuring that data is always accessible.

4) Low maintenance cost: Cloud computing reduces both hardware and software maintenance costs
for organizations.

5) Mobility: Cloud computing allows us to easily access all cloud data via mobile devices.

6) Cost efficiency: Cloud computing allows organizations to reduce capital expenditures on IT
infrastructure. With a pay-as-you-go model, users only pay for the resources they consume, leading
to cost savings.

7) Innovation and agility: Cloud computing enables rapid deployment of new applications and
services. Organizations can experiment with new ideas, innovate quickly, and respond faster to
market changes.

Disadvantages:

1) Internet connectivity: In cloud computing, all data (images, audio, video, etc.) is stored on the
cloud and accessed over an internet connection. Without good internet connectivity, the data cannot
be accessed, and there is no other way to reach it.

2) Vendor lock-in: Vendor lock-in is one of the biggest disadvantages of cloud computing.
Organizations may face problems when transferring their services from one vendor to another;
because different vendors provide different platforms, moving from one cloud to another can be
difficult.

3) Limited control: Cloud infrastructure is completely owned, managed, and monitored by the
service provider, so cloud users have less control over the function and execution of services within
the cloud infrastructure.

4) Security: Although cloud service providers implement strong security standards, adopting cloud
technology means sending your organization's sensitive information to a third party, i.e., the cloud
service provider. While transferring data to the cloud, there is a chance it could be intercepted by
hackers.

5) Hidden costs: While cloud computing can be cost-effective, organizations must be mindful of
potential hidden costs, such as data transfer fees, storage costs, and charges for additional services.

6) Technical issues: Cloud technology is always prone to outages and other technical issues. Even
the best cloud service providers may face this kind of trouble despite maintaining high standards of
maintenance.

7) Lower bandwidth: Many cloud storage providers limit their users' bandwidth usage, and if an
organization surpasses the given allowance, the additional charges can be significant.

Q 6). What is the meaning of Load Balancing and Virtualization? How is Load Balancing useful in
the cloud?
Ans:
Load Balancing:

Load balancing in cloud computing distributes traffic and workloads to ensure that no single server or
machine is under-loaded, overloaded, or idle. Load balancing optimizes various constrained
parameters such as execution time, response time, and system stability to improve overall cloud
performance. Load balancing architecture in cloud computing consists of a load balancer that sits
between servers and client devices to manage traffic.
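As a concrete illustration, here is a minimal Python sketch of the round-robin strategy such a load
balancer might use to spread requests across servers. The server names and routing function are
hypothetical.

    import itertools

    servers = ["server-a", "server-b", "server-c"]
    rotation = itertools.cycle(servers)      # endless round-robin iterator

    def route_request(request_id):
        """Send each incoming request to the next server in the rotation."""
        target = next(rotation)
        print("request", request_id, "->", target)
        return target

    for rid in range(6):                     # six requests spread over three servers
        route_request(rid)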


Virtualization:

Virtualization is a technology that enables the creation of virtual instances or representations of
computing resources, such as servers, storage, or networks. Instead of relying on physical hardware,
virtualization allows multiple virtual machines (VMs) to run on a single physical server, each
operating independently with its own operating system and applications.

Load Balancing in Cloud Computing:

Load balancing plays a crucial role in cloud computing environments, where resources are
dynamically provisioned and scaled based on demand.

Load balancing is useful in the cloud in the following ways:

Optimizing Resource Utilization:

Load balancing helps distribute incoming requests or workloads across multiple servers or virtual
machines, ensuring that each resource is efficiently utilized. This prevents overloading individual
servers while others remain underutilized.

Scalability and Elasticity:

Cloud environments often experience fluctuating workloads. Load balancing enables automatic
scaling by distributing incoming traffic among available resources. When demand increases, new
instances can be added, and load balancers ensure a balanced distribution of requests.

High Availability:

Load balancing contributes to high availability by redirecting traffic away from unhealthy or
overloaded servers. In the event of a server failure, traffic is automatically rerouted to healthy servers,
minimizing downtime and ensuring continuous service availability.

Improved Performance and Response Time:

By evenly distributing workloads, load balancing helps optimize performance and reduce response
times. Users experience faster and more reliable access to applications and services hosted in the
cloud.

Fault Tolerance:

Load balancers enhance fault tolerance by detecting and isolating faulty servers. They can route traffic
away from servers experiencing issues, preventing disruptions to the overall service.

Q 7). What are the different security issues involved in cloud adoption?


Ans


Adoption of Cloud Computing refers to moving to or implementing cloud computing in an
organization. This can involve transitioning from on-premises infrastructure to the cloud or using the
cloud in addition to on-premises infrastructure.

1).Data Loss –

Data loss is one of the issues faced in cloud computing. It is also known as data leakage. Our
sensitive data is in the hands of somebody else, and we do not have full control over our database.
So, if the security of the cloud service is breached by hackers, they may gain access to our sensitive
data or personal files.

2). Interference of Hackers and Insecure APIs –

When we talk about the cloud and its services, we are talking about the internet, and the easiest way
to communicate with the cloud is through APIs. It is therefore important to protect the interfaces and
APIs used by external users. Moreover, some cloud services are exposed in the public domain, which
makes them a vulnerable part of cloud computing because third parties may access them. Through
these services, hackers can more easily compromise or harm our data.

3).User Account Hijacking –

Account hijacking is one of the most serious security issues in cloud computing. If the account of a
user or an organization is hijacked by a hacker, the hacker gains full authority to perform
unauthorized activities.

4).Changing Service Provider –

Vendor lock-in is also an important security issue in cloud computing. Many organizations face
problems when shifting from one vendor to another. For example, if an organization wants to move
from AWS Cloud to Google Cloud Services, it faces various problems, such as migrating all of its
data; because the two clouds use different techniques and functions, this creates further difficulties.
The pricing of AWS may also differ from that of Google Cloud, and so on.

5).Lack of Skill –

Problems such as shifting to another service provider, needing an extra feature, or not knowing how
to use a feature mainly arise in IT companies that lack skilled employees. Working with cloud
computing therefore requires skilled personnel.

6).Denial of Service (DoS) attack –

This type of attack occurs when the system receives too much traffic. DoS attacks mostly target
large organizations such as the banking sector, government sector, etc. When a DoS attack occurs,
data may be lost, and recovering it can require a great amount of money as well as time.

Q 8).Explain Life Cycle of Virtual machine?


Ans
A virtual machine (VM) is a software emulation of a physical machine, which allows multiple
operating systems to run on a single physical machine. The life cycle of a virtual machine includes
several stages, from creation to deletion.


Creation:

You create a new virtual machine using a virtualization management tool, allocating 2 vCPUs, 4 GB
of RAM, and a 50 GB disk. You name the virtual machine "Web App VM" and choose Ubuntu
20.04 as the operating system.
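The text leaves the management tool unspecified. As one hedged illustration, roughly the same VM
could be requested on AWS EC2 with the boto3 SDK, as sketched below; the AMI ID is a
hypothetical placeholder, and t3.medium is chosen because it offers 2 vCPUs and 4 GB of RAM.

    import boto3  # AWS SDK for Python; assumes credentials are already configured

    ec2 = boto3.resource("ec2")
    instances = ec2.create_instances(
        ImageId="ami-0abcdef1234567890",   # hypothetical Ubuntu 20.04 AMI ID
        InstanceType="t3.medium",          # 2 vCPUs, 4 GB RAM, matching the example
        MinCount=1,
        MaxCount=1,
        BlockDeviceMappings=[
            {"DeviceName": "/dev/sda1", "Ebs": {"VolumeSize": 50}}  # 50 GB disk
        ],
        TagSpecifications=[
            {"ResourceType": "instance",
             "Tags": [{"Key": "Name", "Value": "Web App VM"}]}
        ],
    )
    print(instances[0].id)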

Configuration:

You configure the virtual machine by setting up the network, assigning an IP address, and installing
necessary packages such as Apache, PHP, and MySQL. You also configure the firewall and set up
SSH access for remote administration.

Installation:

You install Ubuntu 20.04 from an ISO image and install the required packages using the command
line interface.

Deployment:

You start the virtual machine and launch the web application.

Operation:

You monitor the performance of the virtual machine and the web application, ensuring that they are
running optimally. You also monitor resource usage, network traffic, and other performance metrics.

Maintenance:

You apply security patches and updates to the operating system and the web application. You also
perform regular backups of the virtual machine's data to prevent data loss in the event of a failure or
disaster

Q 9) Explain Live Migration Technique with XEN Hypervisor Algorithm?


Ans
Live Migration Technique with Xen Hypervisor Algorithm Explained

Live migration is a technique for moving a virtual machine (VM) from one physical host to another
while the VM remains operational and users are unaware of the process. Xen, a popular open-source
hypervisor, uses a multi-pass algorithm for live migration, which involves several key steps:

1. Pre-copy Phase:


Memory Page Tracking: The hypervisor on the source host tracks which pages in the VM's memory
are being accessed actively.

Dirty Page Transfer: The hypervisor identifies pages that have been modified since the migration
started ("dirty pages"). These pages are copied to the destination host in batches.

Pre-copy Buffering: Some frequently accessed pages ("hot pages") can be proactively copied to the
destination host before they are modified, further reducing downtime during the final cut-over.

2. Cut-over Phase:

Guest Pause: The VM on the source host is briefly paused (typically for milliseconds) to ensure
consistency of the memory state.

Final Page Transfer: Any remaining dirty pages are transferred to the destination host.

Guest Resume: The VM on the destination host is resumed, and it continues execution as if nothing
happened.

Xen Hypervisor Algorithm:

The specific algorithm used by Xen for live migration involves several optimizations:

Multi-pass Approach: The pre-copy and cut-over phases are repeated multiple times, progressively
transferring more memory pages until only a small amount remains for the final cut-over. This
minimizes the total downtime.

Page Dirty Bit Tracking: Xen uses a hardware feature called the "dirty bit" to efficiently identify
which pages have been modified and need to be copied.

Network Bandwidth Optimization: Xen utilizes techniques like compression and delta encoding to
reduce the amount of data transferred during the migration.
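The following schematic Python simulation (not Xen source code) illustrates how the multi-pass
pre-copy loop converges. The SimVM class, page counts, and re-dirtying rate are invented for
illustration.

    import random

    class SimVM:
        def __init__(self, total_pages=10000):
            self.pending = total_pages            # pages to copy in the next round

        def dirty_pages(self):
            # While a round is being copied, the running guest re-dirties a
            # fraction of its memory; assume 5-15% here.
            self.pending = max(1, int(self.pending * random.uniform(0.05, 0.15)))
            return self.pending

    def live_migrate(vm, max_rounds=5, stop_threshold=50):
        to_copy = vm.pending                      # round 0: full memory copy
        for round_no in range(1, max_rounds + 1):
            print("round", round_no, ": copying", to_copy, "pages while guest runs")
            to_copy = vm.dirty_pages()            # pages modified during that round
            if to_copy < stop_threshold:          # small enough for a brief pause
                break
        print("cut-over: pause guest, copy final", to_copy, "pages, resume on target")

    live_migrate(SimVM())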

Benefits of Live Migration:

Minimizes Downtime: Users and applications are unaffected during the migration process,
improving service availability.

Load Balancing: VMs can be dynamically moved to different hosts to balance resource utilization
and prevent overload.

Maintenance and Upgrades: VMs can be migrated to other hosts for maintenance or upgrades
without downtime.

Live migration with the Xen hypervisor algorithm is a powerful tool for improving the flexibility and
manageability of virtualized environments. However, it is important to consider its limitations and
provision adequate resources for a successful implementation.

Q 10).Write a note on Service Level Agreement?


Ans:-
A service level agreement (SLA) is an outsourcing and technology vendor contract that outlines a
level of service that a supplier promises to deliver to the customer. It outlines metrics such as uptime,
delivery time, response time, and resolution time. An SLA also details the course of action when
requirements are not met, such as additional support or pricing discounts. SLAs are typically agreed
upon between a client and a service provider, although business units within the same company can
also make SLAs with each other.

Some common types of service level agreements (SLAs):


Customer-level SLA

A customer-based SLA is an agreement that covers all of the services used by a customer. A customer
service level agreement covers specific details of services, provisions of service availability, an
outline of responsibilities, escalation procedures, and terms for cancellation.

Service-level SLA

A service-level SLA is a contract that details an identical service offered to multiple customers. For
example, if a service provider had multiple clients using its virtual help desk, the same service-based
SLA would be issued to all clients.

Multi-level SLA

This type of agreement is split into multiple levels that integrate several conditions into the same
system. This approach is suitable for providers that have many customers using their product at
different price ranges or service levels. These differing service levels can be built into a multi-level
SLA.

Common elements that you can include in a service level agreement (SLA):

Agreement overview: An agreement overview includes the start and end dates of an SLA, details of
the parties involved, and an overview of the services included.

Description of services: A description of services outlines all services provided within an SLA. It
details information such as turnaround times, technologies and applications, maintenance schedules,
and processes and procedures.

Exclusions:- This section describes all exclusions and exemptions that are agreed upon by both
parties.

Service level objective:- A service level objective (SLO) is an agreement within an SLA about a
specific metric like response time or uptime. Both parties agree to key service performance metrics
backed by data. (A small worked example of an uptime SLO appears after this list of elements.)

Security standards:-Both the service provider and the client use security standards to demonstrate
the security measures and protocols in place. This section also commonly includes non-disclosure
agreements (NDAs) and anti-poaching agreements.

Disaster recovery process:- An SLA will often detail the process of disaster recovery and outline the
mechanisms and processes to follow in case of service failure of the vendor. This section also includes
information on the restarting process, including restart times and alerts.

Service tracking and reporting agreement:- In this section, performance metrics are agreed upon
by both parties. Most customers closely track their service performance. A reasonable baseline for this
tracking would be before and after using a new service provider.

Penalties:- This section clearly states the penalties, financial or otherwise, that either side incurs if
they fail to live up to their SLA obligations.
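As promised above, here is a small worked example showing how an uptime SLO translates into the
downtime it permits. The SLO figures are illustrative, not taken from the text.

    # Convert an uptime SLO into allowed downtime over a 30-day month.
    def allowed_downtime_minutes(uptime_slo, days=30):
        return (1.0 - uptime_slo) * days * 24 * 60

    for slo in (0.99, 0.999, 0.9999):
        print("{:.2%} uptime -> {:.1f} minutes of downtime per month".format(
            slo, allowed_downtime_minutes(slo)))

For instance, a 99.9% uptime SLO allows roughly 43.2 minutes of downtime in a 30-day month.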

Q 11). Explain the Different Components of an SLA?


Ans:-
A Service Level Agreement (SLA) typically includes various components that collectively define the
terms and conditions of service delivery between a service provider and a customer. These
components help establish clear expectations, performance standards, and responsibilities. Here are
the key components of an SLA:
1. Service Scope: - Defines the scope of the services covered by the SLA. It outlines the specific
features, functionalities, and deliverables that the service provider commits to providing to the
customer.


2. Service Levels and Metrics: - Specifies measurable performance indicators and service levels
that the provider agrees to achieve. Common metrics include uptime, response time, resolution time,
and availability. These metrics are crucial for assessing the quality and efficiency of the services.

3. Roles and Responsibilities: - Clearly outlines the responsibilities of both the service provider and
the customer. This section defines who is responsible for specific tasks, such as maintenance, support,
reporting, and compliance with agreed-upon standards.

4. Performance Monitoring and Reporting: - Describes the methods and frequency of performance
monitoring and reporting. This includes how the provider will measure and report on key performance
indicators (KPIs) and the mechanisms for addressing any discrepancies or issues identified during
monitoring.

5. Escalation Procedures: - Details the procedures to be followed in the event of service disruptions,
outages, or failures. This section often includes a hierarchical escalation process for resolving issues
promptly and effectively.

6. Availability and Downtime:- Specifies the expected availability of the service and any planned
downtime for maintenance or upgrades. It may also define compensation or credits in case of service
unavailability exceeding agreed-upon thresholds.

7. Security and Compliance: - Outlines security measures, data protection protocols, and
compliance requirements relevant to the services provided. This section ensures that the service aligns
with industry standards and regulatory obligations.

8.Customer Support and Communication: - Describes the customer support channels, response
times, and communication protocols. This section ensures that the customer knows how to seek
support and how the provider will communicate with them regarding service-related matters.

9.Performance Benchmarks:- Establishes benchmarks or target levels for each service metric,
providing a basis for evaluating performance and determining compliance with the SLA.

These components collectively form a comprehensive SLA, serving as a vital document for both the
service provider and the customer. A well-structured SLA helps build trust, ensures clarity, and
provides a basis for effective communication and collaboration between the parties involved.

Q 12). What is the benefit of integrating Private and Public Clouds?


Ans:-
The integration of private and public clouds, often referred to as a hybrid cloud approach, offers
several benefits for organizations seeking a flexible and scalable IT infrastructure. Here are some key
advantages of integrating private and public clouds:

1. Scalability and Flexibility: Hybrid cloud environments provide the ability to scale resources
dynamically. Organizations can use the public cloud for handling variable workloads and scaling up
or down as needed, while keeping sensitive or critical workloads in the more controlled environment
of a private cloud.

2.Cost Efficiency:- By leveraging the public cloud for peak workloads or non-sensitive data,
organizations can optimize costs. The pay-as-you-go model of public clouds allows for cost savings,
while private clouds provide cost predictability and control over specific resources.

3.Resource Optimization:- Hybrid cloud allows organizations to allocate resources based on specific
workload requirements. Critical or sensitive applications can run in the private cloud, while less
critical workloads can use public cloud resources. This optimization leads to efficient resource
utilization.

4. Data Security and Compliance: Private clouds offer a higher level of control and security for
sensitive data and applications. Organizations can keep critical data on-premises or in a private cloud
while utilizing the public cloud for less sensitive tasks. This helps address data security and
compliance concerns.

5.Disaster Recovery and Business Continuity: Hybrid cloud architectures enhance disaster recovery
capabilities. Critical data and applications can be backed up or replicated in a public cloud, providing
a resilient solution for business continuity and disaster recovery planning.

6.Agility and Innovation: The hybrid cloud model allows organizations to innovate and deploy new
applications more rapidly. Development and testing can take place in the public cloud environment,
and production workloads can be deployed in the private cloud, ensuring a seamless and agile
development cycle.

7.Geographic Expansion: - For organizations with a global presence, a hybrid cloud approach
enables them to deploy resources closer to end-users or comply with data residency requirements.
This helps improve performance and ensures compliance with regional regulations.

The integration of private and public clouds offers organizations a strategic approach to IT
infrastructure that combines the benefits of both deployment models. This hybrid cloud approach
enables organizations to achieve greater flexibility, efficiency, and agility in meeting their business
objectives.

Q 13). What are the different types of cloud?


Ans
Cloud computing is internet-based computing in which a shared pool of resources is available over
broad network access. These resources can be provisioned or released with minimal management
effort and service-provider interaction.

1. Public cloud
2. Private cloud
3. Hybrid cloud
4. Community cloud
5. Multicloud
1) Public cloud:

Public clouds are managed by third parties that provide cloud services over the internet to the public,
available under pay-as-you-go billing models. They offer solutions for minimizing IT infrastructure
costs and are a good option for handling peak loads on the local infrastructure. Public clouds are the
go-to option for small enterprises, which can start their businesses without large upfront investments
by relying completely on public infrastructure for their IT needs.

2) Private Cloud:


Private clouds are distributed systems that work on private infrastructure and provide users with
dynamic provisioning of computing resources. Instead of a pay-as-you-go model, private clouds may
use other schemes that manage cloud usage and proportionally bill the different departments or
sections of an enterprise. Private cloud providers include HP Data Centers, Ubuntu, Elastic-Private
cloud, Microsoft, etc.

3) Hybrid Cloud:

A hybrid cloud is a heterogeneous distributed system formed by combining facilities of the public
cloud and private cloud. For this reason, they are also called heterogeneous clouds. A major drawback
of private deployments is the inability to scale on-demand and efficiently address peak loads. Here
public clouds are needed. Hence, a hybrid cloud takes advantage of both public and private clouds.

4).Community Cloud:

Community clouds are distributed systems created by integrating the services of different clouds to
address the specific needs of an industry, a community, or a business sector. But sharing
responsibilities among the organizations is difficult. In the community cloud, the infrastructure is
shared between organizations that have shared concerns or tasks. An organization or a third party may
manage the cloud.


5).Multicloud

Multicloud is the use of multiple cloud computing services from different providers, which allows
organizations to use the services best suited to their specific needs and to avoid vendor lock-in. This
lets organizations take advantage of the different features and capabilities offered by different cloud
providers.

Q 14). What is Amazon Elastic Compute Cloud?


Ans
Amazon Elastic Compute Cloud (Amazon EC2) provides on-demand, scalable computing capacity
in the Amazon Web Services (AWS) Cloud. Using Amazon EC2 reduces hardware costs so you can
develop and deploy applications faster. You can use Amazon EC2 to launch as many or as few virtual
servers as you need, configure security and networking, and manage storage. You can add capacity
(scale up) to handle compute-heavy tasks, such as monthly or yearly processes, or spikes in website
traffic. When usage decreases, you can reduce capacity (scale down) again.

A basic architecture deploys an Amazon EC2 instance within an Amazon Virtual Private Cloud
(VPC), inside an Availability Zone in a Region. The EC2 instance is secured with a security group,
which is a virtual firewall that controls incoming and outgoing traffic.

A private key is stored on the local computer and a public key is stored on the instance; the two are
specified as a key pair to prove the identity of the user. In this scenario, the instance is backed by an
Amazon EBS volume, and the VPC communicates with the internet through an internet gateway.
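As a hedged illustration of the key pair and security group described above, the following Python
sketch uses the boto3 SDK. The key and group names are hypothetical, and the wide-open CIDR
ranges are for demonstration only.

    import boto3  # AWS SDK for Python; assumes credentials and a default VPC

    ec2 = boto3.client("ec2")

    # Key pair: AWS keeps the public key; the private key material is returned
    # only once and must be saved locally.
    key = ec2.create_key_pair(KeyName="demo-key")
    print(key["KeyMaterial"][:40], "...")          # store this securely

    # Security group acting as the virtual firewall described above.
    sg = ec2.create_security_group(
        GroupName="demo-web-sg",
        Description="Allow inbound SSH and HTTP",
    )
    ec2.authorize_security_group_ingress(
        GroupId=sg["GroupId"],
        IpPermissions=[
            {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # SSH (demo only)
            {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
             "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},   # HTTP
        ],
    )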

Q 15).Write a short note on hosting Massively Multiplayer Games on Cloud?


Ans
Hosting Massively Multiplayer Online (MMO) Games on the cloud offers numerous advantages,
addressing the unique challenges and requirements of such gaming environments. Here's a short note
on hosting MMO games on the cloud:

Scalability and Elasticity:

Hosting MMO games on the cloud provides unparalleled scalability. Cloud infrastructure allows game
developers to dynamically scale resources up or down based on player demand. This ensures a
seamless gaming experience, even during peak usage periods, without the need for significant upfront
investments in hardware.

Global Reach:


Cloud hosting enables MMO games to have a global presence. Cloud providers offer data centers in
multiple regions, allowing game developers to deploy servers closer to players worldwide. This
minimizes latency and enhances the gaming experience for users in different geographical locations.

Resource Optimization:

Cloud environments allow for efficient resource utilization. Game developers can optimize server
configurations, allocate resources on-demand, and adjust capacity to match varying player loads. This
flexibility ensures that resources are used efficiently, leading to cost savings and improved
performance.

Reliability and Redundancy:

Cloud providers offer high levels of reliability and redundancy. Hosting MMO games on the cloud
involves deploying servers across multiple Availability Zones (AZs) to ensure fault tolerance. In the
event of hardware failures or other issues, players can seamlessly transition to alternative servers
without service interruption.

Content Delivery and Edge Computing:

Cloud-based Content Delivery Networks (CDNs) facilitate the efficient distribution of game assets,
reducing latency for content delivery. Edge computing capabilities in the cloud enable the processing
of game logic closer to players, minimizing latency and enhancing real-time interactions in MMO
games.

Dynamic Scaling and Auto-Scaling:

Cloud platforms support dynamic scaling and auto-scaling features, allowing MMO game servers to
adapt to fluctuating player loads. Auto-scaling mechanisms automatically adjust the number of servers
based on predefined criteria, ensuring optimal performance and cost-effectiveness.
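A minimal Python sketch of such a threshold-based scaling rule appears below; the thresholds and
simulated load figures are invented for illustration.

    def desired_servers(current, avg_cpu, scale_out_at=0.75, scale_in_at=0.30):
        """Add a game server when load is high, remove one when load is low."""
        if avg_cpu > scale_out_at:
            return current + 1
        if avg_cpu < scale_in_at and current > 1:
            return current - 1
        return current

    servers = 4
    for load in (0.82, 0.88, 0.64, 0.22):   # simulated average CPU per interval
        servers = desired_servers(servers, load)
        print("load {:.0%} -> {} servers".format(load, servers))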

Development and Deployment Efficiency:

Cloud environments provide development teams with the tools and services needed to streamline
game development and deployment processes. Continuous integration and continuous deployment
(CI/CD) practices are easily facilitated in cloud-based development pipelines.

Hosting Massively Multiplayer Online games on the cloud brings unparalleled advantages, including
scalability, global reach, resource optimization, reliability, and cost efficiency. This approach
empowers game developers to focus on creating engaging and immersive gaming experiences while
leveraging the flexible and dynamic nature of cloud infrastructure.

Q 16).Explain Following terms


a).High Availability & Disaster Recovery
b).Cloud Governance
Ans:-
a) High Availability & Disaster Recovery:

High Availability (HA):

High Availability refers to a system or component's ability to remain operational and accessible for a
significantly high percentage of time. In the context of IT infrastructure and services, high availability
is achieved through redundancy and fault-tolerant design. The goal is to minimize downtime and
ensure that services are continuously available, even in the face of hardware failures, software errors,
or other unexpected issues.

Components of achieving high availability include:

- Redundancy: Having duplicate or backup components, such as servers, networks, or data centers, to
take over if the primary components fail.


- Load Balancing: Distributing incoming network traffic across multiple servers or resources to
prevent overloading any single component.

- Failover: Automatically switching to a backup or standby system when a failure is detected.

- Monitoring and Alerts: Constantly monitoring the health and performance of systems and
triggering alerts or actions if anomalies or failures are detected.

Disaster Recovery (DR):

Disaster Recovery focuses on the processes and tools used to recover and restore IT infrastructure and
data after a disruptive event. Disruptions can include natural disasters, cyberattacks, equipment
failures, or any event that leads to significant system outages. The primary aim of disaster recovery is
to minimize data loss and downtime and restore normal operations as quickly as possible.

Components of an effective disaster recovery plan include:

- Data Backups: Regularly backing up critical data and systems to offsite locations to ensure data
integrity and availability for recovery.

- Replication: Replicating data and systems to a secondary location or cloud environment to facilitate
rapid recovery.

- Testing and Drills: Regularly testing and conducting drills to ensure that the disaster recovery plan
is effective and can be executed efficiently when needed.

-Offsite Storage: Storing backups and recovery resources in geographically separate locations to
mitigate risks associated with localized disasters.

-Documentation: Documenting and maintaining a comprehensive plan that includes procedures,
contact information, and step-by-step instructions for recovery.

b) Cloud Governance:

Cloud Governance refers to the set of policies, processes, and controls that organizations put in place
to manage and optimize their cloud resources effectively. As businesses adopt cloud computing, it
becomes crucial to establish governance practices to ensure compliance, security, cost control, and
overall efficiency in cloud operations.

Key aspects of cloud governance include:

-Compliance Management: Ensuring that cloud deployments adhere to regulatory requirements,
industry standards, and internal policies.

- Security Controls: Implementing security measures to protect data, applications, and infrastructure
in the cloud. This includes identity and access management, encryption, and network security.

-Cost Management: Optimizing cloud costs by monitoring resource usage, implementing cost
controls, and selecting the most cost-effective services and pricing models.

-Resource Lifecycle Management: Managing the entire lifecycle of cloud resources, from
provisioning and deployment to scaling, monitoring, and decommissioning.

- Policy Enforcement: Defining and enforcing policies related to resource usage, configurations, and
compliance, often through automation and policy-as-code practices.

- Cloud Provider Relationships: Managing relationships with cloud service providers, including
service level agreements (SLAs), support agreements, and vendor management.


Effective cloud governance helps organizations maintain control, mitigate risks, and derive maximum
value from their cloud investments. It aligns cloud activities with business objectives while addressing
concerns related to security, compliance, and operational efficiency.

Q 17). What is a Hypervisor? Explain the 2 Types of Hypervisors?


Ans
Virtualization requires the use of a hypervisor, which was originally called a virtual machine monitor
or VMM. A hypervisor abstracts operating systems and applications from their underlying hardware.
The physical hardware that a hypervisor runs on is typically referred to as a host machine, whereas the
VMs the hypervisor creates and supports are collectively called guest machines.

1).Type 1 Hypervisor (Bare-metal Hypervisor):

A Type 1 hypervisor runs directly on the hardware of the physical host system, without the need for
an underlying operating system. This type of hypervisor is also known as a bare-metal hypervisor
because it operates directly on the "bare metal" hardware. Type 1 hypervisors are generally more
efficient and provide better performance compared to Type 2 hypervisors.

Key Characteristics:

1. Installed directly on the hardware.


2. Manages and allocates resources to virtual machines.
3. Typically used in enterprise environments, data centers, and cloud infrastructure.
4. Examples include VMware ESXi, Microsoft Hyper-V Server, and Xen.

2).Type 2 Hypervisor (Hosted Hypervisor):

A Type 2 hypervisor runs on top of an existing operating system (host OS) on the physical hardware.
It relies on the host operating system to manage hardware resources and provides a virtualization layer
for running guest operating systems. Type 2 hypervisors are often used for development, testing, and
similar desktop scenarios.

Key Characteristics:

1. Installed as a software application on top of an existing operating system.


2. Relies on the host operating system for resource management.
3. Commonly used on desktops or laptops for testing and development.


4. Examples include Oracle VirtualBox, VMware Workstation, and Microsoft Hyper-V (when
installed on a Windows OS).

Q 18).Write a Note On Fabric Services Of ANEKA in Detail?


Ans
Fabric Services of Aneka: The Foundation of Scalable Cloud Computing

Aneka, a platform for developing and deploying cloud applications, relies heavily on its Fabric
Services. These fundamental services form the lowest level of the software stack, acting as the
backbone of Aneka's infrastructure management capabilities. Understanding Fabric Services is crucial
for anyone looking to leverage the power of Aneka for cloud-based projects.

Key functions of Fabric Services:

Hardware Profiling and Dynamic Resource Provisioning: Fabric Services directly interact with
nodes through the Platform Abstraction Layer (PAL). They perform hardware profiling, gathering
information about the available resources on each node. Based on this data, Fabric Services
dynamically provision resources to running virtual machines, ensuring optimal utilization and
preventing resource bottlenecks.

Monitoring and Heartbeat Services: Fabric Services constantly monitor the health and performance
of nodes and virtual machines. This includes tracking resource utilization, CPU and memory usage,
and network activity. If any anomalies are detected, Fabric Services can trigger alerts and initiate
corrective actions like resource rebalancing or VM migration.
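A minimal Python sketch of the heartbeat idea follows. The node names, timestamps, and timeout are
invented for illustration, and real fabric services would do considerably more.

    import time

    HEARTBEAT_TIMEOUT = 10.0   # seconds of silence before a node is considered failed

    last_beat = {"node-1": time.time(),          # healthy node
                 "node-2": time.time() - 30.0}   # stale: no beat for 30 seconds

    def record_heartbeat(node):
        last_beat[node] = time.time()            # each node calls this periodically

    def failed_nodes():
        """Nodes whose last heartbeat is older than the timeout."""
        now = time.time()
        return [n for n, t in last_beat.items() if now - t > HEARTBEAT_TIMEOUT]

    print(failed_nodes())                        # -> ['node-2']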

Job Management and Scheduling: Fabric Services play a crucial role in managing and scheduling
jobs in Aneka. They handle tasks like job submission, queuing, execution on available nodes, and
monitoring completion. This ensures efficient utilization of resources and prevents job starvation.

Container Management: Fabric Services serve as a containerization layer for Aneka. They manage
the creation, deployment, and lifecycle of containers within the Aneka environment. This allows for
flexible and efficient application deployment and scaling.

Communication and Coordination: Fabric Services facilitate communication and coordination
between different components of the Aneka platform. This includes communication between nodes,
virtual machines, and the Aneka management system.

Benefits of Fabric Services:

Scalability and Elasticity: Fabric Services enable Aneka to dynamically scale resources up or down
based on demand. This allows for flexible cloud applications that can handle fluctuating workloads
efficiently.

Resource Utilization: Fabric Services optimize resource utilization by dynamically provisioning
resources and monitoring their usage. This minimizes wasted resources and reduces costs.

High Availability and Fault Tolerance: The monitoring and heartbeat services of Fabric Services
help ensure high availability and fault tolerance. If a node or VM fails, Fabric Services can
automatically migrate the workload to another node, minimizing downtime.

Flexibility and Manageability: Fabric Services provide a platform for building and managing
complex cloud applications with ease. The containerization and communication features simplify
application deployment and monitoring.


Fabric Services are the cornerstone of Aneka's powerful and scalable cloud infrastructure. By
efficiently managing resources, monitoring system health, and facilitating communication, Fabric
Services enable Aneka to deliver a robust and flexible platform for developing and deploying cloud
applications.

Q 19). What is the Difference Between Private Cloud and Public Cloud?


Ans :-
Public Cloud vs Private Cloud:

1. Sharing: Public cloud infrastructure is shared with the public by service providers over the
internet and supports multiple customers (enterprises). Private cloud infrastructure is provided to a
single organization and supports one enterprise.

2. Tenancy: Public clouds are multi-tenant; data of many enterprises is stored in a shared
environment but kept isolated, and data is shared only as per rules, permissions, and security. Private
clouds are single-tenant; only the data of a single enterprise is stored.

3. Services and hardware: A public cloud provider offers all possible services and hardware, since
its user base is the whole world and different people and organizations need different services, so the
offerings must be versatile. A private cloud offers the specific services and hardware that the
enterprise needs.

4. Hosting: A public cloud is hosted at the service provider's site. A private cloud is hosted at the
service provider's site or at the enterprise.

5. Connectivity: A public cloud is connected to the public internet. A private cloud only supports
connectivity over the private network.

6. Scalability and reliability: In a public cloud, scalability is very high and reliability is moderate.
In a private cloud, scalability is limited and reliability is very high.

7. Management: The cloud service provider manages a public cloud and customers use it. A private
cloud is managed and used by a single enterprise.

8. Cost: A public cloud is cheaper than a private cloud; a private cloud is costlier than a public cloud.

9. Examples: Public - Amazon Web Services (AWS), Google App Engine, etc. Private - Microsoft,
KVM, HP, Red Hat & VMware, etc.

Q 20). Explain Fault Tolerance System in detail?


Ans :-
Fault tolerance in cloud computing means creating a blueprint for ongoing work whenever some parts
are down or unavailable. It helps enterprises evaluate their infrastructure needs and requirements and
provides services in case the respective device becomes unavailable for some reason.

This does not mean that the alternative system can provide 100% of the full service; the concept is to
keep the system usable and, most importantly, operating at a reasonable level. It is important for
enterprises that want to keep growing continuously and increase their productivity levels.


Replication:

Fault-tolerant systems work on running multiple replicas for each service. Thus, if one part of the
system goes wrong, other instances can be used to keep it running instead. For example, take a
database cluster that has 3 servers with the same information on each. All the actions like data entry,
update, and deletion are written on each. Redundant servers will remain idle until a fault tolerance
system demands their availability.

Redundancy:

When a part of the system fails or goes down, it is important to have a backup system of some type.
The server works with emergency databases that include many redundant services. For example, a
website program with MS SQL as its database may fail midway due to a hardware fault; the
redundancy concept then brings a standby database online while the original is offline. (A minimal
failover sketch follows.)
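Here is that minimal Python sketch of replica failover. The replica names and the fetch() function are
hypothetical stand-ins for real database connections.

    REPLICAS = ["db-primary", "db-replica-1", "db-replica-2"]

    def fetch(replica, query):
        if replica == "db-primary":              # simulate the primary being down
            raise ConnectionError(replica + " unreachable")
        return "result of " + query + " from " + replica

    def query_with_failover(query):
        """Try each replica in turn; fail only if every replica is down."""
        for replica in REPLICAS:
            try:
                return fetch(replica, query)
            except ConnectionError as err:
                print("failover:", err)
        raise RuntimeError("all replicas unavailable")

    print(query_with_failover("SELECT 1"))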

Techniques for Fault Tolerance in Cloud Computing

 Priority should be given to all services while designing a fault tolerance system. Special preference
should be given to the database, as it powers many other entities.
 After setting the priorities, the enterprise has to run mock tests. For example, suppose the enterprise
has a forums website that lets users log in and post comments; if the authentication service fails due
to a problem, users will not be able to log in.

Existence of Fault Tolerance in Cloud Computing

 System Failure: This can be either a software or a hardware issue. A software failure results in a
system crash or hang, which may be due to a stack overflow or other reasons. Improper maintenance
of physical hardware machines results in hardware system failure.
 Incidents of Security Breach: There are many reasons why fault tolerance may be needed due to
security failures. Hacking of a server harms it and can result in a data breach. Other security-related
reasons for requiring fault tolerance include ransomware, phishing, virus attacks, etc.

Q 21). Difference Between Grid and Cluster Computing?


Ans
Cluster Computing:

A computer cluster is a local network of two or more homogeneous computers. A computation
process on such a computer network, i.e. a cluster, is called cluster computing.

Grid Computing:

Grid computing can be defined as a network of homogeneous or heterogeneous computers working
together over a long distance to perform a task that would be difficult for a single machine.


Cluster Computing vs Grid Computing:

1. Nodes: In a cluster, nodes must be homogeneous, i.e. they should have the same type of hardware
and operating system. In a grid, nodes may have different operating systems and hardware; machines
can be homogeneous or heterogeneous.

2. Dedication: Computers in a cluster are dedicated to the same work and perform no other task.
Computers in a grid contribute their unused processing resources to the grid computing network.

3. Location: Cluster computers are located close to each other. Grid computers may be located at a
huge distance from one another.

4. Network: Cluster computers are connected by a high-speed local area network bus. Grid
computers are connected using a low-speed bus or the internet.

5. Topology: Clusters use a centralized network topology. Grids use a distributed or decentralized
network topology.

6. Scheduling: In a cluster, scheduling is controlled by a central server. A grid may have servers, but
mostly each node behaves independently.

7. Resource management: A cluster has a centralized resource manager, and the whole system
functions as a single system. In a grid, every node manages its resources independently; every node
is autonomous, and anyone can opt out at any time.

8. Applications: Cluster computing is used in areas such as WebLogic application servers,
databases, etc. Grid computing is used in areas such as predictive modeling, automation,
simulations, etc.

Q 22). What is VM? Explain Difference between TYPE 1 Hypervisor and TYPE 2 Hypervisor.
Ans
Virtual Machine:
A VM, or Virtual Machine, is a software program that emulates a physical computer system.
Imagine it as a computer within a computer, running its own operating system and applications
independently of the underlying hardware. This allows multiple VMs to share the resources of a
single physical machine, increasing utilization and flexibility. The difference between Type 1
(bare-metal) and Type 2 (hosted) hypervisors is explained in Q 17 above.


Q 23). What is GrepTheWeb? Explain in detail.
Ans:-

GrepTheWeb allows developers to do some pretty specialized searches like selecting documents that
have a particular HTML tag or META tag.

The output of the Million Search Results Service, which is a sorted list of links gzipped (compressed
using the Unix gzip utility) into a single file, is given to GrepTheWeb as input. It takes a regular
expression as its second input.

It then returns a filtered subset of document links sorted and gzipped into a single file. Since the
overall process is asynchronous, developers can get the status of their jobs by calling GetStatus() to
see whether the execution is completed.
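As a hedged illustration of this asynchronous pattern, the following Python sketch polls a job's status
in a loop; get_status() here is a simulated stand-in for GrepTheWeb's GetStatus() call.

    import time

    _ticks = {"n": 0}

    def get_status(job_id):
        _ticks["n"] += 1                 # simulate a job that finishes after 3 polls
        return "COMPLETED" if _ticks["n"] >= 3 else "RUNNING"

    def wait_for_job(job_id, poll_seconds=1.0):
        while True:
            status = get_status(job_id)
            print("job", job_id, ":", status)
            if status == "COMPLETED":
                return
            time.sleep(poll_seconds)     # back off between status checks

    wait_for_job("grep-job-42")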

[Figure: GrepTheWeb Architecture]


[Figure: GrepTheWeb Architecture - Level 2]

- Amazon S3: for retrieving input datasets and storing the output dataset.
- Amazon SQS: for buffering requests, acting as the "glue" between controllers.
- Amazon SimpleDB: for storing intermediate status, logs, and user data about tasks.
- Amazon EC2: for running a large distributed Hadoop processing cluster on demand.
- Hadoop: for distributed processing, automatic parallelization, and job scheduling.

Phases of GrepTheWeb Architecture

Launch phase: responsible for validating and initiating the processing of a GrepTheWeb request,
instantiating Amazon EC2 instances, launching the Hadoop cluster on them, and starting all job
processes.

Monitor phase: responsible for monitoring the EC2 cluster, maps, and reduces, and checking for
success and failure.

Shutdown phase: responsible for billing and shutting down all Hadoop processes and Amazon EC2
instances.

Cleanup phase: deletes Amazon SimpleDB transient data.
