Cloud Computing IMR QnA
1. On-Demand Self-Service: - Cloud resources can be provisioned and managed without requiring
human intervention from the service provider. Users can unilaterally provision computing capabilities,
such as server time and network storage, as needed.
2. Broad Network Access: - Cloud services are available over the network and can be accessed
through standard mechanisms and platforms. This means that cloud services are accessible over the
internet using a variety of devices, such as laptops, smartphones, and tablets.
3. Resource Pooling: - Cloud providers pool computing resources to serve multiple customers.
These resources are dynamically assigned and reassigned based on demand. Customers typically have
no control or knowledge over the exact location of the resources, but they may specify certain
parameters like location or type of service.
4. Rapid Elasticity: - Cloud resources can be rapidly and elastically provisioned to quickly scale up
or down based on demand. This ensures that users have access to the computing resources they need
when they need them and can scale down when demand decreases.
5. Measured Service: - Cloud computing resources are metered, and usage can be monitored,
controlled, and reported. This characteristic enables providers to measure and charge users for their
actual usage of resources, promoting transparency and efficiency.
6. Service Models: - Cloud computing can be categorized into three primary service models:
- Infrastructure as a Service (IaaS): Provides virtualized computing resources over the internet.
- Platform as a Service (PaaS): Offers a platform allowing customers to develop, run, and
manage applications without dealing with the complexity of infrastructure.
- Software as a Service (SaaS): Delivers software applications over the internet, eliminating the
need for users to install, maintain, and manage the software locally.
7. Deployment Models: - Cloud computing can also be classified by deployment model:
- Public Cloud: Resources are owned and operated by a third-party cloud service provider and are
made available to the general public.
- Private Cloud: Cloud infrastructure is operated solely for a single organization, and it may be
managed by the organization or a third party.
- Hybrid Cloud: Combines elements of both public and private clouds, allowing data and
applications to be shared between them.
These characteristics provide a comprehensive framework for understanding the nature and features
of cloud computing services. Organizations and users can leverage these characteristics to assess,
adopt, and manage cloud-based solutions effectively.
- In IaaS, the cloud provider delivers virtualized computing resources over the internet. These
resources can include virtual machines, storage, and networking. IaaS provides the foundational
infrastructure that allows users to deploy and run their applications. Users have control over the
operating systems, applications, and some network components, but the provider is responsible for
managing the underlying infrastructure, such as data centers, servers, and storage.
Examples of IaaS providers include Amazon Web Services (AWS) Elastic Compute Cloud (EC2),
Microsoft Azure Virtual Machines, and Google Cloud Compute Engine.
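As a rough illustration of the IaaS model, the sketch below requests a virtual machine from Amazon EC2 using the AWS SDK for Python (boto3). The AMI ID, key pair name, and region are placeholder assumptions made for this sketch, not values from this document.
```python
# Hedged sketch: launching an IaaS virtual machine with boto3.
# The AMI ID, key pair name, and region are placeholders, not real values.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

response = ec2.run_instances(
    ImageId="ami-xxxxxxxx",      # placeholder AMI ID for the desired OS image
    InstanceType="t3.micro",     # instance size (vCPU/RAM) chosen by the user
    MinCount=1,
    MaxCount=1,
    KeyName="my-key-pair",       # assumed existing SSH key pair
)
print("Launched instance:", response["Instances"][0]["InstanceId"])
```
The user remains responsible for the operating system and applications on the instance, while AWS manages the underlying data center, servers, and storage.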
- PaaS offers a more abstracted layer than IaaS, providing a platform that allows users to develop,
run, and manage applications without the complexities of managing the underlying infrastructure.
With PaaS, users focus on building and deploying applications, while the provider takes care of the
underlying infrastructure, runtime, and middleware. This allows for faster development and
deployment cycles.
Examples of PaaS offerings include Google App Engine, Microsoft Azure App Service, and Heroku.
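To illustrate the PaaS division of responsibility, the minimal web application below is the kind of code a developer would hand to a platform such as App Engine or Heroku; the platform supplies the runtime, web server, and scaling. Flask is assumed here purely as an example framework.
```python
# Minimal sketch of a PaaS-style application: the developer writes only the
# application code; the platform provides the runtime, routing, and scaling.
# Flask is assumed as the example web framework.
from flask import Flask

app = Flask(__name__)

@app.route("/")
def index():
    return "Hello from a PaaS-hosted app!"

if __name__ == "__main__":
    # Local test run only; in production the platform runs the app
    # behind its own managed web server.
    app.run(host="0.0.0.0", port=8080)
```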
- SaaS delivers software applications over the internet, eliminating the need for users to install,
maintain, and manage the software locally. Users access the software through a web browser, and the
provider handles all aspects of software maintenance, including updates, patches, and security. SaaS is
typically designed for end-users, and it covers a wide range of applications, from productivity tools to
business applications.
Examples of SaaS applications include Salesforce, Google Workspace (formerly G Suite), and
Microsoft 365.
These layers represent a spectrum of abstraction, with IaaS providing more control and customization
at the infrastructure level, PaaS abstracting away infrastructure details to focus on application
development, and SaaS delivering fully functional applications to end-users without requiring them to
manage any part of the underlying infrastructure. Organizations can choose the appropriate service
model based on their specific needs and the level of control they require over the computing
environment.
Cloud marketplaces offer several benefits for the enterprise. One of the biggest advantages of using a
cloud marketplace is committed spend. This is a predetermined annual spend with the service provider
that is often negotiated with a discount. According to Tackle, nearly half (43%) of buyers say taking
advantage of their committed spend with cloud providers is their top reason for purchasing through a
cloud marketplace, up from 20% in 2020.
Organizations can look at the services available in the provider’s marketplace and use the allocated
funds to make purchases that will best integrate with their hybrid cloud strategies.
Cloud marketplaces also provide a single source of billing and invoicing, which can be especially
helpful for those looking to centralize budgets. The cloud marketplace provider bills the customer and
is then billed by the third-party vendor for use of the product. This minimizes administration around
procurement, saving time and professional resources that can be assigned to other high-value
workloads.
1. Users/Brokers:
Users or brokers send service requests from anywhere in the world to the Data Center and Cloud for
processing.
2. SLA Resource Allocator:
The SLA Resource Allocator serves as a liaison between the Data Center/Cloud service provider and
external users/brokers. To support SLA-oriented resource management, the following mechanisms must
interact:
- Service Request Examiner and Admission Control: When a service request is first submitted, the
Service Request Examiner and Admission Control mechanism interprets the request's QoS requirements
before deciding whether to accept or reject it. This prevents resource overload, in which numerous
service requests cannot be fulfilled properly due to a lack of available resources. It also requires
up-to-date status information on resource availability (from the VM Monitor) and workload processing
(from the Service Request Monitor) to make appropriate resource allocation decisions; it then assigns
accepted requests to VMs and determines resource entitlements for the allocated VMs. (A simplified
admission-control sketch appears at the end of this answer.)
- Pricing: The Pricing mechanism determines how service requests are billed. Requests, for example,
might be charged depending on submission time (peak/off-peak), price rates (fixed/variable), or
resource availability (supply/demand). Pricing serves as a foundation for regulating the supply and
demand of computing resources inside the Data Center and aids in the proper prioritization of
resource allocations.
- Accounting: The Accounting mechanism keeps track of the actual utilization of resources by requests
so that the final cost can be computed and charged to the consumer. Furthermore, the Service Request
Examiner and Admission Control mechanism may use the stored historical usage data to optimize
resource allocation choices.
-VM Monitor: The VM Monitor mechanism monitors the availability of VMs as well as resource
entitlements.
- Dispatcher: The Dispatcher mechanism starts the execution of accepted service requests on the
allocated VMs.
- Service Request Monitor: The Service Request Monitor mechanism keeps track of the execution
progress of service requests.
3. VMs:
Multiple VMs may be started and terminated dynamically on a single physical system to satisfy
accepted service requests, enabling maximum flexibility to design multiple partitions of resources on
the same physical system to match particular service request needs. Furthermore, because each VM is
separated from one another on the same physical computer, many VMs can execute applications
based on various operating system environments on the same physical computer at the same time.
4. Physical Machines:
The Data Center is made up of several computer servers that supply resources to satisfy service
demands.
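The admission-control idea described above can be sketched as a simple check against the state reported by the VM Monitor and Service Request Monitor. The request fields and the thresholds below are illustrative assumptions for this sketch only, not part of any actual resource allocator.
```python
# Simplified, illustrative admission-control check for an SLA-oriented
# resource allocator. Field names and thresholds are assumptions.
from dataclasses import dataclass

@dataclass
class ServiceRequest:
    required_vms: int       # number of VMs the request needs
    max_response_ms: int    # QoS requirement from the SLA

@dataclass
class DataCenterState:
    free_vms: int           # reported by the VM Monitor
    avg_response_ms: int    # reported by the Service Request Monitor

def admit(request: ServiceRequest, state: DataCenterState) -> bool:
    """Accept the request only if resources and QoS headroom are available."""
    if request.required_vms > state.free_vms:
        return False                      # would overload the resources
    if state.avg_response_ms > request.max_response_ms:
        return False                      # QoS target already at risk
    return True

print(admit(ServiceRequest(2, 500), DataCenterState(free_vms=5, avg_response_ms=120)))
```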
Q 6). What is the meaning of load balancing and virtualization, and how is load balancing useful in
the cloud?
Ans:
Load Balancing:
Load balancing in cloud computing distributes traffic and workloads to ensure that no single server or
machine is under-loaded, overloaded, or idle. Load balancing optimizes various constrained
parameters such as execution time, response time, and system stability to improve overall cloud
performance. Load balancing architecture in cloud computing consists of a load balancer that sits
between servers and client devices to manage traffic.
Load balancing plays a crucial role in cloud computing environments, where resources are
dynamically provisioned and scaled based on demand.
Load balancing helps distribute incoming requests or workloads across multiple servers or virtual
machines, ensuring that each resource is efficiently utilized. This prevents overloading individual
servers while others remain underutilized.
Scalability:
Cloud environments often experience fluctuating workloads. Load balancing enables automatic
scaling by distributing incoming traffic among available resources. When demand increases, new
instances can be added, and load balancers ensure a balanced distribution of requests.
High Availability:
Load balancing contributes to high availability by redirecting traffic away from unhealthy or
overloaded servers. In the event of a server failure, traffic is automatically rerouted to healthy servers,
minimizing downtime and ensuring continuous service availability.
Performance Optimization:
By evenly distributing workloads, load balancing helps optimize performance and reduce response
times. Users experience faster and more reliable access to applications and services hosted in the
cloud.
Fault Tolerance:
Load balancers enhance fault tolerance by detecting and isolating faulty servers. They can route traffic
away from servers experiencing issues, preventing disruptions to the overall service.
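As a minimal illustration of the load-balancer-in-the-middle idea, the sketch below distributes incoming requests across a pool of servers using simple round-robin. The server names are placeholders; a real cloud load balancer would also perform health checks and remove unhealthy servers from rotation.
```python
# Minimal round-robin load balancer sketch. Server names are placeholders.
import itertools

servers = ["app-server-1", "app-server-2", "app-server-3"]
rotation = itertools.cycle(servers)

def route(request_id: int) -> str:
    """Send the next request to the next server in the rotation."""
    target = next(rotation)
    print(f"request {request_id} -> {target}")
    return target

for i in range(6):
    route(i)   # requests are spread evenly: 1, 2, 3, 1, 2, 3
```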
1). Data Loss –
Data loss is one of the issues faced in cloud computing. It is also known as data leakage. Our sensitive
data is in the hands of somebody else, and we do not have full control over our database. So, if the
security of the cloud service is breached by hackers, it may be possible that hackers will get access to
our sensitive data or personal files.
2). Insecure Interfaces and APIs –
When we talk about the cloud and its services, we are talking about the Internet, and the easiest way to
communicate with the cloud is through APIs. It is therefore important to protect the interfaces and
APIs that are used by external users. In addition, some cloud services are exposed in the public
domain, which makes them a vulnerable part of cloud computing because they may be accessed by
third parties. With the help of these services, hackers may be able to hack or harm our data.
3). Account Hijacking –
Account hijacking is the most serious security issue in cloud computing. If the account of a user or an
organization is hijacked by a hacker, the hacker gains full authority to perform unauthorized activities.
4). Vendor Lock-in –
Vendor lock-in is also an important issue in cloud computing. Many organizations face problems when
shifting from one vendor to another. For example, if an organization wants to move from AWS to
Google Cloud, it faces various problems, such as migrating all of its data; because the two cloud
services use different techniques and functions, further issues arise. The charges of AWS may also
differ from those of Google Cloud, and so on.
5). Lack of Skill –
Working with cloud services, shifting to another service provider, needing an extra feature, or knowing
how to use a feature are common problems in IT companies that do not have skilled employees.
Working with cloud computing therefore requires skilled staff.
6). Denial of Service (DoS) Attacks –
This type of attack occurs when a system receives too much traffic. DoS attacks mostly target large
organizations such as the banking sector, government sector, etc. When a DoS attack occurs, data may
be lost, and recovering it requires a great amount of money as well as time.
Creation:
You create a new virtual machine using a virtualization management tool, allocating 2 vCPUs, 4 GB
of RAM, and a 50 GB disk. You name the virtual machine "Web App VM" and choose Ubuntu 20.04
as the operating system. (A minimal creation sketch appears at the end of this answer.)
Configuration:
You configure the virtual machine by setting up the network, assigning an IP address, and installing
necessary packages such as Apache, PHP, and MySQL. You also configure the firewall and set up
SSH access for remote administration.
Installation:
You install Ubuntu 20.04 from an ISO image and install the required packages using the command-line
interface.
Deployment:
You start the virtual machine and launch the web application.
Operation:
You monitor the performance of the virtual machine and the web application, ensuring that they are
running optimally. You also monitor resource usage, network traffic, and other performance metrics.
Maintenance:
You apply security patches and updates to the operating system and the web application. You also
perform regular backups of the virtual machine's data to prevent data loss in the event of a failure or
disaster.
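A minimal sketch of the creation step in this lifecycle, assuming the libvirt Python bindings and a local KVM/QEMU hypervisor are available. The domain XML is deliberately stripped down for illustration; a usable VM would also need disk and network device definitions.
```python
# Hedged sketch: defining and starting a VM via the libvirt Python bindings.
# Assumes libvirt-python is installed and a local QEMU/KVM hypervisor exists.
import libvirt

DOMAIN_XML = """
<domain type='kvm'>
  <name>WebAppVM</name>
  <memory unit='GiB'>4</memory>
  <vcpu>2</vcpu>
  <os><type arch='x86_64'>hvm</type></os>
</domain>
"""

conn = libvirt.open("qemu:///system")   # connect to the local hypervisor
dom = conn.defineXML(DOMAIN_XML)        # register the simplified VM definition
dom.create()                            # power the VM on
print(dom.name(), "active:", dom.isActive() == 1)
conn.close()
```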
Live migration is a technique for moving a virtual machine (VM) from one physical host to another
while the VM remains operational and users are unaware of the process. Xen, a popular open-source
hypervisor, uses a multi-pass algorithm for live migration, which involves several key steps:
1. Pre-copy Phase:
Dirty Page Transfer: In the first pass, the VM's memory is copied to the destination while the VM
keeps running; in subsequent passes, the hypervisor identifies pages that have been modified since the
previous pass ("dirty pages") and copies them to the destination host in batches.
Pre-copy Buffering: Some frequently accessed pages ("hot pages") can be proactively copied to the
destination host before they are modified, further reducing downtime during the final cut-over.
2. Cut-over Phase:
Guest Pause: The VM on the source host is briefly paused (typically for milliseconds) to ensure
consistency of the memory state.
Final Page Transfer: Any remaining dirty pages are transferred to the destination host.
Guest Resume: The VM on the destination host is resumed, and it continues execution as if nothing
happened.
The specific algorithm used by Xen for live migration involves several optimizations:
Multi-pass Approach: The pre-copy phase is repeated multiple times, progressively transferring more
memory pages until only a small amount remains for the final cut-over. This minimizes the total
downtime (a simplified simulation of this loop is sketched at the end of this answer).
Page Dirty Bit Tracking: Xen uses a hardware feature called the "dirty bit" to efficiently identify
which pages have been modified and need to be copied.
Network Bandwidth Optimization: Xen utilizes techniques like compression and delta encoding to
reduce the amount of data transferred during the migration.
Live migration offers several benefits:
Minimizes Downtime: Users and applications are unaffected during the migration process,
improving service availability.
Load Balancing: VMs can be dynamically moved to different hosts to balance resource utilization
and prevent overload.
Maintenance and Upgrades: VMs can be migrated to other hosts for maintenance or upgrades
without downtime.
Live migration with the Xen hypervisor is a powerful tool for improving the flexibility and
manageability of virtualized environments. However, it is important to consider its limitations and
ensure adequate resources for a successful implementation.
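The simulation below sketches the iterative pre-copy loop described above: each round copies the pages dirtied during the previous round, until the remaining dirty set is small enough for the brief stop-and-copy cut-over. The page counts, dirtying rate, and threshold are invented for illustration and are not Xen parameters.
```python
# Toy simulation of multi-pass pre-copy live migration. All numbers are
# illustrative assumptions, not Xen parameters.
import random

TOTAL_PAGES = 100_000
CUTOVER_THRESHOLD = 1_000     # stop iterating when the dirty set is this small
MAX_ROUNDS = 10

dirty = TOTAL_PAGES           # initially every page must be copied
for round_no in range(1, MAX_ROUNDS + 1):
    copied = dirty
    # While copying, the still-running guest re-dirties a fraction of memory.
    dirty = int(copied * random.uniform(0.05, 0.25))
    print(f"round {round_no}: copied {copied} pages, {dirty} re-dirtied")
    if dirty <= CUTOVER_THRESHOLD:
        break

# Cut-over: pause the guest, copy the final dirty pages, resume on the target.
print(f"pausing guest, transferring final {dirty} pages, resuming on destination")
```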
Customer-based SLA
A customer-based SLA is an agreement that covers all of the services used by a customer. A customer
service level agreement covers specific details of services, provisions of service availability, an
outline of responsibilities, escalation procedures, and terms for cancellation.
Service-level SLA
A service-level SLA is a contract that details an identical service offered to multiple customers. For
example, if a service provider had multiple clients using its virtual help desk, the same service-based
SLA would be issued to all clients.
Multi-level SLA
This type of agreement is split into multiple levels that integrate several conditions into the same
system. This approach is suitable for providers that have many customers using their product at
different price ranges or service levels. These differing service levels can be built into a multi-level
SLA.
There are a number of common elements that you can include in a service level agreement (SLA):
Agreement overview: An agreement overview includes the start and end dates of an SLA, details of
the parties involved, and an overview of the services included.
Description of services: A description of services outlines all services provided within an SLA. It
details information such as turnaround times, technologies and applications, maintenance schedules,
and processes and procedures.
Exclusions:- This section describes all exclusions and exemptions that are agreed upon by both
parties.
Service level objective:- A service level objective (SLO) is an agreement within an SLA about a
specific metric like response time or uptime. Both parties agree to key service performance metrics
backed by data.
Security standards:-Both the service provider and the client use security standards to demonstrate
the security measures and protocols in place. This section also commonly includes non-disclosure
agreements (NDAs) and anti-poaching agreements.
Disaster recovery process:- An SLA will often detail the process of disaster recovery and outline the
mechanisms and processes to follow in case of service failure of the vendor. This section also includes
information on the restarting process, including restart times and alerts.
Service tracking and reporting agreement:- In this section, performance metrics are agreed upon
by both parties. Most customers closely track their service performance. A reasonable baseline for this
tracking would be before and after using a new service provider.
Penalties:- This section clearly states the penalties, financial or otherwise, that either side incurs if
they fail to live up to their SLA obligations.
3. Roles and Responsibilities: - Clearly outlines the responsibilities of both the service provider and
the customer. This section defines who is responsible for specific tasks, such as maintenance, support,
reporting, and compliance with agreed-upon standards.
4. Performance Monitoring and Reporting: - Describes the methods and frequency of performance
monitoring and reporting. This includes how the provider will measure and report on key performance
indicators (KPIs) and the mechanisms for addressing any discrepancies or issues identified during
monitoring.
5. Escalation Procedures: - Details the procedures to be followed in the event of service disruptions,
outages, or failures. This section often includes a hierarchical escalation process for resolving issues
promptly and effectively.
6. Availability and Downtime: - Specifies the expected availability of the service and any planned
downtime for maintenance or upgrades. It may also define compensation or credits in case service
unavailability exceeds agreed-upon thresholds (a brief uptime calculation sketch appears at the end of
this answer).
7. Security and Compliance: - Outlines security measures, data protection protocols, and
compliance requirements relevant to the services provided. This section ensures that the service aligns
with industry standards and regulatory obligations.
8.Customer Support and Communication: - Describes the customer support channels, response
times, and communication protocols. This section ensures that the customer knows how to seek
support and how the provider will communicate with them regarding service-related matters.
9.Performance Benchmarks:- Establishes benchmarks or target levels for each service metric,
providing a basis for evaluating performance and determining compliance with the SLA.
These components collectively form a comprehensive SLA, serving as a vital document for both the
service provider and the customer. A well-structured SLA helps build trust, ensures clarity, and
provides a basis for effective communication and collaboration between the parties involved.
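For the availability and penalty clauses above, the small calculation below shows how monthly uptime and a service credit might be computed. The 99.9% objective and the credit tiers are illustrative assumptions, not terms from any specific provider's SLA.
```python
# Illustrative SLA uptime and service-credit calculation. The 99.9% objective
# and the credit tiers are assumptions for this sketch, not real contract terms.
MINUTES_PER_MONTH = 30 * 24 * 60

def uptime_percent(downtime_minutes: float) -> float:
    """Uptime as a percentage of the month, given total downtime in minutes."""
    return 100.0 * (MINUTES_PER_MONTH - downtime_minutes) / MINUTES_PER_MONTH

def service_credit(uptime: float) -> int:
    """Return a credit (% of the monthly bill) for missed availability."""
    if uptime >= 99.9:
        return 0
    if uptime >= 99.0:
        return 10
    return 25

for downtime in (10, 60, 500):   # minutes of downtime in the month
    up = uptime_percent(downtime)
    print(f"{downtime} min down -> {up:.3f}% uptime, credit {service_credit(up)}%")
```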
1. Scalability and Flexibility: Hybrid cloud environments provide the ability to scale resources
dynamically. Organizations can use the public cloud for handling variable workloads and scaling up
or down as needed, while keeping sensitive or critical workloads in the more controlled environment
of a private cloud.
2.Cost Efficiency:- By leveraging the public cloud for peak workloads or non-sensitive data,
organizations can optimize costs. The pay-as-you-go model of public clouds allows for cost savings,
while private clouds provide cost predictability and control over specific resources.
3.Resource Optimization:- Hybrid cloud allows organizations to allocate resources based on specific
workload requirements. Critical or sensitive applications can run in the private cloud, while less
critical workloads can use public cloud resources. This optimization leads to efficient resource
utilization.
4. Data Security and Compliance: Private clouds offer a higher level of control and security for
sensitive data and applications. Organizations can keep critical data on-premises or in a private cloud
to satisfy regulatory and compliance requirements, while using the public cloud for less sensitive
workloads.
5.Disaster Recovery and Business Continuity: Hybrid cloud architectures enhance disaster recovery
capabilities. Critical data and applications can be backed up or replicated in a public cloud, providing
a resilient solution for business continuity and disaster recovery planning.
6.Agility and Innovation: The hybrid cloud model allows organizations to innovate and deploy new
applications more rapidly. Development and testing can take place in the public cloud environment,
and production workloads can be deployed in the private cloud, ensuring a seamless and agile
development cycle.
7.Geographic Expansion: - For organizations with a global presence, a hybrid cloud approach
enables them to deploy resources closer to end-users or comply with data residency requirements.
This helps improve performance and ensures compliance with regional regulations.
The integration of private and public clouds offers organizations a strategic approach to IT
infrastructure that combines the benefits of both deployment models. This hybrid cloud approach
enables organizations to achieve greater flexibility, efficiency, and agility in meeting their business
objectives.
1. Public cloud
2. Private cloud
3. Hybrid cloud
4. Community cloud
5. Multicloud
1) Public Cloud:
Public clouds are managed by third parties that provide cloud services over the internet to the public;
these services are available on pay-as-you-go billing models. They offer solutions for minimizing IT
infrastructure costs and are a good option for handling peak loads on the local infrastructure.
Public clouds are the go-to option for small enterprises, which can start their businesses without large
upfront investments by relying completely on public infrastructure for their IT needs.
2) Private Cloud:
A private cloud is cloud infrastructure operated solely for a single organization. It may be managed by
the organization itself or by a third party, and it offers greater control and security over data and
resources than a public cloud.
3) Hybrid Cloud:
A hybrid cloud is a heterogeneous distributed system formed by combining facilities of the public
cloud and private cloud. For this reason, they are also called heterogeneous clouds. A major drawback
of private deployments is the inability to scale on-demand and efficiently address peak loads. Here
public clouds are needed. Hence, a hybrid cloud takes advantage of both public and private clouds.
4).Community Cloud:
Community clouds are distributed systems created by integrating the services of different clouds to
address the specific needs of an industry, a community, or a business sector. But sharing
responsibilities among the organizations is difficult. In the community cloud, the infrastructure is
shared between organizations that have shared concerns or tasks. An organization or a third party may
manage the cloud.
5) Multicloud:
Multicloud is the use of multiple cloud computing services from different providers, which allows
organizations to use the best-suited services for their specific needs and avoid vendor lock-in. This
allows organizations to take advantage of the different features and capabilities offered by different
cloud providers.
The following diagram shows a basic architecture of an Amazon EC2 instance deployed within an
Amazon Virtual Private Cloud (VPC). In this example, the EC2 instance is within an Availability
Zone in the Region. The EC2 instance is secured with a security group, which is a virtual firewall that
controls incoming and outgoing traffic.
A private key is stored on the local computer and a public key is stored on the instance. Both keys are
specified as a key pair to prove the identity of the user. In this scenario, the instance is backed by an
Amazon EBS volume. The VPC communicates with the internet using an internet gateway.
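A hedged sketch of how the key pair and security group in this description could be created with boto3. The VPC ID, resource names, and CIDR ranges are placeholders, not values from this document.
```python
# Sketch: creating the key pair and security group for an EC2 instance in a
# VPC using boto3. The VPC ID, names, and CIDR ranges are placeholders.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")

# Key pair: AWS keeps the public key; the private key material is returned
# once and must be saved locally for SSH access.
key = ec2.create_key_pair(KeyName="webapp-key")
with open("webapp-key.pem", "w") as f:
    f.write(key["KeyMaterial"])

# Security group acting as a virtual firewall for the instance.
sg = ec2.create_security_group(
    GroupName="webapp-sg",
    Description="Allow SSH and HTTP",
    VpcId="vpc-xxxxxxxx",            # placeholder VPC ID
)
ec2.authorize_security_group_ingress(
    GroupId=sg["GroupId"],
    IpPermissions=[
        {"IpProtocol": "tcp", "FromPort": 22, "ToPort": 22,
         "IpRanges": [{"CidrIp": "203.0.113.0/24"}]},   # example admin network
        {"IpProtocol": "tcp", "FromPort": 80, "ToPort": 80,
         "IpRanges": [{"CidrIp": "0.0.0.0/0"}]},        # public web traffic
    ],
)
```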
Hosting MMO games on the cloud provides unparalleled scalability. Cloud infrastructure allows game
developers to dynamically scale resources up or down based on player demand. This ensures a
seamless gaming experience, even during peak usage periods, without the need for significant upfront
investments in hardware.
Global Reach:
Cloud providers operate data centers in regions around the world. Game servers can be deployed in
regions close to players, reducing latency and making it practical to serve a global player base without
building physical infrastructure in every location.
Resource Optimization:
Cloud environments allow for efficient resource utilization. Game developers can optimize server
configurations, allocate resources on-demand, and adjust capacity to match varying player loads. This
flexibility ensures that resources are used efficiently, leading to cost savings and improved
performance.
Reliability and Redundancy:
Cloud providers offer high levels of reliability and redundancy. Hosting MMO games on the cloud
involves deploying servers across multiple Availability Zones (AZs) to ensure fault tolerance. In the
event of hardware failures or other issues, players can seamlessly transition to alternative servers
without service interruption.
Low Latency and Content Delivery:
Cloud-based Content Delivery Networks (CDNs) facilitate the efficient distribution of game assets,
reducing latency for content delivery. Edge computing capabilities in the cloud enable the processing
of game logic closer to players, minimizing latency and enhancing real-time interactions in MMO
games.
Dynamic and Auto-Scaling:
Cloud platforms support dynamic scaling and auto-scaling features, allowing MMO game servers to
adapt to fluctuating player loads. Auto-scaling mechanisms automatically adjust the number of servers
based on predefined criteria, ensuring optimal performance and cost-effectiveness.
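A toy sketch of such an auto-scaling decision: the number of game servers is derived from the current player load using an assumed per-server capacity and min/max bounds. All numbers are illustrative, not real provider settings.
```python
# Toy auto-scaling decision for MMO game servers. Capacity per server and the
# min/max bounds are assumptions for illustration only.
import math

PLAYERS_PER_SERVER = 500
MIN_SERVERS, MAX_SERVERS = 2, 50

def desired_servers(current_players: int) -> int:
    """Scale the fleet to fit the player count, within fixed bounds."""
    needed = math.ceil(current_players / PLAYERS_PER_SERVER)
    return max(MIN_SERVERS, min(MAX_SERVERS, needed))

for load in (300, 4_200, 60_000):
    print(f"{load} players -> {desired_servers(load)} servers")
```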
Development and Deployment:
Cloud environments provide development teams with the tools and services needed to streamline
game development and deployment processes. Continuous integration and continuous deployment
(CI/CD) practices are easily facilitated in cloud-based development pipelines.
Hosting Massively Multiplayer Online (MMO) games on the cloud brings unparalleled advantages, including
scalability, global reach, resource optimization, reliability, and cost efficiency. This approach
empowers game developers to focus on creating engaging and immersive gaming experiences while
leveraging the flexible and dynamic nature of cloud infrastructure.
a) High Availability and Disaster Recovery:
High Availability refers to a system or component's ability to remain operational and accessible for a
significantly high percentage of time. In the context of IT infrastructure and services, high availability
is achieved through redundancy and fault-tolerant design. The goal is to minimize downtime and
ensure that services are continuously available, even in the face of hardware failures, software errors,
or other unexpected issues.
- Redundancy: Having duplicate or backup components, such as servers, networks, or data centers, to
take over if the primary components fail.
- Monitoring and Alerts: Constantly monitoring the health and performance of systems and
triggering alerts or actions if anomalies or failures are detected (a minimal monitoring sketch follows
this list).
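A minimal sketch of the monitoring-and-alerts idea, assuming each redundant node exposes an HTTP health-check endpoint. The URLs, timeout, and alerting action (a print statement here) are placeholder assumptions.
```python
# Minimal health-check loop for a highly available pair of servers.
# URLs, timeout, and the alerting action are placeholder assumptions.
import urllib.request

NODES = ["http://10.0.0.10/health", "http://10.0.0.11/health"]  # placeholders

def check(url: str) -> bool:
    """Return True if the node answers its health check with HTTP 200."""
    try:
        with urllib.request.urlopen(url, timeout=2) as resp:
            return resp.status == 200
    except OSError:
        return False

for node in NODES:
    if not check(node):
        # In a real setup this would page an operator or trigger failover.
        print(f"ALERT: {node} failed its health check")
```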
Disaster Recovery focuses on the processes and tools used to recover and restore IT infrastructure and
data after a disruptive event. Disruptions can include natural disasters, cyberattacks, equipment
failures, or any event that leads to significant system outages. The primary aim of disaster recovery is
to minimize data loss and downtime and restore normal operations as quickly as possible.
- Data Backups: Regularly backing up critical data and systems to offsite locations to ensure data
integrity and availability for recovery.
- Replication: Replicating data and systems to a secondary location or cloud environment to facilitate
rapid recovery.
- Testing and Drills: Regularly testing and conducting drills to ensure that the disaster recovery plan
is effective and can be executed efficiently when needed.
-Offsite Storage: Storing backups and recovery resources in geographically separate locations to
mitigate risks associated with localized disasters.
b) Cloud Governance:
Cloud Governance refers to the set of policies, processes, and controls that organizations put in place
to manage and optimize their cloud resources effectively. As businesses adopt cloud computing, it
becomes crucial to establish governance practices to ensure compliance, security, cost control, and
overall efficiency in cloud operations.
- Security Controls: Implementing security measures to protect data, applications, and infrastructure
in the cloud. This includes identity and access management, encryption, and network security.
-Cost Management: Optimizing cloud costs by monitoring resource usage, implementing cost
controls, and selecting the most cost-effective services and pricing models.
-Resource Lifecycle Management: Managing the entire lifecycle of cloud resources, from
provisioning and deployment to scaling, monitoring, and decommissioning.
- Policy Enforcement: Defining and enforcing policies related to resource usage, configurations, and
compliance, often through automation and policy-as-code practices.
- Cloud Provider Relationships: Managing relationships with cloud service providers, including
service level agreements (SLAs), support agreements, and vendor management.
A Type 1 hypervisor runs directly on the hardware of the physical host system, without the need for
an underlying operating system. This type of hypervisor is also known as a bare-metal hypervisor
because it operates directly on the "bare metal" hardware. Type 1 hypervisors are generally more
efficient and provide better performance compared to Type 2 hypervisors.
Key Characteristics:
- Runs directly on the physical hardware, with no host operating system underneath.
- Offers better performance and lower overhead, so it is commonly used in data centers and production
environments.
- Examples include VMware ESXi, Microsoft Hyper-V, and Xen.
A Type 2 hypervisor runs on top of an existing operating system (host OS) on the physical hardware.
It relies on the host operating system to manage hardware resources and provides a virtualization layer
for running guest operating systems. Type 2 hypervisors are often used for development, testing, or
scenarios where convenience matters more than raw performance, since the extra host-OS layer adds
overhead.
Key Characteristics:
- Runs as an application on top of a host operating system.
- Easier to install and use, but with lower performance than a Type 1 hypervisor.
- Examples include Oracle VirtualBox and VMware Workstation.
Aneka, a platform for developing and deploying cloud applications, relies heavily on its Fabric
Services. These fundamental services form the lowest level of the software stack, acting as the
backbone of Aneka's infrastructure management capabilities. Understanding Fabric Services is crucial
for anyone looking to leverage the power of Aneka for their cloud-based projects.
Hardware Profiling and Dynamic Resource Provisioning: Fabric Services directly interact with
nodes through the Platform Abstraction Layer (PAL). They perform hardware profiling, gathering
information about the available resources on each node. Based on this data, Fabric Services
dynamically provision resources to running virtual machines, ensuring optimal utilization and
preventing resource bottlenecks.
Monitoring and Heartbeat Services: Fabric Services constantly monitor the health and performance
of nodes and virtual machines. This includes tracking resource utilization, CPU and memory usage,
and network activity. If any anomalies are detected, Fabric Services can trigger alerts and initiate
corrective actions like resource rebalancing or VM migration.
Job Management and Scheduling: Fabric Services play a crucial role in managing and scheduling
jobs in Aneka. They handle tasks like job submission, queuing, execution on available nodes, and
monitoring completion. This ensures efficient utilization of resources and prevents job starvation.
Container Management: Fabric Services serve as a containerization layer for Aneka. They manage
the creation, deployment, and lifecycle of containers within the Aneka environment. This allows for
flexible and efficient application deployment and scaling.
Scalability and Elasticity: Fabric Services enable Aneka to dynamically scale resources up or down
based on demand. This allows for flexible cloud applications that can handle fluctuating workloads
efficiently.
High Availability and Fault Tolerance: The monitoring and heartbeat services of Fabric Services
help ensure high availability and fault tolerance. If a node or VM fails, Fabric Services can
automatically migrate the workload to another node, minimizing downtime.
Flexibility and Manageability: Fabric Services provide a platform for building and managing
complex cloud applications with ease. The containerization and communication features simplify
application deployment and monitoring.
Example: Amazon Web Services (AWS), Google App Engine, etc.
Example: Microsoft, KVM, HP, Red Hat, VMware, etc.
Fault Tolerance:
Fault tolerance is the ability of a system to continue operating, possibly at a reduced level, when some
of its components fail. It does not mean that the alternative system can provide 100% of the entire
service; the concept is to keep the system usable and, most importantly, operating at a reasonable
level. This is important for enterprises that need to keep growing continuously and maintain their
productivity levels.
Fault-tolerant systems work by running multiple replicas of each service. Thus, if one part of the
system goes wrong, other instances can be used to keep it running. For example, take a database
cluster that has three servers with the same information on each: all actions such as data entry, update,
and deletion are written to each of them. Redundant servers remain idle until the fault tolerance
system demands their availability.
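The sketch below illustrates the replica idea just described: a query is attempted against each database replica in turn, so the failure of one server does not bring the service down. The replica hosts and the query function are placeholders for whatever database client would actually be used.
```python
# Illustrative failover across database replicas: try each replica in order
# and fall back to the next one on failure. Hosts and the query function are
# placeholders for a real client library.
REPLICAS = ["db-1.internal", "db-2.internal", "db-3.internal"]

def query_replica(host: str, sql: str) -> list:
    """Placeholder for a real database call; raises ConnectionError on failure."""
    if host != "db-3.internal":
        raise ConnectionError(f"{host} is unreachable")   # simulate two failed replicas
    return [("ok",)]

def fault_tolerant_query(sql: str) -> list:
    last_error = None
    for host in REPLICAS:
        try:
            return query_replica(host, sql)
        except ConnectionError as err:
            last_error = err               # try the next replica instead
    raise RuntimeError("all replicas failed") from last_error

print(fault_tolerant_query("SELECT 1"))    # succeeds via the surviving replica
```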
Redundancy:
When a part of the system fails or goes down, it is important to have a backup-type system in place.
The server works with standby databases and other redundant services. For example, a website using
MS SQL as its database may fail midway due to a hardware fault; the redundancy concept then lets a
standby database take over while the original is offline.
Priority should be given to all services while designing a fault tolerance system, with special
preference given to the database, as it powers many other entities. After setting the priorities, the
enterprise has to run mock tests. For example, an enterprise has a forums website that lets users log in
and post comments; if the authentication service fails due to a problem, users will not be able to log in.
System Failure: This can be either a software or a hardware issue. A software failure results in a
system crash or hang, which may be due to a stack overflow or other reasons. Improper maintenance
of physical hardware machines results in hardware system failure.
Incidents of Security Breach: Security failures are another reason why fault tolerance is required.
Hacking of the server hurts the server and results in a data breach. Other security-related reasons for
requiring fault tolerance include ransomware, phishing, and virus attacks.
Grid Computing:
Q 22). What is a VM? Explain the difference between a Type 1 hypervisor and a Type 2 hypervisor.
Ans
Virtual Machine:
A VM, or Virtual Machine, is a software program that emulates a physical computer system.
Imagine it as a computer within a computer, running its own operating system and applications
independently of the underlying hardware. This allows multiple VMs to share the resources of a
single physical machine, increasing utilization and flexibility.
GrepTheWeb allows developers to do some pretty specialized searches like selecting documents that
have a particular HTML tag or META tag.
The output of the Million Search Results Service, which is a sorted list of links compressed with the
Unix gzip utility into a single file, is given to GrepTheWeb as input. GrepTheWeb takes a regular
expression as its second input.
It then returns a filtered subset of document links sorted and gzipped into a single file. Since the
overall process is asynchronous, developers can get the status of their jobs by calling GetStatus() to
see whether the execution is completed.
GrepTheWeb Architecture
GrepTheWeb uses the following AWS services and tools:
- Amazon S3 for retrieving input datasets and for storing the output dataset.
- Amazon SQS for buffering requests, acting as a "glue" between controllers.
- Amazon SimpleDB for storing intermediate status, logs, and user data about tasks.
- Amazon EC2 for running a large distributed processing Hadoop cluster on demand.
- Hadoop for distributed processing, automatic parallelization, and job scheduling.
Launch phase: responsible for validating and initiating the processing of a GrepTheWeb request,
instantiating Amazon EC2 instances, launching the Hadoop cluster on them, and starting all job
processes.
Monitor phase: responsible for monitoring the EC2 cluster and the map and reduce tasks, and for
checking for success and failure.
Shutdown phase: responsible for billing and for shutting down all Hadoop processes and Amazon
EC2 instances.
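To make the "SQS as glue" point concrete, the sketch below shows one controller enqueuing a job message and another polling for it with boto3. The queue URL and message fields are placeholders, not details from the GrepTheWeb case study.
```python
# Sketch of Amazon SQS acting as the "glue" between controllers: one
# controller enqueues a job message, another polls and processes it.
# The queue URL and message fields are placeholders.
import json
import boto3

sqs = boto3.client("sqs", region_name="us-east-1")
QUEUE_URL = "https://sqs.us-east-1.amazonaws.com/123456789012/launch-queue"  # placeholder

# Launch controller: enqueue a request for the next phase.
sqs.send_message(
    QueueUrl=QUEUE_URL,
    MessageBody=json.dumps({"job_id": "job-42", "regex": "<title>.*</title>"}),
)

# Monitor controller: poll for work and delete the message once handled.
resp = sqs.receive_message(QueueUrl=QUEUE_URL, MaxNumberOfMessages=1, WaitTimeSeconds=5)
for msg in resp.get("Messages", []):
    job = json.loads(msg["Body"])
    print("processing", job["job_id"])
    sqs.delete_message(QueueUrl=QUEUE_URL, ReceiptHandle=msg["ReceiptHandle"])
```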