Virtualization in Cloud Computing
Virtualization is the "creation of a virtual (rather than actual) version of something, such as a server, a
desktop, a storage device, an operating system or network resources".
In other words, virtualization is a technique that allows a single physical instance of a resource or an
application to be shared among multiple customers and organizations. It does this by assigning a logical
name to a physical resource and providing a pointer to that physical resource when it is demanded.
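To make the idea concrete, here is a minimal Python sketch of a virtualization layer that assigns logical names to physical resources and resolves them on demand. All class names and resource strings are invented for illustration, not a real virtualization API.

```python
# Minimal sketch: a virtualization layer maps logical resource names
# to physical resources and hands back a reference ("pointer") on demand.
# All names here are invented for illustration, not a real API.

class VirtualizationLayer:
    def __init__(self):
        self._mapping = {}  # logical name -> physical resource

    def register(self, logical_name, physical_resource):
        # Assign a logical name to a physical resource.
        self._mapping[logical_name] = physical_resource

    def resolve(self, logical_name):
        # Provide a pointer to the physical resource when demanded.
        return self._mapping[logical_name]

layer = VirtualizationLayer()
layer.register("vdisk1", "/dev/sdb on host-A")
layer.register("vdisk2", "/dev/sdc on host-B")

# Different customers see only logical names, while the physical
# infrastructure behind them is shared.
print(layer.resolve("vdisk1"))  # -> /dev/sdb on host-A
```

The indirection through the logical name is what lets the layer remap or share the underlying physical resource without the consumer noticing.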
What is the concept behind Virtualization?
Creation of a virtual machine over existing operating system and hardware is known as Hardware
Virtualization. A Virtual machine provides an environment that is logically separated from the underlying
hardware.
The machine on which the virtual machine is created is known as the Host Machine, and that virtual
machine is referred to as the Guest Machine.
Types of Virtualization:
Hardware Virtualization.
Operating system Virtualization.
Server Virtualization.
Storage Virtualization.
1) Hardware Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the
hardware system, it is known as hardware virtualization.
The main job of the hypervisor is to control and monitor the processor, memory and other hardware
resources.
After virtualizing the hardware system, we can install different operating systems on it and run different
applications on those operating systems.
Usage:
Hardware virtualization is mainly done for the server platforms, because controlling virtual machines is
much easier than controlling a physical server.
2) Operating System Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed on the host operating
system instead of directly on the hardware system, it is known as operating system virtualization.
Usage:
Operating system virtualization is mainly used for testing applications on different OS platforms.
3) Server Virtualization:
When the virtual machine software or virtual machine manager (VMM) is installed directly on the server
system, it is known as server virtualization.
Usage:
Server virtualization is done because a single physical server can be divided into multiple servers on
demand and for load balancing.
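The idea of carving one physical server into multiple virtual servers on demand can be sketched as follows. This is a toy model with made-up names and numbers, not a real hypervisor API.

```python
# Toy sketch: one physical server's CPU capacity is divided among
# multiple virtual servers on demand. Numbers and names are made up.

class PhysicalServer:
    def __init__(self, total_cpus):
        self.total_cpus = total_cpus
        self.vms = {}  # vm name -> CPUs assigned to it

    def create_vm(self, name, cpus):
        # Refuse the request if it would exceed the physical capacity.
        if sum(self.vms.values()) + cpus > self.total_cpus:
            raise ValueError("not enough capacity on this host")
        self.vms[name] = cpus

host = PhysicalServer(total_cpus=16)
host.create_vm("web-1", 4)   # carve out a virtual web server
host.create_vm("db-1", 8)    # carve out a virtual database server

# Capacity that can still be allocated on demand:
free = host.total_cpus - sum(host.vms.values())  # 4 CPUs left
```

A real virtualization platform would also schedule and migrate these shares dynamically; the sketch only shows the on-demand partitioning.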
4) Storage Virtualization:
Storage virtualization is the process of grouping the physical storage from multiple network storage
devices so that it looks like a single storage device.
Storage virtualization is also implemented by using software applications.
Usage:
Storage virtualization is mainly done for back-up and recovery purposes.
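A minimal sketch of the pooling idea described above, assuming a simple in-memory model of devices and capacities. Device names and sizes are illustrative; real products implement this in system software or dedicated hardware.

```python
# Illustrative sketch of storage virtualization: several physical
# devices are pooled and presented as one logical device. Device
# names and capacities are made up for this example.

class StoragePool:
    def __init__(self, devices):
        # devices: mapping of physical device name -> capacity in GB
        self.devices = devices

    @property
    def total_capacity(self):
        # Users see a single aggregate capacity, not the devices.
        return sum(self.devices.values())

pool = StoragePool({"nas-1": 500, "nas-2": 1000, "san-1": 2000})
print(pool.total_capacity)  # looks like one 3500 GB device
```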
How does virtualization work in cloud computing?
Virtualization plays a very important role in cloud computing technology. Normally in cloud
computing, users share the data present in the cloud, such as applications, but with the help of
virtualization users actually share the infrastructure.
The main use of virtualization technology is to provide applications in their standard versions to cloud
users; when the next version of an application is released, the cloud provider has to supply the latest
version to its cloud users, and in practice this is difficult because it is expensive.
To overcome this problem, virtualization technology is used. With virtualization, all the servers and
software applications required by the cloud providers are maintained by third parties, and the cloud
providers pay them on a monthly or annual basis.
Key Cloud Security Benefits with Virtualization
By adopting virtualization in their cloud environment, organizations can realize the following security
benefits.
1. Flexibility
Organizations have the flexibility to share systems without necessarily having to share critical information
or data across the systems.
2. Data Protection
They can prevent loss or damage to critical data, in cases where the system is compromised owing to
malicious activities.
3. Security against attacks
They have the ability to reduce the risk of multiple attacks in case of an exposure by methodically
isolating applications and virtual machines.
4. Cost Effectiveness
It improves the physical security of organizations by reducing hardware requirements, thereby leading to
fewer data centers.
5. Better Access Control
A higher level of access control is offered to system and network administrators, which separates
responsibilities and improves the system's efficiency.
A key consideration that organizations must take into account is that their system must be appropriately
set up or configured to leverage virtualization for cloud security effectively. Modern organizations must
safeguard their virtual environments against the growing plethora of threats.
Some of the key considerations in protecting a virtual environment include keeping software updated,
following configuration best practices, and utilizing AV software. While some risk of threat remains even
with some defenses, it is essential for organizations to implement security tools to track changes and
maintain throughput security.
Difference between Full Virtualization and Paravirtualization
1. Full Virtualization: Full virtualization was introduced by IBM in 1966. It is the first software
solution for server virtualization and uses binary translation and direct approach techniques. In full
virtualization, the guest OS is completely isolated by the virtual machine from the virtualization layer and
hardware. Microsoft and Parallels systems are examples of full virtualization.
2. Paravirtualization: Paravirtualization is the category of CPU virtualization which uses hypercalls for
operations to handle instructions at compile time. In paravirtualization, the guest OS is not completely
isolated but is partially isolated by the virtual machine from the virtualization layer and hardware.
VMware and Xen are some examples of paravirtualization.
The differences between Full Virtualization and Paravirtualization are as follows:
1. In full virtualization, virtual machines permit the execution of instructions while running an unmodified OS in an entirely isolated way. In paravirtualization, a virtual machine does not implement full isolation of the OS but rather provides a different API, which is utilized when the OS is subjected to alteration.
2. Full virtualization is less secure; paravirtualization is more secure than full virtualization.
3. Full virtualization uses binary translation and a direct approach as techniques for operations; paravirtualization uses hypercalls at compile time for operations.
4. Full virtualization is slower than paravirtualization in operation; paravirtualization is faster as compared to full virtualization.
5. Full virtualization is more portable and compatible; paravirtualization is less portable and compatible.
6. Examples of full virtualization are Microsoft and Parallels systems; examples of paravirtualization are Microsoft Hyper-V, Citrix Xen, etc.
7. Full virtualization supports all guest operating systems without modification; in paravirtualization the guest operating system has to be modified, and only a few operating systems support it.
8. In full virtualization, the guest operating system issues hardware calls; in paravirtualization, the guest operating system communicates directly with the hypervisor using drivers.
9. Full virtualization is less streamlined compared to paravirtualization; paravirtualization is more streamlined.
10. Full virtualization provides the best isolation; paravirtualization provides less isolation compared to full virtualization.
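The contrast between hardware calls in full virtualization and hypercalls in paravirtualization can be illustrated with a toy Python sketch. The class, method, and instruction names are purely illustrative, not a real VMM interface.

```python
# Toy contrast (purely illustrative, not a real VMM):
# - Full virtualization: the unmodified guest issues a privileged
#   "hardware" instruction; the VMM traps and emulates it.
# - Paravirtualization: the modified guest calls the hypervisor
#   directly through an explicit hypercall interface.

class Hypervisor:
    def trap_and_emulate(self, instruction):
        # Full-virtualization path: intercept the guest's instruction.
        return f"emulated {instruction}"

    def hypercall(self, operation):
        # Paravirtualization path: the guest asks the hypervisor directly.
        return f"handled {operation} via hypercall"

hv = Hypervisor()

# Unmodified guest: believes it is talking to real hardware.
result_full = hv.trap_and_emulate("OUT port=0x64")

# Modified guest: knows it runs on a hypervisor.
result_para = hv.hypercall("set_page_table")
```

The first path is why full virtualization runs unmodified guests but pays an interception cost; the second is why paravirtualized guests are faster but must be modified.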
Hypervisor
A hypervisor is a form of virtualization software used in Cloud hosting to divide and allocate the
resources on various pieces of hardware. The program which provides partitioning, isolation, or
abstraction is called a virtualization hypervisor. The hypervisor is a hardware virtualization technique that
allows multiple guest operating systems (OS) to run on a single host system at the same time. A
hypervisor is sometimes also called a virtual machine manager (VMM).
Types of Hypervisor –
TYPE-1 Hypervisor:
The hypervisor runs directly on the underlying host system. It is also known as a “Native Hypervisor” or
“Bare metal hypervisor”. It does not require any base server operating system. It has direct access to
hardware resources. Examples of Type 1 hypervisors include VMware ESXi, Citrix XenServer, and
Microsoft Hyper-V hypervisor.
Pros & Cons of Type-1 Hypervisor:
Pros: Such hypervisors are very efficient because they have direct access to the physical hardware
resources (like CPU, memory, network, and physical storage). This also strengthens security, because
there is no third-party layer in between that an attacker could compromise.
Cons: One problem with Type-1 hypervisors is that they usually need a dedicated separate machine to
perform their operations, manage the different VMs, and control the host hardware resources.
TYPE-2 Hypervisor:
A host operating system runs on the underlying host system. It is also known as a "Hosted Hypervisor".
Such hypervisors do not run directly on the underlying hardware; rather, they run as an application on a
host system (physical machine). Basically, the software is installed on an operating system, and the
hypervisor asks the operating system to make hardware calls. Examples of Type 2 hypervisors include
VMware Player and Parallels Desktop. Hosted hypervisors are often found on endpoints like PCs. The
Type-2 hypervisor is very useful for engineers and security analysts (for checking malware, malicious
source code and newly developed applications).
Pros & Cons of Type-2 Hypervisor:
Pros: Such hypervisors allow quick and easy access to a guest operating system alongside the running
host machine. These hypervisors usually come with additional useful features for guest machines.
Such tools enhance the coordination between the host machine and the guest machine.
Cons: Here there is no direct access to the physical hardware resources, so these hypervisors lag behind
Type-1 hypervisors in performance. Potential security risks are also present: if an attacker gains access
to the host operating system through a security weakness, he can also access the guest operating
systems.
Choosing the right hypervisor :
Type 1 hypervisors offer much better performance than Type 2 ones because there's no middle layer,
making them the logical choice for mission-critical applications and workloads. But that's not to say that
hosted hypervisors don't have their place – they're much simpler to set up, so they're a good bet if, say,
you need to deploy a test environment quickly. One of the best ways to determine which hypervisor meets
your needs is to compare their performance metrics. These include CPU overhead, the amount of
maximum host and guest memory, and support for virtual processors. The following factors should be
examined before choosing a suitable hypervisor:
1. Understand your needs: The company and its applications are the reason for the data center (and your
job). Besides your company's needs, you (and your co-workers in IT) also have your own needs. Needs
for a virtualization hypervisor are:
a. Flexibility
b. Scalability
c. Usability
d. Availability
e. Reliability
f. Efficiency
g. Reliable support
2. The cost of a hypervisor: For many buyers, the toughest part of choosing a hypervisor is striking the
right balance between cost and functionality. While a number of entry-level solutions are free, or
practically free, the prices at the opposite end of the market can be staggering. Licensing frameworks also
vary, so it's important to be aware of exactly what you're getting for your money.
3. Virtual machine performance: Virtual systems should meet or exceed the performance of their
physical counterparts, at least in relation to the applications within each server. Everything beyond
meeting this benchmark is profit.
4. Ecosystem: It‟s tempting to overlook the role of a hypervisor‟s ecosystem – that is, the availability of
documentation, support, training, third-party developers and consultancies, and so on – in determining
whether or not a solution is cost-effective in the long term.
5. Test for yourself: You can gain basic experience from your existing desktop or laptop. You can run
both VMware vSphere and Microsoft Hyper-V in either VMware Workstation or VMware Fusion to
create a nice virtual learning and testing environment.
HYPERVISOR REFERENCE MODEL :
There are three main modules that coordinate in order to emulate the underlying hardware:
DISPATCHER:
The dispatcher behaves like the entry point of the monitor and reroutes the instructions of the virtual
machine instance to one of the other two modules.
ALLOCATOR:
The allocator is responsible for deciding the system resources to be provided to the virtual machine
instance. It means whenever a virtual machine tries to execute an instruction that results in changing the
machine resources associated with the virtual machine, the allocator is invoked by the dispatcher.
INTERPRETER:
The interpreter module consists of interpreter routines. These are executed, whenever a virtual machine
executes a privileged instruction.
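The three modules described above can be sketched as follows. This is a toy model that routes strings rather than machine instructions; the names mirror the reference model, everything else is illustrative.

```python
# Sketch of the hypervisor reference model: the dispatcher is the
# entry point and reroutes each instruction either to the allocator
# (when it changes machine resources) or to an interpreter routine
# (for privileged instructions). A toy model operating on strings.

class Allocator:
    def handle(self, instr):
        return f"allocated resources for {instr}"

class Interpreter:
    def handle(self, instr):
        return f"interpreted privileged instruction {instr}"

class Dispatcher:
    def __init__(self):
        self.allocator = Allocator()
        self.interpreter = Interpreter()

    def dispatch(self, instr, changes_resources):
        # Entry point of the monitor: reroute to one of the two modules.
        if changes_resources:
            return self.allocator.handle(instr)
        return self.interpreter.handle(instr)

d = Dispatcher()
print(d.dispatch("map_memory", changes_resources=True))
print(d.dispatch("halt", changes_resources=False))
```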
Overview of Cloud interoperability and portability
Nowadays, every organization or business driving its digital transformation is increasingly moving
towards cloud-based solutions, and suitable interoperability and portability are essential. So in this
article we will discuss cloud interoperability and portability and their major categories, along with
various scenarios where they are required, ending with the challenges faced along the way. Let's go a
little deeper into the concept to get an overview of cloud interoperability and portability.
Interoperability :
It is defined as the capacity of at least two systems or applications to exchange data and utilize it. Cloud
interoperability, in turn, is the capacity or extent to which one cloud service is connected with
another by exchanging data as per strategy to get results.
The two crucial components in Cloud interoperability are usability and connectivity, which are further
divided into multiple layers.
Behaviour
Policy
Semantic
Syntactic
Transport
Portability
It is the process of transferring data or an application from one framework to another while keeping it
executable or usable. Portability can be separated into two types: cloud data portability and cloud
application portability.
Cloud data portability –
It is the capability of moving data from one cloud service to another, and so on, without needing to
re-enter the data.
Cloud application portability –
It is the capability of moving an application from one cloud service to another or between a
client's environment and a cloud service.
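A minimal illustration of cloud data portability, assuming both services can speak a provider-neutral format such as JSON. The service names and records are made up for this example.

```python
# Minimal illustration of cloud data portability: records exported
# from one service in a provider-neutral format (JSON) are imported
# by another without the user re-entering anything. Names are made up.

import json

def export_records(store):
    # Cloud 1 exports its records in a neutral format.
    return json.dumps(store)

def import_records(exported):
    # Cloud 2 ingests the very same records unchanged.
    return json.loads(exported)

cloud1 = {"users": [{"name": "alice"}, {"name": "bob"}]}
cloud2 = import_records(export_records(cloud1))
assert cloud2 == cloud1  # no data had to be re-entered
```

The neutral interchange format is the key: as long as both ends agree on it, data moves between services without manual re-entry.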
Categories of Cloud Computing Interoperability and portability :
The Cloud portability and interoperability can be divided into –
Data Portability
Platform Interoperability
Application Portability
Management Interoperability
Platform Portability
Application Interoperability
Publication and Acquisition Interoperability
Data Portability –
Data portability, which is also termed cloud portability, refers to the transfer of data from one source
to another or from one service to another, i.e. from one application to another or from one cloud
service to another, with the aim of providing a better service to the customer without affecting its
usability. Moreover, it makes the cloud migration process easier.
Application Portability –
It enables the re-use of various application components in different cloud PaaS services. If the
components are tied to their cloud service provider, then application portability can be a difficult task
for the enterprise. But if the components are not platform-specific, porting to another platform is easy
and effortless.
Platform Portability –
There are two types of platform portability: platform source portability and machine image portability.
In the case of platform source portability, a platform such as the UNIX OS, which is mostly written in
the C language, can be ported by re-compiling it on various different hardware and re-writing the
hardware-dependent sections that are not coded in C. Machine image portability binds the application
to the platform and ports the resulting bundle, which requires a standard program representation.
Application Interoperability –
It is the interoperability between the deployed components of an application in a system.
Generally, applications that are built on sound design principles show better interoperability than
those that are not.
Platform Interoperability –
It is the interoperability between the deployed components of platforms in a system. It is an
important aspect, as application interoperability can't be achieved without platform interoperability.
Management Interoperability –
Here, cloud services like SaaS, PaaS or IaaS and applications related to self-service are assessed. This
will become predominant, as cloud services allow enterprises to work in-house and remove
dependency on third parties.
Publication and Acquisition Interoperability –
Generally, it is the interoperability between various platforms like PaaS services and the online
marketplace.
Major Scenarios where interoperability and portability are required :
The Cloud Standards Customer Council (CSCC) has identified some of the basic scenarios where
portability and interoperability are required.
Switching between cloud service providers –
The customer wants to transfer data or applications from Cloud 1 to Cloud 2.
Using multiple cloud service providers-
The client may subscribe to the same or different services e.g. Cloud 1 and 2.
Directly linked cloud services-
The customer can use the service by linking to Cloud 1 and Cloud 3.
Hybrid Cloud configuration-
Here the customer connects a legacy system not to a public but to a private cloud, i.e. Cloud 1, which
is then connected to public cloud services, i.e. Cloud 3.
Cloud Migration-
Clients migrate one or more in-house applications to Cloud 1.
Challenges faced in Cloud Portability and Interoperability :
If we move an application to another cloud then, naturally, its data is also moved, and for some
businesses data is very crucial. Unfortunately, most cloud service providers charge a small amount
of money to get the data into the cloud.
The degree of mobility of data can also act as an obstacle. When moving data from one cloud to
another, the capability of moving the workload from one host to another should also be assessed.
Interoperability should not be left out, otherwise data migration can be highly affected. So the
functioning of all components and applications should be ensured.
As data is highly important in business, the safety of the customer's data should be ensured.
Cloud interoperability removes the complex parts by providing common interfaces. Moving from one
framework to another becomes conceivable with a container service, which improves scalability.
Despite a few hurdles, adaptability to changes in service providers and better assistance for cloud
clients will enhance the improvement of cloud interoperability.
Cloud Management in Cloud Computing
Cloud computing management is maintaining and controlling the cloud services and resources, be it
public, private or hybrid. Some of its aspects include load balancing, performance, storage, backups,
capacity, deployment etc. To do so, cloud management personnel need full access to all the functionality
of resources in the cloud. Different software products and technologies are combined to provide a
cohesive cloud management strategy and process.
As we know, a private cloud infrastructure is operated only for a single organization, so it can be
managed by the organization or by a third party. Public cloud services are delivered over a network that is
open and available for public use; in this model, the IT infrastructure is owned by a private company and
members of the public can purchase or lease data storage or computing capacity as needed. Hybrid cloud
environments are a combination of public and private cloud services from different providers. Most
organizations store data on private cloud servers for privacy concerns, while leveraging public cloud
applications at a lower price point for less sensitive information.
Need for Cloud Management :
Cloud is nowadays preferred by huge organizations as their primary data storage. A small downtime or an
error can cause a great deal of loss and inconvenience for the organizations. Specific members are
therefore made responsible for designing, handling and maintaining the cloud computing service, making
sure things work as they are supposed to and that all arising issues are addressed.
Cloud Management Platform :
A cloud management platform is a software solution that has a robust and extensive set of APIs that allow
it to pull data from every corner of the IT infrastructure. A CMP allows an IT organization to establish a
structured approach to security and IT governance that can be implemented across the organization's
entire cloud environment.
Cloud Management Tasks :
The different cloud management tasks are described below.
Auditing System Backups –
It is required to audit the backups from time to time to ensure restoration of randomly selected files of
different users. This might be done by the organization or by the cloud provider.
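A backup audit of this kind can be sketched as follows. The file store and the backup are simulated in memory for the example; a real audit would restore files from the actual backup system.

```python
# Sketch of a backup audit: restore a few randomly selected files
# and compare them with the originals. The file store and backup
# are simulated in memory for this example.

import random

originals = {"a.txt": b"alpha", "b.txt": b"beta", "c.txt": b"gamma"}
backup = dict(originals)  # stands in for the backup system

def audit_backups(originals, backup, sample_size=2, seed=0):
    rng = random.Random(seed)
    sample = rng.sample(sorted(originals), sample_size)
    # The audit passes only if every sampled file restores identically.
    return all(backup.get(name) == originals[name] for name in sample)

print(audit_backups(originals, backup))  # True when restores match
```

Random sampling keeps the audit cheap while still catching systematic backup failures over repeated runs.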
Flow of data in the system –
The managers are responsible for designing a data flow diagram that shows how the data is supposed to
flow throughout the organization.
Vendor Lock-In –
The managers should know how to move their data from one server to another in case the organization
decides to switch providers.
Knowing provider’s security procedures –
The managers should know the security plans of the provider, especially Multitenant use, E-commerce
processing, Employee screening and Encryption policy.
Monitoring the Capacity, Planning and Scaling abilities –
The manager should know whether their current cloud provider is going to meet their organization's
demand in the future, and also its scaling capabilities.
Monitoring audit log –
In order to identify errors in the system, logs are audited by the managers on a regular basis.
Solution Testing and Validation –
It is necessary to test the cloud services and verify the results to ensure error-free solutions.
Cloud Analytics
Cloud analytics is a service model in which elements of the data analytics process are provided through a
public or private cloud. Cloud analytics applications and services are typically offered under a
subscription-based or utility (pay-per-use) pricing model.
Gartner defines the six key elements of analytics as data sources, data models, processing applications,
computing power, analytic models and sharing or storage of results. In its view, any analytics initiative
“in which one or more of these elements is implemented in the cloud” qualifies as cloud analytics.
Gartner analyst Bill Gassman noted that vendors offering cloud-based technologies designed to support a
single element refer to themselves as cloud analytics companies, which can cause confusion for potential
users.
Examples of cloud analytics products and services include hosted data warehouses, software-as-a-service
business intelligence (SaaS BI) and cloud-based social media analytics.
SaaS BI (also known as on-demand BI or cloud BI) involves delivery of business intelligence (BI)
applications to end users from a hosted location. This model is scalable and makes start-up easier and less
expensive, but the product may not offer the same features as an in-house application.
Cloud-based social media analytics involves the remote provisioning of tools that include applications for
selecting the social media sites that best serve your purposes, separate applications for harvesting data,
storage services and data analytics software.
A hosted data warehouse is a centralized repository for enterprise data that is made available to users
from a remote location operated by the service provider, rather than being located on the enterprise's own
systems.
According to Gassman, before investing in cloud analytics, an enterprise needs to fully grasp the extent of
what's involved. "The danger is people will go down this road and not understand the scope," Gassman
said. Investing in cloud analytics can be profitable for an organization but proper planning is essential to
ensure that all six analytics elements are covered.