UNIT-4
VIRTUALIZATION:
“Virtualization is the creation of a virtual version of something, such as a server, a desktop, a storage
device, or network resources.”
It is a technique for separating a service from the physical delivery of that
service. Virtualization means creating a virtual version of a device or resource, such as a
server, storage device, network, or OS, where the framework divides the resource into
one or more execution environments.
Virtualization is the process of creating a virtual environment on an existing server to
run your desired programs without interfering with any of the services provided by the
server or host platform to other users. Virtualization separates resources and services
from the underlying physical delivery environment.
Virtualization provides a way of relaxing the foregoing constraints and increasing
flexibility. When a system (or subsystem),e.g., a processor, memory, or I/O device, is
virtualized, its interface and all resources visible through the interface are mapped onto
the interface and resources of a real system actually implementing it. Consequently, the
real system is transformed so that it appears to be a different, virtual system or even a
set of multiple virtual systems.
Formally, virtualization involves the construction of an isomorphism that maps a virtual
guest system to a real host. This isomorphism, illustrated in Figure 1.2, maps the guest
state to the host state (function V in Figure 1.2), and for a sequence of operations, e, that
modifies the state in the guest (the function e modifies state Si to state Sj), there is a
corresponding sequence of operations e′ in the host that performs an equivalent
modification to the host’s state (changes S′i to S′j).
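The mapping just described can be made concrete with a toy model: V maps guest state into host state, and each guest operation e has a host counterpart e′ such that the diagram commutes. All names and state shapes below are illustrative, not from any real VMM:

```python
# Toy model of the virtualization isomorphism: V maps guest state to host
# state, and every guest operation e has a host operation e' such that
# V(e(Si)) == e'(V(Si)).  Names and state layouts are purely illustrative.

def V(guest_state):
    """Map each guest register to its location in (pretend) host memory."""
    return {"host_mem[" + reg + "]": val for reg, val in guest_state.items()}

def e(guest_state):
    """A guest operation: increment register r0."""
    s = dict(guest_state)
    s["r0"] += 1
    return s

def e_prime(host_state):
    """The equivalent host operation on the mapped state."""
    s = dict(host_state)
    s["host_mem[r0]"] += 1
    return s

Si = {"r0": 41, "r1": 7}
# The isomorphism commutes: mapping then operating equals operating then mapping.
assert e_prime(V(Si)) == V(e(Si))
```

The single `assert` is the whole point: performing e in the guest and then mapping, or mapping first and performing e′ in the host, must yield the same host state.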
Although such an isomorphism can be used to characterize abstraction as well as
virtualization, we distinguish the two: Virtualization differs from abstraction in that
virtualization does not necessarily hide details; the level of detail in a virtual system is
often the same as that in the underlying real system. Consider again the example of a
hard disk. In some applications, it may be desirable to partition a single large hard disk
into a number of smaller virtual disks. The virtual disks are mapped to a real disk by
implementing each of the virtual disks as a single large file on the real disk. Virtualizing
software provides a mapping between virtual disk contents and real disk contents (the
function V in the isomorphism), using the file abstraction as an intermediate step. Each
of the virtual disks is given the appearance of having a number of logical tracks and
sectors (although fewer than in the large disk). A write to a virtual disk (the function e
in the isomorphism) is mirrored by a file write and a corresponding real disk write in
the host system (the function e′ in the isomorphism). In this example, the level of detail
provided at the virtual disk interface, i.e., sector/track addressing, remains the same as
for a real disk; no abstraction takes place. A virtual machine (VM) is implemented by
adding a layer of software to a real machine to support the desired virtual machine’s
architecture. For example, virtualizing software installed on an Apple Macintosh can
provide a Windows/IA-32 virtual machine capable of running PC application programs.
Multiple, replicated virtual machines can be implemented on a single hardware
platform to provide individuals or user groups with their own operating system
environments. Virtual machines can also employ emulation techniques to support cross
platform software compatibility. For example, a platform implementing the PowerPC
instruction set can be converted into a virtual platform running the IA-32 instruction
set. This compatibility can be provided either at the system level (e.g., to run a Windows
OS on a Macintosh) or at the program or process level (e.g., to run Excel on a Sun
Solaris/SPARC platform).
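The hard-disk example above can be sketched in a few lines: each virtual disk is backed by a file on the real disk, and the mapping V translates a virtual (track, sector) address into a file offset. The geometry, sizes, and class names here are illustrative, not taken from any real disk or product:

```python
# Minimal sketch of disk virtualization: a virtual disk is a file on the
# real disk, and (track, sector) addresses map to offsets in that file.
# The track/sector geometry below is invented for illustration.
import tempfile

SECTOR_SIZE = 512
SECTORS_PER_TRACK = 8

class VirtualDisk:
    def __init__(self, backing_file, tracks):
        self.f = backing_file      # the "single large file on the real disk"
        self.tracks = tracks

    def _offset(self, track, sector):
        # The mapping function V: virtual (track, sector) -> file offset.
        assert 0 <= track < self.tracks and 0 <= sector < SECTORS_PER_TRACK
        return (track * SECTORS_PER_TRACK + sector) * SECTOR_SIZE

    def write(self, track, sector, data):
        # The operation e: a virtual-disk write becomes a file write, which
        # the host OS in turn performs as a real disk write.
        self.f.seek(self._offset(track, sector))
        self.f.write(data.ljust(SECTOR_SIZE, b"\0"))

    def read(self, track, sector):
        self.f.seek(self._offset(track, sector))
        return self.f.read(SECTOR_SIZE).rstrip(b"\0")

with tempfile.TemporaryFile() as backing:
    vdisk = VirtualDisk(backing, tracks=4)
    vdisk.write(2, 5, b"hello")
    assert vdisk.read(2, 5) == b"hello"
```

Note that the virtual interface still speaks in tracks and sectors, exactly like a real disk: virtualization here preserves the level of detail rather than abstracting it away.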
NEED FOR VIRTUALIZATION:
Virtualization provides various benefits including saving time and energy, decreasing
costs and minimizing overall risk.
✓ Provides ability to manage resources effectively.
✓ Increases productivity, as it provides secure remote access.
✓ Provides for data loss prevention.
What makes virtualization possible?
There is software that makes virtualization possible. This software is known as a
hypervisor, also called a virtualization manager. It sits between the hardware and the
operating system and assigns the amount of access that applications and operating
systems have to the processor and other hardware resources.
1. ENHANCED PERFORMANCE-
Currently, the end-user system, i.e., the PC, is sufficiently powerful to fulfill all the basic
computation requirements of the user, with various additional capabilities that
are rarely used. Most of these systems have sufficient resources to
host a virtual machine manager and to run a virtual machine with
acceptable performance.
2. LIMITED USE OF HARDWARE AND SOFTWARE RESOURCES-
Limited use of resources leads to under-utilization of hardware and software.
Because users' PCs are sufficiently capable of fulfilling their regular
computational needs, many of these computers sit idle much of the time, even though
they could run 24/7 continuously without any interruption. The efficiency of IT
infrastructure could be increased by using these resources after hours for other
purposes. This environment can be attained with the help of virtualization.
3. SHORTAGE OF SPACE-
The constant requirement for additional capacity, whether memory, storage, or compute
power, makes data centers grow rapidly. Companies like Google, Microsoft, and Amazon
expand their infrastructure by building data centers as their needs demand. Most
enterprises, however, cannot afford to build another data center to accommodate additional
resource capacity. This has led to the spread of a technique known as server
consolidation.
4. ECO-FRIENDLY INITIATIVES-
Corporations are actively seeking methods to reduce the power
consumed by their systems. Data centers are major power consumers: maintaining
data center operations requires a continuous power supply, and a good amount of
additional energy is needed to keep the equipment cool enough to function well.
Server consolidation reduces both the power consumed and the cooling
load by cutting the number of servers, and virtualization provides a sophisticated
method of server consolidation.
5. ADMINISTRATIVE COSTS-
Furthermore, the rising demand for surplus capacity, which translates into more servers in
a data center, is responsible for a significant increase in administrative costs. Common
system administration tasks include hardware monitoring, server setup and updates,
defective hardware replacement, server resource monitoring, and backups. These are
personnel-intensive operations, and administrative costs grow with the number of
servers. By decreasing the number of servers required for a given workload,
virtualization reduces the cost of administrative staff.
6. DISASTER RECOVERY
Disaster recovery is much easier when your servers are virtualized. With up-to-date
snapshots of your virtual machines, you can quickly get back up and running, and an
organization can more easily create an affordable replication site. If a disaster strikes
the data center or server room itself, you can always move those virtual machines
elsewhere, for example to a cloud provider. That level of flexibility makes a disaster
recovery plan far easier to enact and far more likely to succeed.
7. ENHANCED SECURITY
If something you are testing or installing on your servers crashes, do not
panic, as there is no data loss. Just revert to a previous snapshot and you can move
forward as if the mistake never happened. You can also isolate these testing
environments from end users while still keeping them online. When your work is
complete, deploy it to the live environment.
Advantages & Disadvantages(pros & cons) of Virtualization
Pros /Advantages
The advantages of switching to a virtual environment are plentiful, saving you money
and time while providing much greater business continuity and ability to recover from
disaster.
Reduced spending. For companies with fewer than 1,000 employees, up to 40 percent of
an IT budget is spent on hardware. Purchasing multiple servers is often a good chunk of
this cost. Virtualizing requires fewer servers and extends the lifespan of existing
hardware. This also means reduced energy costs.
Easier backup and disaster recovery. Disasters are swift and unexpected. In seconds, leaks,
floods, power outages, cyber-attacks, theft and even snow storms can wipe out data
essential to your business. Virtualization makes recovery much swifter and accurate,
with less manpower and a fraction of the equipment – it’s all virtual.
Better business continuity. With an increasingly mobile workforce, having good business
continuity is essential. Without it, files become inaccessible, work goes undone,
processes are slowed and employees are less productive. Virtualization gives employees
access to software, files and communications anywhere they are and can enable
multiple people to access the same information for more continuity.
More efficient IT operations. Going to a virtual environment can make everyone’s job
easier – especially the IT staff. Virtualization provides an easier route for technicians to
install and maintain software, distribute updates and maintain a more secure network.
They can do this with less downtime, fewer outages, quicker recovery and instant
backup as compared to a non-virtual environment.
It automates routine tasks.
This technology allows you to automate your important routine IT tasks. Something that
is as simple as patches for your operating system will become simpler and quicker.
It makes a business energy-efficient.
Virtualization is seen as a revolutionary advancement for a number of reasons, and
one of the most impactful is energy savings. By implementing such
technology in your business, you can lessen your carbon footprint immensely,
which eventually makes a big difference overall. If your organization is
environmentally conscious, then virtualization is the way to go.
It promotes greater redundancy.
Virtualization technologies can help you improve your uptime by allowing greater
security and safety while reducing points of contact.
It greatly helps with development.
This technology is observed to be very helpful in development environments. For
example, if you are running several websites, you can make the coding process of these
sites easier by using a virtualized server.
It allows for faster deployment.
In a virtualized environment, provisioning would become quick and simple. You should
know that deploying virtual machines is simpler than deploying the older physical
versions.
Cons/Disadvantages
Although virtualization has few disadvantages, a few prominent ones are discussed
below −
1. Extra Costs
You may have to invest in virtualization software, and additional hardware
might be required to make virtualization possible. This depends on your
existing network. Many businesses have sufficient capacity to accommodate
virtualization without spending much cash. If your infrastructure is more
than five years old, however, you should plan for an initial renewal budget.
2. Software Licensing
This is becoming less of a problem as more software vendors adapt to the increased
adoption of virtualization. However, it is important to check with your vendors to
understand how they view software use in a virtualized environment.
3. Learning the New Infrastructure
Implementing and managing a virtualized environment will require IT staff with
expertise in virtualization. On the user side, a typical virtual environment will operate
similarly to the non-virtual environment. There are some applications that do not adapt
well to the virtualized environment.
4. Creates Availability Issues
Another drawback is the possibility that not all servers and applications are
virtualization friendly. This can be a problem when an investment has already been
made on several servers or if the applications used to run your business do not have an
upgraded version that allows for virtualization.
5. Creates Security Issues
Information is our modern currency. If you have it, you can make money. If you don’t
have it, you’ll be ignored. Because data is crucial to the success of a business, it is
targeted frequently. The average cost of a data security breach in 2017, according to a
report published by the Ponemon Institute, was $3.62 million. For perspective: the
chances of being struck by lightning are about 1 in a million. The chances of
experiencing a data breach while using virtualization? 1 in 4.
TYPES OF VIRTUALIZATION:
Today the term virtualization is widely applied to a number of concepts, some of which
are described below −
• Hardware/Server Virtualization
• Client & Desktop Virtualization
• Software Virtualization
• Network Virtualization
• Storage Virtualization
• Memory Virtualization
• Data Virtualization
Hardware/Server Virtualization
It is the most common type of virtualization, as it improves hardware
utilization and application uptime. The basic idea of the technology is
to consolidate many small physical servers onto one large physical server, so that
the processor can be used more effectively and efficiently. The operating system
that was running on each physical server is converted into a distinct OS instance that
runs inside a virtual machine.
The hypervisor controls the processor, memory, and other components,
allowing different operating systems to run on the same machine without requiring
changes to their source code.
Hardware virtualization is further subdivided into the following types:
• Full Virtualization – The actual hardware is completely simulated,
allowing software to run in an unmodified guest OS.
• Para Virtualization – The hardware is not simulated; instead, the guest
OS is modified to cooperate with the hypervisor and runs as a separate system.
• Partial Virtualization – Only part of the hardware is simulated, so some
software may need modification to run.
Client & Desktop Virtualization
This is similar to server virtualization, but this time on the client side, where users'
desktops are virtualized. Desktop virtualization allows a user's OS to be remotely
stored on a server in the data center. It allows the user to access their desktop virtually,
from any location and from different machines. Users who want an operating system
other than Windows Server will need a virtual desktop. The main benefits of desktop
virtualization are user mobility, portability, and easy management of software installation, updates, and patches.
Software Virtualization
Software virtualization involves the creation and operation of multiple virtual
environments on the host machine. It creates a complete virtual computer system that
lets a guest operating system run. For example, it lets you run an
Android OS on a host machine running a Microsoft Windows OS, utilizing the
same hardware as the host machine does.
Subtypes:
Operating System Virtualization – hosting multiple OSes on the native OS
Application Virtualization – hosting individual applications in a virtual environment
separate from the native OS
Service Virtualization – hosting specific processes and services related to a particular
application
Network Virtualization
Network virtualization is the ability to run multiple virtual networks, each with a separate
control plane and data plane, co-existing on top of one physical network. Each virtual network
can be managed by a different party, and those parties need not trust one another. Network
virtualization provides a facility to create and provision virtual networks, including logical
switches, routers, firewalls, load balancers, Virtual Private Networks (VPNs), and workload
security, within days rather than weeks. It also helps with creating multiple switches, VLANs,
NAT, and so on.
Subtypes:
✓ Internal network: Enables a single system to function like a network
✓ External network: Consolidation of multiple networks into a single one, or
segregation of a single network into multiple ones
Storage Virtualization
In this type of virtualization, multiple network storage resources are presented as a single
storage device for easier and more efficient management of these resources. It provides
the following advantages:
• Improved storage management in a heterogeneous IT environment
• Easy updates, better availability
• Reduced downtime
• Better storage utilization
• Automated management
Block Virtualization – Multiple storage devices are consolidated into one logical device.
File Virtualization – The storage system grants access to files that are stored across multiple hosts.
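Block virtualization can be illustrated with a toy volume layer that presents several small devices as one contiguous logical device. The device sizes and class names below are invented for illustration:

```python
# Toy block virtualization: several small "devices" (modeled as bytearrays)
# are presented as one contiguous logical device. Sizes are illustrative.

class LogicalVolume:
    def __init__(self, devices):
        self.devices = devices  # list of bytearrays, each a "physical disk"

    def _locate(self, logical_block):
        # Walk the devices in order to find which one holds this block.
        for dev in self.devices:
            if logical_block < len(dev):
                return dev, logical_block
            logical_block -= len(dev)
        raise IndexError("block out of range")

    def write(self, block, value):
        dev, off = self._locate(block)
        dev[off] = value

    def read(self, block):
        dev, off = self._locate(block)
        return dev[off]

# Two 4-block devices appear as one 8-block logical volume.
vol = LogicalVolume([bytearray(4), bytearray(4)])
vol.write(5, 0xAB)                 # logical block 5 lands on the second device
assert vol.read(5) == 0xAB
assert vol.devices[1][1] == 0xAB   # physically: device 1, offset 1
```

The caller addresses only logical blocks; which physical device actually holds the data is the virtualization layer's concern, which is what makes heterogeneous storage easier to manage.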
Memory Virtualization
It introduces a way to decouple memory from the server to provide a shared, distributed, or
networked resource. It enhances performance by providing greater memory capacity without
adding physical main memory; for example, a portion of a disk drive can serve as an extension
of the main memory.
Implementations –
Application-level integration – Applications running on connected computers directly
connect to the memory pool through an API or the file system.
Operating System Level Integration – The operating system first connects to the memory
pool, and makes that pooled memory available to applications.
Data Virtualization
Data virtualization lets you easily manipulate data without knowing its technical details,
such as how it is formatted or where it is physically located. It decreases data errors and workload.
VIRTUAL MACHINES:
A virtual machine is an emulation of a computer system. Virtual machines are based on
computer architectures and provide the functionality of a physical computer; their
implementation involves specialized software, hardware, or a combination of both.
A virtual machine executes software (either an individual process or a full system,
depending on the type of machine) in the same manner as the machine for which the
software was developed. The virtual machine is implemented as a combination of a real
machine and virtualizing software. The virtual machine may have resources different
from the real machine, either in quantity or in type. For example, a virtual machine may
have more or fewer processors than the real machine, and the processors may execute a
different instruction set than does the real machine.
The main advantages of virtual machines: multiple OS environments can exist
simultaneously on the same machine, isolated from each other; a virtual machine can
offer an instruction set architecture that differs from the real computer's; and they allow
easy maintenance, application provisioning, availability, and convenient recovery.
The process of virtualization consists of two parts:
(1) The mapping of virtual resources or state, e.g., registers, memory, or files, to real
resources in the underlying machine.
(2) The use of real machine instructions and/or system calls to carry out the actions
specified by virtual machine instructions and/or system calls, e.g., emulation of the
virtual machine ABI or ISA.
The main disadvantages:
• When multiple virtual machines run simultaneously on a host computer,
each may exhibit unstable performance, depending on the workload imposed on
the system by the other running virtual machines;
• A virtual machine is not as efficient as a real machine when accessing the hardware.
TYPES OF VIRTUAL MACHINE
Virtual machines can be divided into two categories:
1. System Virtual Machines: A system platform that supports the sharing of the host
computer's physical resources between multiple virtual machines, each running with its
own copy of the operating system. The virtualization technique is provided by a
software layer known as a hypervisor, which can run either on bare hardware or on top
of an operating system.
2. Process Virtual Machine: Designed to provide a platform-independent
programming environment that masks the information of the underlying hardware or
operating system and allows program execution to take place in the same way on any
given platform.
PROCESS VIRTUAL MACHINES
A process virtual machine is capable of supporting an individual process. The real
platform that corresponds to a virtual machine, i.e., the real machine being emulated by
the virtual machine, is referred to as the native machine. In process VMs, virtualizing
software is often referred to as the runtime, which is short for “runtime
software.” Process-level VMs provide user applications with a virtual ABI environment.
In their various implementations, process VMs can provide replication, emulation, and
optimization.
1. Multiprogramming-
The first and most common virtual machine is so ubiquitous that we don’t even
think of it as being a virtual machine. The combination of the OS call interface and
the user instruction set forms the machine that executes a user process. Most
operating systems can simultaneously support multiple user processes through
multiprogramming, where each user process is given the illusion of having a
complete machine to itself. Each process is given its own address space and is given
access to a file structure. The operating system timeshares the hardware and
manages underlying resources to make this possible. In effect, the operating system
provides a replicated process-level virtual machine for each of the concurrently
executing applications.
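The isolation this replicated process-level VM provides, namely that each process has its own private address space, can be demonstrated directly. Below, a child process modifies a "global" variable, yet the parent's copy is untouched, because the OS gave each process its own memory (the variable names are illustrative):

```python
# Each OS process gets its own address space: a child process modifying a
# "global" variable does not affect the parent's copy.
import multiprocessing as mp

counter = 100  # lives in this process's private address space

def child(q):
    global counter
    counter += 1       # modifies the CHILD's copy only
    q.put(counter)     # report the child's view back to the parent

if __name__ == "__main__":
    q = mp.Queue()
    p = mp.Process(target=child, args=(q,))
    p.start()
    child_value = q.get()
    p.join()
    assert child_value == 101   # the child saw and changed its own copy
    assert counter == 100       # the parent's copy is untouched
```

The queue is needed precisely because the two processes share nothing by default; the OS timeshares the hardware underneath while each process behaves as if it had a complete machine to itself.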
2. Emulation and dynamic binary translators-
A more challenging problem for process-level virtual machines is to support
program binaries compiled to a different instruction set than the one executed by
the host’s hardware, i.e., to emulate one instruction set on hardware designed for
another.
The most straightforward emulation method is interpretation. An interpreter
program running on the target machine fetches, decodes, and emulates the execution of
individual source instructions. This can be a relatively slow process, requiring tens
of native target instructions for each source instruction interpreted. For better
performance, binary translation is typically used. With binary translation, blocks of
source instructions are converted to target instructions that perform equivalent
functions. There can be a relatively high overhead associated with the translation
process, but once a block of instructions is translated, the translated instructions can
be cached and repeatedly executed much faster than they can be interpreted.
Because binary translation is the most important feature of this type of process
virtual machine, they are sometimes called dynamic binary translators.
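The interpretation-versus-translation trade-off can be sketched with a made-up two-instruction source ISA. Both paths compute the same result, but the translator converts a block once, caches it, and avoids re-decoding on every execution (everything here is illustrative):

```python
# Toy contrast between interpretation and binary translation for an invented
# two-instruction source ISA ("INC" adds 1, "DBL" doubles the accumulator).

def interpret(program, acc=0):
    # Interpretation: fetch, decode, and emulate every instruction each run.
    for op in program:
        if op == "INC":
            acc += 1
        elif op == "DBL":
            acc *= 2
    return acc

translation_cache = {}

def translate(program):
    # Binary translation: convert the whole block once into an equivalent
    # "target" routine (here, a Python closure), then cache the result so
    # repeated executions skip the translation overhead.
    key = tuple(program)
    if key not in translation_cache:
        table = {"INC": lambda a: a + 1, "DBL": lambda a: a * 2}
        ops = [table[op] for op in program]
        def translated(acc=0):
            for f in ops:
                acc = f(acc)
            return acc
        translation_cache[key] = translated
    return translation_cache[key]

block = ["INC", "INC", "DBL"]     # (0 + 1 + 1) * 2 == 4
assert interpret(block) == 4
assert translate(block)() == 4    # first call pays the translation cost
assert translate(block)() == 4    # later calls hit the code cache
```

Real dynamic binary translators emit host machine instructions rather than closures, but the structure is the same: a one-time per-block translation cost amortized by a code cache.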
3. High-Level Language Virtual Machines: Platform Independence-
High-level language VMs first became popular with the Pascal programming
environment (Bowles 1980). In a conventional system, Figure 1.10a, the compiler
consists of a frontend that performs lexical, syntax, and semantic analysis to
generate simple intermediate code — similar to machine code but more abstract.
Typically the intermediate code does not contain specific register assignments, for
example. Then a code generator takes the intermediate code and generates a binary
containing machine code for a specific ISA and OS. This binary file is distributed and
executed on platforms that support the given ISA/OS combination. To execute the
program on a different platform, however, it must be recompiled for that platform.
In HLL VMs, this model is changed (Figure 1.10b). The steps are similar to the
conventional ones, but the point at which program distribution takes place is at a
higher level. As shown in Figure 1.10b, a conventional compiler front end generates
abstract machine code, which is very similar to an intermediate form. In many HLL
VMs, this is a rather generic stack-based ISA. This virtual ISA is in essence the
machine code for a virtual machine. The portable virtual ISA code is distributed for
execution on different platforms. For each platform, a VM capable of executing the
virtual ISA is implemented. In its simplest form, the VM contains an interpreter that
takes each instruction, decodes it, and then performs the required state
transformations (e.g., involving memory and the stack). I/O functions are performed
via a set of standard library calls that are defined as part of the VM. In more
sophisticated, higher-performance VMs, the abstract machine code may be compiled
(binary translated) into host machine code for direct execution on the host platform.
An advantage of an HLL VM is that software is easily portable, once the VM is
implemented on a target platform.
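A minimal sketch of the kind of generic stack-based virtual ISA described above, with an invented instruction set, shows why the distributed code is portable: any platform with an implementation of `run` can execute the same program:

```python
# Minimal sketch of a stack-based virtual ISA of the kind HLL VMs use.
# The instruction names and encoding (tuples) are invented for illustration.

def run(code):
    stack = []
    for instr in code:
        op = instr[0]
        if op == "PUSH":
            stack.append(instr[1])     # push a constant
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()                 # result is left on top of the stack

# The same portable "virtual machine code" runs wherever run() exists:
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
assert run(program) == 20              # (2 + 3) * 4
```

A higher-performance implementation would binary-translate `program` into host machine code instead of interpreting it, exactly as the text describes.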
4. Same-ISA Binary Optimizers-
Same-ISA dynamic binary optimizers are implemented in a manner very similar to
emulating virtual machines, including staged optimization and software caching of
optimized code. Same-ISA dynamic binary optimizers are most effective for source
binaries that are relatively unoptimized to begin with, a situation that is fairly
common in practice. A dynamic binary optimizer can collect a profile and then use
this profile information to optimize the binary code on the fly. An example of such a
same-ISA dynamic binary optimizer is the Dynamo system, originally developed as a
research project at Hewlett-Packard.
SYSTEM VIRTUAL MACHINES:
System virtual machines provide a complete system environment in which many
processes, possibly belonging to multiple users, can coexist. These VMs were first
developed during the 1960s and early 1970s, and they were the origin of the term
virtual machine. By using system VMs, a single host hardware platform can support
multiple guest OS environments simultaneously. At the time they were first
developed, mainframe computer systems were very large and expensive, and
computers were almost always shared among a large number of users. Different
groups of users sometimes wanted different operating systems to be run on the
shared hardware, and VMs allowed them to do so. Alternatively, a multiplicity of
single-user operating systems allowed a convenient way of implementing time-
sharing among many users. Over time, as hardware became much less expensive and
much of it migrated to the desktop, interest in these classic system VMs faded.
The most important feature of today’s system VMs is that they provide a secure way
of partitioning major software systems that run concurrently on the same hardware
platform. Software running on one guest system is isolated from software running
on other guest systems. Furthermore, if security on one guest system is
compromised or if the guest OS suffers a failure, the software running on other guest
systems is not affected. The ability to support different operating systems
simultaneously, e.g., Windows and Linux (as illustrated in Figure 1.11), is another
reason for their appeal, although it is probably of secondary importance to most
users.
In system VMs, platform replication is the major feature provided by a VMM. The
central problem is that of dividing a single set of hardware resources among
multiple guest operating system environments. The VMM has access to and manages
all the hardware resources. A guest operating system and application programs
compiled for that operating system are then managed under (hidden) control of the
VMM. This is accomplished by constructing the system so that when a guest OS
performs certain operations, such as a privileged instruction that directly involves
the shared hardware resources, the operation is intercepted by the VMM, checked
for correctness, and performed by the VMM on behalf of the guest. Guest software is
unaware of the “behind the-scenes” work performed by the VMM.
Implementations of System Virtual Machines
From the user perspective, most system VMs provide more or less the same
functionality. The thing that tends to differentiate them is the way in which they are
implemented. As discussed earlier, in Section 1.2, there are a number of interfaces in
a computer system, and this leads to a number of choices for locating the system
VMM software. Summaries of two of the more important implementations follow.
Figure 1.11 illustrates the classic approach to system VM architecture (Popek and
Goldberg 1974). The VMM is first placed on bare hardware, and virtual machines fit
on top. The VMM runs in the most highly privileged mode, while all the guest
systems run with lesser privileges. Then in a completely transparent way, the VMM
can intercept and implement all the guest OS’s actions that interact with hardware
resources. In many respects, this system VM architecture is the most efficient, and it
provides service to all the guest systems in a more or less equivalent way. One
disadvantage of this type of system, at least for desktop users, is that installation
requires wiping an existing system clean and starting from scratch, first installing
the VMM and then installing guest operating systems on top. Another disadvantage
is that I/O device drivers must be available for installation in the VMM, because it is
the VMM that interacts directly with I/O devices.
Whole-System VMs: Emulation
In the conventional system VMs described earlier, all the system software (both guest
and host) and application software use the same ISA as the underlying hardware. In
some important situations, however, the host and guest systems do not have a
common ISA. For example, the Apple PowerPC-based systems and Windows PCs use
different ISAs (and different operating systems), and they are the two most popular
desktop systems today. As another example,
Sun Microsystems servers use a different OS and ISA than the Windows PCs that are
commonly attached to them as clients. Because software systems are so closely tied
to hardware systems, this may require the purchase of multiple platform types, even
when unnecessary for any other reason, which complicates software support and/or
restricts the availability of useful software packages to users.
Codesigned Virtual Machines: Hardware Optimization
In all the VM models discussed thus far, the goal has been functionality and
portability — either to support multiple (possibly different) operating systems on
the same host platform or to support different ISAs and operating systems on the
same platform. In practice, these virtual machines are implemented on hardware
already developed for some standard ISA and for which native (host) applications,
libraries, and operating systems already exist. By and large, improved performance
(i.e., going beyond native platform performance) has not been a goal — in fact
minimizing performance losses is often the performance goal. Codesigned VMs have
a different objective and take a different approach. These VMs are designed to
enable innovative ISAs and/or hardware implementations for improved
performance, power efficiency, or both. The host’s ISA may be completely new, or it
may be based on an existing ISA with some new instructions added and/or some
instructions deleted. In a codesigned VM, there are no native ISA applications. It is as if
the VM software is, in fact, part of the hardware implementation.
BINARY TRANSLATION
Depending on implementation technologies, hardware virtualization can be
classified into two categories: full virtualization and host-based virtualization. Full
virtualization does not need to modify the host OS. It relies on binary translation to
trap and to virtualize the execution of certain sensitive, nonvirtualizable
instructions. The guest OSes and their applications consist of noncritical and critical
instructions. In a host-based system, both a host OS and a guest OS are used. A
virtualization software layer is built between the host OS and guest OS.
Full Virtualization
With full virtualization, noncritical instructions run on the hardware directly while
critical instructions are discovered and replaced with traps into the VMM to be
emulated by software. Both the hypervisor and VMM approaches are considered full
virtualization. Why are only critical instructions trapped into the VMM? This is
because binary translation can incur a large performance overhead. Noncritical
instructions do not control hardware or threaten the security of the system, but
critical instructions do. Therefore, running noncritical instructions directly on the
hardware not only promotes efficiency but also ensures system security.
Binary Translation of Guest OS Requests Using a VMM
This approach was implemented by VMware and many other software companies.
As shown in Figure 3.6, VMware puts the VMM at Ring 0 and the guest OS at Ring 1.
The VMM scans the instruction stream and identifies the privileged, control- and
behavior-sensitive instructions. When these instructions are identified, they are
trapped into the VMM, which emulates the behavior of these instructions. The
method used in this emulation is called binary translation. Therefore, full
virtualization combines binary translation and direct execution. The guest OS is
completely decoupled from the underlying hardware. Consequently, the guest OS is
unaware that it is being virtualized. The performance of full virtualization may not
be ideal, because it involves binary translation, which is rather time-consuming. In
particular, the full virtualization of I/O-intensive applications is a particularly big
challenge. Binary translation employs a code cache to store translated hot
instructions to improve performance, but it increases the cost of memory usage. At
the time of this writing, the performance of full virtualization on the x86
architecture is typically 80 percent to 97 percent that of the host machine.
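The trap-and-emulate flow described above can be sketched as a toy interpreter. This is a conceptual model only, not how a real VMM works: the instruction names and the cache structure are invented for illustration.

```python
# Toy model of full virtualization: noncritical instructions run
# "directly", while critical ones trap into the VMM and are emulated.
# A code cache avoids re-translating hot critical instructions.

CRITICAL = {"HLT", "OUT", "CLI"}   # hypothetical sensitive instructions

class ToyVMM:
    def __init__(self):
        self.code_cache = {}       # translated (emulated) critical instrs
        self.log = []

    def run(self, instruction_stream):
        for instr in instruction_stream:
            if instr in CRITICAL:
                self.trap(instr)                    # trap into the VMM
            else:
                self.log.append(f"direct:{instr}")  # direct execution

    def trap(self, instr):
        if instr not in self.code_cache:   # translate once, then reuse
            self.code_cache[instr] = f"emulated:{instr}"
        self.log.append(self.code_cache[instr])

vmm = ToyVMM()
vmm.run(["ADD", "MOV", "CLI", "ADD", "CLI"])
print(vmm.log)
# ['direct:ADD', 'direct:MOV', 'emulated:CLI', 'direct:ADD', 'emulated:CLI']
```

Note how the second CLI hits the code cache: this mirrors why binary translation pays off for hot instruction sequences but costs extra memory.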
Host-Based Virtualization
An alternative VM architecture is to install a virtualization layer on top of the host
OS. This host OS is still responsible for managing the hardware. The guest OSes are
installed and run on top of the virtualization layer. Dedicated applications may run
on the VMs. Certainly, some other applications can also run with the host OS directly.
This host-based architecture has some distinct advantages, as enumerated next.
First, the user can install this VM architecture without modifying the host OS. The
virtualizing software can rely on the host OS to provide device drivers and other
low-level services. This will simplify the VM design and ease its deployment. Second,
the host-based approach appeals to many host machine configurations. Compared to
the hypervisor/VMM architecture, the performance of the host-based architecture
may also be low. When an application requests hardware access, it involves four
layers of mapping which downgrades performance significantly. When the ISA of a
guest OS is different from the ISA of the underlying hardware, binary translation
must be adopted. Although the host-based architecture has flexibility, the
performance is too low to be useful in practice.
Para-Virtualization with Compiler Support
Para-virtualization needs to modify the guest operating systems. A para-virtualized
VM provides special APIs requiring substantial OS modifications in user
applications. Performance degradation is a critical issue of a virtualized system. No
one wants to use a VM if it is much slower than using a physical machine. The
virtualization layer can be inserted at different positions in a machine software
stack. However, para-virtualization attempts to reduce the virtualization overhead,
and thus improve performance by modifying only the guest OS kernel. Figure 3.7
illustrates the concept of a para-virtualized VM architecture. The guest operating
systems are para-virtualized. They are assisted by an intelligent compiler to replace
the nonvirtualizable OS instructions by hypercalls as illustrated in Figure 3.8. The
traditional x86 processor offers four instruction execution rings: Rings 0, 1, 2, and 3.
The lower the ring number, the higher the privilege of the instructions being executed.
The OS manages the hardware and executes privileged instructions at Ring 0, while
user-level applications run at Ring 3. The best example of para-virtualization is KVM,
to be described below.
Para-Virtualization Architecture
When the x86 processor is virtualized, a virtualization layer is inserted between the
hardware and the OS. According to the x86 ring definition, the virtualization layer
should also be installed at Ring 0. Different instructions at Ring 0 may cause some
problems. In Figure 3.8, we show that paravirtualization replaces nonvirtualizable
instructions with hypercalls that communicate directly with the hypervisor or VMM.
However, when the guest OS kernel is modified for virtualization, it can no longer
run on the hardware directly.
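A minimal sketch of this replacement step, with invented instruction names standing in for real privileged x86 instructions and a trivial dispatcher standing in for the hypervisor:

```python
# Toy model of para-virtualization: privileged instructions in the
# guest kernel are rewritten (as an intelligent compiler would do)
# into hypercalls that the hypervisor services directly.

PRIVILEGED = {"WRITE_CR3", "LIDT"}   # hypothetical Ring-0 instructions

def paravirtualize(guest_code):
    """Rewrite privileged instructions into hypercalls at 'compile time'."""
    return [("HYPERCALL", op) if op in PRIVILEGED else ("NATIVE", op)
            for op in guest_code]

def hypervisor_dispatch(kind, op):
    """Run rewritten code: hypercalls go to a dedicated service routine."""
    if kind == "HYPERCALL":
        return f"hypervisor handled {op}"
    return f"ran {op} directly at Ring 1"

kernel = ["MOV", "WRITE_CR3", "ADD", "LIDT"]
results = [hypervisor_dispatch(kind, op)
           for kind, op in paravirtualize(kernel)]
print(results)
# ['ran MOV directly at Ring 1', 'hypervisor handled WRITE_CR3',
#  'ran ADD directly at Ring 1', 'hypervisor handled LIDT']
```

The key contrast with full virtualization is visible here: the rewriting happens before execution, so nothing needs to be trapped and translated at runtime.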
Although para-virtualization reduces the overhead, it incurs other problems.
First, its compatibility and portability may be in doubt, because it must support the
unmodified OS as well. Second, the cost of maintaining para-virtualized OSes is high,
because they may require deep OS kernel modifications. Finally, the performance
advantage of para-virtualization varies greatly due to workload variations.
Compared with full virtualization, para-virtualization is relatively easy and more
practical. The main problem with full virtualization is the low performance of binary
translation, which is difficult to speed up. Therefore, many
virtualization products employ the para-virtualization architecture. The popular Xen,
KVM, and VMware ESX are good examples.
KVM (Kernel-Based VM)
This is a Linux para-virtualization system—a part of the Linux version 2.6.20 kernel.
Memory management and scheduling activities are carried out by the existing Linux
kernel. The KVM does the rest, which makes it simpler than the hypervisor that
controls the entire machine. KVM is a hardware-assisted para-virtualization tool,
which improves performance and supports unmodified guest OSes such as
Windows, Linux, Solaris, and other UNIX variants.
Para-Virtualization with Compiler Support
Unlike the full virtualization architecture which intercepts and emulates privileged
and sensitive instructions at runtime, para-virtualization handles these instructions
at compile time. The guest OS kernel is modified to replace the privileged and
sensitive instructions with hypercalls to the hypervisor or VMM. Xen assumes such a
para-virtualization architecture. The guest OS running in a guest domain may run at
Ring 1 instead of at Ring 0. This implies that the guest OS may not be able to execute
some privileged and sensitive instructions. The privileged instructions are
implemented by hypercalls to the hypervisor. After replacing the instructions with
hypercalls, the modified guest OS emulates the behavior of the original guest OS. On
an UNIX system, a system call involves an interrupt or service routine. The
hypercalls apply a dedicated service routine in Xen.
VIRTUAL MACHINE MONITOR:
A Virtual Machine Monitor (VMM) is a software program that enables the creation,
management and governance of virtual machines (VM) and manages the operation
of a virtualized environment on top of a physical host machine.
VMM is also known as Virtual Machine Manager and Hypervisor. However, the
provided architectural implementation and services differ by vendor product.
VMM is the primary software behind virtualization environments and
implementations. When installed over a host machine, VMM facilitates the creation
of VMs, each with separate operating systems (OS) and applications. VMM manages
the backend operation of these VMs by allocating the necessary computing, memory,
storage and other input/output (I/O) resources.
In a virtual machine environment, the virtual machine monitor (VMM) becomes the
master control program with the highest privilege level, and the VMM manages one
or more operating systems, now referred to as "guest operating systems." Each
guest OS manages its own applications as it normally does in a non-virtual
environment, except that it has been isolated in the computer by the VMM. Each
guest OS with its applications is known as a "virtual machine" and is sometimes
called a "guest OS stack."
VMM Types
Following are the three common VMM architectures, showing the relationship
between the VMM, the guest OS and device drivers. All three methods can be
paravirtualized (changes in the guest OS are made) or fully virtualized (no changes
in the guest OS). The term "hypervisor" is used to refer to the virtual machine
monitor component nearest the hardware.
1. Host OS – The VMM is installed on top of an existing host OS, so it can run on a
computer with no modification to that OS.
2. Hypervisor – It offers the best performance, flexibility, and control in a VM
(virtual machine) environment.
3. Service OS – It combines the hypervisor's robustness with the hosted model's
flexibility of using an existing OS.
Implementing Virtual Machine Monitors
Virtual machine monitors can be implemented in two ways. First, one can run the
monitor directly on hardware in kernel mode, with the guest operating systems in
user mode. Second, one can run the monitor as an application in user mode on top of
a host operating system. The latter may be less complex to implement because the
monitor can take advantage of the abstractions provided by the host operating
systems, but it is only possible if the host operating system forwards all the events
that the monitor needs to perform its job.
Virtual machine properties
A virtual machine (VM) is implemented by adding a layer of software to a real machine
to support the desired virtual machine’s architecture. For example, virtualizing software
installed on an Apple Macintosh can provide a Windows/IA-32 virtual machine capable
of running PC application programs. In general, a virtual machine can circumvent real
machine compatibility constraints and hardware resource constraints to enable a
higher degree of software portability and flexibility. A wide variety of virtual machines
exist to provide an equally wide variety of benefits. Multiple, replicated virtual machines
can be implemented on a single hardware platform to provide individuals or user
groups with their own operating system environments.
The different system environments (possibly with different operating systems) also
provide isolation and enhanced security. A large multiprocessor server can be divided
into smaller virtual servers, while retaining the ability to balance the use of hardware
resources across the system. Virtual machines can also employ emulation techniques to
support cross platform software compatibility. For example, a platform implementing
the PowerPC instruction set can be converted into a virtual platform running the IA-32
instruction set. Consequently, software written for one platform will run on the other.
This compatibility can be provided either at the system level (e.g., to run a Windows OS
on a Macintosh) or at the program or process level (e.g., to run Excel on a Sun
Solaris/SPARC platform). In addition to emulation, virtual machines can provide
dynamic, on-the-fly optimization of program binaries. Finally, through emulation,
virtual machines can enable new, proprietary instruction sets, e.g., incorporating very
long instruction words (VLIWs), while supporting programs in an existing, standard
instruction set. The virtual machine examples just described are constructed to match
the architectures of existing real machines. However, there are also virtual machines for
which there are no corresponding real machines. It has become common for language
developers to invent a virtual machine tailored to a new high-level language. Programs
written in the high-level language are compiled to “binaries” targeted at the virtual
machine. Then any real machine on which the virtual machine is implemented can run
the compiled code. The power of this approach has been clearly demonstrated with the
Java high level language and the Java virtual machine, where a high degree of platform
independence has been achieved, thereby enabling a very flexible network computing
environment. Virtual machines have been investigated and built by operating system
developers, language designers, compiler developers, and hardware designers.
Although each application of virtual machines has its unique characteristics, there also
are underlying concepts and technologies that are common across the spectrum of
virtual machines. Because the various virtual machine architectures and underlying
technologies have been developed by different groups, it is especially important to unify
this body of knowledge and understand the base technologies that cut across the
various forms of virtual machines. The goals of this book are to describe the family of
virtual machines in a unified way, to discuss the common underlying technologies that
support them, and to demonstrate their versatility by exploring their many applications.
HLLVM(High Level Language Virtual Machine)
For process VMs, cross-platform portability is clearly a very important objective. For
example, the FX!32 system enabled portability of application software compiled for a
popular platform (IA-32 PC) to a less popular platform (Alpha). However, this approach
allows cross-platform compatibility only on a case-by-case basis and requires a great
deal of programming effort.
For example, if one wanted to run IA-32 binaries on a number of hardware platforms
currently in use, e.g., SPARC, PowerPC, and MIPS, then an FX!32-like VM would have to
be developed for each of them. Full cross-platform portability is more easily achieved by
taking a step back and designing it into an overall software framework. These high-
level language VMs (HLL VMs) are similar to the process VMs described earlier.
However, they are focused on minimizing hardware-specific and OS-specific features
because these would compromise platform independence. High-level language VMs first
became popular with the Pascal programming environment.
In a conventional system (Figure 1.10a), the compiler consists of a frontend that
performs lexical, syntax, and semantic analysis to generate simple intermediate code —
similar to machine code but more abstract. Typically the intermediate code does not
contain specific register assignments, for example. Then a code generator takes the
intermediate code and generates a binary containing machine code for a specific ISA
and OS. This binary file is distributed and executed on platforms that support the given
ISA/OS combination. To execute the program on a different platform, however, it must
be recompiled for that platform. In HLL VMs, this model is changed (Figure 1.10b). The
steps are similar to the conventional ones, but the point at which program distribution
takes place is at a higher level. As shown in Figure 1.10b, a conventional compiler
frontend generates abstract machine code, which is very similar to an intermediate
form.
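The idea of distributing a program as abstract machine code can be illustrated with a toy stack-based VM, a drastically simplified stand-in for something like Java bytecode; the opcodes are invented for the example:

```python
# Toy HLL VM: the compiler frontend emits abstract stack-machine code,
# and any host platform with this interpreter can run the same "binary"
# unchanged -- the essence of HLL VM portability.

def run_vm(bytecode):
    stack = []
    for op, *args in bytecode:
        if op == "PUSH":
            stack.append(args[0])       # push a constant
        elif op == "ADD":
            b, a = stack.pop(), stack.pop()
            stack.append(a + b)
        elif op == "MUL":
            b, a = stack.pop(), stack.pop()
            stack.append(a * b)
    return stack.pop()

# The same platform-independent "binary": computes (2 + 3) * 4.
program = [("PUSH", 2), ("PUSH", 3), ("ADD",), ("PUSH", 4), ("MUL",)]
print(run_vm(program))   # 20
```

Porting the program to a new platform requires only that `run_vm` be implemented there; the distributed bytecode never changes.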
An advantage of an HLL VM is that software is easily portable, once the VM is
implemented on a target platform. While the VM implementation would take some
effort, it is a much simpler task than developing a compiler for each platform and
recompiling every application when it is ported. It is also simpler than developing a
conventional emulating process VM for a typical real-world ISA.
The Sun Microsystems Java VM architecture (Lindholm and Yellin 1999) and the
Microsoft common language infrastructure (CLI), which is the foundation of the .NET
framework (Box 2002), are more recent, widely used examples of HLL VMs.
HyperVisors
A hypervisor is a virtual machine monitor (VMM) that enables numerous virtual
operating systems to run simultaneously on a computer system. These virtual machines
are also referred to as guest machines, and they all share the hardware of the physical
machine, such as memory, processor, storage, and other related resources. This improves
and enhances the utilization of the underlying resources.
The hypervisor isolates the operating systems from the primary host machine. The job
of a hypervisor is to cater to the needs of a guest operating system and to manage it
efficiently. Each virtual machine is independent and does not interfere with the others,
although they run on the same host machine; they are in no way connected to one
another. Even if one of the virtual machines crashes or faces issues, the other
machines continue to perform normally. Hypervisors are divided into two types:
Type 1: the bare-metal hypervisor, deployed directly over the host's system
hardware without any underlying operating system or software. Some examples of
type 1 hypervisors are Microsoft Hyper-V, VMware ESXi, and Citrix XenServer.
Type 2: a hosted hypervisor that runs as a software layer within a physical operating
system. The hypervisor runs as a separate second layer over the hardware, while the
guest operating system runs as a third layer. Hosted hypervisors include Parallels
Desktop and VMware Player.
[Hypervisor]
Xen
Xen Project is a type-1 hypervisor, providing services that allow multiple computer operating
systems to execute on the same computer hardware concurrently. It was developed by
the University of Cambridge and is now being developed by the Linux Foundation with support
from Intel.
The University of Cambridge Computer Laboratory developed the first versions of Xen. The Xen
Project community develops and maintains Xen Project as free and open-source software,
subject to the requirements of the GNU General Public License (GPL), version 2. Xen Project is
currently available for the IA-32, x86-64 and ARM instruction sets.
Xen is primarily a bare-metal, type-1 hypervisor that can be directly installed on computer
hardware without the need for a host operating system. Because it's a type-1 hypervisor,
Xen controls, monitors and manages the hardware, peripheral and I/O resources directly.
Guest virtual machines request Xen to provision any resource and must install Xen virtual
device drivers to access hardware components. Xen supports multiple instances of the same
or different operating systems with native support for most operating systems, including
Windows and Linux. Moreover, Xen can be used on IA-32, x86-64, and ARM processor
architectures.
Xen components
A typical environment running Xen consists of different parts. To start with, there's Domain 0.
In Xen, this is how you refer to the host operating system (OS), as it's not really a host OS in
the sense that other virtual machines (VMs) -- domains in Xen terminology -- don't have to
use it to get access to the host server hardware. Domain 0 is only responsible for access to
the drivers, and if any coordination has to be done, it will be handled by Domain 0. Apart
from Domain 0, there are the other VMs that are referred to as Domain U.
Xen offers two types of virtualization: paravirtualization and full virtualization.
In paravirtualization, the virtualized OS runs a modified version of the OS, which
results in the OS knowing that it's virtualized. This enables much more efficient
communication between the OS and the physical hardware, as the hardware
devices can be addressed directly. The only drawback of paravirtualization is that a
modified guest OS needs to be used, which isn't provided by many vendors.
The counterpart of paravirtualization is full virtualization. This is a virtualization
mode where the CPU needs to provide support for virtualization extensions. In full
virtualization, unmodified virtualized OSes can efficiently address the hardware
because of this support.
Commercial versions of Xen
Although Xen is included in the Linux kernel, only a few Linux distributions, such as
Oracle Unbreakable Linux and SUSE Linux Enterprise Server, offer a supported Xen
stack. Red Hat included Xen up to Red Hat Enterprise Linux (RHEL) 5, but switched to
KVM with the release of RHEL 6.
In addition to the Linux distributions that include Xen, Citrix has an open source
product called XenServer. XenServer offers Xen-based virtualization packaged
with a web-based management platform, which enables easy management of
VMs on a cluster consisting of multiple Xen servers. A similar web-based
management client is included in Oracle VM, but SUSE doesn't provide a
graphical integrated management client. Citrix also developed XenDesktop, a
product that enables companies to run virtual desktops, on top of a Xen
infrastructure.
Originally, Xen offered the Xend process to install and manage VMs from the
command line. The Xend process is a management daemon that's addressed by
the xe, xk or xl command, depending on the Xen version that's used. In modern
Xen environments, the Xend process is often replaced by libvirtd, a more generic
interface used by other virtualization platforms like KVM, which is managed by
using the rich virsh command line utility.
KVM
Kernel-based Virtual Machine (KVM) is an open source virtualization technology
built into Linux®. Specifically, KVM lets you turn Linux into a hypervisor that
allows a host machine to run multiple, isolated virtual environments called guests
or virtual machines (VMs). KVM is part of Linux. If you’ve got Linux 2.6.20 or
newer, you’ve got KVM. KVM was first announced in 2006 and merged into the
mainline Linux kernel version a year later. Because KVM is part of existing Linux
code, it immediately benefits from every new Linux feature, fix, and advancement
without additional engineering.
How does KVM work?
KVM converts Linux into a type-1 (bare-metal) hypervisor. All hypervisors need
some operating system-level components—such as a memory manager, process
scheduler, input/output (I/O) stack, device drivers, security manager, a network
stack, and more—to run VMs. KVM has all these components because it’s part of
the Linux kernel. Every VM is implemented as a regular Linux process, scheduled
by the standard Linux scheduler, with dedicated virtual hardware like a network
card, graphics adapter, CPU(s), memory, and disks.
In the KVM architecture, the VM is implemented as a regular Linux process, scheduled by
the standard Linux scheduler. In fact, each virtual CPU appears as a regular Linux process.
This allows KVM to benefit from all the features of the Linux kernel.
Device emulation is handled by a modified version of QEMU that provides an emulated
BIOS, PCI bus, USB bus, and a standard set of devices such as
IDE and SCSI disk controllers, network cards, etc.
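The "VM as a regular process" idea can be mimicked in a short sketch. This is purely illustrative and does not use the real /dev/kvm interface; here ordinary host threads stand in for virtual CPUs, and the host scheduler decides when each one runs:

```python
# Illustrative only: each "vCPU" is an ordinary thread that the host
# scheduler multiplexes, mirroring how KVM exposes VMs and vCPUs to
# the standard Linux scheduler.
import threading

results = {}

def vcpu(vcpu_id, iterations):
    # stand-in for guest code executing on this virtual CPU
    results[vcpu_id] = sum(range(iterations))

threads = [threading.Thread(target=vcpu, args=(i, 1000)) for i in range(4)]
for t in threads:
    t.start()
for t in threads:
    t.join()          # the host scheduler decided when each vCPU ran

print(sorted(results))   # [0, 1, 2, 3]
print(results[0])        # 499500
```

Because the scheduling entity is a normal one, ordinary host tools (nice values, cgroups, CPU affinity) apply to the guest with no extra engineering, which is the point the text above makes about KVM inheriting Linux features.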
VMware
VMware is virtualization and cloud computing software. In a virtual platform, VMware
is a thin software layer that allows multiple operating system environments to
access and run concurrently using the same hardware resources. A popular virtual
machine infrastructure for IA-32-based PCs and servers is the VMware Virtual Platform
(VMware 2000). The VMware system is an example of a hosted virtual machine system.
More recently, VMware has included a native virtualization architecture embodied in a
product called the VMware ESX Server, which is claimed to provide better resource
control, scalability, and performance, but at the expense of full support for all types of
hardware.
VMware, as a separate company from either a hardware developer, such as Intel, or an
operating system company, such as Microsoft, faced an additional challenge. It had to
ensure that its VMM software could be easily installed and used. In particular, it could not
expect its users to wipe out an existing operating system to install VMware software
and then reinstall the old operating system over the VMM. This, in fact, directly
influenced the architecture of the VMM developed by VMware.
VMware products:
• VMware vSphere – data center and cloud infrastructure
• VMware NSX – network security
• VMware vSAN – storage
On the VMware system, these three components are respectively named
VMMonitor, VMApp, and VMDriver. At any given time, the processor executes
either in the host operating system environment or in the VMMonitor environment.
Transfer of control between the two worlds is facilitated by the VMDriver and
involves saving and restoring all user and system visible state information on the
processor. As opposed to the view presented to the user shown in Figure 8.21, the
structural view of the VMware system is as shown in Figure 8.22.
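The world switch performed by VMDriver, saving all user- and system-visible processor state and then restoring the other world's, can be sketched as follows. The state fields are invented for illustration; a real world switch saves far more state, in kernel mode:

```python
# Toy "world switch": save the current world's visible processor state
# and restore the target world's, as VMDriver does when transferring
# control between the host OS world and the VMMonitor world.
import copy

def world_switch(cpu, saved_worlds, current, target):
    saved_worlds[current] = copy.deepcopy(cpu)   # save outgoing state
    cpu.clear()
    cpu.update(saved_worlds[target])             # restore incoming state
    return target

# hypothetical visible state: program counter, stack pointer, page table
cpu = {"pc": 0x1000, "sp": 0x8000, "cr3": "host_pt"}
worlds = {"vmm": {"pc": 0x2000, "sp": 0x9000, "cr3": "vmm_pt"}}

current = world_switch(cpu, worlds, "host", "vmm")
print(current, cpu["cr3"])    # vmm vmm_pt

current = world_switch(cpu, worlds, "vmm", "host")
print(current, cpu["pc"])     # host 4096
```

The round trip restores the host state exactly, which is why the host OS never notices that the VMMonitor ran in between.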
VIRTUAL BOX
VirtualBox (VB) is a hypervisor for x86 computers from Oracle Corporation. It was
first developed by Innotek GmbH and released in 2007 as an open source software
package. The company was later acquired by Sun Microsystems in 2008. Oracle has
continued the development of VirtualBox since 2010, and the product is now titled
Oracle VM VirtualBox.
VMware Virtual Platform allows virtual machines to work in concert with each other,
sharing files and devices. This is possible because each virtual machine has its own
unique network address. VMware Virtual Platform supports the integration of multiple
environments so that these environments perform like multiple applications on a single
computer.
In general, VirtualBox is a software virtualization package that can be installed on any
operating system as application software. It allows additional operating systems to
be installed on it as guest OSes; it can then create and manage guest virtual machines,
each with a guest operating system and its own virtual environment. VirtualBox is
supported by many host operating systems, such as Windows XP, Windows Vista,
Windows 7, Linux, Mac OS X, Solaris, and OpenSolaris. Supported guest operating
systems are versions and derivations of Windows, Linux, OS/2, BSD, Haiku, etc.
VirtualBox architecture
VirtualBox uses a layered architecture consisting of a set of kernel modules for running
virtual machines, an API for managing the guests, and a set of user programs and
services. At the core is the hypervisor, implemented as a ring 0 (privileged) kernel
service. The kernel service consists of a device driver named vboxsrv, which is
responsible for tasks such as allocating physical memory for the guest virtual machine,
and several loadable hypervisor modules for things like saving and restoring the guest
process context when a host interrupt occurs, turning control over to the guest OS to
begin execution, and deciding when VT-x or AMD-V events need to be handled.
The hypervisor does not get involved with the details of the guest operating system
scheduling. Instead, those tasks are handled completely by the guest during its
execution. The entire guest is run as a single process on the host system and will run
only when scheduled by the host. If they are present, an administrator can use host
resource controls such as scheduling classes and CPU caps or reservations to give very
predictable execution of the guest machine.
Additional device drivers will be present to allow the guest machine access to other host
resources such as disks, network controllers, and audio and USB devices. In reality, the
hypervisor actually does little work. Rather, most of the interesting work in running the
guest machine is done in the guest process. Thus the host's resource controls and
scheduling methods can be used to control the guest machine behavior.
VMware Virtual Platform meets three key requisites demanded by users and
organizations:
High performance – VMware Virtual Platform overhead is impressively low.
Applications running on virtual machines perform comparably to those running on real
machines. Unlike PC simulators, VMware Virtual Platform uses all the components of
the processor and eliminates the overheads typically associated with PC emulators.
Portability – The VMware Virtual Platform architecture runs on any x86-based
computer regardless of its manufacturer or its I/O devices.
Easy installation – VMware Virtual Platform installs as easily as an application. On an
existing system, it installs without requiring changes to the operating system, and
without requiring the addition or repartitioning of disk resources.
Hyper-V:
Hyper-V is Microsoft's hardware virtualization product. It lets you create and run a
software version of a computer, called a virtual machine. Each virtual machine acts like a
complete computer, running an operating system and programs. When you need
computing resources, virtual machines give you more flexibility, help save time and
money, and are a more efficient way to use hardware than just running one operating
system on physical hardware.
Hyper-V runs each virtual machine in its own isolated space, which means you can run
more than one virtual machine on the same hardware at the same time. You might want
to do this to avoid problems such as a crash affecting the other workloads, or to give
different people, groups or services access to different systems.
Hyper-V can help you:
• Establish or expand a private cloud environment. Provide more flexible, on-
demand IT services by moving to or expanding your use of shared resources and
adjust utilization as demand changes.
• Use your hardware more effectively. Consolidate servers and workloads onto
fewer, more powerful physical computers to use less power and physical space.
• Improve business continuity. Minimize the impact of both scheduled and
unscheduled downtime of your workloads.
• Establish or expand a virtual desktop infrastructure (VDI). Using a centralized
desktop strategy with VDI can help you increase business agility and data security,
as well as simplify regulatory compliance and manage desktop operating systems
and applications. Deploy Hyper-V and Remote Desktop Virtualization Host (RD
Virtualization Host) on the same server to make personal virtual desktops or
virtual desktop pools available to your users.
• Make development and test more efficient. Reproduce different computing
environments without having to buy or maintain all the hardware you'd need if
you only used physical systems.
Architecture
Hyper-V implements isolation of virtual machines in terms of a partition. A partition is a logical
unit of isolation, supported by the hypervisor, in which each guest operating system executes.
There must be at least one parent partition in a hypervisor instance, running a supported version
of Windows Server (2008 and later). The virtualization software runs in the parent partition and
has direct access to the hardware devices. The parent partition creates child partitions which
host the guest OSes. A parent partition creates child partitions using the hypercall API, which is
the application programming interface exposed by Hyper-V.
A child partition does not have access to the physical processor, nor does it handle its
real interrupts. Instead, it has a virtual view of the processor and runs in Guest Virtual Address,
which, depending on the configuration of the hypervisor, might not necessarily be the
entire virtual address space. Depending on VM configuration, Hyper-V may expose only a subset
of the processors to each partition. The hypervisor handles the interrupts to the processor, and
redirects them to the respective partition using a logical Synthetic Interrupt Controller (SynIC).
Hyper-V can hardware-accelerate the address translation of Guest Virtual Address spaces by
using second level address translation provided by the CPU, referred to as EPT on Intel
and RVI (formerly NPT) on AMD.
Child partitions do not have direct access to hardware resources, but instead have a virtual view
of the resources, in terms of virtual devices. Any request to the virtual devices is redirected via
the VMBus to the devices in the parent partition, which will manage the requests. The VMBus is
a logical channel which enables inter-partition communication. The response is also redirected
via the VMBus. If the devices in the parent partition are also virtual devices, it will be redirected
further until it reaches the parent partition, where it will gain access to the physical devices.
Parent partitions run a Virtualization Service Provider (VSP), which connects to the VMBus and
handles device access requests from child partitions. Child partition virtual devices internally run
a Virtualization Service Client (VSC), which redirect the request to VSPs in the parent partition
via the VMBus. This entire process is transparent to the guest OS.
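The VSC-to-VSP redirection over the VMBus can be modeled as a simple message channel. This is a conceptual sketch only; the names are invented and no actual Hyper-V API is involved:

```python
# Conceptual model: a child partition's VSC forwards device requests
# over a logical channel (the "VMBus") to the parent partition's VSP,
# which performs the real device access and returns the response.
import threading
from queue import Queue

vmbus_requests = Queue()
vmbus_responses = Queue()

def vsc_request(device, operation):
    """Child-side Virtualization Service Client: redirect via the VMBus."""
    vmbus_requests.put((device, operation))
    return vmbus_responses.get()

def vsp_service_one():
    """Parent-side Virtualization Service Provider: handle one request
    using the parent partition's direct hardware access."""
    device, operation = vmbus_requests.get()
    vmbus_responses.put(f"{device}:{operation}:done")

worker = threading.Thread(target=vsp_service_one)
worker.start()
result = vsc_request("disk0", "read_block_7")
worker.join()
print(result)    # disk0:read_block_7:done
```

From the guest's point of view the call looks like an ordinary device access; the redirection through the channel is exactly what makes the process transparent to the guest OS.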
Virtual devices can also take advantage of a Windows Server Virtualization feature,
named Enlightened I/O, for storage, networking and graphics subsystems, among others.
Enlightened I/O is a specialized virtualization-aware implementation of high level communication
protocols, like SCSI, that allows bypassing any device emulation layer and takes advantage of
VMBus directly. This makes the communication more efficient, but requires the guest OS to
support Enlightened I/O.
******