
The Association of System Performance Professionals

The Computer Measurement Group, commonly called CMG, is a not-for-profit, worldwide organization of data processing professionals committed to the
measurement and management of computer systems. CMG members are primarily concerned with performance evaluation of existing systems to maximize
performance (e.g., response time, throughput) and with capacity management, where planned enhancements to existing systems or the design of new
systems are evaluated to find the resources required to provide adequate performance at a reasonable cost.

This paper was originally published in the Proceedings of the Computer Measurement Group’s 2004 International Conference.

For more information on CMG please visit http://www.cmg.org

Copyright Notice and License

Copyright 2004 by The Computer Measurement Group, Inc. All Rights Reserved. Published by The Computer Measurement Group, Inc. (CMG), a non-profit
Illinois membership corporation. Permission to reprint in whole or in any part may be granted for educational and scientific purposes upon written application to
the Editor, CMG Headquarters, 151 Fries Mill Road, Suite 104, Turnersville, NJ 08012.

BY DOWNLOADING THIS PUBLICATION, YOU ACKNOWLEDGE THAT YOU HAVE READ, UNDERSTOOD AND AGREE TO BE BOUND BY THE
FOLLOWING TERMS AND CONDITIONS:

License: CMG hereby grants you a nonexclusive, nontransferable right to download this publication from the CMG Web site for personal use on a single
computer owned, leased or otherwise controlled by you. In the event that the computer becomes dysfunctional, such that you are unable to access the
publication, you may transfer the publication to another single computer, provided that it is removed from the computer from which it is transferred and its use
on the replacement computer otherwise complies with the terms of this Copyright Notice and License.

Concurrent use on two or more computers or on a network is not allowed.

Copyright: No part of this publication or electronic file may be reproduced or transmitted in any form to anyone else, including transmittal by e-mail, by file
transfer protocol (FTP), or by being made part of a network-accessible system, without the prior written permission of CMG. You may not merge, adapt,
translate, modify, rent, lease, sell, sublicense, assign or otherwise transfer the publication, or remove any proprietary notice or label appearing on the
publication.

Disclaimer; Limitation of Liability: The ideas and concepts set forth in this publication are solely those of the respective authors, and not of CMG, and CMG
does not endorse, approve, guarantee or otherwise certify any such ideas or concepts in any application or usage. CMG assumes no responsibility or liability
in connection with the use or misuse of the publication or electronic file. CMG makes no warranty or representation that the electronic file will be free from
errors, viruses, worms or other elements or codes that manifest contaminating or destructive properties, and it expressly disclaims liability arising from such
errors, elements or codes.

General: CMG reserves the right to terminate this Agreement immediately upon discovery of violation of any of its terms.
Learn the basics and latest aspects of IT Service Management at CMG's Annual Conference - www.cmg.org/conference

BEST PRACTICES FOR SERVER VIRTUALIZATION

Chris Molloy

[email protected]

Buy the Latest Conference Proceedings and Find Latest Computer Performance Management 'How To' for All Platforms at www.cmg.org

Logical partitioning of the mainframe environment has existed for several
years. Logical partitioning of the distributed environment using products
such as VMWare's™ ESX Server or Microsoft's™ Virtual Server is gaining
momentum as companies look to improve their return on investment with
server consolidation projects. This paper reviews some of the virtualization
techniques now available in the server environment, and the best practices
that should be put in place to maximize the benefits of virtual servers.

Join over 14,000 peers - subscribe to free CMG publication, MeasureIT(tm), at www.cmg.org/subscribe

1 Background

Those that have worked on the mainframe platform are very familiar with LPARs (Logical Partitions). The LPAR virtualization functions were incorporated into the platform in three phases. The first phase was where CPU, memory, and I/O resources could be dedicated to a partition of the machine. The second phase was the ability to dynamically move dedicated resources between partitions by turning the resource off to one partition, and then turning the resource on to another partition. The third phase was the ability to dynamically share resources between partitions. Phases one and two are commonly referred to as physical partitioning, while the last phase is referred to as logical partitioning.

The ability to perform logical partitioning on the mainframe first started with the operating system, and was then embedded into the hardware with microcode. An example of this is z/VM™, an operating system that allocates resources to “guest” operating systems. These functions were incorporated into the hardware through Processor Resource/System Manager (PR/SM) and Intelligent Resource Director (IRD) in the latest IBM™ zSeries 990 mainframes.

These phases of virtualization are now being included in the RISC (reduced instruction set computing, usually running UNIX variant operating systems) and CISC (complex instruction set computing, usually running one of Microsoft's™ Windows operating systems) architectures. As RISC examples, both Sun's™ Sun Fire servers and IBM's high-end pSeries servers provide for physical partitioning of resources, as do IBM's CISC xSeries x445 servers. Some of these servers provide phase two functions, where resources can be moved between partitions on specific resource boundaries (e.g. a fixed amount of memory or processors).

There has been large interest over the last couple of years in allowing multiple virtual instance operating systems to run on a single CISC server. Two leading vendors, VMWare and Connectix (purchased in the last year by Microsoft Corporation), created products that provide this function. While there are several benefits to running virtual instances, the two most important to performance personnel are (1) the ability to increase server utilization, (2) without the problems associated with application coexistence. Several studies have indicated that the average CISC server runs at single-digit CPU utilization. Performing server consolidation through application coexistence (running multiple applications within the same operating system) proves difficult for some CISC applications, since they are designed as single-server applications and contain conflicts when running with other applications.

In order to learn about these new technologies, we decided to conduct a pilot program using VMWare's ESX Server product. The ESX Server product is designed for servers, while the GSX™ Server product was designed for workstations.

2 VMWare ESX Pilot

Due to cost and timing constraints, we built a VMWare ESX 1.0 test environment using 4-way 700 MHz Pentium III servers with 8 GB of RAM, noting that this configuration was below the minimum specifications for certification. Architecturally, ESX runs a base operating system (Linux) which supports the virtual instances.

Find a CMG regional meeting near you at www.cmg.org/regions



While the configuration was not certified, the virtual instances ran without code-related problems. Part of the reason for the use of 700 MHz servers was that there were several servers in production that were still running at this speed.

A stress test was then performed against a physical server with a similar physical configuration. This was not an apples-to-apples comparison for the following reasons:

1. A virtual instance can have only one CPU allocated to it, while the physical server was capable of using all 4 CPUs.
2. The physical server had 3 GB of memory to use, while the virtual instance had only 512 MB of memory allocated to it.
3. The physical server had 2 network interface cards (NICs) versus a single NIC for the virtual instance.
4. The ESX base operating system takes additional resources to run.
5. There were other virtual instances running at the time of the stress test.

Not surprisingly, the physical server ran much faster than the virtual instance when workload from LoadRunner was issued against both environments. In order to improve the virtual instance performance, we reran the tests by shutting down all other virtual instances, increasing the virtual instance to 2 GB of RAM, adding the Terminal Services workload option, and replacing the default vlance network driver with vmxnet. The stress tests were rerun, with the virtual instance running at around 40-50% of the single CPU. The resulting response time after these changes was acceptable to the systems programmers, but not to the applications programmers.

Once we determined that a minimum level of performance could be obtained, we decided to move forward with replacement of physical servers with virtual instances on a single virtual server. At this time, VMWare ESX 2.0 became available. This release provides symmetric multi-processor (SMP) support, as well as additional performance improvements. We decided that the first physical servers to be migrated would be the test and development servers for our application area. This approach minimized our risk, as none of these servers contained external production workload.

We decided to purchase new servers for this environment, since most of the existing servers were out of warranty and should not continue to be redeployed. The processors on these new servers were 2.8 GHz, 4 times faster than the 700 MHz processors. This compensated for the use of one processor instead of four, and the SMP feature of ESX 2.0 was not needed. While the processor speed was the most significant improvement in the new configuration, increased memory speed and front side bus (FSB) speed also contributed to improved response in this configuration. The response proved to be better than on the physical processor configuration, even when running multiple virtual instances, as the other virtual instances had dedicated processors of their own.

As mentioned previously, the application programmers were very concerned over the poor results of the original stress testing. This was despite the technical explanation for the unequal response. Both the performance and the technology were being challenged by the application programmers. In order to address these concerns, an external consultant was hired to analyze the initial environment and provide a size estimate for the new environment.

The consultants were not able to find anything that was obviously improperly configured to impact performance. They found the following minor items that would be addressed in the new environment:

1. Create the vmimages partition up front as a Linux ext3 partition to store the .vmdk and .iso files.
2. Change the service console memory allocation from 256 MB to at least 512 MB.
3. Set the VMFS access from “shared” to “private”.
4. Reinstall the VMWare tools at the latest level.
5. Upgrade to ESX 2.0.
6. Separate the virtual disks of the two virtual instances onto two different LUNs to spread disk I/O load.
7. Upgrade to Gigabit Ethernet cards.
8. Split out the C system drive from the D data drive into separate virtual disks.
9. Terminate the services and processes that were not being used.

All of the above items were targeted to be addressed when the new systems were rebuilt.

3 Sizing The New Virtual Environment

The next thing to do was to size the new servers. VMWare provides an excellent spreadsheet for sizing new workload. The spreadsheet takes into account the resources needed for the base Linux operating system, including the increased utilization of the base system as the virtual instances increase their workload. The primary inputs for the spreadsheet are CPU usage, memory usage, disk space allocated, and network traffic. In our ESX 2.0 environment, the network resources were also shared, so we placed more importance on this part of the sizing due to the potential for cross-virtual-instance network bottlenecks.
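The kind of arithmetic such a sizing exercise performs can be sketched in a few lines. This is not VMWare's actual spreadsheet, just a simplified illustration built from the inputs named above (CPU, memory, disk, network) and the per-instance limits discussed in this paper (one CPU and 3 GB of memory per instance, plus an allowance for the base operating system); the host CPU count, base OS memory, and per-instance figures below are illustrative assumptions:

```python
# Simplified sizing sketch -- illustrative only, not VMWare's spreadsheet.
# Each candidate instance carries its disk allocation and network traffic;
# CPU and memory are driven by the per-instance limits described in the paper.

def size_host(instances, base_os_mem_gb=0.5, base_os_cpus=1,
              host_cpus=8, mem_per_instance_gb=3.0):
    """Estimate the resources one ESX host needs for a list of instances."""
    n = len(instances)
    cpus_needed = base_os_cpus + n          # one CPU per instance, one for the base OS
    mem_needed = base_os_mem_gb + n * mem_per_instance_gb
    disk_needed = sum(i["disk_gb"] for i in instances)
    net_needed = sum(i["net_mbps"] for i in instances)
    fits = cpus_needed <= host_cpus
    return {"cpus": cpus_needed, "mem_gb": mem_needed,
            "disk_gb": disk_needed, "net_mbps": net_needed, "fits": fits}

# Seven instances on an 8-CPU server, as planned in this paper:
instances = [{"disk_gb": 30, "net_mbps": 100} for _ in range(7)]
print(size_host(instances))
# cpus: 8 (7 instances + base OS), mem_gb: 21.5, disk_gb: 210, net_mbps: 700, fits: True
```

The point of the sketch is the structure of the calculation: the base operating system is treated as one more workload to be sized, exactly as recommended in the best practices later in this paper.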


In order to get the proper input for the spreadsheet, the consultants measured the usage of the physical servers that were being replaced. In addition to the actual resources used, they projected growth for the following 12 months. Since the application programmers raised large concerns over performance, the maximum CPU utilization was used instead of the average utilization. The CPU resource turned out to be the least concern, as the new processors were four times faster than the original processors, and the physical servers were running at single-digit utilization.

For example, with all other things being equal, a four-processor 700 MHz physical server running at 5% utilization will run at 5% utilization on a single 2.8 GHz processor. Taking cost into account, we decided to allocate one CPU for the base operating system, and one CPU for each of the virtual instances. We predicted that this would yield the same CPU utilization for a virtual instance as for the single-digit utilization physical servers. While this is overkill, it provided room for additional virtual instances once the technology had greater acceptance. This also meant that we did not have to purchase the SMP software for these servers, as we did not plan on using more than one processor per virtual instance.

The ESX 2.0 environment has an architectural restriction of 3 GB of memory per virtual instance. This affected the selection of some of the physical servers as virtual instances, as several of them were using more than 3 GB of memory. We decided to multiply the number of instances by 3 GB to determine the amount of virtual instance memory needed by the new servers. This was added to the memory needed by the base operating system to determine the total memory requirements of the new servers.

For the network sizing, we also had to take into account the physical limitations of the new server. Each of the original physical servers had 4 10/100 Mbps NIC cards for utilization and redundancy. There were not enough physical slots in the server to dedicate this many cards to all the virtual instances, so we decided to share the network cards. This influenced our decision to use 10/100/1000 Mbps NIC cards, considering the nominal increase in cost. The newer servers we used had two 10/100/1000 Mbps ports on the motherboard. In addition to using these, we installed four 10/100/1000 Mbps cards. For an 8-CPU system on which we planned to install 7 virtual instances (the remaining CPU was reserved for the base operating system), this equated to 2800 Mbps (4 cards times 100 Mbps speed times 7 virtual instances) of maximum network capacity on the original servers. The new servers would have 6000 Mbps of capacity (2 motherboard 1000 Mbps ports, and 4 1000 Mbps cards). The reduction from 28 physical connections to 6 physical connections also simplified the network. We determined during the sizing that we did not actually need the Gigabit Ethernet capability, and could safely run the instances attached to the network, which had not been updated for Gigabit Ethernet capability. This provided us a growth path by allowing us to wait and upgrade the network to Gigabit Ethernet at a later time. The base operating system had minimal network requirements.

The disk sizing had to take into account the amount of space needed and the effects of concurrent I/O. The space sizing was relatively easy: adding up the allocated space for each of the original physical machines, adding the space needed for the base operating system, and adding space for growth. We planned on having some of the new servers with locally attached storage, and some with network attached storage similar to the original servers that were being replaced. We had to keep in mind that all I/O by the virtual instances was actually being performed physically by the base operating system. We had to ensure that there were enough physical connections to the devices such that no bottlenecks occurred. As described earlier, the consultants suggested some I/O layout changes in order to reduce the potential for bottlenecks and improve performance.

4 Lessons Learned

Now that we were running virtual instances on a base operating system, we needed to understand the performance and capacity characteristics of each of the virtual instances and of the overall physical environment.

The ESX environment provides a set of application programming interfaces (APIs) that provide utilization statistics for the base operating system and the virtual instances. We originally coded a program to the ESX 1.0 API, which provided individual instance statistics and total physical instance statistics. The ESX 2.0 API does not have total physical instance statistics (I am assuming that this is due to the ability to have multiple virtual CPUs per instance, and the difficulty of converting to a single utilization number when there is the potential for the CPU allocation to change). In order for us to compute the physical server utilization statistics, we needed to obtain the configuration information for each instance at the time of the calculations. Since we were not using the SMP feature (each virtual instance had 1 CPU), it was relatively easy to compute the overall physical utilization statistics for CPU usage. For example, if we were running an 8-way server at 5% utilization for the base operating system, 4 virtual instances at 10%, and 3 virtual instances at 20%, the overall physical utilization of the server would be approximately 13% (the calculation is (5+10+10+10+10+20+20+20)/8).

As of May 2004, we were running 76 virtual instances on 15 physical servers.
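The overall physical utilization calculation described above reduces to a one-line formula once each virtual instance (and the base operating system) is pinned to a single CPU. A minimal sketch, with the instance statistics assumed to come from whatever monitoring interface is in use:

```python
def physical_utilization(base_os_pct, instance_pcts, total_cpus):
    """Overall physical CPU utilization of a host where the base OS and
    each virtual instance are each confined to a single CPU."""
    return (base_os_pct + sum(instance_pcts)) / total_cpus

# The example from this section: an 8-way server with the base OS at 5%,
# 4 instances at 10%, and 3 instances at 20%.
util = physical_utilization(5, [10, 10, 10, 10, 20, 20, 20], total_cpus=8)
print(round(util, 3))  # 13.125, reported in the text as roughly 13%
```

Note that this simple averaging only holds because no instance spans more than one CPU; with SMP instances, per-instance CPU allocations would have to be weighted in, which is presumably why the ESX 2.0 API dropped the single total-utilization number.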


This was an average 5:1 physical server reduction, and we still had about a dozen other servers to convert to virtual instances. This blended rate was on a combination of 4-way and 8-way servers. As predicted, the server utilization was comparatively low, and memory became the gating factor on the ability to add further virtual instances to each of the servers. We are waiting until the initial physical server conversion is complete to determine the overall physical server requirements of the new environment. We will then analyze the utilization characteristics of each of the new servers to determine whether we need to order more memory, or reallocate memory from existing virtual instances, to enable more virtual instances to run on each of the physical servers.

After the virtual instances were installed, we left the old physical servers around until the application programmers gave approval to remove them. This initially came after about a month. With increased familiarity with the virtual instances, the programmers are now giving approval after about two weeks. The application programmers are now seeing improved response time over their original physical environment. This is predominantly due to the increase in processor speed. With the original physical environment at single-digit CPU utilization, they were not using the additional processors the majority of the time. The net effect was a four-fold improvement in response time for CPU-intensive workloads, as the CPUs were four times faster. There were additional performance improvements from moving to newer memory and I/O devices, which had better performance characteristics as well. The better characteristics overcompensated for the additional virtualization layer that was added to the workload (e.g. the overhead of converting virtual I/O to real I/O through the use of virtual drivers and then performing the actual I/O in the base operating system). A statement of the programmers' satisfaction with the virtual environment is that they are now asking for more of their physical servers to be converted, so much so that we can't keep up with the speed at which they want them built.

One needs to understand the utilization requirements of the base operating system when sizing the resources for the environment. Due to the maturity of the mainframe environment (where the majority of the virtualization work is done in microcode), the PR/SM or IRD overhead to the CPU, memory, and disk resources is minimal (less than 1%) and usually ignored in mainframe sizing. In the CISC virtualization environment, this function is performed by the base operating system, and the resources required for this operating system are typical of a running operating system. There has been specific tuning done on the base operating environment to foster virtual instances, but additional resource is used in the base operating system to determine which virtual instance gets which physical resource at which time (most importantly physical CPU and physical memory resources), and to process all the physical I/O. For the majority of our servers, the resource requirements of the base operating system were similar to those of the average virtual instance, and this estimate will be used as our initial estimate of resources for future configurations.

As of August 2004, we were running 109 virtual instances on 16 physical servers. We have installed reporting agents on 14 of the servers and 99 of the instances. Five of the reporting servers are running ESX 1.x, with a total of 24 instances running on them. These are x440 servers, with 4 physical processors. Each of the instances has one logical processor assigned to it. The largest number of instances on any one x440 is 8.

A summary of the 24-hour processor and storage utilization for the month of August for the ESX 1.x servers is listed in Appendix 1. The utilization numbers for the entire box include the resources required to run the base ESX image as well. The data indicates an approximate three-fold increase in processor utilization, implying that we upgraded the environment with one third of the processing power that existed in the old environment. The decrease in the amount of processing resource needed, coupled with the decrease in the cost of processing resource, led to a significant decrease in the hardware costs of the new environment.

Nine of the reporting servers are running ESX 2.x, with a total of 75 instances running on them. These are x445 servers, with 8 or 16 processors. Forty-three of the instances have one logical processor assigned to them, while 32 instances have two logical processors assigned to them. The largest number of instances on any one x445 is 18, on a server that has only 8 processors. For the month of August, this server ran at 20.82% utilized, the largest utilization of the x445 servers.

We are approaching a 7:1 ratio of instances to physical servers. This allows us to perform physical functions on the servers (e.g. BIOS upgrades) at one seventh of the cost, while providing room for growth of an instance without requiring a physical upgrade to the server (including having to acquire and install a new server). We are currently in the process of analyzing how additional instances can be placed on the existing physical servers as we increase the different workload types that we are putting in the virtualized environment.
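Since memory was the gating factor, the question of how many more instances fit on a given host is a simple division. A sketch, using the paper's 3 GB per-instance cap and its estimate that the base operating system consumes roughly as much as one average instance; the 32 GB host size is a hypothetical figure, not a configuration from this paper:

```python
def max_instances_by_memory(host_mem_gb, base_os_mem_gb, mem_per_instance_gb):
    """Memory is the gating factor: how many instances fit in one host,
    after setting aside the base operating system's share."""
    return int((host_mem_gb - base_os_mem_gb) // mem_per_instance_gb)

# Hypothetical 32 GB host; base OS budgeted like one average instance (3 GB),
# and ESX 2.0 instances capped at 3 GB each:
print(max_instances_by_memory(32, 3, 3))  # 9
```

The same division, run against the measured memory utilization of each production host, is what drives the decision described above to either order more memory or reallocate it from existing instances.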


5 Best Practices

The following are our best practices for our planned environments:

1. Perform response time measurements on the existing physical environment and the new virtual environment, so that you can have an empirical discussion of response versus an emotional one.
2. Use the latest versions of the virtual drivers, as they have the best performance characteristics.
3. Consider the base operating system as one of the workloads that needs to be sized, understanding that the resources required for the base operating system are not trivial.
4. Use 10/100/1000 Mbps NIC cards to position for future network capacity.
5. When converting servers that are running on significantly older technology, you can afford to size the CPU and memory requirements using their maximum values versus their average values.
6. Use the VMWare sizing spreadsheet, as it has built-in resource calculations which account for different utilization levels of the virtual instances.
7. As the labor component costs become a larger portion of the total cost of ownership of the virtual environment, it is better to add a little more hardware to the initial configuration (we spent more time discussing adding another disk drive than the disk drive cost).
8. Determine the performance monitoring and reporting requirements for the virtual environment, considering that you now need to analyze both the physical and the logical utilization characteristics.
9. Determine the criteria for candidate servers prior to starting the conversion, considering that there are architectural limits like the memory limit.
10. Determine the acceptance criteria for retiring the physical servers prior to conversion.
11. Perform a post-conversion analysis to identify which (if any) physical resource bottlenecks exist, in order to determine what would be needed to increase the number of virtual instances or the workload on the physical server.
12. In order to minimize risk, convert servers that are mission critical (e.g. customer facing) last.
13. In order to gain acceptance of the virtualization technology, convert servers that are older (slower) first.

6 Conclusions

The following conclusions can be made from our server virtualization implementation:

1. VMWare ESX Server is resilient enough to support customer workload.
2. The queuing effects of additional utilization can be mitigated in the virtualization environment.
3. The additional overhead of running a base operating system to support the virtual instances can be mitigated in the virtual environment.
4. Utilization of CISC servers can be significantly increased through the use of a virtualization environment, without having to test for application coexistence, by putting the applications in separate virtual instances.
5. The base operating system for VMWare ESX Server is Linux, and Linux performance skills are needed to support the environment.

It needs to be pointed out that all the items relevant to performance tuning and capacity planning in the physical environment are also relevant to the virtual instance environment. For example, performing I/O is still the slowest component in a transaction, and anything that can be done for I/O avoidance by using data-in-memory techniques will have a larger effect on improving response time than CPU tuning.

While Microsoft's Virtual Server product was not tested during our work, similar best practices and conclusions would apply to this product, as the CISC architecture is the same for both products.

7 References (Acknowledgements)

[1] "Virtualization Project Management Summary", Bethann Reneaud, June 2004.

[2] "Consultants Summary", IBM Corporation, March 2004.

8 Trademarks

IBM and z/OS are registered trademarks of IBM Corporation in the United States, other countries, or both.

Linux is a registered trademark of Linus Torvalds.

Microsoft and Windows are registered trademarks of Microsoft Corporation in the United States, other countries, or both.

Sun and Sun Fire are registered trademarks of Sun Microsystems, Incorporated in the United States, other countries, or both.

VMWare, ESX Server, and GSX Server are registered trademarks of VMWare, Incorporated (an EMC company) in the United States, other countries, or both.

Other company, product or service names may be the trademarks or service marks of others.


Appendix 1 – August 2004 Server/Instance Utilization for servers with ESX 1.x installed

Server                   Processor Utilization   Memory Utilization
Server 1 (entire box)    41.76%                  74.98%
Server 1, instance 1     43.35%                  19.00%
Server 1, instance 2     43.29%                  16.35%
Server 1, instance 3     44.60%                  25.06%
Server 1, instance 4     37.04%                  14.70%
Server 1, instance 5     40.07%                  14.06%
Server 1, instance 6     35.96%                  15.58%
Server 1, instance 7     37.84%                  4.05%
Server 1, instance 8     21.71%                  1.31%
Server 2 (entire box)    14.83%                  64.99%
Server 2, instance 1     13.11%                  5.58%
Server 2, instance 2     12.40%                  9.97%
Server 2, instance 3     25.33%                  4.21%
Server 2, instance 4     29.21%                  4.20%
Server 3 (entire box)    24.38%                  30.03%
Server 3, instance 1     28.74%                  14.54%
Server 3, instance 2     27.72%                  14.07%
Server 3, instance 3     21.88%                  14.25%
Server 3, instance 4     22.73%                  11.41%
Server 4 (entire box)    22.74%                  65.30%
Server 4, instance 1     31.57%                  15.53%
Server 4, instance 2     25.65%                  13.61%
Server 4, instance 3     25.42%                  15.23%
Server 4, instance 4     25.09%                  15.70%
Server 5 (entire box)    16.79%                  61.90%
Server 5, instance 1     24.08%                  16.65%
Server 5, instance 2     16.03%                  12.82%
Server 5, instance 3     14.06%                  12.58%
Server 5, instance 4     16.52%                  10.80%
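Data like the table above lends itself to simple programmatic checks. A sketch using the entire-box figures from Appendix 1, flagging servers whose memory utilization leaves little headroom for additional instances (memory being the gating factor noted in Section 4); the 70% planning ceiling is an assumed threshold, not a figure from this paper:

```python
# Headroom check over Appendix 1-style data (entire-box figures from the
# table above). Memory was the gating factor, so flag boxes with little
# memory headroom before placing additional instances on them.
servers = {
    "Server 1": {"cpu_pct": 41.76, "mem_pct": 74.98},
    "Server 2": {"cpu_pct": 14.83, "mem_pct": 64.99},
    "Server 3": {"cpu_pct": 24.38, "mem_pct": 30.03},
    "Server 4": {"cpu_pct": 22.74, "mem_pct": 65.30},
    "Server 5": {"cpu_pct": 16.79, "mem_pct": 61.90},
}

MEM_CEILING_PCT = 70  # assumed planning threshold, not from the paper

for name, stats in sorted(servers.items()):
    headroom = MEM_CEILING_PCT - stats["mem_pct"]
    flag = "at capacity" if headroom <= 0 else f"{headroom:.2f}% headroom"
    print(f"{name}: CPU {stats['cpu_pct']}%, memory {stats['mem_pct']}% ({flag})")
# Only Server 1 (74.98% memory) exceeds the assumed 70% ceiling.
```

This is the kind of physical-versus-logical analysis called for in best practice 8: the per-instance rows show logical utilization, while the entire-box rows (which include the base ESX image) show the physical limit.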
