
Windows Server 2003 Terminal Server Capacity and Scaling

Microsoft Corporation
Published: June 2003

Abstract

Microsoft Windows Terminal Server lets users run Windows®-based applications on a remote
computer running one of the Windows Server 2003 family of operating systems. This white paper
contains testing methodologies, results, analysis, and sizing guidelines for Windows Server 2003
Terminal Server. Hewlett Packard worked in cooperation with Microsoft to perform the initial sizing tests
and data collection in HP labs. Microsoft performed the final round of testing and analysis collection
using Hewlett Packard equipment. The tests were performed using Windows Server 2003, Enterprise
Edition, build 3790.
This is a preliminary document and may be changed substantially prior to
final commercial release of the software described herein.
The information contained in this document represents the current view of
Microsoft Corporation on the issues discussed as of the date of
publication. Because Microsoft must respond to changing market
conditions, it should not be interpreted to be a commitment on the part of
Microsoft, and Microsoft cannot guarantee the accuracy of any
information presented after the date of publication.
This document is for informational purposes only. MICROSOFT MAKES NO
WARRANTIES, EXPRESS OR IMPLIED, AS TO THE INFORMATION IN THIS
DOCUMENT.
Complying with all applicable copyright laws is the responsibility of the
user. Without limiting the rights under copyright, no part of this document
may be reproduced, stored in or introduced into a retrieval system, or
transmitted in any form or by any means (electronic, mechanical,
photocopying, recording, or otherwise), or for any purpose, without the
express written permission of Microsoft Corporation.
Microsoft may have patents, patent applications, trademarks, copyrights,
or other intellectual property rights covering subject matter in this
document. Except as expressly provided in any written license agreement
from Microsoft, the furnishing of this document does not give you any
license to these patents, trademarks, copyrights, or other intellectual
property.
The example companies, organizations, products, people and events
depicted herein are fictitious. No association with any real company,
organization, product, person or event is intended or should be inferred.
© 2003 Microsoft Corporation. All rights reserved.
Microsoft, Windows, the Windows logo, and Windows Server are either
registered trademarks or trademarks of Microsoft Corporation in the
United States and/or other countries.
The names of actual companies and products mentioned herein may be
the trademarks of their respective owners.
Contents

Introduction
Results Overview
    Server Capacity
    System and User Memory Requirements
    Comparison with Terminal Services Running on Windows 2000
Test Environment and Testing Tools
    Test Environment
    Testing Tools and Scripts
    Testing Methodology
Analysis of the Results
    Overview
    Memory Requirements and Utilization
    Network Usage
    Effect of Increased Color Depth (16 bit)
    Effect of File Redirection
    Effect of Printer Redirection
    Effect of Logon Activity on CPU Utilization
    Effect of Kernel Address Space Limitations
    Effect of Typing Rate on CPU Utilization
    Effect of Background Spelling- and Grammar-Checking
Performing Your Own Scaling Tests
    To Test or Pilot?
    Determining Application Suitability
    Characterization of Users
    Network Environment
Appendix A: Test Script Flow Charts
    Knowledge Worker Script
    Data Entry Worker Script
Appendix B: Terminal Server Settings
Appendix C: HP Server Specifications
    The ProLiant Advantage
    ProLiant Server Models
    ProLiant Servers in an SBC Environment
    Tested Servers
    SBC Solution Sizing
Related Links


Introduction
Microsoft Windows Terminal Server lets users run Windows®-based applications on a remote
Windows 2000- or Windows Server 2003-based server. This white paper contains testing methodologies,
results, analysis, and sizing guidelines for Windows Server 2003 Terminal Server. Hewlett
Packard worked in cooperation with Microsoft to perform the initial sizing tests and data collection
in HP labs. Final testing and data collection was performed at Microsoft using Hewlett Packard
servers. The tests were performed using Windows Server 2003, Enterprise Edition, build 3790.

For information on Terminal Server features, licensing and architecture, see


http://www.microsoft.com/windowsserver2003/technologies/terminalservices/.

In a server-based computing environment, all application execution and data processing occur on
the server. Therefore it is extremely useful and desirable for server manufacturers to test the
scalability and capacity of their servers to determine how many client sessions a server can
typically support under a variety of different scenarios. Microsoft, using multiple Hewlett Packard
hardware configurations, tested Windows Server 2003 Terminal Server to provide customers with
guidelines to choose the right server according to their needs.

The results and analysis contained here should not be interpreted in isolation. The client
applications used in the test (components of Microsoft® Office 2000) are not easy to characterize
without accounting for the features or data sets an individual uses or creates. Two different user
scenarios are tested in accordance with Gartner Group recommendations (“Knowledge Worker”
and “Data Entry Worker”), but the actual applications, features, and data sets used in these user
scenarios cannot precisely mimic the experience of a real-life user on a moment-by-moment
basis. The tests assume a rather robotic quality, with users taking no prolonged breaks and
essentially using the same functions and data sets during a ten to thirty minute period of activity.
In short, your results may vary.

The results are conservative, with a server considered to be at capacity when the server is 10
percent slower than it was with a single user load. With this in mind, consider buying a server that
will, based on the analysis, comfortably accommodate the required number of users under the
expected peak workload, leaving room for expansion.
Results Overview
Server Capacity
The actual number of users that a specific configuration of server can support varies depending
on several criteria, such as the processor type, the amount of memory, the hard disk, the
network configuration, and the user type (typing speed, applications used, frequency of use, and so
forth). See Table 1 for the maximum number of users by scenario and server configuration.
Table 1 Maximum Users by Scenario and Server Type

Server Configuration                                           | Model Number   | Knowledge Worker | Data Entry Worker
4 x Intel Xeon MP 2 GHz, 2 MB L2 cache, 4096 MB RAM            | DL560          | 270 Users        | 520 Users
2 x Intel Xeon 2.4 GHz, 2 MB L2 cache, 4096 MB RAM             | DL360 G3       | 200 Users        | 440 Users
1 x Intel Xeon 2.4 GHz, 2 MB L2 cache, 4096 MB RAM             | DL360 G3       | 140 Users        | 200 Users
2 x Intel Xeon 2.4 GHz, 2 MB L2 cache, 4096 MB RAM             | DL380 G3       | 200 Users        | 440 Users
2 x Intel Xeon 2.4 GHz, 2 MB L2 cache, 4096 MB RAM             | BL20p G2       | 200 Users        | 440 Users
1 x Intel Ultra Low Voltage Pentium III 900 MHz, 1024 MB RAM   | BL10e          | 50 Users         | 120 Users
4 x Intel Xeon MP 2.0 GHz, 2 MB L2 cache, 4096 MB RAM          | BL40p          | 240 Users        | Not Tested
4 x Pentium III Xeon 550 MHz, 2 MB L2 cache, 4096 MB RAM       | ProLiant 6400R | 170 Users        | Not Tested

Figure 1 Maximum Users by Scenario and Processor Configuration on Intel Xeon Systems

System and User Memory Requirements


Table 2 contains general guidelines for Windows Server 2003 Terminal Services memory
requirements, based on the results achieved in the performance lab.

Table 2 Recommended Memory

                        Knowledge Workers    Data Entry Workers
Memory per user (MB)    9.5                  3.5
System memory (MB)      128
Total memory            System + (# of Users x Memory per User)
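
For example, sizing a server for 200 Knowledge Workers with the figures in Table 2 works out as
follows. The sketch below is illustrative only; the 200-user count is an assumption, not a tested
configuration.

    # Total memory = system memory + (number of users x memory per user), from Table 2.
    system_mb = 128
    memory_per_user_mb = 9.5        # Knowledge Worker figure from Table 2
    users = 200                     # illustrative user count

    total_mb = system_mb + users * memory_per_user_mb
    print(total_mb)                 # 2028 MB, so roughly 2 GB of RAM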

Comparison with Terminal Services Running on Windows 2000


By default, Terminal Server on Windows Server 2003 is tuned to accommodate approximately 80
percent more users than Windows 2000 Server. This is made possible because of better utilization of
the limited 32-bit virtual address space by the kernel. The system must allocate virtual resources for
every logged-on user, whether the session is active, idle, or disconnected. These virtual
limitations cannot be overcome by adding physical resources to a computer. Because of these
constraints, Terminal Services on Windows 2000 Server is less effective at utilizing the faster hardware
that is available today. For example, a computer running Windows Server 2003 Terminal Server that
can accommodate 280 knowledge workers can only accommodate 160 knowledge workers if it is
running Windows 2000 Server. Windows Server 2003 Terminal Server provides better default scaling
for today’s hardware. Figure 2 shows the maximum number of users supported on Windows 2000
Server against Windows Server 2003.

Figure 2 Maximum number of users supported before reaching kernel address space
limitations
On the other hand, on a computer where the maximum number of users is limited by physical
limitations, such as CPU saturation or physical memory availability, Windows Server 2003 Terminal
Server scaled to the same number of users as Windows 2000 Server. For example, if a 4 x 550 MHz
processor computer running Windows 2000 Server scales up to 170 users running the Knowledge
Worker scenario, Windows Server 2003 will scale to the same number of users.
Test Environment and Testing Tools
The testing was completed by Hewlett Packard and Microsoft using specific test scripts and tools.
The test laboratory used Hewlett Packard servers. All settings are defined in Appendix B:
Terminal Server Settings. Overviews of the server specifications are included in Appendix C: HP
Server Specifications.

Test Environment
The Terminal Services testing laboratory is shown in Figure 3. Windows Server 2003, Enterprise
Edition build 3790, was installed on these servers.

The servers tested with Windows Server 2003 Terminal Services were:
• HP DL 360G3

• 2 x Intel Xeon 2.4GHz CPUs

• 4 GB RAM.
• HP DL 560

• 4 x Intel Xeon MP 2 GHz CPUs

• 6 GB RAM

Other components of the testing laboratory included:

• Domain Controller and Test Controller: HP DL360 with 2 x P III 800 MHz CPUs and 2 GB
RAM running Windows Server 2003 Enterprise Edition. This server is the DHCP and DNS
server for the domain. It manages the 35 workstations running Windows 2000 Professional,
including script control, software distribution, and remote reset of the workstations.

• Workstations (35): Pentium III 800 MHz, 128 MB RAM, 8 GB hard disk with Windows 2000
Professional. Multiple Terminal Services Client sessions can run on each of the 35
workstations.

• Mail server and Web server: HP DL360 with 2 x Pentium III 800 MHz CPUs and 2 GB RAM with
Windows 2000 Advanced Server and Microsoft Exchange 2000 Service Pack 2. This server was
used for the Knowledge Worker tests.
Figure 3 Testing Lab Environment

Testing Tools and Scripts

To assist with scalability testing, Microsoft developed the testing tools and scripts used on the client
computers to closely simulate an actual user session.

Testing Tools

Terminal Services Scalability Planning Tools (TSScaling) is a suite of tools that assists
organizations with Microsoft Windows Server 2003 Terminal Server capacity planning. The tools
let organizations easily place and manage simulated loads on a server, which in turn allows an
organization to determine whether its environment can handle the load it expects to place on it.

The suite includes the following automation tools:

• Robosrv.exe, the tool that drives the server side of the load simulation testing. Together
RoboServer and RoboClient drive the server-client automation. RoboServer is typically
installed on the test controller computer, and must be running before an instance of
RoboClient can be started. After an instance of both RoboServer and RoboClient are running,
RoboServer commands the RoboClients to run scripts that load the terminal server at
operator-specified intervals.

• Robocli.exe, the tool that controls the client side of the load simulation testing. Together
RoboServer and RoboClient drive the server-client automation. RoboClient is typically installed on
the test client computers, and requires RoboServer to be running before an instance of RoboClient
can be started. RoboClient receives commands from RoboServer to run scripts that load the
terminal server at operator specified intervals.

The suite includes the following test tools:


• Qidle.exe, used in an automation environment, determines whether any of the currently running
scripts have failed and require an administrator to intervene. QIdle determines this by periodically
checking whether any of the sessions logged on to the terminal server has been idle for more than a
specified period of time. If there are any idle sessions, QIdle notifies the administrator with a
beeping sound.

• Tbscript.exe is a script interpreter that drives the client side load simulation. It executes Visual
Basic Scripting Edition scripts and supports specific extensions for controlling the terminal server
client. Using these extensions, a user can create scripts that control keyboard/mouse input on the
client computer and synchronize executions based on the strings displayed by the applications
running inside the session.

The suite includes the following Help files:


• TBScript.doc: This file provides Terminal Server bench scripting documentation.

• TSScalingSetup.doc: this file provides a scalability test environment setup guide. It includes
information on the following topics:

• Required Hardware

• Domain Controller Setup

• Exchange Server Setup

• Terminal Server Setup

• Client Computer Setup

• List of Required Files

• TSScalingTesting.doc: this file provides a testing guide. It includes information about the following
topics:

• Procedure for installing Windows Server 2003

• Procedure for Installing Windows 2000

• Procedure for Configuring Windows 2000 after Install

• Procedure for Configuring Windows Server 2003 after Install

• Procedure for Installing and Configuring Office 2000

• Procedure for Installing and Configuring Office XP

• Setting Up Performance Counters

• Procedure for Deleting the Exchange Store

• Procedure for Running Config Scripts


• Selecting Test Configuration

• Special settings for Task Worker Test

• Procedure for Starting Test

• Evaluating the Response Time for User Actions

Testing Scripts

Two scripts were developed based on Gartner Group specifications [1] for the Knowledge Worker
and Data Entry Worker, as defined below.

Knowledge Workers

Knowledge workers are defined as workers who gather, add value to, and communicate
information in a decision support process. Cost of downtime is variable but highly visible. These
resources are driven by projects and ad-hoc needs towards flexible tasks. These workers make
their own decisions on what to work on and how to accomplish the task.

Example job tasks include: marketing, project management, sales, desktop publishing, decision
support, data mining, financial analysis, executive and supervisory management, design, and
authoring.

Data Entry Workers

Data Entry Workers input data into computer systems; examples include transcription, typing, order entry,
clerical work, and manufacturing. Additionally, the Data Entry Worker script was tested in a
‘dedicated’ mode, by not starting a Windows Explorer shell for each user.

Gartner defines another class of worker – the High Performance Worker. Workers of this type
typically use specialized computing platforms and applications to perform their tasks, such as
genetic engineering, chip design, quantum physics, 3D modeling, 3D animation, and
simulation. Because these types of applications would not be suitable to run on a terminal server,
this class of worker was not tested.

A detailed flowchart describing the functions of the scripts is contained in “Appendix A: Test Script
Flow Charts”. The utilities used to perform these tests are available in the Windows Server 2003
Resource Kit.

The scripts developed at Microsoft for these tests are interpretations of the Gartner Group user
definitions, and are provided “as is”. They will not work in your test environment without some
modifications, such as changing the various server names that are hard coded in the scripts to
match those in your test environment. The test tools are available at
http://www.microsoft.com/reskit.

[1] TCO Manager for Distributed Computing 4.0
Testing Methodology
Windows Server 2003, Enterprise Edition (member of the Windows Server 2003 family of
operating systems) and Office 2000 were installed using settings described in “Appendix B:
Terminal Server Settings.” An automated server and client workstation reset was performed
before each test-run in order to revert to a clean state for all the components.

Response times, based on user actions, were used to determine when or if a terminal server was
overloaded. Client-side scripts drive the user simulation and record the response times for a set
of simulated user actions.

The scripts contain many sequences. A sequence starts with the test script sending a keystroke
through the client to one of the applications running in the session. As a result of the keystroke, a
string is displayed by the application. For example, Ctrl-F opens the File menu, which then
displays the Open string.

The response time is the time from the keystroke to the display of the string. To
accurately measure the response time, two initial time
readings, ti1 and ti2, are taken from a reference time source before and after sending the keystroke. Ti1 is
the time when the test manager sends the instructions to the client, and ti2 represents the client
sending the keystroke to the server. A third reading, tf3, is made after the corresponding text string
is received by the client. Time is measured in milliseconds. Based on these values, the response
time is estimated as belonging to the interval (tf3 – ti2, tf3 – ti1). In practice, the measurement
error (the time between ti1 and ti2) is less than 1 millisecond, and the response values are
approximated as tf3 – ti1.
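
As a rough illustration of this bookkeeping, the sketch below (Python; send_keystroke and
wait_for_string are hypothetical placeholders for the client automation calls, not part of the
TSScaling tools) records the two initial readings around the keystroke and approximates the
response time in milliseconds.

    import time

    def measure_response(send_keystroke, wait_for_string):
        """Measure the response time for one script sequence."""
        ti1 = time.perf_counter()        # before the keystroke is issued
        send_keystroke()
        ti2 = time.perf_counter()        # after the client has sent the keystroke
        wait_for_string()                # block until the expected string is displayed
        tf3 = time.perf_counter()

        interval_ms = ((tf3 - ti2) * 1000.0, (tf3 - ti1) * 1000.0)
        # The measurement error (ti2 - ti1) is typically under 1 ms, so the
        # response time is approximated as the upper bound, tf3 - ti1.
        return interval_ms[1]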

For each scenario, the Test Manager workstation started groups of ten client sessions on the
workstations with a 30-second interval between each session. After the group of ten client
sessions was started, a 5-minute stabilization period was observed in which no additional
sessions were started. After the stabilization period, the Knowledge Worker script starts the four
applications it will use in the test within five minutes; this prevents any interference between each
group of ten client sessions.

For each action, as more users log on, a degradation point is determined when the
response times increase to a value that is deemed to be significant:
• For actions that have an initial response time of less than 200 ms, the degradation point is
considered to be where the average response time is more than 200 ms and greater than 110
percent of the initial value.

• For actions that have an initial response time of more than 200 ms, the degradation point is
considered to be the point where the average response time increases by 10 percent of the initial
value.

This criterion is based on the assumption that a user will not notice degradation in a response time
while it remains below 200 ms.
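
One way to apply these criteria to a series of measurements is sketched below (Python; the data
format is assumed, while the 200 ms threshold and 10 percent margin come from the criteria
above).

    def degradation_point(samples):
        """samples: list of (user_count, avg_response_ms), ordered by user count.

        Returns the first user count at which the action is considered degraded,
        or None if no degradation is observed.
        """
        initial = samples[0][1]
        for users, avg_ms in samples:
            if initial < 200.0:
                degraded = avg_ms > 200.0 and avg_ms > 1.10 * initial
            else:
                degraded = avg_ms > 1.10 * initial
            if degraded:
                return users
        return None

    # Hypothetical action starting at 50 ms that crosses both thresholds at 270 users.
    print(degradation_point([(10, 50), (100, 60), (200, 120), (270, 230)]))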

There are several reasons for response time degradation. When most actions reach their
degradation point at approximately the same number of users, CPU saturation is the main reason.
Response time degradation that can be attributed to actions relating to file IO, such as opening a
dialog box to select a file to open or save, is due to IO limitations. Also, some actions may exhibit
a “noisy” degradation, in which a small number of response times randomly take values
noticeably higher than average, without noticeably influencing the average value.

Figure 4 shows three examples of response time degradation on a CPU saturated system.

Figure 4: Example of Canary time by number and profile of users

The test harness supports the previous system of tests that determines the system load threshold based on
a “canary” script that runs between the logon groups while the system is stable. The canary script
is run before any users are logged onto the system, and the time the script takes to complete (elapsed
time) is recorded. This elapsed time becomes the baseline and is deemed to be the baseline response
rate for a given configuration of server. This method considers maximum load to be reached
when the total time needed to run the canary script is 10 percent higher than the initial value. The
response time method is considered to be more accurate because it measures the key parameter for the
actual user experience, takes into account the impact of the logon period, and provides richer data to support
decision making. The “canary” script method can still be more efficient for setups that support a
small number of users, where the response time method will not provide a large enough sample of
response time values.
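
A minimal sketch of the canary criterion, assuming elapsed times are collected in seconds for each
logon group:

    def canary_capacity(baseline_s, runs):
        """runs: list of (user_count, canary_elapsed_s), ordered by user count.

        Returns the last user count at which the canary elapsed time stayed
        within 10 percent of the single-user baseline.
        """
        capacity = 0
        for users, elapsed_s in runs:
            if elapsed_s > 1.10 * baseline_s:
                break
            capacity = users
        return capacity

    # Hypothetical run: baseline 600 s; the run at 180 users exceeds the 10 percent margin.
    print(canary_capacity(600, [(50, 610), (100, 620), (150, 650), (180, 700)]))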

One other key parameter that is monitored is the total cycle time for a work unit performed by a user.
This must remain constant, especially for comparison tests (cross-platform, or when testing
specific features). Tests that run with different cycle times are effectively performing a different
amount of work in the same unit of time, thus consuming more resources per unit of time, and
cannot be compared directly.
Analysis of the Results

Overview
Although the scripts used in these scenarios simulate tasks that a normal user could perform, the
users simulated in these tests are tireless—they never reduce their intensity level. The simulated
clients type at a normal rate, pause as if looking at dialog boxes, and scroll through mail
messages as if to read them, but they do not get up from their desks to get a cup of coffee, they
never stop working as if interrupted by a phone call, and they do not break for lunch. This
approach yields accurate but conservative results.

Memory Requirements and Utilization


In addition to the 128-MB base minimum memory requirement for a Windows Server 2003-
based server, the amount of memory needed per user for these scenarios is shown in Figure 5.

Figure 5 Memory requirements by scenario

Determining the amount of memory necessary for a particular use of a terminal server is complex.
It is possible to measure how much memory an application has committed—the memory the
operating system has guaranteed the application that it can access. But the application will not
necessarily use all of that memory, and it certainly is not using all of that memory at any one time.
The subset of committed bytes that an application has accessed recently is referred to as the
working set of that process. Because the operating system can page the memory outside a
process’s working set to disk without a performance penalty to the application, the working set, if
used correctly, is a much better measure of the amount of memory needed.
The Process performance object's Working Set counter, used on the _Total instance of the
counter to measure all processes in the system, measures how many bytes have been recently
accessed by threads in the process. However, if the free memory in the computer is sufficient,
pages are left in the working set of a process even if they are not in use. If free memory falls
below a threshold, unused pages are trimmed from working sets.

The method used in these tests for determining memory requirements cannot be as simple as
observing a performance counter. It must account for the dynamic behavior of a memory-limited
system.

The most accurate method of calculating the amount of memory required per user is to analyze
the results of several performance counters (Memory\Pages Input/sec, Memory\Pages
Output/sec, Memory\Available Bytes, and Process\Working Set (_Total)) in a memory-
constrained scenario. When a system has abundant physical RAM, the working set will initially
grow at a high rate, and pages will be left in the working set of a process even if they are not in
use. Eventually, when the total working set tends to exhaust the amount of physical memory, the
operating system will be forced to trim the unused portions of the working sets until enough pages are
made available to relieve the memory pressure. This trimming of unused portions of the working
sets will occur when the applications collectively need more physical memory than is available, a
situation that requires the system to constantly page to maintain all the processes’ working sets.
In operating systems theory terminology, this constant paging state is referred to as thrashing.

Figure 6 shows the values of several relevant counters from a Knowledge Worker test when
performed on a server with 1024 MB of RAM installed.

Figure 6 Stages of memory usage

The results are very close to what is expected.

Zone 1 represents the abundant memory stage. This is when physical memory is greater than the
total amount of memory that applications need. In this zone, the operating system does not page
anything to disk, even seldom-used pages.

Zone 2 represents the stage when unused portions of the working sets are trimmed. In this stage
the operating system periodically trims the unused pages from the processes’ working sets
whenever the amount of available memory drops to a critical value. Each time the unused
portions are trimmed, the total working set value decreases, increasing the amount of available
memory which results in a significant number of pages being written to page file. As more
processes are created, more memory is needed to accommodate their working sets and the
number of unused pages that can be collected during the trimming process decreases (as
outlined by the dotted line). The pages input rate is mostly driven by pages required when
creating new processes. The average is typically below the rate of pages output. This state is
acceptable and applications should respond well because, in general, only unused pages are
being paged to disk.

Zone 3 represents the high-pressure zone. The working sets are trimmed to a minimal value and
mostly contain pages that are frequently used by the greater number of users. Page faults will likely
cause the ejection of a page that will need to be referenced in the future, thus increasing the
frequency of page faults. The rate of pages output per second will increase significantly, and the page
output curve follows, to some degree, the shape of the page input curve. The system does a very good
job of controlling degradation, almost linearly, but the paging activity will eventually increase to a
level where the response times are not acceptable.

In Figure 6, it seems as though the amount of physical memory is greater than 1024 MB,
because the operating system does not start to trim working sets until the total required is well
above 1600 MB. This is due to cross-process code sharing, which makes it appear as if there
is more memory used by working sets than is actually available.

The amount of memory needed can be determined from the number of users at the point where
the page-out activity starts increasing significantly (the end of Zone 2 in Figure 6). The amount of
memory required per user can be estimated by dividing the total amount of memory in the system
by the number of users at the end of Zone 2. Such an estimate would not account for the memory
overhead required to support the operating system. A more precise measurement can be
obtained by running this test for two different memory configurations (for example, 512 and 1024
MB), determining the number of users at the end of Zone 2 in each, and dividing the difference in memory
size (1024 – 512 MB in this case) by the difference in the number of users at the end of Zone 2. In practice, the
amount of memory required for the operating system can be estimated as the memory consumed
before the test starts.
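
A small sketch of that calculation (Python; the user counts below are illustrative placeholders, not
measured values):

    def memory_per_user(mem_small_mb, users_small, mem_large_mb, users_large):
        """Estimate per-user memory from two memory configurations, using the
        number of users reached at the end of Zone 2 in each configuration."""
        return (mem_large_mb - mem_small_mb) / (users_large - users_small)

    # Hypothetical example: Zone 2 ends at 55 users with 512 MB and at 110 users
    # with 1024 MB, giving (1024 - 512) / (110 - 55), or roughly 9.3 MB per user.
    print(memory_per_user(512, 55, 1024, 110))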

Although a reasonable amount of paging is acceptable, paging naturally consumes a small
amount of the CPU and other resources. Because the maximum user counts that could be loaded onto
a system (Figure 1) were determined on systems with abundant physical RAM, a minimal amount
of paging occurred. The working set calculations assume that a reasonable amount of paging has
occurred to trim the unused portions of the working set, but this would only occur on a system
that was memory-constrained. If you take the base memory requirements and add to them the
number of users multiplied by the required working set, you end up with a system that is naturally
memory-constrained and therefore acceptable paging will occur. On such a system, expect a
slight decrease in performance due to the overhead of paging. This decrease in performance can
reduce the number of users who can be actively working on the system before the response time
degrades above the acceptable level.

Network Usage
Network usage overhead tends to be quite low on Terminal Server, because of protocol efficiency
and since, by default, the Terminal Services Client (mstsc.exe) uses data compression for all
connections. Network usage for the two scenarios is shown in Figure 7. This includes all traffic
coming in and going out of the terminal server for these scenarios.

Figure 7 Total network usage (including RDP and all other network traffic)
by scenario
Figure 8 shows network usage in bytes per user for the Knowledge Worker scenario. This is
taken from the Bytes Total/sec counter in the Network Interface performance object. This graph
illustrates how the bytes-per-user average was calculated, as it converges on a single number
when a sufficient number of simulated users are running through their scripts. The number of user
sessions is plotted on the primary axis. The count includes both bytes received and sent by the
terminal server, using any network protocol.

Figure 8 Knowledge worker scenario network usage per user and number of users against time

The network utilization numbers in these tests only reflect RDP traffic and a small amount of
traffic from the domain controller, Microsoft Exchange Server, IIS Server, and the test manager. In
these tests, the terminal server’s local hard drive is used to store all user data and profiles; no
roaming profiles or network home directories were used. In a normal terminal server environment
there will be more traffic on the network, especially if user profiles are not stored locally.

Effect of Increased Color Depth (16 bit)


Choosing 16-bit color depth for remote connection sessions instead of 8-bit slightly increases
RAM usage (~10% when compared to 8-bit remote sessions), increases network bandwidth
usage from 1150 bytes total/user to 1450 bytes total/user, but does not significantly affect the
CPU usage level or the kernel virtual space consumption.

Effect of File Redirection


Enabling file redirection has little impact in terms of CPU usage, memory consumption or kernel
virtual address consumption. When file redirection is enabled, the emulation scripts access the
data files from the client computer’s hard drive, and as expected, this increases network
bandwidth usage. This increase is proportional to the file size and frequency with which the files
are accessed.

In this scenario, each user accesses three files. The user opens and reads a 250 KB Excel
spreadsheet and saves a 16 KB Excel spreadsheet and a 23 KB Word file every 36 minutes. The
impact on the total bandwidth usage was an increase from 1150 bytes total/user to 1450 bytes
total/user.
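
For a rough sense of where that increase comes from, the arithmetic below (Python) computes the
raw payload rate of the redirected files over the 36-minute cycle; reading "bytes total/user" as a
per-second rate from the Bytes Total/sec counter is our assumption, and the difference between
the raw payload rate and the observed increase would be redirection and protocol overhead.

    # Redirected file payload per 36-minute work cycle (file sizes from the scenario).
    payload_bytes = (250 + 16 + 23) * 1024     # one read plus two saves, in bytes
    cycle_seconds = 36 * 60

    raw_rate = payload_bytes / cycle_seconds   # roughly 137 bytes/sec per user
    observed_increase = 1450 - 1150            # roughly 300 bytes/sec per user

    print(round(raw_rate), observed_increase)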
Effect of Printer Redirection
Test results show network bandwidth usage is not significantly affected and the impact on other
key system parameters (memory usage, paged pool, system PTEs area) is negligible. Test
results also show that the printer redirection has a significant impact on the server CPU usage,
especially during the login sequence when the new printer is detected and installed by the
spooler service.

To assess the effect of enabling printer redirection on Terminal Server scalability, the Knowledge
Worker script was run in a configuration where an HP LaserJet 6P printer was installed on the
NULL port on each client computer and the clients were configured to redirect to the local printer
when connecting to the server. The script prints twice during the 36-minute work cycle: the first
print job is a large 250 KB Excel spreadsheet and the second print job is a 23 KB Word
document. If a DL 6400R system is CPU saturated at 160 users, then enabling printer redirection
lowers the number of supported users to 140.

Effect of Logon Activity on CPU Utilization


In each of the tests, the CPU utilization graphs are similar to the one in Figure 9, in that they
consist of an ascending phase, corresponding to the test scenario script starting on each client
workstation with a burst of CPU-intensive logon activity, followed by a stabilization
plateau after each set of 10 connections.

Figure 9: Example of plateau phases

Effect of Kernel Address Space Limitations


The 32-bit Windows platform is named after its 32-bit address space, meaning that up to 2^32 bytes
(4 GB) can be addressed at any one time, regardless of physical RAM. By default, 2 GB of this
address space is allocated to user-mode processes, and 2 GB is allocated to the kernel.
Although separate 2 GB regions of address space are used for user-mode processes in the
system, most of the 2 GB kernel area is global and remains the same regardless of the user-
mode process currently active.

The 2 GB of kernel area contains all system data structures and information. Therefore, the 2 GB
kernel address space area can impose a limit on the number of system data structures and the
amount of kernel information that can be stored on a system, regardless of physical memory.

There are three areas in the 2 GB kernel address space that have a significant impact on terminal
server scalability: the paged pool area, the system page table entries (PTEs) area, and the system file cache
area. The paged pool area holds pageable memory allocations from kernel components and drivers;
the System PTEs area holds kernel stack allocations – stacks created in kernel mode for each
thread, to be used when that thread makes kernel calls – and page table data structures; and the
system file cache holds mapped views of files opened on the system.

Although these different allocations share the same area, the partition between them is fixed at
system startup: If the system runs out of space in one of those areas, the other area cannot
donate space to it, and applications may begin to encounter unexpected errors. Therefore, when
you see a system that is experiencing unexpected errors or is unable to accept new logins,
without the system having some other resource limitation (such as CPU or disk limitations), it is
probably due to the paged pool area or the System PTE area running out of space.

Tests for several scenarios showed that the ratio of System PTEs to paged pool areas is (2.9-
4):1, where the smaller number is for the lightweight dedicated worker scenario that starts an
application directly and the larger number is for the complex Knowledge Worker scenario
involving starting many simultaneous user applications in the session. Choosing a smaller
number for System PTEs means that relatively more memory is allocated to paged pool area.
Because each user consumes at least three times more System PTE memory compared to
paged pool memory, choosing a smaller ratio lowers the number of users supported in a complex
scenario. For each user lost in complex scenarios, three more are supported for simple ones.
Based on these observations, the Windows Server 2003 default ratio was tuned to a value closer
to the low end of the range: 3.25. This accommodates both simple and complex user scenarios,
at roughly 90 percent of the optimal system configuration for each specific case. For specific scenarios, you can
tune this through the registry: the DWORD value PagedPoolSize under
HKEY_LOCAL_MACHINE\SYSTEM\CurrentControlSet\Control\Session Manager\Memory
Management specifies the desired paged pool memory size in bytes. This value controls the paged
pool size by changing the ratio between the paged pool and System PTEs areas while keeping
their combined size unchanged.
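
As an illustration only (the 192 MB figure is an arbitrary example, not a recommendation), the paged
pool size could be set programmatically; the sketch below uses Python's winreg module, and the
change takes effect after a restart.

    import winreg

    # Illustrative only: set PagedPoolSize to 192 MB. Overriding the default
    # ratio is rarely necessary; the value here is an assumption for the example.
    key_path = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
    with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, key_path, 0, winreg.KEY_SET_VALUE) as key:
        winreg.SetValueEx(key, "PagedPoolSize", 0, winreg.REG_DWORD, 192 * 1024 * 1024)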

The system file cache area is another key region of kernel address space. It holds data structures
and data for files opened by various applications. When the system detects pressure on the
paged pool area (either consumption is above a threshold value – by default, 80 percent of the total – or
large pool allocations are failing), it starts a process that reclaims some of the space from the
system file cache to satisfy paged pool usage. If the file I/O activity on the system is high, the
probability that requests will find the desired file data in memory decreases, thus negatively
affecting file access times. This can be monitored using the Cache performance object with
the Copy Read Hits % performance counter; the values for this counter should ideally not drop
consistently under 99%. In Figure 10, one can see how the slope of paged pool consumption
starts dropping right before point “a”, indicating that system cache mapped memory is reclaimed
and used to satisfy the paged pool allocation request. After point “a”, the cache hit ratio drops and
response times for opening the “Save As” dialog box in Excel increase from an average of 50 ms
to 200 ms.

Figure 10 Example of cache hit ratio degradation
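
One simple way to watch for this condition is to sample the counter periodically. The sketch below
shells out to the standard typeperf tool from Python and flags samples that fall below the 99
percent guidance; the sampling interval and sample count are arbitrary choices.

    import subprocess

    # Sample \Cache\Copy Read Hits % every 15 seconds, 20 samples.
    proc = subprocess.run(
        ["typeperf", r"\Cache\Copy Read Hits %", "-si", "15", "-sc", "20"],
        capture_output=True, text=True, check=True)

    for line in proc.stdout.splitlines():
        parts = [p.strip('"') for p in line.split(",")]
        if len(parts) < 2 or not parts[1]:
            continue
        try:
            value = float(parts[1])
        except ValueError:
            continue                           # skip header and summary rows
        if value < 99.0:
            print("Cache hit ratio dropped to", value, "at", parts[0])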

Such a situation can be mitigated in two ways:

• You can provide plenty of memory for the system to allow pages which belong to the file
mappings destroyed in the reclaim process to be kept in memory and reused on
subsequent accesses. For the Knowledge Worker scenario, using the Physical Address
Extension (PAE) to allow access to 6 GB delayed the cache hit ratio degradation from 200
users to 270 users. This solution may involve trade-offs if PAE modes are required for
memory above the 4 GB limit.

• Increasing the amount of memory allocated to the paged pool will delay the moment when the
system decides to start reclaiming memory from the system cache to alleviate paged pool
pressure. The trade-off is that increasing the paged pool area size
involves a decrease in the System PTEs area size and may lower the maximum number of
users supported.

Enabling Physical Address Extension (PAE) mode allows access to physical memory beyond the 4 GB
space accessible through normal 32-bit addressing. Although the 2 GB user mode space available to
each application does not increase, the total amount of memory available to applications does. The
PAE mode is a benefit to Terminal Server configurations where application memory consumption is
high and the normal 4 GB of addressable memory is not enough to accommodate the working sets of all
applications and the system while keeping the number of page faults at acceptable levels. PAE mode
impacts scalability in two ways:

• Because the Page Table Entries (PTEs) double in size, there is an initial increase in
memory consumption from the System PTEs area. This cost is about 20 MB for a system
with 8 GB of RAM.

• Because the paged pool area holds data structures that contain PTEs (typically memory
sections), paged pool usage per user increases which can be an issue for paged pool
limited systems.
Figure 11 PAE impact on kernel address space

In Figure 11, System PTEs and paged pool consumption are compared for a regular test and a 6 GB
PAE-enabled test using the Knowledge Worker scenario. The result is that the paged pool
consumption increased by ~13% per user and the amount of available System PTEs dropped
by ~15 MB.

Effect of Typing Rate on CPU Utilization


Changing the typing rate in these tests increases CPU utilization and has an effect on scalability,
with higher typing rates corresponding to fewer users.

In real-world situations, the expected typing rate of users should be taken into consideration when
sizing a system. In addition, users who open and close applications (instead of switching between
them) and users who move quickly between tasks will place a heavier load on a system.

Effect of Background Spelling- and Grammar-Checking


Based on the results of previous tests, background grammar-checking was disabled in Microsoft
Word for the Knowledge Worker scenarios. Background grammar-checking had a significant
negative impact on scalability, reducing the number of users supported on the four-way
Knowledge Worker scenario by about half. Microsoft is currently investigating this issue. If you
disable background grammar-checking, you can use foreground checking by pressing F7 from
within Word.
Performing Your Own Scaling Tests

To Test or Pilot?
The purpose of this document is to give you a starting point from which to base your own sizing
efforts. Unless you are prepared to spend large amounts of resources analyzing your users’ work
habits and capturing these actions into a simulated script, you will find that it is more effective to
go into a ’pilot’ mode after you have determined that your applications work in a Terminal Services
environment.

Once you have chosen a server configuration as a starting point (based on this white paper’s
findings), you can gradually add users to determine the maximum number that a system
configuration (terminal server/network architecture/infrastructure servers) can support.

It is recommended that you add small batches of users to the server at a time (in a similar fashion
to the testing methodology used in this paper) to determine when the system slows down to
an unacceptable level. Obviously, these batches of users should be added at intervals of hours or
days, rather than minutes, as there is likely to be a delay in the performance impact to the system
as each user becomes familiar with the new system.

Once you have determined the terminal server configuration, you can expand the scenario by
testing load balancing.

As an aid to understanding the various factors involved when running applications on a terminal
server, the following items should also be taken into consideration.

Determining Application Suitability


If some or all of your desktop computers are capable of running the application locally, consider
using application distribution technology such as Windows XP Professional and IntelliMirror®
management technologies, or Microsoft Systems Management Server. It is a better use of
resources to run a frequently-used productivity application on a LAN-connected, Windows-based
computer than on a terminal server attached to the same LAN. Applications that make extensive
use of graphics or multimedia (such as Windows Media™ Player, voice recognition, or CAD
applications), are not suited for running on a terminal server and may not scale effectively or even
work at all. Other issues such as how the application writes to the screen, and whether the
application uses large amounts of CPU while idle or when the user is typing will also determine its
suitability for use on a terminal server.

However, if your application is frequently updated, needs to be accessed from a computer
running an operating system other than Windows, or manipulates large amounts of data over a
low-bandwidth connection, then that application may be a good candidate for running on a
terminal server.

If it is determined that a terminal server is the most practical method of distributing the
application, consider just running the application on the terminal server, and not on each desktop.
This can save significant amounts of resources on the terminal server and may allow many more
users to log on simultaneously.
Characterization of Users
User usage patterns have a significant impact on terminal server performance and should be
considered carefully when sizing a terminal server. User usage characteristics will have a
different effect on a terminal server than what is expected on a traditional Windows-based
computer. In a computer-centric architecture, the speed at which a user inputs characters from
the keyboard will not have a significant impact on CPU utilization. The same cannot be said for a
terminal server. Because each character typed on the client requires processing on the terminal
server, and many users can be typing at one time, the speed at which the users enter characters
has a significant effect on scalability. Other factors, such as whether all of your users log on at the
same time of day and how often they take breaks will also have an effect on overall system
responsiveness.

Network Environment
Understanding the network environment is especially important when designing a terminal server
solution that involves WAN communications. Even infrequent network slowdowns can cause
unacceptable performance to terminal server users. Both latency (the time it takes a packet to
reach the other end of the network) and bandwidth (the amount of data that can travel over the
network within a given period of time) are important factors. Because everything a user sees on
their screen is generated by the terminal server, high latency has a serious impact on the
perceived response of the system, while low bandwidth affects the time it takes to get large
chunks of data (such as bitmaps) to the user’s screen. Therefore, variables such as the typing rate of
the users, the amount of graphics used in an application, and how many users are working at any
one time over a WAN connection all factor into the equation when asking, “How many users can I
connect to a terminal server over such and such a connection?” The only reliable way of
determining this is to test it in your actual network, but if the latency over a WAN connection is
low, you can use the data from Figure 4 to estimate the average network bandwidth required by
each user. Keep in mind that the user experience very much depends on there being sufficient
bandwidth available when the application is writing large amounts of information to the screen.
Connecting over a low-bandwidth connection has no significant impact on terminal server scaling.
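
As a back-of-the-envelope aid, the sketch below (Python) estimates how many users a WAN link
could carry on bandwidth alone, ignoring latency; the 0.7 utilization factor and the link size are
assumed planning figures, and the per-user rate treats the "bytes total/user" averages reported
earlier as per-second values.

    def users_per_link(link_kbps, bytes_per_sec_per_user, utilization=0.7):
        """Estimate users a WAN link can carry on bandwidth alone, leaving
        headroom for bursts when applications paint large amounts of screen data."""
        usable_bytes_per_sec = link_kbps * 1000 / 8 * utilization
        return int(usable_bytes_per_sec / bytes_per_sec_per_user)

    # Example: Knowledge Worker profile at ~1150 bytes/sec per user on a 512 kbps link.
    print(users_per_link(512, 1150))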
Appendix A: Test Script Flow Charts

Knowledge Worker Script


Typing Speed = 35 WPM

Definition: a worker who gathers, adds value to, and communicates information in a decision
support process. Cost of downtime is variable but highly visible. Projects and ad-hoc needs
towards flexible tasks drive these resources. These workers make their own decisions on what to
work on and how to accomplish the task. The usual tasks they perform are marketing, project
management, sales, desktop publishing, decision support, data mining, financial analysis,
executive and supervisory management, design, and authoring.

Connect User “smcxxx”


⇒ Start (Microsoft Excel) - Load massive Excel spreadsheet and print it
Open File c:\documents and settings\smcxxx\Carolinas Workbook.xls
Print
Close Document
Minimize Excel
⇒ Start (Outlook) - Send a new, short mail message
( email2 )

Minimize Outlook
⇒ Start (Internet Explorer)
URL http://tsexchange/tsperf/Functions_JScript.asp
Minimize Explorer
⇒ Start (Word) - Type a page of text
( Document2 )

Save
Print
Close Document
Minimize Word
⇒ Switch To (Excel)
Create a spreadsheet of sales vs months
( spreadsheet )

Create Graph
( Graph )
Save
Close Document
Minimize Excel
⇒ Switch To Process, (Outlook) - read message and respond
( Reply2 )

Minimize Outlook
Now, Toggle between apps in a loop
loop(forever)
⇒ Switch To Process, (Excel)
Open File c:\documents and settings\smcxxx\Carolinas Workbook.xls
Print
Close Document
Minimize Excel
⇒ Switch To Process, (Outlook) - Mail Message
( email2 )

Minimize Outlook
⇒ Switch To Process, (Internet Explorer)
Loop (2)
URL http://tsexchange/tsperf/Functions_JScript.asp
URL http://tsexchange/tsperf/Conditional_VBScript.asp
URL http://tsexchange/tsperf/Conditional_JScript.asp
URL http://tsexchange/tsperf/Arrays_VBScript.asp
URL http://tsexchange/tsperf/Arrays_JScript.asp
end of loop
Minimize Explorer
⇒ Switch To Process, (Word) - Type a page of text
( Document2 )

save
Print
Close Document
Minimize Word
⇒ Switch To Process, (Excel)
Create a spreadsheet of sales vs months
( spreadsheet )

Create Graph
( Graph )

Save
Close Document
Minimize Excel
Switch To Process, (Outlook) - read message and respond
( reply2 )

Minimize Outlook
End of loop
Logoff

Data Entry Worker Script


Typing Speed = 35 WPM

Definition: Workers who input data into computer systems including transcription, typists, order
entry, clerical, and manufacturing.

Connect User “smcxxx”

⇒ OpenFile – Open File in Excel

Loop (forever)

⇒ Clear spreadsheet

⇒ Move to First Row

⇒ Type Column headers

Loop (10)

⇒ Enter 10 Data Rows in spreadsheet

⇒ Select last 10 Rows

⇒ Format Data

⇒ Save File

End Loop

End Loop
Appendix B: Terminal Server Settings
• Operating system installation

• All drives formatted using NTFS

• Components

o Terminal Services enabled in Application-server mode

o All other components disabled except Accessories and Utilities, Network Monitor Tools,
and SNMP under Management and Monitoring Tools

• Networking left at default with Typical Network settings

• Server joined as a member of a Windows Server 2003 domain


• Page file initial and maximum size set to 4092 MB

• Registry size set to 256 MB


• RDP protocol client settings

• Clipboard mapping, printer mapping, and LPT mapping disabled


• Office 2000 settings

• Office 2000 installed using default Terminal Server transforms file from Office 2000 Resource
Kit (termsrvr.mst)

• Outlook settings

• Mailbox on Exchange server

• Email options

o AutoSave of messages disabled

o Automatic name checking disabled

o AutoArchive disabled

• Word settings

• Background grammar-checking disabled

• Background saves disabled

• Save AutoRecover information disabled


• Printer settings

• HP LaserJet 6P created to print to NUL:

• Print notification messages disabled

• Spooler information event logging disabled


• User profiles
• Configuration script executed to pre-create cached profiles and run through Internet
Connection Wizard
• Performance logger

• Performance counters are logged on the terminal server itself
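
For reference, the two memory-related values above correspond to well-known registry locations. The fragment below is a minimal sketch of setting them with Python's standard winreg module; it illustrates where the values live, not how the test servers were actually configured, and both changes require administrator rights and a reboot to take effect.

    import winreg

    MEM_MGMT = r"SYSTEM\CurrentControlSet\Control\Session Manager\Memory Management"
    CONTROL = r"SYSTEM\CurrentControlSet\Control"

    def set_page_file(drive="C:", size_mb=4092):
        # PagingFiles is a REG_MULTI_SZ of "<path> <initial MB> <maximum MB>" entries.
        entry = "{0}\\pagefile.sys {1} {1}".format(drive, size_mb)
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, MEM_MGMT, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "PagingFiles", 0, winreg.REG_MULTI_SZ, [entry])

    def set_registry_size_limit(size_mb=256):
        # RegistrySizeLimit is a REG_DWORD in bytes; it is the historical location for
        # the maximum registry size setting (later Windows versions ignore it).
        with winreg.OpenKey(winreg.HKEY_LOCAL_MACHINE, CONTROL, 0, winreg.KEY_SET_VALUE) as key:
            winreg.SetValueEx(key, "RegistrySizeLimit", 0, winreg.REG_DWORD, size_mb * 1024 * 1024)

    if __name__ == "__main__":
        set_page_file()
        set_registry_size_limit()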


Appendix C: HP Server Specifications

The ProLiant Advantage


In today’s online world, computing demands are less predictable than ever. Resources are
constantly being redeployed and reconfigured, consolidated and integrated; customers are
demanding increasingly customized services. Meanwhile, IT organizations are constrained by
space and budget limitations and a shortage of the qualified staff needed to configure, deploy and
troubleshoot their multiplying server farms.

HP understands that businesses must adapt to survive, doing more with less, responding faster
with fewer resources. With the ProLiant Advantage, IT organizations can adapt, conserve and
respond.

The ProLiant Advantage combines an adaptive infrastructure with solution integration and service
expertise.
• Adaptive infrastructure

The ProLiant Advantage starts with an adaptive infrastructure consisting of versatile, industry-
standard ProLiant servers, integrated Smart Array storage and ProLiant Essentials software.

Figure 12 The three layers of an adaptive infrastructure

• Solution integration

An adaptive infrastructure is only as good as its ability to effectively run leading operating
systems and applications. HP rigorously tests, optimizes and certifies ProLiant servers to run
applications from the leading independent software vendors.
• Services expertise
From planning and design to implementation and management, professionals from
HP Services can help customers achieve their business objectives quickly and cost-
effectively.

ProLiant Server Models


Built on years of solid computing innovation, ProLiant servers form the foundation of an Adaptive
Infrastructure. New-generation servers include:
• ML servers

ProLiant ML servers are designed to provide maximum internal storage and I/O flexibility for
rack or tower deployments. These workhorses can handle intensive applications ranging from
ERP to e-commerce hosting. With up to eight processors, up to 11 PCI slots, next-generation
PCI Hot Plug, and redundant power and cooling, ProLiant ML servers are ideal for high-
volume, 24 x 7 datacenter environments.
• DL servers

ProLiant DL servers deliver enterprise-class performance with robust, affordable multiprocessors in a slim (1U to 7U) form factor that is ideal for multi-server rack environments.
Optimized for clustering operations and attached external storage, ProLiant DL servers can
handle jobs as large as data warehousing or as small as Web hosting. They have become
the world’s most popular rack-mounted servers.
• BL servers

HP offers two families of ProLiant BL server blades: e-Class and p-Class.


e-Class: These ultra-dense server blades are optimized for rapid deployment and provisioning
and, with up to 280 blades deployed in a single rack, they are ideal for space-constrained
enterprises and service providers. e-Class server blades can handle a wide range of data
center jobs, from front-end Web serving to other, more transaction-intensive applications.
Their industry-standard design ensures that these server blades can integrate seamlessly
into an existing IT infrastructure. Drag-and-drop configuration and rip-and-replace servicing
position these server blades at the cutting edge of adaptive computing.
p-Class: The more powerful p-Class server blades deliver higher levels of performance and
availability for high-performance front-end, mid-tier and back-end applications.

ProLiant Servers in an SBC Environment


Deploying ProLiant servers in a Server-Based Computing (SBC) environment offers many
benefits to the customer:
• Lower application ownership costs

• Elimination of additional development, testing, or deployment procedures for individual applications

• Accelerated application deployment

• Extended application availability

• Enhanced security
• Improved data backup and recovery

• Improved end-user support

• Uniform desktop experience from any network access point

Server Blades

Using ProLiant BL server blades to deploy a server farm in an SBC environment can offer
additional benefits:
• Scalability

The server farm can be scaled out to distribute risk and load while minimizing the impact on
users.
• Performance

ProLiant BL server blades provide the performance necessary to support the most
demanding line-of-business applications.
• Rapid server deployment and redeployment

Rapid deployment capability allows the customer to save valuable time by quickly deploying
the SBC solution, then quickly and dynamically responding to changing business needs.
It takes only seconds to install server blades and power supplies once the rack infrastructure
is in place. This allows dynamic scaling – without powering down the system.
• There is single-sided access to most pluggable components.

ProLiant Essentials Rapid Deployment Pack (RDP) software allows the administrator to pre-
configure each server bay before installing server blades. After installation, server blades
configure automatically, assuming the role assigned by the administrator.

RDP also provides rip-and-replace capability. When a server blade is replaced, the new server
blade automatically assumes the role assigned to the original server blade.

• Anytime, anywhere access

Integrated Lights-Out (iLO) Advanced management allows anytime, anywhere access to the server
farm for remote management.

• Reduced total cost of ownership

ProLiant BL20p server blades can further reduce Total Cost of Ownership (TCO) in the
following ways:
o Consuming less power per user, which reduces utility and air-conditioning costs.
o Supporting more users per rack, which reduces licensing and real estate costs.
o Reducing cabling complexity.

Tested Servers
For this white paper, HP tested a number of the latest ProLiant servers to determine Windows
Server 2003 scalability. Table 3 provides overviews of these servers.
Table 3 Overview of Servers

ProLiant BL10e: Power-efficient, ultra-dense ProLiant BL10e server blades are ideal for light worker loads in an SBC environment. Up to 280 of these blades can be installed in a standard 42U rack.

ProLiant BL20p: ProLiant BL20p high-performance, high-availability server blades are ideal for enterprise applications.

ProLiant BL40p: High-performance, four-way ProLiant BL40p server blades are ideal for mission-critical back-end applications.

ProLiant DL360 G3: The ProLiant DL360 G3 server offers essential availability and management features with the computing power required for space-constrained Internet and data center applications. With their unsurpassed ability to scale out, these servers are industry leaders. Based on price and performance, the 1U ProLiant DL360 G3 server is an ideal solution for medium to heavy worker loads in an SBC environment.

ProLiant DL360 G1: The ProLiant DL360 G1 is a Generation One server, included in this testing as a legacy system.

ProLiant DL380 G3: ProLiant DL380 G3 servers with Integrated Lights-Out management and enterprise-class uptime are optimized for a variety of applications while remaining in a 2U form factor. Based on price and performance, the 2U ProLiant DL380 G3 server is an ideal solution for the SBC environment.

ProLiant DL560: Designed for enterprise computing, the highly available ProLiant DL560 server delivers high levels of four-way performance. Deployed in an innovative 2U form factor with optimal power and cooling efficiency, the ProLiant DL560 server is ideal for the data center environment.
Table 4 lists key features of the tested servers.

SBC Solution Sizing


To minimize risk, HP offers an automated, online tool that can help the customer size an SBC² solution. The
algorithms and methodology used by the sizer are based on exhaustive testing and customer surveys.

² Also known as Thin Client Server Computing (TCSC).
Figure 13 TCSC Sizer User Specification dialog box

The sizer offers a quick, consistent methodology for determining the “best-fit” server for a specific
SBC solution. Based on information provided by the customer, the sizer generates a Bill of
Materials (BOM) for the selected solution.
Related Links
For more information about Windows Server 2003 Terminal Server, see the Web site at
http://www.microsoft.com/windowsserver2003/technologies/terminalservices/.

For the latest information about Windows Server 2003, see the Windows Server 2003 Web site
at http://www.microsoft.com/windowsserver2003
