Performance and Sizing Guidelines
Contents
2 Sizing Guidelines 15
2.1 Recommendation based on Logins per Second . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 15
2.2 Recommendation based on Active Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.3 Recommendation based on Access Gateway Hits per Second . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4 Horizontal and Vertical Scaling . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.1 Login Performance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 16
2.4.2 Scalability of Active Users Sessions . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 19
2.4.3 Access Gateway Hits Scalability. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 21
2.4.4 Access Gateway Throughput Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 22
2.5 Sizing Recommendation for Analytics Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 24
2.5.1 Hardware Requirements for Analytics Server . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
2.5.2 Analytics Server Data Retention . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 25
A Additional Information 29
A.1 Test Strategy. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 29
A.1.1 Performance, Reliability, Scalability, and Failover Testing for Access Gateway . . . . . . . . . 29
A.1.2 Test Setup . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 30
A.1.3 Other Factors Influencing Performance Information . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 33
A.2 Tuning Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A.2.1 Tuning Identity Server Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 34
A.2.2 Tuning Access Gateway Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 36
A.2.3 Web Socket Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 38
A.3 Test Environment: Identity Server as an OAuth 2.0 Identity Provider. . . . . . . . . . . . . . . . . . . . . . . . . 39
Server Hardware . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 39
Access Manager Tuning. . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
Test Tools . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
A.4 Test Environment: Advanced Session Assurance . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
A.4.1 Hardware Configuration . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 40
A.4.2 Access Manager Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 41
A.4.3 Test Tool . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
A.4.4 Session Assurance Parameters . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
A.5 Test Environment: Vertical and Horizontal Scalability . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 42
A.5.1 Test Infrastructures . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 43
A.5.2 Test Configuration and Test Data . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
A.5.3 Access Manager Tuning . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . . 44
About this Book and the Library
This guide provides performance and sizing recommendations for Access Manager. This
information helps you deploy the correct configuration in your environment. The test results
were obtained in a simulated lab environment.
Results on similar hardware may differ. Test results can vary based on the
applications used, the type of data, the user store, and other dependent components operating
in the environment. It is recommended to verify performance in your environment before
deploying the product at scale.
For information about the test strategy, hardware, and software used in the tests, see Section A.1, “Test
Strategy,” on page 29.
NOTE: Contact namsdk@[Link] with any queries related to the Access Manager SDK.
HTTPS Public (a user accessing a single page in a session): 1700K requests per minute with a throughput of 2000 Megabits per minute
HTTPS Public (a user accessing 10 pages in a session): 1400K requests per minute with a throughput of 5000 Megabits per minute
The following performance numbers show how the system performs:
Concurrent Sessions in a 4-node Access Gateway cluster: 240K sessions in the cluster (approximately 60K sessions per server)
Concurrent Sessions in a 4-node Identity Server cluster: 240K sessions in the cluster (approximately 60K sessions per server)
HTTPS Authorizations with 10 page requests: 2500 authorized pages per second
HTTPS Public (a user accessing a single page in a session): 2600K requests per minute with a throughput of 2700 Megabits per minute
HTTPS Public (a user accessing 10 pages in a session): 1600K requests per minute with a throughput of 6200 Megabits per minute
The following performance numbers show how the system performs:
Concurrent Sessions in a 4-node Access Gateway cluster: 260K sessions in the cluster (approximately 65K sessions per server)
Concurrent Sessions in a 4-node Identity Server cluster: 280K sessions in the cluster (approximately 70K sessions per server)
HTTPS Authorizations with 10 page requests: 2500 authorized pages per second
HTTPS Public (a user accessing a single page in a session): 2808K requests per minute with a throughput of 3000 Megabits per minute
HTTPS Public (a user accessing 10 pages in a session): 1800K requests per minute with a throughput of 6600 Megabits per minute
The following performance numbers show how the system performs:
Concurrent Sessions in a 4-node Access Gateway cluster: 560K sessions in the cluster (approximately 140K sessions per server)
Concurrent Sessions in a 4-node Identity Server cluster: 720K sessions in the cluster (approximately 180K sessions per server)
HTTPS Authorizations with 10 page requests: 2800 authorized pages per second
HTTPS Public (a user accessing a single page in a session): 1700K requests per minute with a throughput of 1800 Megabits per minute
HTTPS Public (a user accessing 10 pages in a session): 1340K requests per minute with a throughput of 5000 Megabits per minute
These performance numbers show how the system performs:
Concurrent Sessions in a 4-node Access Gateway cluster: 160K sessions in the cluster (approximately 40K sessions per server)
Concurrent Sessions in a 4-node Identity Server cluster: 400K sessions in the cluster (approximately 100K sessions per server)
HTTPS Authorizations with 10 page requests: 1900 authorized pages per second
Client credentials flow without a refresh token (users request an access token in the client credentials flow without a refresh token): 820 tokens per second
Client credentials flow with a refresh token (users request an access token in the client credentials flow along with a refresh token): 800 tokens per second
Resource owner flow without refresh tokens (users request an access token in the resource owner flow without requesting a refresh token): 600 tokens per second
Resource owner flow with refresh tokens (users request an access token in the resource owner flow with refresh tokens): 200 tokens per second
Authorization code flow without refresh tokens (authenticate, request an authorization code, and use the authorization code to request an access token without a refresh token): 120 tokens per second
Authorization code flow with refresh tokens (authenticate, request an authorization code, and use the authorization code to request an access token with a refresh token): 110 tokens per second
Implicit flow – access tokens (request an access token in the implicit flow): 140 tokens per second
Implicit flow – ID tokens (request an ID token in the implicit flow): 140 tokens per second
Implicit flow – access token + ID token (request an access token and an ID token in the implicit flow): 130 tokens per second
Token validation (validate an access token against the tokeninfo endpoint): 540 validations per second
Token refresh (get an access token by submitting the refresh token): 460 token refreshes per second
User attributes (fetch user attributes from the userinfo endpoint): 540 requests per second
For information about the test environment, see Section A.3, “Test Environment: Identity Server as
an OAuth 2.0 Identity Provider,” on page 39.
NOTE: To improve the performance of OAuth requests, scale Access Manager components
horizontally by adding additional components to the cluster.
This delay is due to client-side browser processing of the additional parameters. These
parameters do not impact server-side processing.
Section 1.6.1, “Impact of Enabling Advanced Session Assurance on Identity Server
Performance,” on page 13
Section 1.6.2, “Impact of Enabling Advanced Session Assurance on Access Gateway
Performance,” on page 13
NOTE: For information about the test environment, see Section A.4, “Test Environment: Advanced
Session Assurance,” on page 40.
Logins Per Second: 230 logins per second vs. 250 logins per second (8% difference)
Access Gateway Requests Per Second: 130 requests per second vs. 160 requests per second (18% difference)
Identity Servers: 12
LDAP Servers: 8
Policies/Roles: 101
Accelerators: 51
In this Chapter
Recommendation based on Logins per Second
Recommendation based on Active Sessions
Recommendation based on Access Gateway Hits per Second
Horizontal and Vertical Scaling
Sizing Recommendation for Analytics Server
2 Access Gateway
4 Access Gateway
6 Access Gateway
2.2 Recommendation based on Active Sessions
Less than 200,000: 2 Identity Servers and 2 Access Gateways (2 x CPU, 16 GB Memory)
NOTE: For more information, see Test Environment: Vertical and Horizontal Scalability.
Login Performance with CPU Scaling
In this test, memory is kept constant at 32 GB and Tomcat is assigned with 16 GB in Identity Server
and Access Gateway. CPUs are increased in the following order 1, 2, 4, 8, and 16 and performance is
measured at each CPU level.
[Figure: Logins Per Second vs. Number of CPUs]
Login Performance with Memory Scaling
In this test, the number of CPU is kept constant at 16 for Identity Server and Access Gateway.
Memory is increased in the order 8 GB, 16 GB, 32 GB, and 64 GB. Also, Tomcat is assigned with 70%
of the available memory. Performance is measured at each memory level.
[Figure: Logins Per Second vs. Memory in GB]
Login Performance with Number of Nodes in a Cluster
In this test, each node is assigned 8 CPUs and 16 GB of memory. Performance is measured by
increasing the number of nodes in the cluster.
[Figure: Logins Per Second vs. Number of Nodes in the Cluster]
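A test like this can also serve as a rough planning aid: divide your target login rate by the per-node rate you measure and round up. The sketch below is a minimal calculator; the per-node rate of 140 logins per second is an assumed placeholder (scaling is also not perfectly linear in practice), so substitute figures measured in your own environment.

```shell
# Estimate the number of cluster nodes needed for a target login rate.
# The per-node rate passed in is an ASSUMED value; measure your own.
nodes_needed() {
  target=$1
  per_node_rate=$2
  # Ceiling division: round up to the next whole node.
  echo $(( (target + per_node_rate - 1) / per_node_rate ))
}

nodes_needed 500 140   # a 500 logins/sec target at 140 logins/sec/node
```

Treat the result as a starting point only; add headroom for failover and verify with a load test before finalizing the cluster size.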
[Figure: Active Sessions vs. Number of CPUs]
[Figure: Active Sessions vs. Memory in GB]
2.4.3 Access Gateway Hits Scalability
Test: Accessing a public resource through Access Gateway. Public resources are static pages of size 60
KB containing several hyperlinks to the same originating web server. In this test, the number of hits
per second is measured.
“Access Gateway Hits with Scaling CPU” on page 21
“Access Gateway Hits with Scaling the Memory” on page 22
[Figure: Hits Per Second vs. Number of CPUs]
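A hit rate can be converted into an approximate bandwidth requirement using the 60 KB page size from the test description. The hit rate below is an illustrative placeholder, not a measured result:

```shell
# Approximate bandwidth for the public-resource test: hits/sec x page size.
# hits_per_sec is an illustrative placeholder; page_kb is the 60 KB test page.
hits_per_sec=20000
page_kb=60
# KB/sec x 8 bits / 1000 gives Megabits per second.
mbps=$(( hits_per_sec * page_kb * 8 / 1000 ))
echo "${mbps} Mbps"
```

This back-of-the-envelope figure helps verify that network capacity, not Access Gateway, is the bottleneck in your own hit-rate tests.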
Access Gateway Hits with Scaling the Memory
In this test, the number of CPUs is kept constant at 16 for Access Gateway. Memory is increased in the
order 8 GB, 16 GB, 32 GB, and 64 GB, and Tomcat is assigned 70% of the available memory.
Performance is measured at each memory level.
[Figure: Hits Per Second vs. Memory in GB]
Access Gateway Throughput with Scaling CPU
In this test, memory is kept constant at 32 GB and Tomcat is assigned 16 GB for Access Gateway. The
number of CPU is increased in the order 1, 2, 4, 8, and 16. The performance is measured at each CPU
level.
[Figure: Throughput (kbps) vs. Number of CPUs]
Access Gateway Throughput with Scaling the Memory
In this test, the number of CPUs is kept constant at 16 for Access Gateway. Memory is increased in the
order 8 GB, 16 GB, 32 GB, and 64 GB, and Tomcat is assigned 70% of the available memory.
Performance is measured at each memory level.
[Figure: Throughput (kbps) vs. Memory in GB]
2.5.1 Hardware Requirements for Analytics Server
For demonstration purposes, a 50 GB hard disk is required. For a production environment, the
hard disk requirement depends on the daily Access Manager login pattern. For other system
requirements for Analytics Server, see “System Requirements: Analytics Server”.
The following recommendations consider only Analytics Server-specific Access Manager Audit
events. For information about Analytics Server events, see “Enabling Events for Each Graph” in the
Access Manager 4.5 Administration Guide.
Any change in the selection of Access Manager Audit events changes the disk requirement.
25000 logins per day 50000 logins per day 100000 logins per day
3 Access Gateway Performance in Access Manager 4.4
From Access Manager 4.4 onward, Access Gateway is upgraded to Apache 2.4. Therefore, Access Gateway
performance is significantly improved.
The following graphs show the overall public request performance improvement in Access Manager
4.4 over Access Manager 4.3:
HTTPS transactions per second
HTTPS traffic through a protected resource
HTTPS traffic through a protected resource with Form Fill
HTTPS traffic through a protected resource with Identity Injection
HTTPS traffic through a protected resource with policies that contain roles
HTTPS traffic through a protected resource with 10 additional page requests
The reliability testing includes running HTTPS traffic for 2 weeks in a stress test scenario.
The scalability (clustering) testing includes the following scenarios:
2 x 4 x 4 (2 Administration Console servers, 4 Identity Server servers, and 4 Linux Access
Gateway servers)
2 x 4 x 4 (2 Administration Console servers, 4 Identity Server servers, and 4 Access Gateway
Appliance servers)
The failover testing verifies that HTTP/HTTPS traffic continues after a component failover.
Server Components Operating System Hardware
Access Gateway Appliance (4 nodes) SLES11 SP3 CPU: 4 x 2.6 GHz and Memory: 16 GB
External eDirectory user store (3 nodes) SLES11 SP1 CPU: 2 x 3 GHz and Memory: 4 GB
Apache2 Web Server (3 nodes) SLES11 SP1 CPU: 2 x 3 GHz and Memory: 4 GB
Access Gateway Service (4 nodes) SLES12 CPU: 4 x 2.6 GHz and Memory: 16 GB
External eDirectory user store (3 nodes) SLES11 SP1 CPU: 2 x 3 GHz and Memory: 4 GB
Apache2 Web Server (3 nodes) SLES11 SP1 CPU: 2 x 3 GHz and Memory: 4 GB
NOTE: In this performance testing, Access Gateway is installed on SLES 12 servers with BTRFS as the file
system. Identity Server is installed on SLES 12 with EXT3 as the file system (upgraded from SLES 11 SP3
to SLES 12).
The design of the virtual machine is as follows:
Access Manager Appliance (4 nodes) SLES11 SP3 CPU: 8 x 3 GHz and Memory: 32 GB
External eDirectory user store (3 nodes) SLES11 SP1 CPU: 2 x 3 GHz and Memory: 4 GB
Apache2 Web Server (3 nodes) SLES11 SP1 CPU: 2 x 3 GHz and Memory: 4 GB
Administration Console (2 nodes) Windows Server 2012 R2 Standard CPU: 2 x 3 GHz and Memory: 4 GB
Identity Servers (4 nodes) Windows Server 2012 R2 Standard CPU: 4 x 3 GHz and Memory: 16 GB
Access Gateway Service (4 nodes) Windows Server 2012 R2 Standard CPU: 4 x 2.6 GHz and Memory: 16 GB
External eDirectory user store (3 nodes) SLES11 SP1 CPU: 2 x 3 GHz and Memory: 4 GB
Apache2 Web Server (3 nodes) SLES11 SP1 CPU: 2 x 3 GHz and Memory: 4 GB
Load Balancers
The following L4 switches are used as load balancers for the testing:
Zeus ZXTM LB (software L4 switch)
Brocade ServerIron ADX 1000 (hardware L4 switch)
Alteon 3408 (hardware L4 switch)
Configuration Details
HTML pages of approximately 50 KB, each with 50 small embedded images, are used for all public
page tests.
A small HTML page of 200 bytes with one hyperlink is used for authentication, authorization,
identity injection, and form fill performance tests. These tests do not cover page rendering
performance.
The Access Manager user store configuration contains 20 threads with 100,000 users in a single
container. Multiple containers yielded the same performance; however, these tests were
conducted with optimized, fast hardware. Without similarly optimized and fast hardware,
performance will decrease. The primary user store used in the tests is eDirectory 8.8.6.
LDAP User Stores: This component can cause slowness depending on the configuration,
hardware, and layout of the directory. The user store is the most common source of performance
problems. Therefore, testing must be done with the LDAP user stores that are used in the
environment. Expect adjustments if you are attempting to get the maximum speed out of the
cluster with different LDAP user stores. eDirectory is used throughout the testing to
provide a baseline for the product.
Timeout: If you run a performance test, you must factor in the sessions that are stored on the
server. The tests use a 5-minute timeout so that they do not overrun the total of 100,000 active
sessions on the cluster. You must consider this while planning capacity testing on a cluster.
The session timeout for a resource depends on the security requirement. If security is not a
concern, the following recommendations help fine-tune the session timeout configuration for
the best performance:
If users access a protected resource for a short duration and leave the session idle after
accessing a few pages, configure a short session timeout for such resources. This enables
the system to remove idle sessions faster.
If users access a protected resource for a long duration, configure a long session timeout.
This reduces the internal traffic to update user access and improves the overall
performance of the system.
Users: Ensure that you have enough users on the system to run the performance test. If you run
50 login threads against Access Manager with each thread authenticating as the same user,
Access Manager matches each login and handles all 50 sessions as sessions of one user. This
skews the test goals and results, because it is not a valid user scenario and invalidates the test
results.
Tomcat Connector Maximum Thread Setting
This parameter enables Identity Server to handle more threads simultaneously to improve
performance. The thread number must be fine-tuned for every customer environment based on the
number of attributes attached to a user session. When each user session holds a large number of
attributes, each session requires more heap memory, and the available stack memory is reduced as
a result. If the number of threads configured in this scenario is high, Tomcat tries to spawn more
threads and fails due to unavailability of stack memory. You must fine-tune the number of
threads based on the attribute usage.
In the [Link] file, set the value of maxThreads to 1000 for the 8443 connector as follows:
Linux: /opt/novell/nam/idp/conf/[Link]
Windows: C:\Program Files (x86)\Novell\Tomcat\conf\[Link]
<Connector NIDP_Name="connector" SSLEnabled="true" URIEncoding="utf-8"
    acceptCount="100" address="x.x.x.x" ciphers="XX, XX, XX, XX"
    clientAuth="false" disableUploadTimeout="true" enableLookups="false"
    keystoreFile="/opt/novell/devman/jcc/certs/idp/[Link]"
    keystorePass="p2SnTyZPHn9qe66" maxThreads="1000" minSpareThreads="5"
    port="8443" scheme="https" secure="true"
    sslImplementationName="[Link]mentation" sslProtocol="TLS"/>
This enables the Tomcat process to start with 2 GB of pre-allocated memory. If your Identity
Server machine has more than 4 GB of memory, the recommendation is to allocate 50% to 75% of
the memory to the Identity Server Tomcat. This needs to be fine-tuned based on each customer's
environment.
Set Identity Server Tomcat to 12288 for both Xms and Xmx.
Change the -[Link] value from 0 to a value between 5 and 15. This
parameter prevents user sessions from consuming all memory and ensures that free memory is
available for other internal Java processes to run. When this threshold is reached, the user
receives a 503 server busy message and a threshold error message is logged to the [Link]
file.
JAVA_OPTS="${JAVA_OPTS} -[Link]=10"
NOTE: On Windows, you can set these values by running the [Link] file located in
C:\Program Files (x86)\Novell\Tomcat\bin. Select the Java tab to set the Initial
memory pool and Maximum memory pool values.
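The 50% to 75% guidance above can be sketched as a quick calculation. The 70% ratio below mirrors the memory-scaling tests elsewhere in this guide and is an assumption to fine-tune per environment:

```shell
# Compute a Tomcat heap size (in MB) as 70% of physical memory.
# The 70% ratio is an assumed starting point from the memory-scaling
# tests; adjust it for your own environment.
heap_for_mem() {
  total_mb=$1
  echo $(( total_mb * 70 / 100 ))
}

heap_for_mem 16384   # heap size in MB for a 16 GB machine
```

The result maps onto the Xms and Xmx settings, for example -Xms11468m -Xmx11468m for a 16 GB machine.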
JAVA Memory Allocations
The Tomcat configuration file controls the amount of memory that Tomcat can allocate for Java.
If you have installed Access Gateway on a machine with the minimum 4 GB of memory, you can
modify the [Link] file to improve performance under heavy load as follows:
In /opt/novell/nam/mag/conf/[Link], replace values of Xms and Xmx to 2048:
JAVA_OPTS="-server -Xms2048m -Xmx2048m -Xss256k"
This enables the Tomcat process to come up with 2 GB pre-allocated memory.
If the Access Gateway Appliance machine has more than 4 GB of memory, the recommendation is
to allocate 50% to 75% of the memory to the ESP Tomcat. This needs to be fine-tuned based on
each customer's environment.
Set Xms and Xmx to 12288 for ESP Tomcat.
Change the -[Link] value from 0 to a value between 5 and 15. This parameter
prevents user sessions from using up all memory and ensures that free memory is available for
other internal Java processes to function. When this threshold is reached, the user receives a
503 server busy message and a threshold error message is logged to the [Link] file.
JAVA_OPTS="${JAVA_OPTS} -[Link]=10"
NOTE: On Windows, you can set these values by running the [Link] file located in the
C:\Program Files (x86)\Novell\Tomcat\bin directory. Select the Java tab to set the
Initial memory pool and Maximum memory pool values.
This configuration is for an Appliance machine with the minimum 4 GB of memory. If the Appliance
machine has more than 6 GB of memory, set mpm_worker_module to match the following
configuration.
The performance tests were conducted with the following configuration on an Appliance machine
with 16 GB of memory:
<IfModule mpm_worker_module>
ThreadLimit 1000
StartServers 9
ServerLimit 10
MaxClients 9000
MinSpareThreads 9000
MaxSpareThreads 9000
ThreadsPerChild 1000
MaxRequestsPerChild 0
</IfModule>
If the available memory is less or more than this, you must fine-tune each of these settings based on
your environment.
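When adjusting these values, one constraint worth checking is that MaxClients does not exceed ServerLimit multiplied by ThreadsPerChild; the Apache worker MPM caps the effective client count at that product. A minimal sketch of the check, using the values from the configuration above:

```shell
# Sanity-check worker MPM limits before restarting Apache:
# MaxClients must be <= ServerLimit x ThreadsPerChild.
SERVER_LIMIT=10
THREADS_PER_CHILD=1000
MAX_CLIENTS=9000

if [ "$MAX_CLIENTS" -le $(( SERVER_LIMIT * THREADS_PER_CHILD )) ]; then
  echo "ok: MaxClients fits within ServerLimit x ThreadsPerChild"
else
  echo "warning: MaxClients exceeds ServerLimit x ThreadsPerChild"
fi
```

Running a check like this before a restart avoids Apache silently lowering MaxClients at startup.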
Access Gateway Service on Windows:
The mpm_winnt_module is located at C:\Program Files\Novell\apache\conf\extra\httpd-mpm. By
default, it is configured with the following settings:
<IfModule mpm_winnt_module>
ThreadsPerChild 1920
MaxRequestsPerChild 0
</IfModule>
Modifying the default values does not have any impact on performance.
In large-scale WebSocket deployments, Access Gateway may exhaust the default maximum
number of open file descriptors. In such cases, it is recommended to configure a higher
number of open file descriptors. To find the maximum number of open files for a process,
run the following command on the Linux server:
#ulimit -n
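Beyond checking the current limit, you can raise it for the running shell or persistently. The values and user name below are illustrative placeholders; size the limit to your expected number of concurrent WebSocket connections and substitute the user that runs Access Gateway:

```shell
# Inspect the current soft limit on open file descriptors.
current=$(ulimit -n)
echo "current soft limit: ${current}"

# To raise it for the current shell only (example value):
#   ulimit -n 65536
#
# For a persistent change, add lines such as the following to
# /etc/security/limits.conf, replacing <apache_user> with the user
# that runs the Access Gateway Apache process (illustrative values):
#   <apache_user>  soft  nofile  65536
#   <apache_user>  hard  nofile  65536
```

A persistent change takes effect for new sessions of that user; verify it afterward by running ulimit -n as that user.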
[Link]:
Edit the following setting in [Link] at /etc/opt/novell/apache2/conf/
extra/[Link]:
<IfModule mpm_worker_module>
ThreadLimit 3000
StartServers 9
ServerLimit 10
MaxClients 30000
MinSpareThreads 9000
MaxSpareThreads 9000
ThreadsPerChild 3000
MaxRequestsPerChild 0
</IfModule>
Server Hardware
The tests are run in a virtualized lab with the following configuration:
Hardware Virtual Machines
Test Tools
Silk Performer 17.0
Hardware Virtual Machines
Web Server-2
Identity Server:
Tomcat is set with 8 GB memory in /opt/novell/nam/idp/conf/[Link]
Access Gateway:
Tomcat is set with 8 GB memory in /opt/novell/nam/mag/conf/[Link]
<IfModule mpm_worker_module>
ThreadLimit 300
StartServers 3
MaxClients 3000
MinSpareThreads 3000
MaxSpareThreads 3000
ThreadsPerChild 300
ServerLimit 10
MaxRequestsPerChild 0
</IfModule>
A.5.1 Test Infrastructures
The test lab consists of a virtualized, isolated environment in which the test servers run as virtual
machines on a VMware ESXi server.
Vertical Scaling: The following diagram illustrates the virtual machine layout for a vertical scaling
setup:
Identity Server: CPUs: 16, Memory: 64 GB, OS: SLES 12 SP1
Access Gateway: CPUs: 16, Memory: 64 GB, OS: SLES 12 SP1
Horizontal Scaling: The following diagram illustrates the virtual machine layout for a horizontal
scaling setup:
A.5.2 Test Configuration and Test Data
For login performance tests, the eDirectory user store with 3 replicas is used. These replicas
have 100,000 users, which are synced across all replicas. A Secure Name Password Form
authentication contract is used with a session time out of 10 minutes.
For active sessions scaling tests, the eDirectory user store with a single replica having 1,000,000
users is used. A Secure Name Password Form authentication contract is used with a session
time out of 30 minutes.
For Access Gateway throughput and hit tests, 3 web servers with the static web pages of size 60
KB are used.
The performance tests are run with Borland Silk Performer version 16.5.
During the tests, the load test clients and the servers used the
TLS_DHE_RSA_WITH_AES_128_CBC_SHA cipher for SSL negotiation. Any change in the cipher
may impact the performance behavior of Access Manager components.
In vertical memory scaling tests, 70% of the total memory is given to Tomcat.
In horizontal scaling tests, 8 GB of memory is allocated to Tomcat in each Access Gateway
configuration.