Performance Testing Requirements Document
1. Objective
To evaluate the performance of the application under expected and peak load conditions by
measuring critical system metrics and ensuring responsiveness, stability, and reliability in
the Pre-Prod environment.
2. Key Performance Metrics
- Response Time: Time taken by the system to respond to requests.
- Throughput: Number of transactions processed per second.
- Error Rate: Percentage of failed requests.
- CPU and Memory Usage: System resource consumption during load.
- System Availability: Uptime and responsiveness under test conditions.
- Database Performance: Query response time and connection stability (Postgres and Neo4j).
- Latency: Time delay in processing requests at the database level.
- BP Recommended Metrics: Any additional business-critical KPIs defined by the Business/Client.
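To make the JMeter-side metrics concrete, here is a minimal sketch of how throughput, error rate, and average response time are derived from raw request results. The sample data is hypothetical, not from an actual test run:

```python
# Minimal sketch: deriving throughput and error rate from raw request results.
# The sample data below is hypothetical, not from an actual test run.

def summarize(results, duration_seconds):
    """results: list of (elapsed_ms, success) tuples for one test window."""
    total = len(results)
    failures = sum(1 for _, ok in results if not ok)
    return {
        "throughput_rps": total / duration_seconds,      # transactions per second
        "error_rate_pct": 100.0 * failures / total,      # percentage of failed requests
        "avg_response_ms": sum(ms for ms, _ in results) / total,
    }

# Hypothetical 10-second window: 50 requests, 1 failure.
sample = [(120, True)] * 49 + [(5000, False)]
stats = summarize(sample, duration_seconds=10)
print(stats["throughput_rps"])   # 5.0 requests/sec
print(stats["error_rate_pct"])   # 2.0 %
```

In practice JMeter's Aggregate Report computes these figures directly; the sketch only shows the definitions being measured.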
3. Expected Results
- Throughput: achieve X requests/sec (target to be defined after the baseline run), with response times under 5 seconds; performance must be better than the TMF API baseline.
- The system should remain stable with a maximum of 300 concurrent users.
- The error rate should remain below 1%.
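These acceptance criteria can be expressed as a simple pass/fail check. The throughput target is left as a placeholder since it is defined only after the baseline run, and p95 is used for response time as an assumption (the document does not specify a percentile):

```python
# Pass/fail check mirroring the expected results above.
# THROUGHPUT_TARGET_RPS is a placeholder: the real target is set post baseline run.
THROUGHPUT_TARGET_RPS = None   # to be defined after the baseline run
MAX_RESPONSE_TIME_S = 5.0
MAX_ERROR_RATE_PCT = 1.0
MAX_CONCURRENT_USERS = 300

def evaluate(run):
    """run: dict of observed metrics from a completed test run."""
    checks = {
        "response_time": run["p95_response_s"] < MAX_RESPONSE_TIME_S,
        "error_rate": run["error_rate_pct"] < MAX_ERROR_RATE_PCT,
        "stable_at_peak": run["max_users_reached"] >= MAX_CONCURRENT_USERS,
    }
    if THROUGHPUT_TARGET_RPS is not None:   # skipped until the target is defined
        checks["throughput"] = run["throughput_rps"] >= THROUGHPUT_TARGET_RPS
    return checks

# Hypothetical run that meets all currently defined criteria.
result = evaluate({"p95_response_s": 3.2, "error_rate_pct": 0.4,
                   "max_users_reached": 300, "throughput_rps": 42.0})
print(all(result.values()))  # True
```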
4. Environment Details
- Type: Pre-Production
- CPU: 24
- RAM: 93 GB
- Disk: 1.5 TB
- Swap: 31 GB
5. Test Plan
5.1 User Load Distribution (Max Users: 300)
- Read Users: 60% (180 users)
- Read/Write Users: 40% (120 users)
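The per-type counts follow directly from the percentages; a quick sanity check:

```python
# Derive per-type user counts from the 60/40 split at 300 max users.
MAX_USERS = 300
DISTRIBUTION = {"read": 0.60, "read_write": 0.40}

counts = {user_type: round(MAX_USERS * share)
          for user_type, share in DISTRIBUTION.items()}
print(counts)  # {'read': 180, 'read_write': 120}
assert sum(counts.values()) == MAX_USERS
```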
5.2 Load Pattern
- Ramp-Up Period: 50 seconds (initial 50 users)
- Test Duration: 1 hour
- Peak Hours (MST): 7 AM – 5 PM
- Quiet Hours (MST): 5 PM – 7 AM
6. Transactions Breakdown
6.1 Read Users – Device (same logic applies to Location)
- Fetch device summary page: 10 users
- Fetch tab data grid: 10 users
- Global search: 10 users
- Get device via high-level API: 10 users
- Get bulk-load-created device: 10 users
6.2 Write Users – Device (same logic applies to Location)
- Create 5 devices using bulk load: 10 users
- Create single device: 30 users
7. Endpoint Classification
- GET: UI operations – fetch summary pages, data grids, global search, location/device details
- POST: Create Device/Location using single or bulk APIs
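For test scripting, this classification maps naturally onto a method/path table. The paths below are purely hypothetical placeholders, since the actual API routes are not specified in this document:

```python
# Hypothetical endpoint map for scripting the test plan. Actual paths and hosts
# must come from the application's API spec, not this document.
ENDPOINTS = {
    "fetch_summary_page":  ("GET",  "/api/device/{id}/summary"),
    "fetch_tab_data_grid": ("GET",  "/api/device/{id}/grid"),
    "global_search":       ("GET",  "/api/search"),
    "get_device_api":      ("GET",  "/api/device/{id}"),
    "create_single_device": ("POST", "/api/device"),
    "create_bulk_devices":  ("POST", "/api/device/bulk"),
}

# Only GET (reads) and POST (creates) appear in this plan.
assert {method for method, _ in ENDPOINTS.values()} == {"GET", "POST"}
print(len(ENDPOINTS))  # 6
```

A map like this keeps the JMeter samplers and any ad-hoc scripts pointed at a single source of truth once the real routes are filled in.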
8. Assumptions & Notes
- Tests will simulate UI-like behavior using GET and POST endpoints.
- Device and Location operations follow a similar structure and test distribution.
- Performance baseline for TMF API will be provided for comparison.
- Realistic test data should be pre-populated in the Pre-Prod environment.
9. Metric Measurement Tools
- Response Time: JMeter
- Throughput: JMeter
- Error Rate: JMeter
- CPU Usage: Grafana
- Memory Usage: Grafana
- System Availability: Grafana
- Database Performance (Postgres & Neo4j): Grafana
- Latency (DB Level): Grafana
- BP Recommended Metrics: Grafana / Custom Dashboards