What Is Load Testing

What is load testing?

Load testing is a type of performance test that measures how an application behaves under
expected (normal and peak) user load. The goal is to verify that the system meets its
performance requirements (throughput, latency, error rates) when subjected to realistic,
production-like traffic.

Why do load testing?

• Validate SLAs (e.g., p95 < 500 ms at 1k RPS); one way to encode such an SLA is sketched after this list.

• Find bottlenecks under normal/peak traffic (DB, CPU, network, connection pools).

• Verify capacity / scaling — that autoscaling, caches, and pools handle expected load.

• Prevent regressions — catch performance regressions before release.
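An SLA like "p95 < 500 ms at 1k RPS" maps naturally onto k6's constant-arrival-rate executor plus thresholds. A minimal sketch under that assumption (the endpoint URL is a placeholder, durations are illustrative):

import http from 'k6/http';

export const options = {
  scenarios: {
    steady_rps: {
      executor: 'constant-arrival-rate', // drive a fixed request rate, not fixed VUs
      rate: 1000,                        // 1000 iterations per timeUnit
      timeUnit: '1s',                    // i.e. ~1k RPS if each iteration makes one request
      duration: '5m',
      preAllocatedVUs: 200,              // VUs k6 keeps ready to sustain the rate
      maxVUs: 1000,
    },
  },
  thresholds: {
    http_req_duration: ['p(95)<500'],    // the SLA: 95th percentile under 500 ms
    http_req_failed: ['rate<0.01'],
  },
};

export default function () {
  http.get('https://test.k6.io/'); // placeholder; point at your own endpoint
}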

How load testing differs from related tests

• Load test: apply expected/peak traffic and hold to check behavior.

• Stress test: push beyond expected limits to find breaking point.

• Spike test: introduce sudden traffic spikes to test autoscaling/handling.

• Soak (endurance) test: run expected load for long periods to find leaks and slow degradation.
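In k6 these variants are mostly a matter of the stages profile. A sketch of how the shapes differ (all targets and durations are illustrative, not recommendations):

// Stress: keep ramping past the expected peak until something breaks
export const stressStages = [
  { duration: '2m', target: 200 },  // expected peak
  { duration: '2m', target: 400 },
  { duration: '2m', target: 800 },  // well beyond peak
];

// Spike: jump almost instantly to a high target, then drop back
export const spikeStages = [
  { duration: '10s', target: 500 }, // sudden surge
  { duration: '1m', target: 500 },
  { duration: '10s', target: 0 },
];

// Soak: hold expected load for hours to surface leaks and slow degradation
export const soakStages = [
  { duration: '5m', target: 200 },
  { duration: '4h', target: 200 },
  { duration: '5m', target: 0 },
];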

Typical load test workflow (practical)

1. Define objectives & SLAs (what traffic, which endpoints, acceptance criteria).

2. Select scenarios — the realistic user journeys to simulate (and their % distribution).

3. Prepare environment & data — production-like servers, seeded DB, isolated test
namespace.

4. Create scripts — reproduce user behavior, authentication, think-time, dynamic data (a sketch follows this list).

5. Warm-up — short run to populate caches and JIT.

6. Execute — ramp to target, hold for required duration, record metrics.


7. Monitor & collect — client metrics (RPS, latency, errors) and server metrics (CPU,
memory, DB, GC, network).

8. Analyze — p50/p95/p99, errors, resource saturation, traces.

9. Fix, tune, repeat.
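As a sketch of step 4, a k6 script that authenticates once in setup(), reuses the token across iterations, and adds think time between requests. The URLs, credentials, and the token field are all hypothetical:

import http from 'k6/http';
import { check, sleep } from 'k6';

export function setup() {
  // Runs once before the test; hypothetical login endpoint and credentials
  const res = http.post(
    'https://test.example.com/login',
    JSON.stringify({ user: 'loadtest', password: 'secret' }),
    { headers: { 'Content-Type': 'application/json' } }
  );
  return { token: res.json('token') }; // assumes the response body carries a token field
}

export default function (data) {
  const params = { headers: { Authorization: `Bearer ${data.token}` } };
  const res = http.get('https://test.example.com/orders', params);
  check(res, { 'status 200': (r) => r.status === 200 });
  sleep(Math.random() * 3 + 2); // think time: 2-5 s between user actions
}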

Key metrics to watch during a load test

• Throughput (RPS / TPS) — how many requests succeeded per second.

• Latency percentiles — p50, p95, p99 (tail latency matters; a per-journey sketch follows this list).

• Error rate — % of failed requests and error codes.

• CPU / Memory / Disk I/O / Network on each component.

• DB metrics — connections, slow queries, locks.

• Connection pools / queue depths / thread utilization.
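Beyond the built-in metrics, k6 can track one journey's latency separately with a custom Trend metric, so its percentiles and thresholds don't get averaged into the global numbers. A minimal sketch (the checkout endpoint is hypothetical):

import http from 'k6/http';
import { Trend } from 'k6/metrics';

// Custom trend: p50/p95/p99 for the checkout call alone,
// separate from the global http_req_duration
const checkoutDuration = new Trend('checkout_duration', true); // true = time-based metric

export const options = {
  thresholds: { checkout_duration: ['p(99)<1000'] }, // per-journey SLA
};

export default function () {
  const res = http.get('https://test.example.com/checkout'); // hypothetical endpoint
  checkoutDuration.add(res.timings.duration);
}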

Simple example — load test with k6

Save the script as script.js and run k6 run script.js:

import http from 'k6/http';
import { check, sleep } from 'k6';

export let options = {
  stages: [
    { duration: '1m', target: 50 },   // ramp to 50 VUs
    { duration: '4m', target: 200 },  // ramp to 200 VUs
    { duration: '10m', target: 200 }, // sustain 200 VUs (steady-state = load test)
    { duration: '2m', target: 0 },    // ramp down
  ],
  thresholds: {
    'http_req_duration': ['p(95)<700'],
    'http_req_failed': ['rate<0.01'],
  },
};

export default function () {
  const res = http.get('https://test.k6.io/'); // placeholder; point at your system under test
  check(res, { 'status 200': (r) => r.status === 200 });
  sleep(Math.random() * 2 + 1); // think time
}

Notes: the sustain stage (10m at 200 VUs) is the core load test period — that’s where you verify
SLAs.

Environment & practical tips

• Use production-like infrastructure (instance types, DB size, network). Results from a tiny
testbed may be useless.

• Use distributed load generators if a single generator becomes a bottleneck.

• Isolate test data (don’t overwrite real customers).

• Monitor everything: app logs, APM traces, host metrics, DB. Correlate by timestamp and
request-id (a tagging sketch follows this list).

• Warm caches before measuring steady-state.

• Repeat runs and use percentiles (not just averages).
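For the correlation point above, one approach in k6 is to attach a request-id header that the server logs, plus tags that show up in k6's own metric output. The header name is an assumption about what your application logs:

import http from 'k6/http';

export default function () {
  // __VU and __ITER are k6 built-ins; with a timestamp they make an id
  // unique enough to grep across client and server logs
  const reqId = `vu${__VU}-iter${__ITER}-${Date.now()}`;
  http.get('https://test.example.com/', {              // hypothetical endpoint
    headers: { 'X-Request-ID': reqId },                // assumed: your app logs this header
    tags: { journey: 'homepage' },                     // appears on k6 metrics for filtering
  });
}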

Common pitfalls

• Running from a single weak client that chokes before the SUT (system under test) does.

• Testing non-production-like environment or configs.

• Hardcoding IDs, so every VU hits the same cached object or contends on the same DB row (a parameterization sketch follows this list).


• Ignoring background jobs or cron tasks that run during tests.

• Interpreting one-off spikes as permanent — run multiple times.
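To avoid the hardcoded-ID pitfall, parameterize the test data. k6's SharedArray loads a file once and shares it read-only across all VUs; a sketch, assuming a seed file named users.json exists next to the script:

import http from 'k6/http';
import { SharedArray } from 'k6/data';

// Loaded once per test run and shared read-only across all VUs
const users = new SharedArray('users', function () {
  return JSON.parse(open('./users.json')); // e.g. [{ "id": 1 }, { "id": 2 }, ...]
});

export default function () {
  // Each iteration picks a different user, spreading load across rows and caches
  const user = users[Math.floor(Math.random() * users.length)];
  http.get(`https://test.example.com/users/${user.id}`); // hypothetical endpoint
}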

Quick checklist before you start

• Objectives & SLAs defined?

• Scenarios & user distribution defined?

• Realistic test data prepared?

• Monitoring & tracing in place?

• Load generators capable of target RPS?

• Rollback/mitigation plan if test affects prod?
