Performance Issues with Envoy #5536
Description
I am benchmarking Envoy and comparing its performance against HAProxy.
Setup:
- LB: Envoy/HAProxy
- Backend: Nginx
- Benchmarking tool: wrk (https://github.com/wg/wrk)
Envoy Config (concurrency = 4):

```yaml
static_resources:
  listeners:
  - name: test
    address:
      socket_address:
        protocol: TCP
        address: 0.0.0.0
        port_value: 8090
    filter_chains:
    - filters:
      - name: envoy.http_connection_manager
        config:
          stat_prefix: ingress_http
          generate_request_id: false
          route_config:
            name: test_routes
            virtual_hosts:
            - name: test_service
              domains:
              - "*"
              routes:
              - match:
                  prefix: "/"
                route:
                  cluster: test_backend
          http_filters:
          - name: envoy.router
            config:
              dynamic_stats: false
  clusters:
  - name: test_backend
    connect_timeout: 0.25s
    hosts:
    - socket_address:
        address: 172.16.x.x
        port_value: 8000
```
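For reference, the concurrency setting is not part of the YAML above; it is presumably passed on the Envoy command line (matching HAProxy's `nbproc 4`), roughly like this:

```shell
# Run Envoy with 4 worker threads; the config file path is illustrative.
envoy -c /etc/envoy/envoy.yaml --concurrency 4
```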
HAProxy Config:

```
global
    daemon
    maxconn 10000
    nbproc 4

defaults
    mode http
    timeout connect 5000ms
    timeout client 50000ms
    timeout server 50000ms
    http-reuse aggressive

frontend test
    bind *:8080
    acl test1 path_beg /
    use_backend test_backend if test1

backend test_backend
    server server1 172.16.x.x:8000
```
Nginx Config (`worker_connections` belongs in the `events` context and `server` in the `http` context):

```nginx
worker_processes 4;

events {
    worker_connections 1000;
}

http {
    server {
        listen 8000 default_server;
        server_name test.com;

        access_log /var/log/nginx/test.access.log;
        error_log /var/log/nginx/test.error.log;

        location / {
            return 200 'Woohoo!';
        }
    }
}
```
Benchmark Results

Envoy:

```
$ wrk -c100 -d60s -t10 "http://172.16.x.x:8090/" --latency
Running 1m test @ http://172.16.x.x:8090/
  10 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     2.73ms    5.94ms  413.43ms   99.73%
    Req/Sec     4.00k     1.43k     9.36k    63.22%
  Latency Distribution
     50%    2.16ms
     75%    3.56ms
     90%    4.50ms
     99%    7.10ms
  2388909 requests in 1.00m, 389.58MB read
Requests/sec:  39748.81
Transfer/sec:      6.48MB
```
HAProxy:

```
$ wrk -c100 -d60s -t10 "http://172.16.x.x:8080/" --latency
Running 1m test @ http://172.16.x.x:8080/
  10 threads and 100 connections
  Thread Stats   Avg      Stdev     Max   +/- Stdev
    Latency     1.14ms  526.36us   31.04ms   89.20%
    Req/Sec     8.89k     1.79k    14.23k    63.36%
  Latency Distribution
     50%    1.05ms
     75%    1.32ms
     90%    1.63ms
     99%    2.20ms
  5315577 requests in 1.00m, 729.98MB read
Requests/sec:  88446.34
Transfer/sec:     12.15MB
```
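To summarize the gap, here is a quick back-of-envelope comparison computed from the two wrk runs above (the numbers are copied from the reports; nothing else is measured):

```python
# Compare the two wrk runs reported above.
envoy_rps = 39748.81       # Envoy Requests/sec
haproxy_rps = 88446.34     # HAProxy Requests/sec
envoy_p50_ms = 2.16        # Envoy median latency (ms)
haproxy_p50_ms = 1.05      # HAProxy median latency (ms)

throughput_ratio = haproxy_rps / envoy_rps
latency_ratio = envoy_p50_ms / haproxy_p50_ms

print(f"HAProxy throughput is {throughput_ratio:.2f}x Envoy's")
print(f"Envoy median latency is {latency_ratio:.2f}x HAProxy's")
```

So HAProxy sustains roughly 2.2x the throughput at about half the median latency in this setup.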
Note:
- Both load balancers are running inside Docker in host networking mode.
- In the Envoy benchmark, the backend (nginx) CPU utilisation only reaches ~60%.

Could you please point out where I am going wrong? According to various online blogs, Envoy should provide performance comparable to HAProxy.