Merge pull request #240 from containous/update-benchmarks

update benchmarks with haproxy and latest results
commit a458018aa2 by Vincent Demeester, 2016-03-05 18:39:00 +01:00
2 changed files with 57 additions and 114 deletions

README.md

@@ -16,7 +16,7 @@ It supports several backends ([Docker :whale:](https://www.docker.com/), [Mesos/
 ## Features
-- It's fast
+- [It's fast](docs/index.md#benchmarks)
 - No dependency hell, single binary made with go
 - Simple json Rest API
 - Simple TOML file configuration

docs/index.md

@@ -1062,128 +1062,71 @@ Note that Træfɪk *will not watch for key changes in the `/traefik_configuratio
 ## <a id="benchmarks"></a> Benchmarks
-Here are some early Benchmarks between Nginx and Træfɪk acting as simple load balancers between two servers.
+Here are some early Benchmarks between Nginx, HA-Proxy and Træfɪk acting as simple load balancers between two servers.
 - Nginx:
 ```sh
-$ docker run -d -e VIRTUAL_HOST=test1.localhost emilevauge/whoami
-$ docker run -d -e VIRTUAL_HOST=test1.localhost emilevauge/whoami
+$ docker run -d -e VIRTUAL_HOST=test.nginx.localhost emilevauge/whoami
+$ docker run -d -e VIRTUAL_HOST=test.nginx.localhost emilevauge/whoami
 $ docker run --log-driver=none -d -p 80:80 -v /var/run/docker.sock:/tmp/docker.sock:ro jwilder/nginx-proxy
-$ ab -n 20000 -c 20 -r http://test1.localhost/
-This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
-Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
-Licensed to The Apache Software Foundation, http://www.apache.org/
-
-Benchmarking test1.localhost (be patient)
-Completed 2000 requests
-Completed 4000 requests
-Completed 6000 requests
-Completed 8000 requests
-Completed 10000 requests
-Completed 12000 requests
-Completed 14000 requests
-Completed 16000 requests
-Completed 18000 requests
-Completed 20000 requests
-Finished 20000 requests
-
-Server Software:        nginx/1.9.2
-Server Hostname:        test1.localhost
-Server Port:            80
-
-Document Path:          /
-Document Length:        287 bytes
-
-Concurrency Level:      20
-Time taken for tests:   5.874 seconds
-Complete requests:      20000
-Failed requests:        0
-Total transferred:      8900000 bytes
-HTML transferred:       5740000 bytes
-Requests per second:    3404.97 [#/sec] (mean)
-Time per request:       5.874 [ms] (mean)
-Time per request:       0.294 [ms] (mean, across all concurrent requests)
-Transfer rate:          1479.70 [Kbytes/sec] received
-
-Connection Times (ms)
-              min  mean[+/-sd] median   max
-Connect:        0    0   0.1      0       2
-Processing:     0    6   2.4      6      35
-Waiting:        0    5   2.3      5      33
-Total:          0    6   2.4      6      36
-
-Percentage of the requests served within a certain time (ms)
-  50%      6
-  66%      6
-  75%      7
-  80%      7
-  90%      9
-  95%     10
-  98%     12
-  99%     13
- 100%     36 (longest request)
+$ wrk -t12 -c400 -d60s -H "Host: test.nginx.localhost" --latency http://127.0.0.1:80
+Running 1m test @ http://127.0.0.1:80
+  12 threads and 400 connections
+  Thread Stats   Avg      Stdev     Max   +/- Stdev
+    Latency   162.61ms  203.34ms   1.72s    91.07%
+    Req/Sec   277.57    107.67   790.00     67.53%
+  Latency Distribution
+     50%  128.19ms
+     75%  218.22ms
+     90%  342.12ms
+     99%    1.08s
+  197991 requests in 1.00m, 82.32MB read
+  Socket errors: connect 0, read 0, write 0, timeout 18
+Requests/sec:   3296.04
+Transfer/sec:      1.37MB
 ```
+- HA-Proxy:
+```sh
+$ docker run -d --name web1 -e VIRTUAL_HOST=test.haproxy.localhost emilevauge/whoami
+$ docker run -d --name web2 -e VIRTUAL_HOST=test.haproxy.localhost emilevauge/whoami
+$ docker run -d -p 80:80 --link web1:web1 --link web2:web2 dockercloud/haproxy
+$ wrk -t12 -c400 -d60s -H "Host: test.haproxy.localhost" --latency http://127.0.0.1:80
+Running 1m test @ http://127.0.0.1:80
+  12 threads and 400 connections
+  Thread Stats   Avg      Stdev     Max   +/- Stdev
+    Latency   158.08ms  187.88ms   1.75s    89.61%
+    Req/Sec   281.33    120.47     0.98k    65.88%
+  Latency Distribution
+     50%  121.77ms
+     75%  227.10ms
+     90%  351.98ms
+     99%    1.01s
+  200462 requests in 1.00m, 59.65MB read
+Requests/sec:   3337.66
+Transfer/sec:      0.99MB
+```
 - Træfɪk:
 ```sh
-docker run -d -l traefik.backend=test1 -l traefik.frontend.rule=Host -l traefik.frontend.value=test1.docker.localhost emilevauge/whoami
-docker run -d -l traefik.backend=test1 -l traefik.frontend.rule=Host -l traefik.frontend.value=test1.docker.localhost emilevauge/whoami
-docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/traefik.toml -v /var/run/docker.sock:/var/run/docker.sock containous/traefik
-$ ab -n 20000 -c 20 -r http://test1.docker.localhost/
-This is ApacheBench, Version 2.3 <$Revision: 1528965 $>
-Copyright 1996 Adam Twiss, Zeus Technology Ltd, http://www.zeustech.net/
-Licensed to The Apache Software Foundation, http://www.apache.org/
-
-Benchmarking test1.docker.localhost (be patient)
-Completed 2000 requests
-Completed 4000 requests
-Completed 6000 requests
-Completed 8000 requests
-Completed 10000 requests
-Completed 12000 requests
-Completed 14000 requests
-Completed 16000 requests
-Completed 18000 requests
-Completed 20000 requests
-Finished 20000 requests
-
-Server Software:        .
-Server Hostname:        test1.docker.localhost
-Server Port:            80
-
-Document Path:          /
-Document Length:        312 bytes
-
-Concurrency Level:      20
-Time taken for tests:   6.545 seconds
-Complete requests:      20000
-Failed requests:        0
-Total transferred:      8600000 bytes
-HTML transferred:       6240000 bytes
-Requests per second:    3055.60 [#/sec] (mean)
-Time per request:       6.545 [ms] (mean)
-Time per request:       0.327 [ms] (mean, across all concurrent requests)
-Transfer rate:          1283.11 [Kbytes/sec] received
-
-Connection Times (ms)
-              min  mean[+/-sd] median   max
-Connect:        0    0   0.2      0       7
-Processing:     1    6   2.2      6      22
-Waiting:        1    6   2.1      6      21
-Total:          1    7   2.2      6      22
-
-Percentage of the requests served within a certain time (ms)
-  50%      6
-  66%      7
-  75%      8
-  80%      8
-  90%      9
-  95%     10
-  98%     11
-  99%     13
- 100%     22 (longest request)
+$ docker run -d -l traefik.backend=test1 -l traefik.frontend.rule=Host -l traefik.frontend.value=test.traefik.localhost emilevauge/whoami
+$ docker run -d -l traefik.backend=test1 -l traefik.frontend.rule=Host -l traefik.frontend.value=test.traefik.docker.localhost emilevauge/whoami
+$ docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/traefik.toml -v /var/run/docker.sock:/var/run/docker.sock containous/traefik
+$ wrk -t12 -c400 -d60s -H "Host: test.traefik.docker.localhost" --latency http://127.0.0.1:80
+Running 1m test @ http://127.0.0.1:80
+  12 threads and 400 connections
+  Thread Stats   Avg      Stdev     Max   +/- Stdev
+    Latency   132.93ms  121.89ms   1.20s    66.62%
+    Req/Sec   280.95    104.88   740.00     68.26%
+  Latency Distribution
+     50%  128.71ms
+     75%  214.15ms
+     90%  281.45ms
+     99%  498.44ms
+  200734 requests in 1.00m, 80.02MB read
+Requests/sec:   3340.13
+Transfer/sec:      1.33MB
 ```
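As a quick sanity check on the new wrk results in this diff, each reported Requests/sec should roughly equal the total request count divided by the 60 s run window. A minimal sketch (totals and reported rates copied from the three wrk outputs above; nothing else is assumed):

```python
# Per-proxy (total requests, reported Requests/sec) from the wrk runs above.
runs = {
    "nginx":   (197991, 3296.04),
    "haproxy": (200462, 3337.66),
    "traefik": (200734, 3340.13),
}

for name, (total, reported) in runs.items():
    derived = total / 60  # wrk was invoked with -d60s
    print(f"{name}: derived {derived:.2f} req/s, reported {reported:.2f} req/s")
```

The derived figure comes out slightly above the reported one for all three runs, which is consistent with wrk measuring over a window marginally longer than the requested 60 s; the three proxies land within a few percent of each other either way.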