Minor improvements to documentation

This commit is contained in:
Colin Coller 2018-04-23 09:56:03 -04:00 committed by Traefiker Bot
parent 2975acdc82
commit 667a0c41ed
3 changed files with 81 additions and 58 deletions

@@ -262,7 +262,7 @@ This allows for setting headers such as `X-Script-Name` to be added to the reque
!!! warning
If the custom header name is the same as the name of an existing request or response header, it will be replaced.
In this example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request, and the `X-Custom-Response-Header` added to the response.
In this example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request and the `X-Custom-Response-Header` header added to the response.
```toml
[frontends]
@@ -276,7 +276,7 @@ In this example, all matches to the path `/cheese` will have the `X-Script-Name`
rule = "PathPrefixStrip:/cheese"
```
In this second example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request, the `X-Custom-Request-Header` removed to the request and the `X-Custom-Response-Header` removed to the response.
In this second example, all matches to the path `/cheese` will have the `X-Script-Name` header added to the proxied request, the `X-Custom-Request-Header` header removed from the request, and the `X-Custom-Response-Header` header removed from the response.
```toml
[frontends]
@@ -323,12 +323,49 @@ In this example, traffic routed through the first frontend will have the `X-Fram
A backend is responsible for load-balancing the traffic coming from one or more frontends to a set of HTTP servers.
#### Servers
Servers are simply defined using a `url`. You can also apply a custom `weight` to each server (this is used by the load-balancing algorithm).
!!! note
Paths in `url` are ignored. Use `Modifier` to specify paths instead.
Here is an example definition of backends and servers:
```toml
[backends]
[backends.backend1]
# ...
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
# ...
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
[backends.backend2.servers.server2]
url = "http://172.17.0.5:80"
weight = 2
```
- Two backends are defined: `backend1` and `backend2`.
- `backend1` will forward the traffic to two servers: `http://172.17.0.2:80` with weight `10` and `http://172.17.0.3:80` with weight `1`.
- `backend2` will forward the traffic to two servers: `http://172.17.0.4:80` with weight `1` and `http://172.17.0.5:80` with weight `2`.
#### Load-balancing
Various methods of load-balancing are supported:
- `wrr`: Weighted Round Robin.
- `drr`: Dynamic Round Robin: increases weights on servers that perform better than others.
It also rolls back to original weights if the servers have changed.
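The strategy is chosen per backend through the load balancer's `method` key; when no method is given, `wrr` is the default. A minimal sketch, mirroring the `backend2` definition shown later on this page:
```toml
[backends]
[backends.backend2]
# Switch this backend from the default wrr to dynamic round robin
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
```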
#### Circuit breakers
A circuit breaker can also be applied to a backend, preventing high loads on failing servers.
The initial state is Standby: the circuit breaker observes the statistics and does not modify the request.
When the condition matches, the circuit breaker enters the Tripped state, where it responds with a predefined code or redirects to another frontend.
@@ -346,6 +383,26 @@ For example:
- `LatencyAtQuantileMS(50.0) > 50`: watch latency at quantile in milliseconds.
- `ResponseCodeRatio(500, 600, 0, 600) > 0.5`: ratio of response codes in ranges [500-600) and [0-600).
Here is an example of a backend definition with a circuit breaker:
```toml
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
```
- `backend1` will forward the traffic to two servers: `http://172.17.0.2:80` with weight `10` and `http://172.17.0.3:80` with weight `1`, using the default `wrr` load-balancing strategy.
- A circuit breaker is added on `backend1` using the expression `NetworkErrorRatio() > 0.5`, which watches the error ratio over a 10-second sliding window.
#### Maximum connections
To proactively prevent backends from being overwhelmed with high load, a maximum connection limit can also be applied to each backend.
Maximum connections can be configured by specifying an integer value for `maxconn.amount` and a strategy for `maxconn.extractorfunc`, which determines how requests are categorized when evaluating the maximum connections.
@@ -357,13 +414,14 @@ For example:
[backends.backend1.maxconn]
amount = 10
extractorfunc = "request.host"
# ...
```
- `backend1` will return `HTTP code 429 Too Many Requests` if there are already 10 requests in progress for the same Host header.
- Another possible value for `extractorfunc` is `client.ip`, which will categorize requests based on the client source IP (see the sketch below).
- Lastly, `extractorfunc` can take the value `request.header.ANY_HEADER`, which will categorize requests based on the value of the `ANY_HEADER` header that you provide.
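As a sketch, the same limit keyed on the client source IP rather than the `Host` header would look like this:
```toml
[backends]
[backends.backend1]
[backends.backend1.maxconn]
# Allow at most 10 simultaneous requests per client source IP
amount = 10
extractorfunc = "client.ip"
```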
### Sticky sessions
#### Sticky sessions
Sticky sessions are supported with both load balancers.
When sticky sessions are enabled, a cookie is set on the initial request.
@@ -371,7 +429,6 @@ The default cookie name is an abbreviation of a sha1 (ex: `_1d52e`).
On subsequent requests, the client will be directed to the backend stored in the cookie if it is still healthy.
If not, a new backend will be assigned.
```toml
[backends]
[backends.backend1]
@@ -395,10 +452,10 @@ The deprecated way:
sticky = true
```
### Health Check
#### Health Check
A health check can be configured to remove a backend from the load-balancing rotation as long as it keeps returning HTTP status codes other than `200 OK` to the HTTP GET requests periodically carried out by Traefik.
The check is defined by a pathappended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how often the health check should be executed (the default being 30 seconds).
The check is defined by a path appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how often the health check should be executed (the default being 30 seconds).
Each backend must respond to the health check within 5 seconds.
By default, the port of the backend server is used; however, this may be overridden.
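Putting the description above into a sketch (the `healthcheck` block name, the `/health` path, and the `10s` interval are illustrative; `path` and `interval` are the keys described here):
```toml
[backends]
[backends.backend1]
[backends.backend1.healthcheck]
# GET <server url>/health every 10 seconds; servers not answering 200 OK
# are removed from the load-balancing rotation until they recover
path = "/health"
interval = "10s"
```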
@@ -424,43 +481,6 @@ To use a different port for the healthcheck:
port = 8080
```
### Servers
Servers are simply defined using a `url`. You can also apply a custom `weight` to each server (this will be used by load-balancing).
!!! note
Paths in `url` are ignored. Use `Modifier` to specify paths instead.
Here is an example of backends and servers definition:
```toml
[backends]
[backends.backend1]
[backends.backend1.circuitbreaker]
expression = "NetworkErrorRatio() > 0.5"
[backends.backend1.servers.server1]
url = "http://172.17.0.2:80"
weight = 10
[backends.backend1.servers.server2]
url = "http://172.17.0.3:80"
weight = 1
[backends.backend2]
[backends.backend2.LoadBalancer]
method = "drr"
[backends.backend2.servers.server1]
url = "http://172.17.0.4:80"
weight = 1
[backends.backend2.servers.server2]
url = "http://172.17.0.5:80"
weight = 2
```
- Two backends are defined: `backend1` and `backend2`
- `backend1` will forward the traffic to two servers: `http://172.17.0.2:80"` with weight `10` and `http://172.17.0.3:80` with weight `1` using default `wrr` load-balancing strategy.
- `backend2` will forward the traffic to two servers: `http://172.17.0.4:80"` with weight `1` and `http://172.17.0.5:80` with weight `2` using `drr` load-balancing strategy.
- a circuit breaker is added on `backend1` using the expression `NetworkErrorRatio() > 0.5`: watch error ratio over 10 second sliding window
## Configuration
Træfik's configuration has two parts:

@@ -19,13 +19,14 @@ Telling Træfik where your orchestrator is could be the _only_ configuration ste
Imagine that you have deployed a bunch of microservices with the help of an orchestrator (like Swarm or Kubernetes) or a service registry (like etcd or consul).
Now you want users to access these microservices, and you need a reverse proxy.
Traditional reverse-proxies require that you configure _each_ route that will connect paths and subdomains to _each_ microservice. In an environment where you add, remove, kill, upgrade, or scale your services _many_ times a day, the task of keeping the routes up to date becomes tedious.
Traditional reverse-proxies require that you configure _each_ route that will connect paths and subdomains to _each_ microservice.
In an environment where you add, remove, kill, upgrade, or scale your services _many_ times a day, the task of keeping the routes up to date becomes tedious.
**This is when Træfik can help you!**
Træfik listens to your service registry/orchestrator API and instantly generates the routes so your microservices are connected to the outside world -- without further intervention on your part.
**Run Træfik and let it do the work for you!**
_(But if you'd rather configure some of your routes manually, Træfik supports that too!)_
![Architecture](img/architecture.png)
@@ -90,19 +91,19 @@ services:
Start your `reverse-proxy` with the following command:
```shell
docker-compose up -d reverse-proxy
```
You can open a browser and go to [http://localhost:8080](http://localhost:8080) to see Træfik's dashboard (we'll go back there once we have launched a service in step 2).
### 2 — Launch a Service — Træfik Detects It and Creates a Route for You
Now that we have a Træfik instance up and running, we will deploy new services.
Edit your `docker-compose.yml` file and add the following at the end of your file.
```yaml
# ...
whoami:
image: emilevauge/whoami # A container that exposes an API to show its IP address
labels:
@@ -112,7 +113,7 @@ Edit your `docker-compose.yml` file and add the following at the end of your fil
The above defines `whoami`: a simple web service that outputs information about the machine it is deployed on (its IP address, host, and so on).
Start the `whoami` service with the following command:
```shell
docker-compose up -d whoami
```
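To check the new route from the command line as well (a sketch; it assumes the `whoami` container carries the frontend rule label `Host:whoami.docker.localhost`, as in the full quickstart):
```shell
curl -H Host:whoami.docker.localhost http://127.0.0.1
```
The response should include the container's hostname and IP, similar to the output quoted below.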
@@ -135,9 +136,9 @@ IP: 172.27.0.3
### 3 — Launch More Instances — Traefik Load Balances Them
Run more instances of your `whoami` service with the following command:
```shell
docker-compose up -d --scale whoami=2
```
Go back to your browser ([http://localhost:8080](http://localhost:8080)) and see that Træfik has automatically detected the new instance of the container.
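Repeating the request from the command line (same `Host:whoami.docker.localhost` assumption as above) should show the reported IP alternating between the two containers as Træfik load balances them:
```shell
curl -H Host:whoami.docker.localhost http://127.0.0.1
curl -H Host:whoami.docker.localhost http://127.0.0.1
```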
@@ -164,9 +165,10 @@ IP: 172.27.0.4
### 4 — Enjoy Træfik's Magic
Now that you have a basic understanding of how Træfik can automatically create the routes to your services and load balance them, it might be time to dive into [the documentation](https://docs.traefik.io/) and let Træfik work for you! Whatever your infrastructure is, there is probably [an available Træfik backend](https://docs.traefik.io/configuration/backends/available) that will do the job.
Now that you have a basic understanding of how Træfik can automatically create the routes to your services and load balance them, it might be time to dive into [the documentation](/) and let Træfik work for you!
Whatever your infrastructure is, there is probably [an available Træfik backend](/#supported-backends) that will do the job.
Our recommendation would be to see for yourself how simple it is to enable HTTPS with [Træfik's let's encrypt integration](https://docs.traefik.io/user-guide/examples/#lets-encrypt-support) using the dedicated [user guide](https://docs.traefik.io/user-guide/docker-and-lets-encrypt/).
Our recommendation would be to see for yourself how simple it is to enable HTTPS with [Træfik's Let's Encrypt integration](/user-guide/examples/#lets-encrypt-support) using the dedicated [user guide](/user-guide/docker-and-lets-encrypt/).
## Resources
@@ -196,4 +198,4 @@ Using the tiny Docker image:
```shell
docker run -d -p 8080:8080 -p 80:80 -v $PWD/traefik.toml:/etc/traefik/traefik.toml traefik
```

@@ -101,6 +101,7 @@ IP: 172.27.0.4
### 4 — Enjoy Træfik's Magic
Now that you have a basic understanding of how Træfik can automatically create the routes to your services and load balance them, it might be time to dive into [the documentation](https://docs.traefik.io/) and let Træfik work for you! Whatever your infrastructure is, there is probably [an available Træfik backend](https://docs.traefik.io/configuration/backends/available) that will do the job.
Now that you have a basic understanding of how Træfik can automatically create the routes to your services and load balance them, it might be time to dive into [the documentation](https://docs.traefik.io/) and let Træfik work for you!
Whatever your infrastructure is, there is probably [an available Træfik backend](https://docs.traefik.io/#supported-backends) that will do the job.
Our recommendation would be to see for yourself how simple it is to enable HTTPS with [Træfik's let's encrypt integration](https://docs.traefik.io/user-guide/examples/#lets-encrypt-support) using the dedicated [user guide](https://docs.traefik.io/user-guide/docker-and-lets-encrypt/).