Merge branch 'v1.5' into master

Commit 617b8b20f0: 15 changed files with 513 additions and 70 deletions
CHANGELOG.md (23 changes)
@@ -1,5 +1,28 @@

# Change Log
## [v1.5.0-rc4](https://github.com/containous/traefik/tree/v1.5.0-rc4) (2018-01-04)
[All Commits](https://github.com/containous/traefik/compare/v1.5.0-rc3...v1.5.0-rc4)

**Bug fixes:**

- **[consulcatalog]** Use prefix for sticky and stickiness tags. ([#2624](https://github.com/containous/traefik/pull/2624) by [ldez](https://github.com/ldez))
- **[file,tls]** Send empty configuration from file provider ([#2609](https://github.com/containous/traefik/pull/2609) by [nmengin](https://github.com/nmengin))
- **[middleware,docker,k8s]** Fix custom headers template ([#2621](https://github.com/containous/traefik/pull/2621) by [ldez](https://github.com/ldez))
- **[middleware]** Don't panic if ResponseWriter does not implement CloseNotify ([#2651](https://github.com/containous/traefik/pull/2651) by [Juliens](https://github.com/Juliens))
- **[middleware]** We need to flush the end of the body when retry is streamed ([#2644](https://github.com/containous/traefik/pull/2644) by [Juliens](https://github.com/Juliens))
- **[tls]** Allow deleting dynamically all TLS certificates from an entryPoint ([#2603](https://github.com/containous/traefik/pull/2603) by [nmengin](https://github.com/nmengin))
- **[websocket]** Use gorilla readMessage and writeMessage instead of just an io.Copy ([#2650](https://github.com/containous/traefik/pull/2650) by [Juliens](https://github.com/Juliens))

**Documentation:**

- **[consul,consulcatalog]** Split Consul and Consul Catalog documentation ([#2654](https://github.com/containous/traefik/pull/2654) by [ldez](https://github.com/ldez))
- **[docker/swarm]** Typo in docker.endpoint TCP port. ([#2626](https://github.com/containous/traefik/pull/2626) by [redhandpl](https://github.com/redhandpl))
- **[docker]** Add a note on how to add label to a docker compose file ([#2611](https://github.com/containous/traefik/pull/2611) by [jmaitrehenry](https://github.com/jmaitrehenry))
- **[k8s]** k8s guide: Leave note about assumed DaemonSet usage. ([#2634](https://github.com/containous/traefik/pull/2634) by [timoreimann](https://github.com/timoreimann))
- **[marathon]** Improve Marathon service label documentation. ([#2635](https://github.com/containous/traefik/pull/2635) by [timoreimann](https://github.com/timoreimann))

**Misc:**

- **[etcd,kv,tls]** Add tests for TLS dynamic configuration in ETCD3 ([#2606](https://github.com/containous/traefik/pull/2606) by [dahefanteng](https://github.com/dahefanteng))
- Merge v1.4.6 into v1.5 ([#2642](https://github.com/containous/traefik/pull/2642) by [ldez](https://github.com/ldez))

## [v1.4.6](https://github.com/containous/traefik/tree/v1.4.6) (2018-01-02)

[All Commits](https://github.com/containous/traefik/compare/v1.4.5...v1.4.6)
@@ -145,6 +145,20 @@ To enable constraints see [backend-specific constraints section](/configuration/

## Labels: overriding default behaviour
!!! note
    If you use a compose file, labels should be defined in the `deploy` part of your service.

    This behavior is only enabled for docker-compose version 3+ ([Compose file reference](https://docs.docker.com/compose/compose-file/#labels-1)).

```yaml
version: "3"
services:
  whoami:
    deploy:
      labels:
        traefik.docker.network: traefik
```

### On Containers

Labels can be used on containers to override default behaviour.
@@ -265,11 +279,13 @@ Services labels can be used for overriding default behaviour

| `traefik.<service-name>.frontend.headers.isDevelopment=false` | This will cause the `AllowedHosts`, `SSLRedirect`, and `STSSeconds`/`STSIncludeSubdomains` options to be ignored during development.<br>When deploying to production, be sure to set this to false. |
!!! note
    If a label is defined both as a `container label` and a `service label` (for example `traefik.<service-name>.port=PORT` and `traefik.port=PORT`), the `service label` is used to define the `<service-name>` property (`port` in the example).

    It's possible to mix `container labels` and `service labels`; in this case, `container labels` are used as default values for missing `service labels`, but no frontends are created from `container labels` alone.

More details in this [example](/user-guide/docker-and-lets-encrypt/#labels).
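For illustration (the service name `web`, image, and host are hypothetical), mixing the two kinds of labels might look like this, where `traefik.port` serves as the default for any service label that omits a port:

```yaml
whoami:
  image: emilevauge/whoami
  deploy:
    labels:
      # container label: default value, used when a service label is missing
      traefik.port: "80"
      # service label: defines the frontend for the "web" service
      traefik.web.frontend.rule: "Host:whoami.example.com"
```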
!!! warning
    When running inside a container, Træfik will need network access through:

    `docker network connect <network> <traefik-container>`
@@ -112,6 +112,8 @@ Annotations can be used on containers to override default behaviour for the whol

Override the default frontend endpoints.

- `traefik.frontend.passTLSCert: true`

Override the default frontend PassTLSCert value. Default: `false`.

- `ingress.kubernetes.io/rewrite-target: /users`

Replaces each matched Ingress path with the specified one, and adds the old path to the `X-Replaced-Path` header.

!!! note
    Please note that `traefik.frontend.redirect.regex` and `traefik.frontend.redirect.replacement` do not have to be set if `traefik.frontend.redirect.entryPoint` is defined for the redirection (they will not be used in this case).
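As an illustration of the `rewrite-target` annotation described above (the Ingress name, host, and backend service are hypothetical, and the API version reflects the Kubernetes releases of this era):

```yaml
apiVersion: extensions/v1beta1
kind: Ingress
metadata:
  name: users-ingress
  annotations:
    # Each matched path below is rewritten to /users;
    # the original path is returned in the X-Replaced-Path header.
    ingress.kubernetes.io/rewrite-target: /users
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /people
            backend:
              serviceName: users-svc
              servicePort: 80
```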
docs/user-guide/cluster-docker-consul.md (new file, 289 lines)

@@ -0,0 +1,289 @@
# Clustering / High Availability on Docker Swarm with Consul

This guide explains how to use Træfik in high availability mode in a Docker Swarm, with Let's Encrypt.

Why do we need Træfik in cluster mode? Shouldn't running multiple instances work out of the box?

If you want to use Let's Encrypt with Træfik, sharing configuration or TLS certificates between many Træfik instances requires Træfik cluster/HA.

OK, could we simply mount a shared volume used by all the instances? You can, but it will not work.
When you use Let's Encrypt, you need to store more than just the certificates.
When Træfik generates a new certificate, it configures a challenge; once Let's Encrypt has verified the ownership of the domain, it pings back the challenge.
If the challenge is not known by the other Træfik instances, the validation will fail.

For more information about the challenge, see [Automatic Certificate Management Environment (ACME)](https://github.com/ietf-wg-acme/acme/blob/master/draft-ietf-acme-acme.md#tls-with-server-name-indication-tls-sni).
## Prerequisites

You will need a working Docker Swarm cluster.

## Træfik configuration

In this guide, we will not use a TOML configuration file, but only command line flags.
With that, we can use the base image without mounting a configuration file or building a custom image.

What Træfik should do:

- Listen on ports 80 and 443
- Redirect HTTP traffic to HTTPS
- Generate an SSL certificate when a domain is added
- Listen for Docker Swarm events

### EntryPoints configuration

TL;DR:
```shell
traefik \
    --entrypoints='Name:http Address::80 Redirect.EntryPoint:https' \
    --entrypoints='Name:https Address::443 TLS' \
    --defaultentrypoints=http,https
```

To listen on different ports, we need to create an entry point for each one.

The CLI syntax is `--entrypoints='Name:a_name Address:an_ip_or_empty:a_port options'`.
If you want to redirect traffic from one entry point to another, use the `Redirect.EntryPoint:entrypoint_name` option.

Since, by default, we don't want to configure all our services to listen on both http and https, we add a default entry point configuration: `--defaultentrypoints=http,https`.
### Let's Encrypt configuration

TL;DR:

```shell
traefik \
    --acme \
    --acme.storage=/etc/traefik/acme/acme.json \
    --acme.entryPoint=https \
    --acme.email=contact@mydomain.ca
```

Let's Encrypt needs 3 parameters: an entry point to listen on, a storage location for certificates, and an email for the registration.

To enable Let's Encrypt support, you need to add the `--acme` flag.

Træfik also needs to know where to store the certificates; we can choose between a key in a key-value store or a file path: `--acme.storage=my/key` or `--acme.storage=/path/to/acme.json`.

The entry point and email are set with the `--acme.entryPoint` and `--acme.email` flags.
### Docker configuration

TL;DR:

```shell
traefik \
    --docker \
    --docker.swarmmode \
    --docker.domain=mydomain.ca \
    --docker.watch
```

To enable Docker and swarm-mode support, add the `--docker` and `--docker.swarmmode` flags.
To watch Docker events, add `--docker.watch`.
### Full docker-compose file

```yaml
version: "3"
services:
  traefik:
    image: traefik:1.5
    command:
      - "--web"
      - "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
      - "--entrypoints=Name:https Address::443 TLS"
      - "--defaultentrypoints=http,https"
      - "--acme"
      - "--acme.storage=/etc/traefik/acme/acme.json"
      - "--acme.entryPoint=https"
      - "--acme.OnHostRule=true"
      - "--acme.onDemand=false"
      - "--acme.email=contact@mydomain.ca"
      - "--docker"
      - "--docker.swarmmode"
      - "--docker.domain=mydomain.ca"
      - "--docker.watch"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - webgateway
      - traefik
    ports:
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
      - target: 8080
        published: 8080
        mode: host
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
networks:
  webgateway:
    driver: overlay
    external: true
  traefik:
    driver: overlay
```
## Migrate configuration to Consul

We created a special Træfik command to help configure your key-value store from a Træfik TOML configuration file and/or CLI flags.
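A minimal sketch of that command invoked directly (the flags mirror the compose examples later in this guide; the Consul endpoint value is illustrative):

```shell
# Push a static configuration into Consul under the "traefik" prefix.
# Træfik instances can then boot with only the --consul flags.
traefik storeconfig \
    --entrypoints='Name:http Address::80' \
    --consul \
    --consul.endpoint=consul:8500 \
    --consul.prefix=traefik
```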
## Deploy a Træfik cluster

The best way we found is to have an initializer service.
This service pushes the config to Consul via the `storeconfig` sub-command.

This service will retry until it finishes without error, because Consul may not be ready when the service tries to push the configuration.

The initializer in a docker-compose file will be:
```yaml
traefik_init:
  image: traefik:1.5
  command:
    - "storeconfig"
    - "--web"
    [...]
    - "--consul"
    - "--consul.endpoint=consul:8500"
    - "--consul.prefix=traefik"
  networks:
    - traefik
  deploy:
    restart_policy:
      condition: on-failure
  depends_on:
    - consul
```
And now, the Træfik part will only have the Consul configuration.

```yaml
traefik:
  image: traefik:1.5
  depends_on:
    - traefik_init
    - consul
  command:
    - "--consul"
    - "--consul.endpoint=consul:8500"
    - "--consul.prefix=traefik"
  [...]
```
!!! note
    For Træfik <1.5.0, add `acme.storage=traefik/acme/account` because Træfik does not read it from Consul.

If you need to change the configuration, update the initializer service and re-deploy it.
The new configuration will be stored in Consul, and you will need to restart the Træfik nodes: `docker service update --force traefik_traefik`.
## Full docker-compose file

```yaml
version: "3.4"
services:
  traefik_init:
    image: traefik:1.5
    command:
      - "storeconfig"
      - "--web"
      - "--entrypoints=Name:http Address::80 Redirect.EntryPoint:https"
      - "--entrypoints=Name:https Address::443 TLS"
      - "--defaultentrypoints=http,https"
      - "--acme"
      - "--acme.storage=traefik/acme/account"
      - "--acme.entryPoint=https"
      - "--acme.OnHostRule=true"
      - "--acme.onDemand=false"
      - "--acme.email=contact@jmaitrehenry.ca"
      - "--docker"
      - "--docker.swarmmode"
      - "--docker.domain=jmaitrehenry.ca"
      - "--docker.watch"
      - "--consul"
      - "--consul.endpoint=consul:8500"
      - "--consul.prefix=traefik"
    networks:
      - traefik
    deploy:
      restart_policy:
        condition: on-failure
    depends_on:
      - consul
  traefik:
    image: traefik:1.5
    depends_on:
      - traefik_init
      - consul
    command:
      - "--consul"
      - "--consul.endpoint=consul:8500"
      - "--consul.prefix=traefik"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock
    networks:
      - webgateway
      - traefik
    ports:
      - target: 80
        published: 80
        mode: host
      - target: 443
        published: 443
        mode: host
      - target: 8080
        published: 8080
        mode: host
    deploy:
      mode: global
      placement:
        constraints:
          - node.role == manager
      update_config:
        parallelism: 1
        delay: 10s
      restart_policy:
        condition: on-failure
  consul:
    image: consul
    command: agent -server -bootstrap-expect=1
    volumes:
      - consul-data:/consul/data
    environment:
      - CONSUL_LOCAL_CONFIG={"datacenter":"us_east2","server":true}
      - CONSUL_BIND_INTERFACE=eth0
      - CONSUL_CLIENT_INTERFACE=eth0
    deploy:
      replicas: 1
      placement:
        constraints:
          - node.role == manager
      restart_policy:
        condition: on-failure
    networks:
      - traefik

networks:
  webgateway:
    driver: overlay
    external: true
  traefik:
    driver: overlay

volumes:
  consul-data:
    driver: [not local]
```
@@ -257,7 +257,7 @@ curl $(minikube ip)

404 page not found
```

If you decided to use the deployment, then you need to target the correct NodePort, which can be seen when you execute `kubectl get services --namespace=kube-system`.

```sh
curl $(minikube ip):<NODEPORT>
```

@@ -269,6 +269,8 @@ curl $(minikube ip):<NODEPORT>

!!! note
    We expect to see a 404 response here as we haven't yet given Træfik any configuration.

    All further examples below assume a DaemonSet installation. Deployment users will need to append the NodePort when constructing requests.

## Deploy Træfik using Helm Chart

Instead of installing Træfik via its own object, you can also use the Træfik Helm chart.
@@ -111,7 +111,7 @@ And there, the same global configuration in the Key-value Store (using `prefix =

| `/traefik/entrypoints/https/tls/certificates/0/keyfile` | `integration/fixtures/https/snitest.com.key` |
| `/traefik/entrypoints/https/tls/certificates/1/certfile` | `--BEGIN CERTIFICATE--<cert file content>--END CERTIFICATE--` |
| `/traefik/entrypoints/https/tls/certificates/1/keyfile` | `--BEGIN CERTIFICATE--<key file content>--END CERTIFICATE--` |
| `/traefik/entrypoints/other-https/address` | `:4443` |
| `/traefik/consul/endpoint` | `127.0.0.1:8500` |
| `/traefik/consul/watch` | `true` |
| `/traefik/consul/prefix` | `traefik` |
@@ -342,9 +342,10 @@ And there, the same dynamic configuration in a KV Store (using `prefix = "traefi

| Key                                                | Value                 |
|----------------------------------------------------|-----------------------|
| `/traefik/tlsconfiguration/2/entrypoints`          | `https,other-https`   |
| `/traefik/tlsconfiguration/2/certificate/certfile` | `<cert file content>` |
| `/traefik/tlsconfiguration/2/certificate/keyfile`  | `<key file content>`  |

### Atomic configuration changes

Træfik can watch the backends/frontends configuration changes and generate its configuration automatically.
@@ -379,9 +380,9 @@ Here, although the `/traefik_configurations/2/...` keys have been set, the old c

| `/traefik_configurations/1/backends/backend1/servers/server1/url`    | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10`                   |
| `/traefik_configurations/2/backends/backend1/servers/server1/url`    | `http://172.17.0.2:80` |
| `/traefik_configurations/2/backends/backend1/servers/server1/weight` | `5`                    |
| `/traefik_configurations/2/backends/backend1/servers/server2/url`    | `http://172.17.0.3:80` |
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5`                    |

Once the `/traefik/alias` key is updated, the new `/traefik_configurations/2` configuration becomes active atomically.
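As an illustration of that atomic switch (assuming a Consul KV store and its standard CLI; the prefix matches the tables above), activating the new configuration tree is a single key write:

```shell
# Point the alias at the new configuration tree; Træfik watches this key
# and swaps to /traefik_configurations/2 in one step.
consul kv put traefik/alias /traefik_configurations/2
```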
@@ -393,9 +394,9 @@ Here, we have a 50% balance between the `http://172.17.0.3:80` and the `http://1

| `/traefik_configurations/1/backends/backend1/servers/server1/url`    | `http://172.17.0.2:80` |
| `/traefik_configurations/1/backends/backend1/servers/server1/weight` | `10`                   |
| `/traefik_configurations/2/backends/backend1/servers/server1/url`    | `http://172.17.0.3:80` |
| `/traefik_configurations/2/backends/backend1/servers/server1/weight` | `5`                    |
| `/traefik_configurations/2/backends/backend1/servers/server2/url`    | `http://172.17.0.4:80` |
| `/traefik_configurations/2/backends/backend1/servers/server2/weight` | `5`                    |

!!! note
    Træfik *will not watch for key changes in the `/traefik_configurations` prefix*. It will only watch for changes to `/traefik/alias`.
glide.lock (generated, 6 changes)

@@ -1,4 +1,4 @@

hash: 929364465b74114d3860b1301b9015f5badb731f21f9b74d40e100f763a6ee3f
updated: 2017-12-15T10:34:41.246378337+01:00
imports:
- name: cloud.google.com/go

@@ -426,7 +426,7 @@ imports:

  repo: https://github.com/ijc25/Gotty.git
  vcs: git
- name: github.com/NYTimes/gziphandler
  version: 47ca22a0aeea4c9ceddfb935d818d636d934c312
- name: github.com/ogier/pflag
  version: 45c278ab3607870051a2ea9040bb85fcb8557481
- name: github.com/opencontainers/go-digest

@@ -520,7 +520,7 @@ imports:

- name: github.com/VividCortex/gohistogram
  version: 51564d9861991fb0ad0f531c99ef602d0f9866e6
- name: github.com/vulcand/oxy
  version: 812cebb8c764f2a78cb806267648b8728b4599ad
  repo: https://github.com/containous/oxy.git
  vcs: git
  subpackages:
@@ -14,7 +14,7 @@ import:

- package: github.com/containous/traefik-extra-service-fabric
  version: v1.0.5
- package: github.com/vulcand/oxy
  version: 812cebb8c764f2a78cb806267648b8728b4599ad
  repo: https://github.com/containous/oxy.git
  vcs: git
  subpackages:
@@ -22,7 +22,7 @@ responseHeaderTimeout = "300ms"

  [backends.backend1.servers.server1]
    # Non-routable IP address that should always deliver a dial timeout.
    # See: https://stackoverflow.com/questions/100841/artificially-create-a-connection-timeout-error#answer-904609
    url = "http://50.255.255.1"
  [backends.backend2]
    [backends.backend2.servers.server2]
      url = "http://{{.TimeoutEndpoint}}:9000"
@@ -52,18 +52,18 @@ func NewErrorPagesHandler(errorPage *types.ErrorPage, backendURL string) (*Error

}

func (ep *ErrorPagesHandler) ServeHTTP(w http.ResponseWriter, req *http.Request, next http.HandlerFunc) {
	recorder := newRetryResponseRecorder(w)

	next.ServeHTTP(recorder, req)

	w.WriteHeader(recorder.GetCode())
	// check the recorder code against the configured http status code ranges
	for _, block := range ep.HTTPCodeRanges {
		if recorder.GetCode() >= block[0] && recorder.GetCode() <= block[1] {
			log.Errorf("Caught HTTP Status Code %d, returning error page", recorder.GetCode())
			finalURL := strings.Replace(ep.BackendURL, "{status}", strconv.Itoa(recorder.GetCode()), -1)
			if newReq, err := http.NewRequest(http.MethodGet, finalURL, nil); err != nil {
				w.Write([]byte(http.StatusText(recorder.GetCode())))
			} else {
				ep.errorPageForwarder.ServeHTTP(w, newReq)
			}

@@ -73,5 +73,5 @@ func (ep *ErrorPagesHandler) ServeHTTP(w http.ResponseWriter, req *http.Request,

	// did not catch a configured status code so proceed with the request
	utils.CopyHeaders(w.Header(), recorder.Header())
	w.Write(recorder.GetBody().Bytes())
}
@@ -14,7 +14,7 @@ import (

// Compile time validation responseRecorder implements http interfaces correctly.
var (
	_ Stateful = &retryResponseRecorderWithCloseNotify{}
)

// Retry is a middleware that retries requests

@@ -48,21 +48,21 @@ func (retry *Retry) ServeHTTP(rw http.ResponseWriter, r *http.Request) {

		// when proxying the HTTP requests to the backends. This happens in the custom RecordingErrorHandler.
		newCtx := context.WithValue(r.Context(), defaultNetErrCtxKey, &netErrorOccurred)

		recorder := newRetryResponseRecorder(rw)

		retry.next.ServeHTTP(recorder, r.WithContext(newCtx))

		// It's a stream request and the body was already sent to the client.
		// Therefore we should not send the response a second time.
		if recorder.IsStreamingResponseStarted() {
			recorder.Flush()
			break
		}

		if !netErrorOccurred || attempts >= retry.attempts {
			utils.CopyHeaders(rw.Header(), recorder.Header())
			rw.WriteHeader(recorder.GetCode())
			rw.Write(recorder.GetBody().Bytes())
			break
		}
		attempts++

@@ -114,9 +114,31 @@ func (l RetryListeners) Retried(req *http.Request, attempt int) {

	}
}

type retryResponseRecorder interface {
	http.ResponseWriter
	http.Flusher
	GetCode() int
	GetBody() *bytes.Buffer
	IsStreamingResponseStarted() bool
}

// newRetryResponseRecorder returns an initialized retryResponseRecorder.
func newRetryResponseRecorder(rw http.ResponseWriter) retryResponseRecorder {
	recorder := &retryResponseRecorderWithoutCloseNotify{
		HeaderMap:      make(http.Header),
		Body:           new(bytes.Buffer),
		Code:           http.StatusOK,
		responseWriter: rw,
	}
	if _, ok := rw.(http.CloseNotifier); ok {
		return &retryResponseRecorderWithCloseNotify{recorder}
	}
	return recorder
}

// retryResponseRecorderWithoutCloseNotify is an implementation of http.ResponseWriter that
// records its mutations for later inspection.
type retryResponseRecorderWithoutCloseNotify struct {
	Code      int           // the HTTP response code from WriteHeader
	HeaderMap http.Header   // the HTTP response headers
	Body      *bytes.Buffer // if non-nil, the bytes.Buffer to append written data to

@@ -126,17 +148,19 @@ type retryResponseRecorder struct {

	streamingResponseStarted bool
|
streamingResponseStarted bool
|
||||||
}
|
}
|
||||||
|
|
||||||
// newRetryResponseRecorder returns an initialized retryResponseRecorder.
|
type retryResponseRecorderWithCloseNotify struct {
|
||||||
func newRetryResponseRecorder() *retryResponseRecorder {
|
*retryResponseRecorderWithoutCloseNotify
|
||||||
return &retryResponseRecorder{
|
}
|
||||||
HeaderMap: make(http.Header),
|
|
||||||
Body: new(bytes.Buffer),
|
// CloseNotify returns a channel that receives at most a
|
||||||
Code: http.StatusOK,
|
// single value (true) when the client connection has gone
|
||||||
}
|
// away.
|
||||||
|
func (rw *retryResponseRecorderWithCloseNotify) CloseNotify() <-chan bool {
|
||||||
|
return rw.responseWriter.(http.CloseNotifier).CloseNotify()
|
||||||
}
|
}
|
||||||
|
|
||||||
// Header returns the response headers.
|
// Header returns the response headers.
|
||||||
func (rw *retryResponseRecorder) Header() http.Header {
|
func (rw *retryResponseRecorderWithoutCloseNotify) Header() http.Header {
|
||||||
m := rw.HeaderMap
|
m := rw.HeaderMap
|
||||||
if m == nil {
|
if m == nil {
|
||||||
m = make(http.Header)
|
m = make(http.Header)
|
||||||
|
@ -145,8 +169,20 @@ func (rw *retryResponseRecorder) Header() http.Header {
|
||||||
return m
|
return m
|
||||||
}
|
}
|
||||||
|
|
||||||
|
func (rw *retryResponseRecorderWithoutCloseNotify) GetCode() int {
|
||||||
|
return rw.Code
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rw *retryResponseRecorderWithoutCloseNotify) GetBody() *bytes.Buffer {
|
||||||
|
return rw.Body
|
||||||
|
}
|
||||||
|
|
||||||
|
func (rw *retryResponseRecorderWithoutCloseNotify) IsStreamingResponseStarted() bool {
|
||||||
|
return rw.streamingResponseStarted
|
||||||
|
}
|
||||||
|
|
||||||
// Write always succeeds and writes to rw.Body, if not nil.
|
// Write always succeeds and writes to rw.Body, if not nil.
|
||||||
func (rw *retryResponseRecorder) Write(buf []byte) (int, error) {
|
func (rw *retryResponseRecorderWithoutCloseNotify) Write(buf []byte) (int, error) {
|
||||||
if rw.err != nil {
|
if rw.err != nil {
|
||||||
return 0, rw.err
|
return 0, rw.err
|
||||||
}
|
}
|
||||||
|
@ -154,24 +190,17 @@ func (rw *retryResponseRecorder) Write(buf []byte) (int, error) {
|
||||||
}
|
}
|
||||||
|
|
||||||
// WriteHeader sets rw.Code.
|
// WriteHeader sets rw.Code.
|
||||||
func (rw *retryResponseRecorder) WriteHeader(code int) {
|
func (rw *retryResponseRecorderWithoutCloseNotify) WriteHeader(code int) {
|
||||||
rw.Code = code
|
rw.Code = code
|
||||||
}
|
}
|
||||||
|
|
||||||
// Hijack hijacks the connection
|
// Hijack hijacks the connection
|
||||||
func (rw *retryResponseRecorder) Hijack() (net.Conn, *bufio.ReadWriter, error) {
|
func (rw *retryResponseRecorderWithoutCloseNotify) Hijack() (net.Conn, *bufio.ReadWriter, error) {
|
||||||
return rw.responseWriter.(http.Hijacker).Hijack()
|
return rw.responseWriter.(http.Hijacker).Hijack()
|
||||||
}
|
}
|
||||||
|
|
||||||
// CloseNotify returns a channel that receives at most a
|
|
||||||
// single value (true) when the client connection has gone
|
|
||||||
// away.
|
|
||||||
func (rw *retryResponseRecorder) CloseNotify() <-chan bool {
|
|
||||||
return rw.responseWriter.(http.CloseNotifier).CloseNotify()
|
|
||||||
}
|
|
||||||
|
|
||||||
// Flush sends any buffered data to the client.
|
// Flush sends any buffered data to the client.
|
||||||
func (rw *retryResponseRecorder) Flush() {
|
func (rw *retryResponseRecorderWithoutCloseNotify) Flush() {
|
||||||
if !rw.streamingResponseStarted {
|
if !rw.streamingResponseStarted {
|
||||||
utils.CopyHeaders(rw.responseWriter.Header(), rw.Header())
|
utils.CopyHeaders(rw.responseWriter.Header(), rw.Header())
|
||||||
rw.responseWriter.WriteHeader(rw.Code)
|
rw.responseWriter.WriteHeader(rw.Code)
|
||||||
|
|
|
@@ -7,6 +7,8 @@ import (
 	"net/http"
 	"net/http/httptest"
 	"testing"
+
+	"github.com/stretchr/testify/assert"
 )
 
 func TestRetry(t *testing.T) {
@@ -134,3 +136,69 @@ type countingRetryListener struct {
 func (l *countingRetryListener) Retried(req *http.Request, attempt int) {
 	l.timesCalled++
 }
+
+func TestRetryWithFlush(t *testing.T) {
+	next := http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
+		rw.WriteHeader(200)
+		rw.Write([]byte("FULL "))
+		rw.(http.Flusher).Flush()
+		rw.Write([]byte("DATA"))
+	})
+
+	retry := NewRetry(1, next, &countingRetryListener{})
+	responseRecorder := httptest.NewRecorder()
+
+	retry.ServeHTTP(responseRecorder, &http.Request{})
+
+	if responseRecorder.Body.String() != "FULL DATA" {
+		t.Errorf("Wrong body %q want %q", responseRecorder.Body.String(), "FULL DATA")
+	}
+}
+
+func TestNewRetryResponseRecorder(t *testing.T) {
+	testCases := []struct {
+		desc     string
+		rw       http.ResponseWriter
+		expected http.ResponseWriter
+	}{
+		{
+			desc:     "Without Close Notify",
+			rw:       httptest.NewRecorder(),
+			expected: &retryResponseRecorderWithoutCloseNotify{},
+		},
+		{
+			desc:     "With Close Notify",
+			rw:       &mockRWCloseNotify{},
+			expected: &retryResponseRecorderWithCloseNotify{},
+		},
+	}
+
+	for _, test := range testCases {
+		test := test
+		t.Run(test.desc, func(t *testing.T) {
+			t.Parallel()
+
+			rec := newRetryResponseRecorder(test.rw)
+
+			assert.IsType(t, rec, test.expected)
+		})
+	}
+}
+
+type mockRWCloseNotify struct{}
+
+func (m *mockRWCloseNotify) CloseNotify() <-chan bool {
+	panic("implement me")
+}
+
+func (m *mockRWCloseNotify) Header() http.Header {
+	panic("implement me")
+}
+
+func (m *mockRWCloseNotify) Write([]byte) (int, error) {
+	panic("implement me")
+}
+
+func (m *mockRWCloseNotify) WriteHeader(int) {
+	panic("implement me")
+}
@@ -98,5 +98,6 @@ pages:
 - 'Key-value Store Configuration': 'user-guide/kv-config.md'
 - 'Clustering/HA': 'user-guide/cluster.md'
 - 'gRPC Example': 'user-guide/grpc.md'
+- 'Traefik cluster example with Swarm': 'user-guide/cluster-docker-consul.md'
 - Benchmarks: benchmarks.md
 - 'Archive': 'archive.md'

vendor/github.com/NYTimes/gziphandler/gzip.go (generated, vendored — 17 changed lines)

@@ -84,6 +84,14 @@ type GzipResponseWriter struct {
 	contentTypes []string // Only compress if the response is one of these content-types. All are accepted if empty.
 }
 
+type GzipResponseWriterWithCloseNotify struct {
+	*GzipResponseWriter
+}
+
+func (w *GzipResponseWriterWithCloseNotify) CloseNotify() <-chan bool {
+	return w.ResponseWriter.(http.CloseNotifier).CloseNotify()
+}
+
 // Write appends data to the gzip writer.
 func (w *GzipResponseWriter) Write(b []byte) (int, error) {
 	// If content type is not set.
@@ -264,7 +272,6 @@ func GzipHandlerWithOpts(opts ...option) (func(http.Handler) http.Handler, error
 
 	return http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
 		w.Header().Add(vary, acceptEncoding)
-
 		if acceptsGzip(r) {
 			gw := &GzipResponseWriter{
 				ResponseWriter: w,
@@ -274,7 +281,13 @@ func GzipHandlerWithOpts(opts ...option) (func(http.Handler) http.Handler, error
 			}
 			defer gw.Close()
 
-			h.ServeHTTP(gw, r)
+			if _, ok := w.(http.CloseNotifier); ok {
+				gwcn := GzipResponseWriterWithCloseNotify{gw}
+				h.ServeHTTP(gwcn, r)
+			} else {
+				h.ServeHTTP(gw, r)
+			}
+
 		} else {
 			h.ServeHTTP(w, r)
 		}

vendor/github.com/vulcand/oxy/forward/fwd.go (generated, vendored — 37 changed lines)

@@ -5,7 +5,6 @@ package forward
 
 import (
 	"crypto/tls"
-	"io"
 	"net/http"
 	"net/http/httptest"
 	"net/http/httputil"
@@ -214,7 +213,7 @@ func (f *Forwarder) ServeHTTP(w http.ResponseWriter, req *http.Request) {
 	if f.log.Level >= log.DebugLevel {
 		logEntry := f.log.WithField("Request", utils.DumpHttpRequest(req))
 		logEntry.Debug("vulcand/oxy/forward: begin ServeHttp on request")
-		defer logEntry.Debug("vulcand/oxy/forward: competed ServeHttp on request")
+		defer logEntry.Debug("vulcand/oxy/forward: completed ServeHttp on request")
 	}
 
 	if f.stateListener != nil {
@@ -333,27 +332,27 @@ func (f *httpForwarder) serveWebSocket(w http.ResponseWriter, req *http.Request,
 	defer targetConn.Close()
 
 	errc := make(chan error, 2)
-	replicate := func(dst io.Writer, src io.Reader, dstName string, srcName string) {
-		_, errCopy := io.Copy(dst, src)
-		if errCopy != nil {
-			f.log.Errorf("vulcand/oxy/forward/websocket: Error when copying from %s to %s using io.Copy: %v", srcName, dstName, errCopy)
-		} else {
-			f.log.Infof("vulcand/oxy/forward/websocket: Copying from %s to %s using io.Copy completed without error.", srcName, dstName)
+	replicateWebsocketConn := func(dst, src *websocket.Conn, dstName, srcName string) {
+		var err error
+		for {
+			msgType, msg, err := src.ReadMessage()
+			if err != nil {
+				f.log.Errorf("vulcand/oxy/forward/websocket: Error when copying from %s to %s using ReadMessage: %v", srcName, dstName, err)
+				break
+			}
+			err = dst.WriteMessage(msgType, msg)
+			if err != nil {
+				f.log.Errorf("vulcand/oxy/forward/websocket: Error when copying from %s to %s using WriteMessage: %v", srcName, dstName, err)
+				break
+			}
 		}
-		errc <- errCopy
+		errc <- err
 	}
 
-	go replicate(targetConn.UnderlyingConn(), underlyingConn.UnderlyingConn(), "backend", "client")
-
-	// Try to read the first message
-	msgType, msg, err := targetConn.ReadMessage()
-	if err != nil {
-		log.Errorf("vulcand/oxy/forward/websocket: Couldn't read first message : %v", err)
-	} else {
-		underlyingConn.WriteMessage(msgType, msg)
-	}
-
-	go replicate(underlyingConn.UnderlyingConn(), targetConn.UnderlyingConn(), "client", "backend")
+	go replicateWebsocketConn(underlyingConn, targetConn, "client", "backend")
+	go replicateWebsocketConn(targetConn, underlyingConn, "backend", "client")
+
 	<-errc
 }