Merge v1.4.0-rc2 into master

Commit 9fba37b409: 64 changed files with 829 additions and 289 deletions
.github/ISSUE_TEMPLATE.md (vendored): 2 lines changed

@@ -23,7 +23,7 @@ If you intend to ask a support question: DO NOT FILE AN ISSUE.
 HOW TO WRITE A GOOD ISSUE?

 - Respect the issue template as much as possible.
 - If it's possible, use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
 - The title must be short and descriptive.
 - Explain the conditions which led you to write this issue: the context.
 - The context should lead to something, an idea or a problem that you’re facing.
CHANGELOG.md: 35 lines changed

@@ -1,5 +1,34 @@
 # Change Log

+## [v1.4.0-rc2](https://github.com/containous/traefik/tree/v1.4.0-rc2) (2017-09-08)
+[All Commits](https://github.com/containous/traefik/compare/v1.4.0-rc1...v1.4.0-rc2)
+
+**Enhancements:**
+- **[authentication,consul]** Add Basic auth for consul catalog ([#2027](https://github.com/containous/traefik/pull/2027) by [mmatur](https://github.com/mmatur))
+- **[authentication,ecs]** Add basic auth for ecs ([#2026](https://github.com/containous/traefik/pull/2026) by [mmatur](https://github.com/mmatur))
+- **[logs]** Send traefik logs to stdout instead stderr ([#2054](https://github.com/containous/traefik/pull/2054) by [marco-jantke](https://github.com/marco-jantke))
+- **[websocket]** Add test for SSL TERMINATION in Websocket IT ([#2063](https://github.com/containous/traefik/pull/2063) by [Juliens](https://github.com/Juliens))
+
+**Bug fixes:**
+- **[consul]** Fix consul catalog refresh problems ([#2089](https://github.com/containous/traefik/pull/2089) by [Juliens](https://github.com/Juliens))
+- **[logs,middleware]** Access log default values ([#2061](https://github.com/containous/traefik/pull/2061) by [ldez](https://github.com/ldez))
+- **[metrics]** prometheus, HTTP method and utf8 ([#2081](https://github.com/containous/traefik/pull/2081) by [ldez](https://github.com/ldez))
+- **[rancher]** fix rancher api environment get ([#2053](https://github.com/containous/traefik/pull/2053) by [SantoDE](https://github.com/SantoDE))
+- **[websocket]** RawPath and Transfer TLSConfig in websocket ([#2088](https://github.com/containous/traefik/pull/2088) by [Juliens](https://github.com/Juliens))
+- Fix error in prepareServer ([#2076](https://github.com/containous/traefik/pull/2076) by [emilevauge](https://github.com/emilevauge))
+
+**Documentation:**
+- **[acme,provider]** Fix whitespaces ([#2075](https://github.com/containous/traefik/pull/2075) by [chulkilee](https://github.com/chulkilee))
+- **[ecs]** Fix IAM policy sid. ([#2066](https://github.com/containous/traefik/pull/2066) by [charlieoleary](https://github.com/charlieoleary))
+- **[k8s]** Fix invalid service yaml example ([#2059](https://github.com/containous/traefik/pull/2059) by [kairen](https://github.com/kairen))
+- **[mesos]** fix: documentation Mesos. ([#2029](https://github.com/containous/traefik/pull/2029) by [ldez](https://github.com/ldez))
+- Update cluster.md ([#2073](https://github.com/containous/traefik/pull/2073) by [kmbremner](https://github.com/kmbremner))
+- Enhance documentation. ([#2048](https://github.com/containous/traefik/pull/2048) by [ldez](https://github.com/ldez))
+- doc: add notes on server urls with path ([#2045](https://github.com/containous/traefik/pull/2045) by [chulkilee](https://github.com/chulkilee))
+- Enhance security headers doc. ([#2042](https://github.com/containous/traefik/pull/2042) by [ldez](https://github.com/ldez))
+- HTTPS for images, video and links in docs. ([#2041](https://github.com/containous/traefik/pull/2041) by [ldez](https://github.com/ldez))
+- Fix error pages configuration. ([#2038](https://github.com/containous/traefik/pull/2038) by [ldez](https://github.com/ldez))
+
 ## [v1.4.0-rc1](https://github.com/containous/traefik/tree/v1.4.0-rc1) (2017-08-28)
 [All Commits](https://github.com/containous/traefik/compare/v1.3.0-rc1...v1.4.0-rc1)
@@ -143,6 +172,12 @@
 - Merge current v1.3 to master ([#1643](https://github.com/containous/traefik/pull/1643) by [ldez](https://github.com/ldez))
 - Merge v1.3.0-rc2 master ([#1613](https://github.com/containous/traefik/pull/1613) by [emilevauge](https://github.com/emilevauge))

+## [v1.3.8](https://github.com/containous/traefik/tree/v1.3.8) (2017-09-07)
+[All Commits](https://github.com/containous/traefik/compare/v1.3.7...v1.3.8)
+
+**Bug fixes:**
+- **[middleware]** Compress and Websocket ([#2079](https://github.com/containous/traefik/pull/2079) by [ldez](https://github.com/ldez))
+
 ## [v1.3.7](https://github.com/containous/traefik/tree/v1.3.7) (2017-08-25)
 [All Commits](https://github.com/containous/traefik/compare/v1.3.6...v1.3.7)
@ -52,7 +52,7 @@ GOHOSTOS="linux"
|
||||||
GOOS="linux"
|
GOOS="linux"
|
||||||
GOPATH="/home/<yourusername>/go"
|
GOPATH="/home/<yourusername>/go"
|
||||||
GORACE=""
|
GORACE=""
|
||||||
## more go env's will be listed
|
## more go env's will be listed
|
||||||
```
|
```
|
||||||
|
|
||||||
##### Build Træfik
|
##### Build Træfik
|
||||||
|
@ -63,7 +63,7 @@ Once your environment is set up and the Træfik repository cloned you can build
|
||||||
cd ~/go/src/github.com/containous/traefik
|
cd ~/go/src/github.com/containous/traefik
|
||||||
|
|
||||||
# Get go-bindata. Please note, the ellipses are required
|
# Get go-bindata. Please note, the ellipses are required
|
||||||
go get github.com/jteeuwen/go-bindata/...
|
go get github.com/jteeuwen/go-bindata/...
|
||||||
|
|
||||||
# Start build
|
# Start build
|
||||||
go generate
|
go generate
|
||||||
|
@ -73,7 +73,7 @@ go build ./cmd/traefik
|
||||||
# run other commands like tests
|
# run other commands like tests
|
||||||
```
|
```
|
||||||
|
|
||||||
You will find the Træfik executable in the `~/go/src/github.com/containous/traefik` folder as `traefik`.
|
You will find the Træfik executable in the `~/go/src/github.com/containous/traefik` folder as `traefik`.
|
||||||
|
|
||||||
### Setting up `glide` and `glide-vc` for dependency management
|
### Setting up `glide` and `glide-vc` for dependency management
|
||||||
|
|
||||||
|
@ -180,7 +180,7 @@ INFO - Cleaning site directory
|
||||||
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
|
Please keep in mind that the GitHub issue tracker is not intended as a general support forum, but for reporting bugs and feature requests.
|
||||||
|
|
||||||
For end-user related support questions, refer to one of the following:
|
For end-user related support questions, refer to one of the following:
|
||||||
- the Traefik community Slack channel: [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
|
- the Traefik community Slack channel: [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
|
||||||
- [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)
|
- [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)
|
||||||
|
|
||||||
### Title
|
### Title
|
||||||
|
@ -190,7 +190,7 @@ The title must be short and descriptive. (~60 characters)
|
||||||
### Description
|
### Description
|
||||||
|
|
||||||
- Respect the issue template as much as possible. [template](.github/ISSUE_TEMPLATE.md)
|
- Respect the issue template as much as possible. [template](.github/ISSUE_TEMPLATE.md)
|
||||||
- If it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
|
- If it's possible use the command `traefik bug`. See https://www.youtube.com/watch?v=Lyz62L8m93I.
|
||||||
- Explain the conditions which led you to write this issue: the context.
|
- Explain the conditions which led you to write this issue: the context.
|
||||||
- The context should lead to something, an idea or a problem that you’re facing.
|
- The context should lead to something, an idea or a problem that you’re facing.
|
||||||
- Remain clear and concise.
|
- Remain clear and concise.
|
||||||
|
|
|
@@ -37,7 +37,7 @@ If you want to preserve commits you must add `bot/merge-method-rebase` before `s

 The status `status/4-merge-in-progress` is only for the bot.

 If the bot is not able to perform the merge, the label `bot/need-human-merge` is added.
 In this case you must solve conflicts/CI/... and afterwards you only need to remove `bot/need-human-merge`.

Makefile: 2 lines changed

@@ -11,7 +11,7 @@ TRAEFIK_ENVS := \
 -e CI \
 -e CONTAINER=DOCKER # Indicator for integration tests that we are running inside a container.

-SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/' | grep -v '^integration/vendor/')
+SRCS = $(shell git ls-files '*.go' | grep -v '^vendor/')

 BIND_DIR := "dist"
 TRAEFIK_MOUNT := -v "$(CURDIR)/$(BIND_DIR):/go/src/github.com/containous/traefik/$(BIND_DIR)"
@@ -92,7 +92,7 @@ Run it and forget it!
 You can have a quick look at Træfik in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers. If you are looking for a more comprehensive and real use-case example, you can also check [Play-With-Docker](http://training.play-with-docker.com/traefik-load-balancing/) to see how to load balance between multiple nodes.

 Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at [GopherCon 2017](https://gophercon.com/).
 You will learn Træfik basics in less than 10 minutes.

 [![Traefik GopherCon 2017](https://img.youtube.com/vi/RgudiksfL-k/0.jpg)](https://www.youtube.com/watch?v=RgudiksfL-k)

@@ -134,13 +134,13 @@ git clone https://github.com/containous/traefik
 ## Documentation

 You can find the complete documentation at [https://docs.traefik.io](https://docs.traefik.io).
 A collection of contributions around Træfik can be found at [https://awesome.traefik.io](https://awesome.traefik.io).


 ## Support

 To get basic support, you can:
 - join the Træfik community Slack channel: [![Join the chat at https://traefik.herokuapp.com](https://img.shields.io/badge/style-register-green.svg?style=social&label=Slack)](https://traefik.herokuapp.com)
 - use [Stack Overflow](https://stackoverflow.com/questions/tagged/traefik) (using the `traefik` tag)

 If you prefer commercial support, please contact [containo.us](https://containo.us) by mail: <mailto:support@containo.us>.
@@ -6,22 +6,22 @@
 #
 # Usage - dumpcerts.sh /etc/traefik/acme.json /etc/ssl/
 #
 # Dependencies -
 #   util-linux
 #   openssl
 #   jq
 # The MIT License (MIT)
 #
 # Permission is hereby granted, free of charge, to any person obtaining a copy
 # of this software and associated documentation files (the "Software"), to deal
 # in the Software without restriction, including without limitation the rights
 # to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
 # copies of the Software, and to permit persons to whom the Software is
 # furnished to do so, subject to the following conditions:
 #
 # The above copyright notice and this permission notice shall be included in
 # all copies or substantial portions of the Software.
 #
 # THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
 # IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
 # FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
@@ -78,7 +78,7 @@ We need to read this file to explode the JSON bundle... exiting.

 ${USAGE}" >&2
 exit 2
 fi

 if [ ! -d "${certdir}" ]; then
@@ -88,7 +88,7 @@ We need a directory in which to explode the JSON bundle... exiting.

 ${USAGE}" >&2
 exit 4
 fi

 jq=$(command -v jq) || exit_jq

@@ -126,7 +126,7 @@ trap 'umask ${oldumask}' EXIT
 #
 # openssl:
 # echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----" \
 #   | openssl rsa -inform pem -out "${pdir}/letsencrypt.key"
 #
 # and sed:
 # echo "-----BEGIN RSA PRIVATE KEY-----" > "${pdir}/letsencrypt.key"
@@ -142,7 +142,7 @@ echo -e "-----BEGIN RSA PRIVATE KEY-----\n${priv}\n-----END RSA PRIVATE KEY-----

 # Process the certificates for each of the domains in acme.json
 for domain in $(jq -r '.DomainsCertificate.Certs[].Certificate.Domain' acme.json); do
 # Traefik stores a cert bundle for each domain. Within this cert
 # bundle there is both the proper certificate and the Let's Encrypt CA
 echo "Extracting cert bundle for ${domain}"
 cert=$(jq -e -r --arg domain "$domain" '.DomainsCertificate.Certs[].Certificate |
@@ -355,7 +355,7 @@ requests periodically carried out by Traefik. The check is defined by a path
 appended to the backend URL and an interval (given in a format understood by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration)) specifying how
 often the health check should be executed (the default being 30 seconds).
 Each backend must respond to the health check within 5 seconds.
 By default, the port of the backend server is used, however, this may be overridden.

 A recovering backend returning 200 OK responses again is returned to the
 LB rotation pool.
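As an aside to the health-check documentation touched above, here is a minimal sketch of the backend block it describes. The `path`, `interval` and `port` keys follow the options named in the surrounding text; the concrete values are assumptions for illustration only, not part of the diff.

```toml
[backends]
  [backends.backend1]
    # Probe /health on port 3000 every 10s instead of the backend port and 30s default.
    [backends.backend1.healthcheck]
      path = "/health"
      port = 3000
      interval = "10s"
```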
@@ -381,7 +381,10 @@ To use a different port for the healthcheck:

 ### Servers

-Servers are simply defined using a `URL`. You can also apply a custom `weight` to each server (this will be used by load-balancing).
+Servers are simply defined using a `url`. You can also apply a custom `weight` to each server (this will be used by load-balancing).
+
+!!! note
+    Paths in `url` are ignored. Use `Modifier` to specify paths instead.

 Here is an example of backends and servers definition:
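The example referenced in the last context line is not part of this hunk. Purely as an illustration of the `url`/`weight` keys discussed above, a definition could look like the following sketch; server names, addresses and weights are assumptions.

```toml
[backends]
  [backends.backend1]
    # Two servers; the higher weight receives proportionally more requests.
    [backends.backend1.servers.server1]
    url = "http://172.17.0.2:80"
    weight = 10
    [backends.backend1.servers.server2]
    url = "http://172.17.0.3:80"
    weight = 1
```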
@@ -497,9 +500,9 @@ Usage:
 traefik [command] [--flag=flag_argument]
 ```

 List of available Træfik commands with descriptions:

 - `version` : Print version
 - `storeconfig` : Store the static traefik configuration into a Key-value store. Please refer to the [Store Træfik configuration](/user-guide/kv-config/#store-trfk-configuration) section to get documentation on it.
 - `bug`: The easiest way to submit a pre-filled issue.
 - `healthcheck`: Calls traefik `/ping` to check health.
@@ -65,8 +65,8 @@ http {
 keepalive_requests 10000;
 types_hash_max_size 2048;

 open_file_cache max=200000 inactive=300s;
 open_file_cache_valid 300s;
 open_file_cache_min_uses 2;
 open_file_cache_errors on;

@ -33,4 +33,4 @@ prefix = "/traefik"
|
||||||
# Optional
|
# Optional
|
||||||
#
|
#
|
||||||
filename = "boltdb.tmpl"
|
filename = "boltdb.tmpl"
|
||||||
```
|
```
|
||||||
|
|
|
@@ -113,4 +113,5 @@ Additional settings can be defined using Consul Catalog tags:
 | `traefik.frontend.rule=Host:test.traefik.io` | Override the default frontend rule (Default: `Host:{{.ServiceName}}.{{.Domain}}`).       |
 | `traefik.frontend.passHostHeader=true`       | Forward client `Host` header to the backend.                                             |
 | `traefik.frontend.priority=10`               | Override default frontend priority                                                       |
 | `traefik.frontend.entryPoints=http,https`    | Assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
+| `traefik.frontend.auth.basic=EXPR`           | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash`         |
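For orientation, a minimal sketch of the Consul Catalog provider block these tags apply to; the endpoint, domain and exposedByDefault values mirror the flags used by the integration tests further down in this diff and are otherwise assumptions.

```toml
# Enable the Consul Catalog provider (illustrative values only).
[consulCatalog]
endpoint = "127.0.0.1:8500"
domain = "consul.localhost"
exposedByDefault = true
```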
@@ -168,4 +168,4 @@ exposedbydefault = false
 !!! warning
     when running inside a container, Træfik will need network access through:

     `docker network connect <network> <traefik-container>`
@ -53,7 +53,7 @@ SecretAccessKey = "123"
|
||||||
Endpoint = "http://localhost:8080"
|
Endpoint = "http://localhost:8080"
|
||||||
```
|
```
|
||||||
|
|
||||||
Items in the `dynamodb` table must have three attributes:
|
Items in the `dynamodb` table must have three attributes:
|
||||||
|
|
||||||
- `id` (string): The id is the primary key.
|
- `id` (string): The id is the primary key.
|
||||||
- `name`(string): The name is used as the name of the frontend or backend.
|
- `name`(string): The name is used as the name of the frontend or backend.
|
||||||
|
|
|
@ -78,17 +78,18 @@ SecretAccessKey = "123"
|
||||||
|
|
||||||
Labels can be used on task containers to override default behaviour:
|
Labels can be used on task containers to override default behaviour:
|
||||||
|
|
||||||
| Label | Description |
|
| Label | Description |
|
||||||
|----------------------------------------------|------------------------------------------------------------------------------------------|
|
|---------------------------------------------------|------------------------------------------------------------------------------------------|
|
||||||
| `traefik.protocol=https` | override the default `http` protocol |
|
| `traefik.protocol=https` | override the default `http` protocol |
|
||||||
| `traefik.weight=10` | assign this weight to the container |
|
| `traefik.weight=10` | assign this weight to the container |
|
||||||
| `traefik.enable=false` | disable this container in Træfik |
|
| `traefik.enable=false` | disable this container in Træfik |
|
||||||
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
|
| `traefik.backend.loadbalancer.method=drr` | override the default `wrr` load balancer algorithm |
|
||||||
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions |
|
| `traefik.backend.loadbalancer.sticky=true` | enable backend sticky sessions |
|
||||||
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
| `traefik.frontend.rule=Host:test.traefik.io` | override the default frontend rule (Default: `Host:{containerName}.{domain}`). |
|
||||||
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
|
| `traefik.frontend.passHostHeader=true` | forward client `Host` header to the backend. |
|
||||||
| `traefik.frontend.priority=10` | override default frontend priority |
|
| `traefik.frontend.priority=10` | override default frontend priority |
|
||||||
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
| `traefik.frontend.entryPoints=http,https` | assign this frontend to entry points `http` and `https`. Overrides `defaultEntryPoints`. |
|
||||||
|
| `traefik.frontend.auth.basic=EXPR` | Sets basic authentication for that frontend in CSV format: `User:Hash,User:Hash` |
|
||||||
|
|
||||||
If `AccessKeyID`/`SecretAccessKey` is not given credentials will be resolved in the following order:
|
If `AccessKeyID`/`SecretAccessKey` is not given credentials will be resolved in the following order:
|
||||||
|
|
||||||
|
@@ -103,7 +104,7 @@ Træfik needs the following policy to read ECS information:
   "Version": "2012-10-17",
   "Statement": [
     {
-      "Sid": "Traefik ECS read access",
+      "Sid": "TraefikECSReadAccess",
       "Effect": "Allow",
       "Action": [
         "ecs:ListClusters",
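As a point of reference for the labels and policy above, a minimal sketch of the ECS provider credentials block; only `AccessKeyID` and `SecretAccessKey` appear in the surrounding hunks, and both values here are placeholders.

```toml
# ECS provider credentials (placeholder values; other provider keys omitted).
[ecs]
AccessKeyID = "abc"
SecretAccessKey = "123"
```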
@ -148,7 +148,7 @@ filename = "rules.toml"
|
||||||
## Multiple .toml Files
|
## Multiple .toml Files
|
||||||
|
|
||||||
You could have multiple `.toml` files in a directory:
|
You could have multiple `.toml` files in a directory:
|
||||||
|
|
||||||
```toml
|
```toml
|
||||||
[file]
|
[file]
|
||||||
directory = "/path/to/config/"
|
directory = "/path/to/config/"
|
||||||
|
|
|
@@ -101,7 +101,7 @@ secretKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

 !!! note
     If Traefik needs access to the Rancher API, you need to set the `endpoint`, `accesskey` and `secretkey` parameters.

     To enable traefik to fetch information only about the Environment it's deployed in, you need to create an `Environment API Key`.
     This can be found within the API Key advanced options.
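To anchor the note above, a minimal sketch of the Rancher provider block it refers to; the key names follow the parameters named in the note and the hunk header, and the values are placeholders. Key casing is an assumption.

```toml
# Rancher provider with API access (placeholder values).
[rancher]
endpoint = "http://rancher-server.example.com/v1"
accessKey = "xxxxxxxxxx"
secretKey = "xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"
```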
@@ -86,7 +86,7 @@ You can enable Traefik to export internal metrics to different monitoring system
 - DataDog

 ```toml
 # DataDog metrics exporter type
 [web.metrics.datadog]
 Address = "localhost:8125"
 Pushinterval = "10s"
@ -35,4 +35,4 @@ prefix = "traefik"
|
||||||
# filename = "zookeeper.tmpl"
|
# filename = "zookeeper.tmpl"
|
||||||
```
|
```
|
||||||
|
|
||||||
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on traefik KV structure.
|
Please refer to the [Key Value storage structure](/user-guide/kv-config/#key-value-storage-structure) section to get documentation on traefik KV structure.
|
||||||
|
|
|
@ -154,7 +154,7 @@ logLevel = "ERROR"
|
||||||
|
|
||||||
### Access Logs
|
### Access Logs
|
||||||
|
|
||||||
Access logs are written when `[accessLog]` is defined.
|
Access logs are written when `[accessLog]` is defined.
|
||||||
By default it will write to stdout and produce logs in the textual Common Log Format (CLF), extended with additional fields.
|
By default it will write to stdout and produce logs in the textual Common Log Format (CLF), extended with additional fields.
|
||||||
|
|
||||||
To enable access logs using the default settings just add the `[accessLog]` entry.
|
To enable access logs using the default settings just add the `[accessLog]` entry.
|
||||||
|
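A minimal sketch of what such an `[accessLog]` entry can look like; the bare entry follows the text above, while the `filePath` and `format` keys in the commented variant are assumptions beyond what this hunk shows.

```toml
# Default behaviour: CLF access logs on stdout.
[accessLog]

# Assumed variant with a file target and explicit format:
# [accessLog]
#   filePath = "/path/to/access.log"
#   format = "common"
```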
@@ -197,7 +197,7 @@ This allows the logs to be rotated and processed by an external program, such as

 Custom error pages can be returned, in lieu of the default, according to frontend-configured ranges of HTTP Status codes.
 In the example below, if a 503 status is returned from the frontend "website", the custom error page at http://2.3.4.5/503.html is returned with the actual status code set in the HTTP header.
 Note, the `503.html` page itself is not hosted on traefik, but some other infrastructure.

 ```toml
 [frontends]
@@ -275,13 +275,13 @@ The configured status code ranges are inclusive; that is, in the above example,
 # If zero, no timeout exists.
 # Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
 # values (digits). If no units are provided, the value is parsed assuming seconds.
 #
 # Optional
 # Default: "0s"
 #
 # readTimeout = "5s"

 # writeTimeout is the maximum duration before timing out writes of the response. It covers the time from the end of
 # the request header read to the end of the response write.
 # If zero, no timeout exists.
 # Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
@@ -289,7 +289,7 @@ The configured status code ranges are inclusive; that is, in the above example,
 #
 # Optional
 # Default: "0s"
 #
 # writeTimeout = "5s"

 # idleTimeout is the maximum duration an idle (keep-alive) connection will remain idle before closing itself.
@@ -310,30 +310,30 @@ The configured status code ranges are inclusive; that is, in the above example,
 ```toml
 [forwardingTimeouts]

 # dialTimeout is the amount of time to wait until a connection to a backend server can be established.
 # If zero, no timeout exists.
 # Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
 # values (digits). If no units are provided, the value is parsed assuming seconds.
 #
 # Optional
 # Default: "30s"
 #
 # dialTimeout = "30s"

 # responseHeaderTimeout is the amount of time to wait for a server's response headers after fully writing the request (including its body, if any).
 # If zero, no timeout exists.
 # Can be provided in a format supported by [time.ParseDuration](https://golang.org/pkg/time/#ParseDuration) or as raw
 # values (digits). If no units are provided, the value is parsed assuming seconds.
 #
 # Optional
 # Default: "0s"
 #
 # responseHeaderTimeout = "0s"
 ```

 ### Idle Timeout (deprecated)

 Use [respondingTimeouts](/configuration/commons/#responding-timeouts) instead of `IdleTimeout`.
 In the case both settings are configured, the deprecated option will be overwritten.

 `IdleTimeout` is the maximum amount of time an idle (keep-alive) connection will remain idle before closing itself.
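To make the deprecation note concrete, a small sketch of the replacement; the `idleTimeout` key under `[respondingTimeouts]` follows the option described in the preceding hunks, and the value is only an example.

```toml
# Preferred over the deprecated top-level IdleTimeout setting.
[respondingTimeouts]
idleTimeout = "180s"
```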
@@ -344,7 +344,7 @@ If no units are provided, the value is parsed assuming seconds.

 ```toml
 # IdleTimeout
 #
 # DEPRECATED - see [respondingTimeouts] section.
 #
 # Optional
@ -388,7 +388,7 @@ filename = "my_custom_config_template.tpml"
|
||||||
|
|
||||||
The template files can be written using functions provided by:
|
The template files can be written using functions provided by:
|
||||||
|
|
||||||
- [go template](https://golang.org/pkg/text/template/)
|
- [go template](https://golang.org/pkg/text/template/)
|
||||||
- [sprig library](https://masterminds.github.io/sprig/)
|
- [sprig library](https://masterminds.github.io/sprig/)
|
||||||
|
|
||||||
Example:
|
Example:
|
||||||
|
|
|
@@ -71,7 +71,7 @@ Run it and forget it!
 You can have a quick look at Træfik in this [Katacoda tutorial](https://www.katacoda.com/courses/traefik/deploy-load-balancer) that shows how to load balance requests between multiple Docker containers.

 Here is a talk given by [Emile Vauge](https://github.com/emilevauge) at [GopherCon 2017](https://gophercon.com).
 You will learn Træfik basics in less than 10 minutes.

 [![Traefik GopherCon 2017](https://img.youtube.com/vi/RgudiksfL-k/0.jpg)](https://www.youtube.com/watch?v=RgudiksfL-k)

@@ -1,6 +1,6 @@
 # Clustering / High Availability (beta)

-This guide explains how tu use Træfik in high availability mode.
+This guide explains how to use Træfik in high availability mode.
 In order to deploy and configure multiple Træfik instances, without copying the same configuration file on each instance, we will use a distributed Key-Value store.

 ## Prerequisites
@@ -18,4 +18,4 @@ Once your Træfik configuration is uploaded on your KV store, you can start each
 A Træfik cluster is based on a manager/worker model.
 When starting, Træfik will elect a manager.
 If this instance fails, another manager will be automatically elected.

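For context on the cluster guide touched above, a minimal sketch of pointing every instance at the same KV store; the `[consul]` keys mirror the integration fixture later in this diff, and the endpoint value is an assumption.

```toml
# Each Træfik instance reads its configuration from the same Consul KV prefix.
[consul]
endpoint = "127.0.0.1:8500"
watch = true
prefix = "traefik"
```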
@@ -36,7 +36,7 @@ consul:
     - "8301"
     - "8301/udp"
     - "8302"
     - "8302/udp"

 whoami1:
   image: emilevauge/whoami
@@ -303,11 +303,11 @@ Wait, I thought we added the sticky flag to `whoami1`? Traefik relies on a cook
 First you need to add `whoami1.traefik` to your hosts file:

 ```shell
 if [ -n "$(grep whoami1.traefik /etc/hosts)" ];
 then
   echo "whoami1.traefik already exists (make sure the ip is current)";
 else
   sudo -- sh -c -e "echo '$(docker-machine ip manager)\twhoami1.traefik' >> /etc/hosts";
 fi
 ```

@@ -56,4 +56,4 @@ services:
     - "mesos-slave:172.17.0.1"
   environment:
     - MARATHON_ZK=zk://zookeeper:2181/marathon
     - MARATHON_MASTER=zk://zookeeper:2181/mesos
@@ -3,4 +3,4 @@ apiVersion: v1
 metadata:
   name: kube-system
   labels:
     name: kube-system
@@ -37,4 +37,4 @@
       ]
     }
   ]
 }
@@ -66,6 +66,30 @@ func (s *ConsulCatalogSuite) registerService(name string, address string, port i
     return err
 }

+func (s *ConsulCatalogSuite) registerAgentService(name string, address string, port int, tags []string) error {
+    agent := s.consulClient.Agent()
+    err := agent.ServiceRegister(
+        &api.AgentServiceRegistration{
+            ID:      address,
+            Tags:    tags,
+            Name:    name,
+            Address: address,
+            Port:    port,
+            Check: &api.AgentServiceCheck{
+                HTTP:     "http://" + address,
+                Interval: "10s",
+            },
+        },
+    )
+    return err
+}
+
+func (s *ConsulCatalogSuite) deregisterAgentService(address string) error {
+    agent := s.consulClient.Agent()
+    err := agent.ServiceDeregister(address)
+    return err
+}
+
 func (s *ConsulCatalogSuite) deregisterService(name string, address string) error {
     catalog := s.consulClient.Catalog()
     _, err := catalog.Deregister(
|
||||||
c.Assert(err, checker.IsNil)
|
c.Assert(err, checker.IsNil)
|
||||||
defer cmd.Process.Kill()
|
defer cmd.Process.Kill()
|
||||||
|
|
||||||
nginx := s.composeProject.Container(c, "nginx")
|
nginx := s.composeProject.Container(c, "nginx1")
|
||||||
|
|
||||||
err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{})
|
err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{})
|
||||||
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
|
||||||
|
@ -114,7 +138,7 @@ func (s *ConsulCatalogSuite) TestSingleService(c *check.C) {
|
||||||
c.Assert(err, checker.IsNil)
|
c.Assert(err, checker.IsNil)
|
||||||
req.Host = "test.consul.localhost"
|
req.Host = "test.consul.localhost"
|
||||||
|
|
||||||
err = try.Request(req, 5*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
err = try.Request(req, 10*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
|
||||||
c.Assert(err, checker.IsNil)
|
c.Assert(err, checker.IsNil)
|
||||||
}
|
}
|
||||||
|
|
||||||
|
@@ -129,7 +153,7 @@ func (s *ConsulCatalogSuite) TestExposedByDefaultFalseSingleService(c *check.C)
     c.Assert(err, checker.IsNil)
     defer cmd.Process.Kill()

-    nginx := s.composeProject.Container(c, "nginx")
+    nginx := s.composeProject.Container(c, "nginx1")

     err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{})
     c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
@@ -154,7 +178,7 @@ func (s *ConsulCatalogSuite) TestExposedByDefaultFalseSimpleServiceMultipleNode(
     c.Assert(err, checker.IsNil)
     defer cmd.Process.Kill()

-    nginx := s.composeProject.Container(c, "nginx")
+    nginx := s.composeProject.Container(c, "nginx1")
     nginx2 := s.composeProject.Container(c, "nginx2")

     err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{})
@@ -184,14 +208,14 @@ func (s *ConsulCatalogSuite) TestExposedByDefaultTrueSimpleServiceMultipleNode(c
     c.Assert(err, checker.IsNil)
     defer cmd.Process.Kill()

-    nginx := s.composeProject.Container(c, "nginx")
+    nginx := s.composeProject.Container(c, "nginx1")
     nginx2 := s.composeProject.Container(c, "nginx2")

-    err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{})
+    err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1"})
     c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
     defer s.deregisterService("test", nginx.NetworkSettings.IPAddress)

-    err = s.registerService("test", nginx2.NetworkSettings.IPAddress, 80, []string{})
+    err = s.registerService("test", nginx2.NetworkSettings.IPAddress, 80, []string{"name=nginx2"})
     c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
     defer s.deregisterService("test", nginx2.NetworkSettings.IPAddress)

@@ -201,4 +225,98 @@ func (s *ConsulCatalogSuite) TestExposedByDefaultTrueSimpleServiceMultipleNode(c

     err = try.Request(req, 5*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
     c.Assert(err, checker.IsNil)
+
+    err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 60*time.Second, try.BodyContains("nginx1", "nginx2"))
+    c.Assert(err, checker.IsNil)
+
+}
+
+func (s *ConsulCatalogSuite) TestRefreshConfigWithMultipleNodeWithoutHealthCheck(c *check.C) {
+    cmd, _ := s.cmdTraefik(
+        withConfigFile("fixtures/consul_catalog/simple.toml"),
+        "--consulCatalog",
+        "--consulCatalog.exposedByDefault=true",
+        "--consulCatalog.endpoint="+s.consulIP+":8500",
+        "--consulCatalog.domain=consul.localhost")
+    err := cmd.Start()
+    c.Assert(err, checker.IsNil)
+    defer cmd.Process.Kill()
+
+    nginx := s.composeProject.Container(c, "nginx1")
+    nginx2 := s.composeProject.Container(c, "nginx2")
+
+    err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1"})
+    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
+    defer s.deregisterService("test", nginx.NetworkSettings.IPAddress)
+
+    err = s.registerAgentService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1"})
+    c.Assert(err, checker.IsNil, check.Commentf("Error registering agent service"))
+
+    req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
+    c.Assert(err, checker.IsNil)
+    req.Host = "test.consul.localhost"
+
+    err = try.Request(req, 5*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
+    c.Assert(err, checker.IsNil)
+
+    err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 60*time.Second, try.BodyContains("nginx1"))
+    c.Assert(err, checker.IsNil)
+
+    err = s.registerService("test", nginx2.NetworkSettings.IPAddress, 80, []string{"name=nginx2"})
+    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
+
+    err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 60*time.Second, try.BodyContains("nginx1", "nginx2"))
+    c.Assert(err, checker.IsNil)
+
+    s.deregisterService("test", nginx2.NetworkSettings.IPAddress)
+
+    err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 60*time.Second, try.BodyContains("nginx1"))
+    c.Assert(err, checker.IsNil)
+
+    err = s.registerService("test", nginx2.NetworkSettings.IPAddress, 80, []string{"name=nginx2"})
+    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
+    defer s.deregisterService("test", nginx2.NetworkSettings.IPAddress)
+
+    err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 60*time.Second, try.BodyContains("nginx1", "nginx2"))
+    c.Assert(err, checker.IsNil)
+
+}
+
+func (s *ConsulCatalogSuite) TestBasicAuthSimpleService(c *check.C) {
+    cmd, output := s.cmdTraefik(
+        withConfigFile("fixtures/consul_catalog/simple.toml"),
+        "--consulCatalog",
+        "--consulCatalog.exposedByDefault=true",
+        "--consulCatalog.endpoint="+s.consulIP+":8500",
+        "--consulCatalog.domain=consul.localhost")
+    err := cmd.Start()
+    c.Assert(err, checker.IsNil)
+    defer cmd.Process.Kill()
+
+    defer func() {
+        s.displayTraefikLog(c, output)
+    }()
+
+    nginx := s.composeProject.Container(c, "nginx1")
+
+    err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{
+        "traefik.frontend.auth.basic=test:$2a$06$O5NksJPAcgrC9MuANkSoE.Xe9DSg7KcLLFYNr1Lj6hPcMmvgwxhme,test2:$2y$10$xP1SZ70QbZ4K2bTGKJOhpujkpcLxQcB3kEPF6XAV19IdcqsZTyDEe",
+    })
+    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
+    defer s.deregisterService("test", nginx.NetworkSettings.IPAddress)
+
+    req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
+    c.Assert(err, checker.IsNil)
+    req.Host = "test.consul.localhost"
+
+    err = try.Request(req, 5*time.Second, try.StatusCodeIs(http.StatusUnauthorized), try.HasBody())
+    c.Assert(err, checker.IsNil)
+
+    req.SetBasicAuth("test", "test")
+    err = try.Request(req, 5*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
+    c.Assert(err, checker.IsNil)
+
+    req.SetBasicAuth("test2", "test2")
+    err = try.Request(req, 5*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
+    c.Assert(err, checker.IsNil)
 }
@@ -1,46 +1,46 @@
 ################################################################
 # Global configuration
 ################################################################
 traefikLogsFile = "traefik.log"
 accessLogsFile = "access.log"
 logLevel = "ERROR"
 defaultEntryPoints = ["http"]
 [entryPoints]
 [entryPoints.http]
 address = ":8000"

 ################################################################
 # Web configuration backend
 ################################################################
 [web]
 address = ":7888"

 ################################################################
 # File configuration backend
 ################################################################
 [file]

 ################################################################
 # rules
 ################################################################
 [backends]
 [backends.backend1]
 [backends.backend1.servers.server1]
 url = "http://127.0.0.1:8081"
 [backends.backend2]
 [backends.backend2.LoadBalancer]
 method = "drr"
 [backends.backend2.servers.server1]
 url = "http://127.0.0.1:8082"
 [backends.backend2.servers.server2]
 url = "http://127.0.0.1:8083"
 [frontends]
 [frontends.frontend1]
 backend = "backend1"
 [frontends.frontend1.routes.test_1]
 rule = "Path: /test1"
 [frontends.frontend2]
 backend = "backend2"
 passHostHeader = true
 [frontends.frontend2.routes.test_2]
 rule = "Path: /test2"
@@ -34,4 +34,4 @@ sudo openssl genrsa -out "$SSL_DIR/wildcard.key" 2048
 sudo openssl req -new -subj "$(echo -n "$SUBJ" | tr "\n" "/")" -key "$SSL_DIR/wildcard.key" -out "$SSL_DIR/wildcard.csr" -passin pass:$PASSPHRASE
 sudo openssl x509 -req -days 3650 -in "$SSL_DIR/wildcard.csr" -signkey "$SSL_DIR/wildcard.key" -out "$SSL_DIR/wildcard.crt"
 sudo rm -f "$SSL_DIR/wildcard.csr"
 ```
@ -11,6 +11,6 @@ logLevel = "DEBUG"
|
||||||
endpoint = "{{.ConsulHost}}:8500"
|
endpoint = "{{.ConsulHost}}:8500"
|
||||||
watch = true
|
watch = true
|
||||||
prefix = "traefik"
|
prefix = "traefik"
|
||||||
|
|
||||||
[web]
|
[web]
|
||||||
address = ":8081"
|
address = ":8081"
|
||||||
|
|
|
@@ -1,6 +1,9 @@
 defaultEntryPoints = ["http"]
 logLevel = "DEBUG"

+[web]
+address = ":8080"
+
 [entryPoints]
 [entryPoints.http]
 address = ":8000"
@ -12,4 +12,4 @@ logLevel = "DEBUG"
|
||||||
endpoint = "{{.DockerHost}}"
|
endpoint = "{{.DockerHost}}"
|
||||||
|
|
||||||
domain = "docker.localhost"
|
domain = "docker.localhost"
|
||||||
exposedbydefault = true
|
exposedbydefault = true
|
||||||
|
|
|
@@ -38,4 +38,4 @@ fblo6RBxUQ==
 [frontends.frontend1]
 backend = "backend1"
 [frontends.frontend1.routes.test_1]
 rule = "Path: /ping"
@@ -22,4 +22,4 @@ RootCAs = [ "fixtures/https/rootcas/local.crt"]
 [frontends.frontend1]
 backend = "backend1"
 [frontends.frontend1.routes.test_1]
 rule = "Path: /ping"
@@ -11,4 +11,4 @@ AAAAATANBgkqhkiG9w0BAQsFAAOBgQCEcetwO59EWk7WiJsG4x8SY+UIAA+flUI9
tyC4lNhbcF2Idq9greZwbYCqTTTr2XiRNSMLCOjKyI7ukPoPjo16ocHj+P3vZGfs
h1fIw3cSS2OolhloGw/XM6RWPWtPAlGykKLciQrBru5NAPvCMsb/I1DAceTiotQM
fblo6RBxUQ==
-----END CERTIFICATE-----
@@ -2,4 +2,4 @@ defaultEntryPoints = ["http"]

[entryPoints]
  [entryPoints.http]
  address = ":8000"
integration/fixtures/websocket/config_https.toml (new file, 27 lines)
@@ -0,0 +1,27 @@
+defaultEntryPoints = ["wss"]
+
+logLevel = "DEBUG"
+
+[entryPoints]
+  [entryPoints.wss]
+  address = ":8000"
+    [entryPoints.wss.tls]
+      [[entryPoints.wss.tls.certificates]]
+      CertFile = "resources/tls/local.cert"
+      KeyFile = "resources/tls/local.key"
+
+[web]
+address = ":8080"
+
+[file]
+
+[backends]
+  [backends.backend1]
+    [backends.backend1.servers.server1]
+    url = "{{ .WebsocketServer }}"
+
+[frontends]
+  [frontends.frontend1]
+  backend = "backend1"
+    [frontends.frontend1.routes.test_1]
+    rule = "Path:/ws"
@@ -10,16 +10,16 @@ consul:
    - "8301"
    - "8301/udp"
    - "8302"
    - "8302/udp"

whoami1:
  image: emilevauge/whoami

whoami2:
  image: emilevauge/whoami

whoami3:
  image: emilevauge/whoami

whoami4:
  image: emilevauge/whoami
@@ -11,7 +11,7 @@ consul:
    - "8301/udp"
    - "8302"
    - "8302/udp"
-nginx:
+nginx1:
  image: nginx:alpine
nginx2:
  image: nginx:alpine
@@ -9,6 +9,6 @@ consul:
    - "8301"
    - "8301/udp"
    - "8302"
    - "8302/udp"
  volumes:
    - ../tls:/configs
@@ -1,14 +1,14 @@
etcd:
  image: containous/docker-etcd

whoami1:
  image: emilevauge/whoami

whoami2:
  image: emilevauge/whoami

whoami3:
  image: emilevauge/whoami

whoami4:
  image: emilevauge/whoami
@@ -6,4 +6,4 @@
  "cert_file": "/configs/consul.cert",
  "key_file": "/configs/consul.key",
  "verify_outgoing": true
}
@@ -1,6 +1,9 @@
package integration

import (
+	"crypto/tls"
+	"crypto/x509"
+	"io/ioutil"
	"net"
	"net/http"
	"net/http/httptest"
@@ -232,3 +235,58 @@ func (suite *WebsocketSuite) TestWrongOriginIgnoredByServer(c *check.C) {
	c.Assert(string(msg), checker.Equals, "OK")

}
+
+func (suite *WebsocketSuite) TestSSLTermination(c *check.C) {
+	var upgrader = gorillawebsocket.Upgrader{} // use default options
+
+	srv := httptest.NewServer(http.HandlerFunc(func(w http.ResponseWriter, r *http.Request) {
+		c, err := upgrader.Upgrade(w, r, nil)
+		if err != nil {
+			return
+		}
+		defer c.Close()
+		for {
+			mt, message, err := c.ReadMessage()
+			if err != nil {
+				break
+			}
+			err = c.WriteMessage(mt, message)
+			if err != nil {
+				break
+			}
+		}
+	}))
+	file := suite.adaptFile(c, "fixtures/websocket/config_https.toml", struct {
+		WebsocketServer string
+	}{
+		WebsocketServer: srv.URL,
+	})
+
+	defer os.Remove(file)
+	cmd, _ := suite.cmdTraefik(withConfigFile(file), "--debug")
+
+	err := cmd.Start()
+	c.Assert(err, check.IsNil)
+	defer cmd.Process.Kill()
+
+	// wait for traefik
+	err = try.GetRequest("http://127.0.0.1:8080/api/providers", 10*time.Second, try.BodyContains("127.0.0.1"))
+	c.Assert(err, checker.IsNil)
+
+	// Add client self-signed cert
+	roots := x509.NewCertPool()
+	certContent, err := ioutil.ReadFile("./resources/tls/local.cert")
+	roots.AppendCertsFromPEM(certContent)
+	gorillawebsocket.DefaultDialer.TLSClientConfig = &tls.Config{
+		RootCAs: roots,
+	}
+	conn, _, err := gorillawebsocket.DefaultDialer.Dial("wss://127.0.0.1:8000/ws", nil)
+	c.Assert(err, checker.IsNil)
+
+	err = conn.WriteMessage(gorillawebsocket.TextMessage, []byte("OK"))
+	c.Assert(err, checker.IsNil)
+
+	_, msg, err := conn.ReadMessage()
+	c.Assert(err, checker.IsNil)
+	c.Assert(string(msg), checker.Equals, "OK")
+}
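The new `TestSSLTermination` integration test dials Traefik's TLS entrypoint with `gorilla/websocket` while trusting the repository's self-signed certificate. A minimal standalone sketch of the same client-side pattern is shown below; the certificate path and the `wss://` URL are placeholders for illustration, not values guaranteed to exist outside the test environment.

```go
package main

import (
	"crypto/tls"
	"crypto/x509"
	"io/ioutil"
	"log"

	"github.com/gorilla/websocket"
)

func main() {
	// Trust a self-signed certificate (hypothetical path) instead of the system pool.
	pem, err := ioutil.ReadFile("local.cert")
	if err != nil {
		log.Fatal(err)
	}
	roots := x509.NewCertPool()
	roots.AppendCertsFromPEM(pem)

	dialer := websocket.Dialer{TLSClientConfig: &tls.Config{RootCAs: roots}}

	// Echo round trip over TLS, mirroring what the integration test asserts.
	conn, _, err := dialer.Dial("wss://127.0.0.1:8000/ws", nil)
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	if err := conn.WriteMessage(websocket.TextMessage, []byte("OK")); err != nil {
		log.Fatal(err)
	}
	_, msg, err := conn.ReadMessage()
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("echoed: %s", msg)
}
```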
@@ -18,6 +18,7 @@ var (

func init() {
	logger = logrus.StandardLogger().WithFields(logrus.Fields{})
+	logrus.SetOutput(os.Stdout)
}

// Context sets the Context of the logger
@@ -9,7 +9,10 @@ import (
)

// default format for time presentation
-const commonLogTimeFormat = "02/Jan/2006:15:04:05 -0700"
+const (
+	commonLogTimeFormat = "02/Jan/2006:15:04:05 -0700"
+	defaultValue        = "-"
+)

// CommonLogFormatter provides formatting in the Traefik common log format
type CommonLogFormatter struct{}
@@ -21,27 +24,26 @@ func (f *CommonLogFormatter) Format(entry *logrus.Entry) ([]byte, error) {
	timestamp := entry.Data[StartUTC].(time.Time).Format(commonLogTimeFormat)
	elapsedMillis := entry.Data[Duration].(time.Duration).Nanoseconds() / 1000000

-	_, err := fmt.Fprintf(b, "%s - %s [%s] \"%s %s %s\" %d %d %s %s %d %s %s %dms\n",
+	_, err := fmt.Fprintf(b, "%s - %s [%s] \"%s %s %s\" %v %v %s %s %v %s %s %dms\n",
		entry.Data[ClientHost],
		entry.Data[ClientUsername],
		timestamp,
		entry.Data[RequestMethod],
		entry.Data[RequestPath],
		entry.Data[RequestProtocol],
-		entry.Data[OriginStatus],
+		toLog(entry.Data[OriginStatus]),
-		entry.Data[OriginContentSize],
+		toLog(entry.Data[OriginContentSize]),
-		toLogString(entry.Data["request_Referer"]),
+		toLog(entry.Data["request_Referer"]),
-		toLogString(entry.Data["request_User-Agent"]),
+		toLog(entry.Data["request_User-Agent"]),
-		entry.Data[RequestCount],
+		toLog(entry.Data[RequestCount]),
-		toLogString(entry.Data[FrontendName]),
+		toLog(entry.Data[FrontendName]),
-		toLogString(entry.Data[BackendURL]),
+		toLog(entry.Data[BackendURL]),
		elapsedMillis)

	return b.Bytes(), err
}

-func toLogString(v interface{}) string {
+func toLog(v interface{}) interface{} {
-	defaultValue := "-"
	if v == nil {
		return defaultValue
	}
@@ -54,7 +56,7 @@ func toLogString(v interface{}) string {
		return quoted(s.String(), defaultValue)

	default:
-		return defaultValue
+		return v
	}

}
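The switch from `%d`/`%s` verbs to `%v` together with `toLog` is what lets missing access-log fields render as the default `-` instead of failing when a field is absent. A small self-contained sketch of that idea (not the exact Traefik code, names are illustrative) follows.

```go
package main

import "fmt"

// toLogValue mirrors the idea behind toLog: nil becomes the default "-",
// non-empty strings are quoted, everything else is passed through for %v.
func toLogValue(v interface{}) interface{} {
	const defaultValue = "-"
	if v == nil {
		return defaultValue
	}
	if s, ok := v.(string); ok {
		if s == "" {
			return defaultValue
		}
		return fmt.Sprintf("%q", s)
	}
	return v
}

func main() {
	// Status and size may be absent on aborted requests; both render as "-".
	fmt.Printf("%v %v %v\n", toLogValue(nil), toLogValue(200), toLogValue("referer"))
	// Output: - 200 "referer"
}
```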
middlewares/accesslog/logger_formatters_test.go (new file, 114 lines)
@@ -0,0 +1,114 @@
+package accesslog
+
+import (
+	"net/http"
+	"testing"
+	"time"
+
+	"github.com/Sirupsen/logrus"
+	"github.com/stretchr/testify/assert"
+)
+
+func TestCommonLogFormatter_Format(t *testing.T) {
+	clf := CommonLogFormatter{}
+
+	testCases := []struct {
+		name        string
+		data        map[string]interface{}
+		expectedLog string
+	}{
+		{
+			name: "OriginStatus & OriginContentSize are nil",
+			data: map[string]interface{}{
+				StartUTC:             time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC),
+				Duration:             123 * time.Second,
+				ClientHost:           "10.0.0.1",
+				ClientUsername:       "Client",
+				RequestMethod:        http.MethodGet,
+				RequestPath:          "/foo",
+				RequestProtocol:      "http",
+				OriginStatus:         nil,
+				OriginContentSize:    nil,
+				"request_Referer":    "",
+				"request_User-Agent": "",
+				RequestCount:         0,
+				FrontendName:         "",
+				BackendURL:           "",
+			},
+			expectedLog: `10.0.0.1 - Client [10/Nov/2009:23:00:00 +0000] "GET /foo http" - - - - 0 - - 123000ms
+`,
+		},
+		{
+			name: "all data",
+			data: map[string]interface{}{
+				StartUTC:             time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC),
+				Duration:             123 * time.Second,
+				ClientHost:           "10.0.0.1",
+				ClientUsername:       "Client",
+				RequestMethod:        http.MethodGet,
+				RequestPath:          "/foo",
+				RequestProtocol:      "http",
+				OriginStatus:         123,
+				OriginContentSize:    132,
+				"request_Referer":    "referer",
+				"request_User-Agent": "agent",
+				RequestCount:         nil,
+				FrontendName:         "foo",
+				BackendURL:           "http://10.0.0.2/toto",
+			},
+			expectedLog: `10.0.0.1 - Client [10/Nov/2009:23:00:00 +0000] "GET /foo http" 123 132 "referer" "agent" - "foo" "http://10.0.0.2/toto" 123000ms
+`,
+		},
+	}
+
+	for _, test := range testCases {
+		test := test
+		t.Run(test.name, func(t *testing.T) {
+			t.Parallel()
+
+			entry := &logrus.Entry{Data: test.data}
+
+			raw, err := clf.Format(entry)
+			assert.NoError(t, err)
+
+			assert.Equal(t, test.expectedLog, string(raw))
+		})
+	}
+}
+
+func Test_toLog(t *testing.T) {
+	testCases := []struct {
+		name        string
+		value       interface{}
+		expectedLog interface{}
+	}{
+		{
+			name:        "",
+			value:       1,
+			expectedLog: 1,
+		},
+		{
+			name:        "",
+			value:       "foo",
+			expectedLog: `"foo"`,
+		},
+		{
+			name:        "",
+			value:       nil,
+			expectedLog: "-",
+		},
+	}
+
+	for _, test := range testCases {
+		test := test
+		t.Run(test.name, func(t *testing.T) {
+			t.Parallel()
+
+			lg := toLog(test.value)
+
+			assert.Equal(t, test.expectedLog, lg)
+		})
+	}
+}
@@ -4,7 +4,9 @@ import (
	"net/http"
	"strconv"
	"time"
+	"unicode/utf8"

+	"github.com/containous/traefik/log"
	"github.com/containous/traefik/metrics"
	gokitmetrics "github.com/go-kit/kit/metrics"
)
@@ -17,7 +19,7 @@ type MetricsWrapper struct {
}

// NewMetricsWrapper return a MetricsWrapper struct with
-// a given Metrics implementation e.g Prometheuss
+// a given Metrics implementation
func NewMetricsWrapper(registry metrics.Registry, service string) *MetricsWrapper {
	var metricsWrapper = MetricsWrapper{
		registry: registry,
@@ -32,7 +34,7 @@ func (m *MetricsWrapper) ServeHTTP(rw http.ResponseWriter, r *http.Request, next
	prw := &responseRecorder{rw, http.StatusOK}
	next(prw, r)

-	reqLabels := []string{"service", m.serviceName, "code", strconv.Itoa(prw.statusCode), "method", r.Method}
+	reqLabels := []string{"service", m.serviceName, "code", strconv.Itoa(prw.statusCode), "method", getMethod(r)}
	m.registry.ReqsCounter().With(reqLabels...).Add(1)

	reqDurationLabels := []string{"service", m.serviceName, "code", strconv.Itoa(prw.statusCode)}
@@ -48,6 +50,14 @@ func NewMetricsRetryListener(retryMetrics retryMetrics, backendName string) Retr
	return &MetricsRetryListener{retryMetrics: retryMetrics, backendName: backendName}
}

+func getMethod(r *http.Request) string {
+	if !utf8.ValidString(r.Method) {
+		log.Warnf("Invalid HTTP method encoding: %s", r.Method)
+		return "NON_UTF8_HTTP_METHOD"
+	}
+	return r.Method
+}
+
// MetricsRetryListener is an implementation of the RetryListener interface to
// record RequestMetrics about retry attempts.
type MetricsRetryListener struct {
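Prometheus label values must be valid UTF-8, so the metrics wrapper now sanitizes the request method before using it as a label value. A minimal, dependency-free illustration of the same check (helper name and sample inputs are made up for the example):

```go
package main

import (
	"fmt"
	"unicode/utf8"
)

// sanitizeMethod follows the same rule as the new getMethod helper:
// a method string that is not valid UTF-8 is replaced by a fixed placeholder.
func sanitizeMethod(method string) string {
	if !utf8.ValidString(method) {
		return "NON_UTF8_HTTP_METHOD"
	}
	return method
}

func main() {
	fmt.Println(sanitizeMethod("GET"))        // GET
	fmt.Println(sanitizeMethod("G\xc3\x28T")) // NON_UTF8_HTTP_METHOD (invalid byte sequence)
}
```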
@@ -77,7 +77,7 @@ func (a nodeSorter) Less(i int, j int) bool {
	return lentr.Service.Port < rentr.Service.Port
}

-func getChangedKeys(currState map[string][]string, prevState map[string][]string) ([]string, []string) {
+func getChangedServiceKeys(currState map[string]Service, prevState map[string]Service) ([]string, []string) {
	currKeySet := fun.Set(fun.Keys(currState).([]string)).(map[string]bool)
	prevKeySet := fun.Set(fun.Keys(prevState).([]string)).(map[string]bool)

@@ -87,13 +87,36 @@ func getChangedKeys(currState map[string][]string, prevState map[string][]string
	return fun.Keys(addedKeys).([]string), fun.Keys(removedKeys).([]string)
}

+func getChangedServiceNodeKeys(currState map[string]Service, prevState map[string]Service) ([]string, []string) {
+	var addedNodeKeys []string
+	var removedNodeKeys []string
+	for key, value := range currState {
+		if prevValue, ok := prevState[key]; ok {
+			addedKeys, removedKeys := getChangedHealthyKeys(value.Nodes, prevValue.Nodes)
+			addedNodeKeys = append(addedKeys)
+			removedNodeKeys = append(removedKeys)
+		}
+	}
+	return addedNodeKeys, removedNodeKeys
+}
+
+func getChangedHealthyKeys(currState []string, prevState []string) ([]string, []string) {
+	currKeySet := fun.Set(currState).(map[string]bool)
+	prevKeySet := fun.Set(prevState).(map[string]bool)
+
+	addedKeys := fun.Difference(currKeySet, prevKeySet).(map[string]bool)
+	removedKeys := fun.Difference(prevKeySet, currKeySet).(map[string]bool)
+
+	return fun.Keys(addedKeys).([]string), fun.Keys(removedKeys).([]string)
+}
+
func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<- map[string][]string) {
	health := p.client.Health()
	catalog := p.client.Catalog()

	safe.Go(func() {
		// variable to hold previous state
-		var flashback map[string][]string
+		var flashback []string

		options := &api.QueryOptions{WaitTime: DefaultWatchWaitTime}

@@ -105,14 +128,20 @@ func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<
			}

			// Listening to changes that leads to `passing` state or degrades from it.
-			// The call is used just as a trigger for further actions
-			// (intentionally there is no interest in the received data).
-			_, meta, err := health.State("passing", options)
+			healthyState, meta, err := health.State("passing", options)
			if err != nil {
				log.WithError(err).Error("Failed to retrieve health checks")
				return
			}

+			var current []string
+			if healthyState != nil {
+				for _, healthy := range healthyState {
+					current = append(current, healthy.ServiceID)
+				}
+
+			}
+
			// If LastIndex didn't change then it means `Get` returned
			// because of the WaitTime and the key didn't changed.
			if options.WaitIndex == meta.LastIndex {
@@ -132,30 +161,38 @@ func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<
			// A critical note is that the return of a blocking request is no guarantee of a change.
			// It is possible that there was an idempotent write that does not affect the result of the query.
			// Thus it is required to do extra check for changes...
-			addedKeys, removedKeys := getChangedKeys(data, flashback)
+			addedKeys, removedKeys := getChangedHealthyKeys(current, flashback)

			if len(addedKeys) > 0 {
				log.WithField("DiscoveredServices", addedKeys).Debug("Health State change detected.")
				watchCh <- data
-				flashback = data
+				flashback = current
			}

			if len(removedKeys) > 0 {
				log.WithField("MissingServices", removedKeys).Debug("Health State change detected.")
				watchCh <- data
-				flashback = data
+				flashback = current
			}
		}
	})
}

+// Service represent a Consul service.
+type Service struct {
+	Name  string
+	Tags  []string
+	Nodes []string
+}
+
func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh chan<- map[string][]string) {
	catalog := p.client.Catalog()

	safe.Go(func() {
+		current := make(map[string]Service)
		// variable to hold previous state
-		var flashback map[string][]string
+		var flashback map[string]Service

		options := &api.QueryOptions{WaitTime: DefaultWatchWaitTime}

@@ -179,26 +216,55 @@ func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh c
			options.WaitIndex = meta.LastIndex

			if data != nil {
+
+				for key, value := range data {
+					nodes, _, err := catalog.Service(key, "", options)
+					if err != nil {
+						log.Errorf("Failed to get detail of service %s: %s", key, err)
+						return
+					}
+					nodesID := getServiceIds(nodes)
+					if service, ok := current[key]; ok {
+						service.Tags = value
+						service.Nodes = nodesID
+					} else {
+						service := Service{
+							Name:  key,
+							Tags:  value,
+							Nodes: nodesID,
+						}
+						current[key] = service
+					}
+				}
				// A critical note is that the return of a blocking request is no guarantee of a change.
				// It is possible that there was an idempotent write that does not affect the result of the query.
				// Thus it is required to do extra check for changes...
-				addedKeys, removedKeys := getChangedKeys(data, flashback)
+				addedServiceKeys, removedServiceKeys := getChangedServiceKeys(current, flashback)

-				if len(addedKeys) > 0 {
-					log.WithField("DiscoveredServices", addedKeys).Debug("Catalog Services change detected.")
+				addedServiceNodeKeys, removedServiceNodeKeys := getChangedServiceNodeKeys(current, flashback)
+
+				if len(addedServiceKeys) > 0 || len(addedServiceNodeKeys) > 0 {
+					log.WithField("DiscoveredServices", addedServiceKeys).Debug("Catalog Services change detected.")
					watchCh <- data
-					flashback = data
+					flashback = current
				}

-				if len(removedKeys) > 0 {
+				if len(removedServiceKeys) > 0 || len(removedServiceNodeKeys) > 0 {
-					log.WithField("MissingServices", removedKeys).Debug("Catalog Services change detected.")
+					log.WithField("MissingServices", removedServiceKeys).Debug("Catalog Services change detected.")
					watchCh <- data
-					flashback = data
+					flashback = current
				}
			}
		}
	})
}
+
+func getServiceIds(services []*api.CatalogService) []string {
+	var serviceIds []string
+	for _, service := range services {
+		serviceIds = append(serviceIds, service.ServiceID)
+	}
+	return serviceIds
+}
+
@@ -330,6 +396,14 @@ func (p *CatalogProvider) getAttribute(name string, tags []string, defaultValue
	return p.getTag(p.getPrefixedName(name), tags, defaultValue)
}

+func (p *CatalogProvider) getBasicAuth(tags []string) []string {
+	list := p.getAttribute("frontend.auth.basic", tags, "")
+	if list != "" {
+		return strings.Split(list, ",")
+	}
+	return []string{}
+}
+
func (p *CatalogProvider) hasTag(name string, tags []string) bool {
	// Very-very unlikely that a Consul tag would ever start with '=!='
	tag := p.getTag(name, tags, "=!=")
@@ -377,6 +451,7 @@ func (p *CatalogProvider) buildConfig(catalog []catalogUpdate) *types.Configurat
		"getBackendName":    p.getBackendName,
		"getBackendAddress": p.getBackendAddress,
		"getAttribute":      p.getAttribute,
+		"getBasicAuth":      p.getBasicAuth,
		"getTag":            p.getTag,
		"hasTag":            p.hasTag,
		"getEntryPoints":    p.getEntryPoints,
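The Consul catalog refresh fix boils down to comparing the previous and current snapshots (service names and healthy node IDs) and only pushing a configuration when something was actually added or removed. A dependency-free sketch of that set comparison, assuming plain string slices rather than the `fun` helpers used in the provider:

```go
package main

import "fmt"

// changedKeys returns the keys present in curr but not prev (added) and
// the keys present in prev but not curr (removed), in the spirit of
// getChangedHealthyKeys above.
func changedKeys(curr, prev []string) (added, removed []string) {
	currSet := make(map[string]bool, len(curr))
	prevSet := make(map[string]bool, len(prev))
	for _, k := range curr {
		currSet[k] = true
	}
	for _, k := range prev {
		prevSet[k] = true
	}
	for k := range currSet {
		if !prevSet[k] {
			added = append(added, k)
		}
	}
	for k := range prevSet {
		if !currSet[k] {
			removed = append(removed, k)
		}
	}
	return added, removed
}

func main() {
	added, removed := changedKeys([]string{"whoami1", "whoami2"}, []string{"whoami2", "whoami3"})
	fmt.Println(added, removed) // [whoami1] [whoami3]
}
```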
@@ -348,6 +348,7 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
						"random.foo=bar",
						"traefik.backend.maxconn.amount=1000",
						"traefik.backend.maxconn.extractorfunc=client.ip",
+						"traefik.frontend.auth.basic=test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/,test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0",
					},
				},
				Nodes: []*api.ServiceEntry{
@@ -380,6 +381,7 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
						Rule: "Host:test.localhost",
					},
				},
+				BasicAuth: []string{"test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/", "test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"},
			},
		},
		expectedBackends: map[string]*types.Backend{
@@ -411,6 +413,7 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
			t.Fatalf("expected %#v, got %#v", c.expectedBackends, actualConfig.Backends)
		}
		if !reflect.DeepEqual(actualConfig.Frontends, c.expectedFrontends) {
+			t.Fatalf("expected %#v, got %#v", c.expectedFrontends["frontend-test"].BasicAuth, actualConfig.Frontends["frontend-test"].BasicAuth)
			t.Fatalf("expected %#v, got %#v", c.expectedFrontends, actualConfig.Frontends)
		}
	}
@@ -610,8 +613,8 @@ func TestConsulCatalogNodeSorter(t *testing.T) {

func TestConsulCatalogGetChangedKeys(t *testing.T) {
	type Input struct {
-		currState map[string][]string
+		currState map[string]Service
-		prevState map[string][]string
+		prevState map[string]Service
	}

	type Output struct {
@@ -625,37 +628,37 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
	}{
		{
			input: Input{
-				currState: map[string][]string{
+				currState: map[string]Service{
-					"foo-service":    {"v1"},
+					"foo-service":    {Name: "v1"},
-					"bar-service":    {"v1"},
+					"bar-service":    {Name: "v1"},
-					"baz-service":    {"v1"},
+					"baz-service":    {Name: "v1"},
-					"qux-service":    {"v1"},
+					"qux-service":    {Name: "v1"},
-					"quux-service":   {"v1"},
+					"quux-service":   {Name: "v1"},
-					"quuz-service":   {"v1"},
+					"quuz-service":   {Name: "v1"},
-					"corge-service":  {"v1"},
+					"corge-service":  {Name: "v1"},
-					"grault-service": {"v1"},
+					"grault-service": {Name: "v1"},
-					"garply-service": {"v1"},
+					"garply-service": {Name: "v1"},
-					"waldo-service":  {"v1"},
+					"waldo-service":  {Name: "v1"},
-					"fred-service":   {"v1"},
+					"fred-service":   {Name: "v1"},
-					"plugh-service":  {"v1"},
+					"plugh-service":  {Name: "v1"},
-					"xyzzy-service":  {"v1"},
+					"xyzzy-service":  {Name: "v1"},
-					"thud-service":   {"v1"},
+					"thud-service":   {Name: "v1"},
				},
-				prevState: map[string][]string{
+				prevState: map[string]Service{
-					"foo-service":    {"v1"},
+					"foo-service":    {Name: "v1"},
-					"bar-service":    {"v1"},
+					"bar-service":    {Name: "v1"},
-					"baz-service":    {"v1"},
+					"baz-service":    {Name: "v1"},
-					"qux-service":    {"v1"},
+					"qux-service":    {Name: "v1"},
-					"quux-service":   {"v1"},
+					"quux-service":   {Name: "v1"},
-					"quuz-service":   {"v1"},
+					"quuz-service":   {Name: "v1"},
-					"corge-service":  {"v1"},
+					"corge-service":  {Name: "v1"},
-					"grault-service": {"v1"},
+					"grault-service": {Name: "v1"},
-					"garply-service": {"v1"},
+					"garply-service": {Name: "v1"},
-					"waldo-service":  {"v1"},
+					"waldo-service":  {Name: "v1"},
-					"fred-service":   {"v1"},
+					"fred-service":   {Name: "v1"},
-					"plugh-service":  {"v1"},
+					"plugh-service":  {Name: "v1"},
-					"xyzzy-service":  {"v1"},
+					"xyzzy-service":  {Name: "v1"},
-					"thud-service":   {"v1"},
+					"thud-service":   {Name: "v1"},
				},
			},
			output: Output{
@@ -665,34 +668,34 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
		},
		{
			input: Input{
-				currState: map[string][]string{
+				currState: map[string]Service{
-					"foo-service":    {"v1"},
+					"foo-service":    {Name: "v1"},
-					"bar-service":    {"v1"},
+					"bar-service":    {Name: "v1"},
-					"baz-service":    {"v1"},
+					"baz-service":    {Name: "v1"},
-					"qux-service":    {"v1"},
+					"qux-service":    {Name: "v1"},
-					"quux-service":   {"v1"},
+					"quux-service":   {Name: "v1"},
-					"quuz-service":   {"v1"},
+					"quuz-service":   {Name: "v1"},
-					"corge-service":  {"v1"},
+					"corge-service":  {Name: "v1"},
-					"grault-service": {"v1"},
+					"grault-service": {Name: "v1"},
-					"garply-service": {"v1"},
+					"garply-service": {Name: "v1"},
-					"waldo-service":  {"v1"},
+					"waldo-service":  {Name: "v1"},
-					"fred-service":   {"v1"},
+					"fred-service":   {Name: "v1"},
-					"plugh-service":  {"v1"},
+					"plugh-service":  {Name: "v1"},
-					"xyzzy-service":  {"v1"},
+					"xyzzy-service":  {Name: "v1"},
-					"thud-service":   {"v1"},
+					"thud-service":   {Name: "v1"},
				},
-				prevState: map[string][]string{
+				prevState: map[string]Service{
-					"foo-service":    {"v1"},
+					"foo-service":    {Name: "v1"},
-					"bar-service":    {"v1"},
+					"bar-service":    {Name: "v1"},
-					"baz-service":    {"v1"},
+					"baz-service":    {Name: "v1"},
-					"corge-service":  {"v1"},
+					"corge-service":  {Name: "v1"},
-					"grault-service": {"v1"},
+					"grault-service": {Name: "v1"},
-					"garply-service": {"v1"},
+					"garply-service": {Name: "v1"},
-					"waldo-service":  {"v1"},
+					"waldo-service":  {Name: "v1"},
-					"fred-service":   {"v1"},
+					"fred-service":   {Name: "v1"},
-					"plugh-service":  {"v1"},
+					"plugh-service":  {Name: "v1"},
-					"xyzzy-service":  {"v1"},
+					"xyzzy-service":  {Name: "v1"},
-					"thud-service":   {"v1"},
+					"thud-service":   {Name: "v1"},
				},
			},
			output: Output{
@@ -702,33 +705,33 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
		},
		{
			input: Input{
-				currState: map[string][]string{
+				currState: map[string]Service{
-					"foo-service":    {"v1"},
+					"foo-service":    {Name: "v1"},
-					"qux-service":    {"v1"},
+					"qux-service":    {Name: "v1"},
-					"quux-service":   {"v1"},
+					"quux-service":   {Name: "v1"},
-					"quuz-service":   {"v1"},
+					"quuz-service":   {Name: "v1"},
-					"corge-service":  {"v1"},
+					"corge-service":  {Name: "v1"},
-					"grault-service": {"v1"},
+					"grault-service": {Name: "v1"},
-					"garply-service": {"v1"},
+					"garply-service": {Name: "v1"},
-					"waldo-service":  {"v1"},
+					"waldo-service":  {Name: "v1"},
-					"fred-service":   {"v1"},
+					"fred-service":   {Name: "v1"},
-					"plugh-service":  {"v1"},
+					"plugh-service":  {Name: "v1"},
-					"xyzzy-service":  {"v1"},
+					"xyzzy-service":  {Name: "v1"},
-					"thud-service":   {"v1"},
+					"thud-service":   {Name: "v1"},
				},
-				prevState: map[string][]string{
+				prevState: map[string]Service{
-					"foo-service":    {"v1"},
+					"foo-service":    {Name: "v1"},
-					"bar-service":    {"v1"},
+					"bar-service":    {Name: "v1"},
-					"baz-service":    {"v1"},
+					"baz-service":    {Name: "v1"},
-					"qux-service":    {"v1"},
+					"qux-service":    {Name: "v1"},
-					"quux-service":   {"v1"},
+					"quux-service":   {Name: "v1"},
-					"quuz-service":   {"v1"},
+					"quuz-service":   {Name: "v1"},
-					"corge-service":  {"v1"},
+					"corge-service":  {Name: "v1"},
-					"waldo-service":  {"v1"},
+					"waldo-service":  {Name: "v1"},
-					"fred-service":   {"v1"},
+					"fred-service":   {Name: "v1"},
-					"plugh-service":  {"v1"},
+					"plugh-service":  {Name: "v1"},
-					"xyzzy-service":  {"v1"},
+					"xyzzy-service":  {Name: "v1"},
-					"thud-service":   {"v1"},
+					"thud-service":   {Name: "v1"},
				},
			},
			output: Output{
@@ -739,7 +742,7 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
	}

	for _, c := range cases {
-		addedKeys, removedKeys := getChangedKeys(c.input.currState, c.input.prevState)
+		addedKeys, removedKeys := getChangedServiceKeys(c.input.currState, c.input.prevState)

		if !reflect.DeepEqual(fun.Set(addedKeys), fun.Set(c.output.addedKeys)) {
			t.Fatalf("Added keys comparison results: got %q, want %q", addedKeys, c.output.addedKeys)
@@ -853,3 +856,38 @@ func TestConsulCatalogFilterEnabled(t *testing.T) {
		})
	}
}
+
+func TestConsulCatalogGetBasicAuth(t *testing.T) {
+	cases := []struct {
+		desc     string
+		tags     []string
+		expected []string
+	}{
+		{
+			desc:     "label missing",
+			tags:     []string{},
+			expected: []string{},
+		},
+		{
+			desc: "label existing",
+			tags: []string{
+				"traefik.frontend.auth.basic=user:password",
+			},
+			expected: []string{"user:password"},
+		},
+	}
+
+	for _, c := range cases {
+		c := c
+		t.Run(c.desc, func(t *testing.T) {
+			t.Parallel()
+			provider := &CatalogProvider{
+				Prefix: "traefik",
+			}
+			actual := provider.getBasicAuth(c.tags)
+			if !reflect.DeepEqual(actual, c.expected) {
+				t.Errorf("actual %q, expected %q", actual, c.expected)
+			}
+		})
+	}
+}
@@ -182,6 +182,7 @@ func (p *Provider) loadECSConfig(ctx context.Context, client *awsClient) (*types
	var ecsFuncMap = template.FuncMap{
		"filterFrontends":       p.filterFrontends,
		"getFrontendRule":       p.getFrontendRule,
+		"getBasicAuth":          p.getBasicAuth,
		"getLoadBalancerSticky": p.getLoadBalancerSticky,
		"getLoadBalancerMethod": p.getLoadBalancerMethod,
	}
@@ -469,6 +470,14 @@ func (p *Provider) getFrontendRule(i ecsInstance) string {
	return "Host:" + strings.ToLower(strings.Replace(i.Name, "_", "-", -1)) + "." + p.Domain
}

+func (p *Provider) getBasicAuth(i ecsInstance) []string {
+	label := p.label(i, types.LabelFrontendAuthBasic)
+	if label != "" {
+		return strings.Split(label, ",")
+	}
+	return []string{}
+}
+
func (p *Provider) getLoadBalancerSticky(instances []ecsInstance) string {
	if len(instances) > 0 {
		label := p.label(instances[0], types.LabelBackendLoadbalancerSticky)
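Both the Consul catalog and ECS providers implement basic auth the same way: the `traefik.frontend.auth.basic` label is split on commas into a list of `user:hashedPassword` entries, and an empty label yields an empty list. A tiny sketch of that parsing, with the helper name invented for the example and sample values taken from the tests above:

```go
package main

import (
	"fmt"
	"strings"
)

// parseBasicAuth mimics the provider helpers: an empty label yields an
// empty (but non-nil) slice, otherwise the label is a comma-separated list.
func parseBasicAuth(label string) []string {
	if label == "" {
		return []string{}
	}
	return strings.Split(label, ",")
}

func main() {
	fmt.Println(parseBasicAuth(""))              // []
	fmt.Println(parseBasicAuth("user:password")) // [user:password]
	fmt.Println(parseBasicAuth("test:$apr1$H6uskkkW$IgXLP6ewTrSuBkTrqE8wj/,test2:$apr1$d9hr9HBB$4HxwgUir3HP4EsggP/QNo0"))
}
```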
@@ -521,3 +521,34 @@ func TestTaskChunking(t *testing.T) {

	}
}
+
+func TestEcsGetBasicAuth(t *testing.T) {
+	cases := []struct {
+		desc     string
+		instance ecsInstance
+		expected []string
+	}{
+		{
+			desc:     "label missing",
+			instance: simpleEcsInstance(map[string]*string{}),
+			expected: []string{},
+		},
+		{
+			desc: "label existing",
+			instance: simpleEcsInstance(map[string]*string{
+				types.LabelFrontendAuthBasic: aws.String("user:password"),
+			}),
+			expected: []string{"user:password"},
+		},
+	}
+
+	for _, test := range cases {
+		test := test
+		t.Run(test.desc, func(t *testing.T) {
+			t.Parallel()
+			provider := &Provider{}
+			actual := provider.getBasicAuth(test.instance)
+			assert.Equal(t, test.expected, actual)
+		})
+	}
+}
@@ -125,14 +125,14 @@ func (p *Provider) apiProvide(configurationChan chan<- types.ConfigMessage, pool
	return nil
}

-func listRancherEnvironments(client *rancher.RancherClient) []*rancher.Project {
+func listRancherEnvironments(client *rancher.RancherClient) []*rancher.Environment {

-	// Rancher Environment in frontend UI is actually project in API
+	// Rancher Environment in frontend UI is actually a stack
	// https://forums.rancher.com/t/api-key-for-all-environments/279/9

-	var environmentList = []*rancher.Project{}
+	var environmentList = []*rancher.Environment{}

-	environments, err := client.Project.List(nil)
+	environments, err := client.Environment.List(nil)

	if err != nil {
		log.Errorf("Cannot get Rancher Environments %+v", err)
@@ -193,12 +193,13 @@ func listRancherContainer(client *rancher.RancherClient) []*rancher.Container {
	return containerList
}

-func parseAPISourcedRancherData(environments []*rancher.Project, services []*rancher.Service, containers []*rancher.Container) []rancherData {
+func parseAPISourcedRancherData(environments []*rancher.Environment, services []*rancher.Service, containers []*rancher.Container) []rancherData {
	var rancherDataList []rancherData

	for _, environment := range environments {

		for _, service := range services {

			if service.EnvironmentId != environment.Id {
				continue
			}
@@ -18,17 +18,19 @@ if [ -z "$DATE" ]; then
    DATE=$(date -u '+%Y-%m-%d_%I:%M:%S%p')
fi

+echo "Building ${VERSION} ${CODENAME} ${DATE}"
+
GIT_REPO_URL='github.com/containous/traefik/version'
GO_BUILD_CMD="go build -ldflags"
-GO_BUILD_OPT="-s -w -X ${GIT_REPO_URL}.Version=$VERSION -X ${GIT_REPO_URL}.Codename=$CODENAME -X ${GIT_REPO_URL}.BuildDate=$DATE"
+GO_BUILD_OPT="-s -w -X ${GIT_REPO_URL}.Version=${VERSION} -X ${GIT_REPO_URL}.Codename=${CODENAME} -X ${GIT_REPO_URL}.BuildDate=${DATE}"

# Build 386 amd64 binaries
OS_PLATFORM_ARG=(linux windows darwin)
OS_ARCH_ARG=(amd64)
for OS in ${OS_PLATFORM_ARG[@]}; do
  for ARCH in ${OS_ARCH_ARG[@]}; do
-    echo "Building binary for $OS/$ARCH..."
+    echo "Building binary for ${OS}/${ARCH}..."
-    GOARCH=$ARCH GOOS=$OS CGO_ENABLED=0 $GO_BUILD_CMD "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
+    GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "${GO_BUILD_OPT}" -o "dist/traefik_${OS}-${ARCH}" ./cmd/traefik/
  done
done

@@ -37,7 +39,7 @@ OS_PLATFORM_ARG=(linux)
OS_ARCH_ARG=(arm64)
for OS in ${OS_PLATFORM_ARG[@]}; do
  for ARCH in ${OS_ARCH_ARG[@]}; do
-    echo "Building binary for $OS/$ARCH..."
+    echo "Building binary for ${OS}/${ARCH}..."
-    GOARCH=$ARCH GOOS=$OS CGO_ENABLED=0 $GO_BUILD_CMD "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
+    GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "${GO_BUILD_OPT}" -o "dist/traefik_${OS}-${ARCH}" ./cmd/traefik/
  done
done
@@ -18,9 +18,11 @@ if [ -z "$DATE" ]; then
    DATE=$(date -u '+%Y-%m-%d_%I:%M:%S%p')
fi

+echo "Building ${VERSION} ${CODENAME} ${DATE}"
+
GIT_REPO_URL='github.com/containous/traefik/version'
GO_BUILD_CMD="go build -ldflags"
-GO_BUILD_OPT="-s -w -X ${GIT_REPO_URL}.Version=$VERSION -X ${GIT_REPO_URL}.Codename=$CODENAME -X ${GIT_REPO_URL}.BuildDate=$DATE"
+GO_BUILD_OPT="-s -w -X ${GIT_REPO_URL}.Version=${VERSION} -X ${GIT_REPO_URL}.Codename=${CODENAME} -X ${GIT_REPO_URL}.BuildDate=${DATE}"

# Build arm binaries
OS_PLATFORM_ARG=(linux windows darwin)
@@ -28,7 +30,7 @@ OS_ARCH_ARG=(386)
for OS in ${OS_PLATFORM_ARG[@]}; do
  for ARCH in ${OS_ARCH_ARG[@]}; do
    echo "Building binary for $OS/$ARCH..."
-    GOARCH=$ARCH GOOS=$OS CGO_ENABLED=0 $GO_BUILD_CMD "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
+    GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
  done
done

@@ -38,18 +40,21 @@ OS_ARCH_ARG=(386 amd64)
for OS in ${OS_PLATFORM_ARG[@]}; do
  for ARCH in ${OS_ARCH_ARG[@]}; do
    # Get rid of existing binaries
-    rm -f dist/traefik_$OS-$ARCH
+    rm -f dist/traefik_${OS}-${ARCH}
    echo "Building binary for $OS/$ARCH..."
-    GOARCH=$ARCH GOOS=$OS CGO_ENABLED=0 $GO_BUILD_CMD "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
+    GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
  done
done

# Build arm binaries
OS_PLATFORM_ARG=(linux)
OS_ARCH_ARG=(arm)
+ARM_ARG=(6)
for OS in ${OS_PLATFORM_ARG[@]}; do
  for ARCH in ${OS_ARCH_ARG[@]}; do
-    echo "Building binary for $OS/$ARCH..."
-    GOARCH=$ARCH GOOS=$OS CGO_ENABLED=0 $GO_BUILD_CMD "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
+    for ARM in ${ARM_ARG[@]}; do
+      echo "Building binary for $OS/${ARCH}32v${ARM}..."
+      GOARCH=${ARCH} GOOS=${OS} GOARM=${ARM} CGO_ENABLED=0 ${GO_BUILD_CMD} "$GO_BUILD_OPT" -o "dist/traefik_$OS-${ARCH}" ./cmd/traefik/
+    done
  done
done
@@ -17,7 +17,6 @@ find_dirs() {
  find . -not \( \
    \( \
      -path './integration/*' \
-      -o -path './.glide/*' \
      -o -path './vendor/*' \
      -o -path './.git/*' \
    \) \
@@ -3,7 +3,7 @@
source "$(dirname "$BASH_SOURCE")/.validate"

IFS=$'\n'
-files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^\(integration/\)\?vendor/' || true) )
+files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
unset IFS

badFiles=()
@@ -3,7 +3,7 @@
source "$(dirname "$BASH_SOURCE")/.validate"

IFS=$'\n'
-files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^\(integration/\)\?vendor/\|autogen' || true) )
+files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/\|autogen' || true) )
unset IFS

errors=()
@@ -3,7 +3,7 @@
source "$(dirname "$BASH_SOURCE")/.validate"

IFS=$'\n'
-files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^\(integration/\)\?vendor/' || true) )
+files=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/' || true) )
unset IFS

errors=()
@@ -3,7 +3,7 @@
source "$(dirname "$BASH_SOURCE")/.validate"

IFS=$'\n'
-src=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^\(integration/\)\?vendor/\|autogen' || true) )
+src=( $(validate_diff --diff-filter=ACMR --name-only -- '*.go' | grep -v '^vendor/\|autogen' || true) )
docs=( $(validate_diff --diff-filter=ACMR --name-only -- 'docs/*.md') )
unset IFS
files=("${src[@]}" "${docs[@]}")
@@ -647,6 +647,7 @@ func (server *Server) prepareServer(entryPointName string, entryPoint *configura
	listener, err := net.Listen("tcp", entryPoint.Address)
	if err != nil {
		log.Error("Error opening listener ", err)
+		return nil, nil, err
	}

	if entryPoint.ProxyProtocol {
@@ -96,7 +96,7 @@ func TestPrepareServerTimeouts(t *testing.T) {
			t.Parallel()

			entryPointName := "http"
-			entryPoint := &configuration.EntryPoint{Address: "localhost:8080"}
+			entryPoint := &configuration.EntryPoint{Address: "localhost:0"}
			router := middlewares.NewHandlerSwitcher(mux.NewRouter())

			srv := NewServer(test.globalConfig)
@@ -504,14 +504,14 @@ func TestServerEntrypointWhitelistConfig(t *testing.T) {
		{
			desc: "no whitelist middleware if no config on entrypoint",
			entrypoint: &configuration.EntryPoint{
-				Address: ":8080",
+				Address: ":0",
			},
			wantMiddleware: false,
		},
		{
			desc: "whitelist middleware should be added if configured on entrypoint",
			entrypoint: &configuration.EntryPoint{
-				Address: ":8080",
+				Address: ":0",
				WhitelistSourceRange: []string{
					"127.0.0.1/32",
				},
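The server tests now bind to `:0` (or `localhost:0`) so the operating system picks a free ephemeral port and parallel test runs no longer collide on 8080. A short sketch of that pattern, independent of the Traefik test harness:

```go
package main

import (
	"fmt"
	"log"
	"net"
)

func main() {
	// Asking for port 0 lets the kernel choose any free port,
	// which is what the updated test entrypoint addresses rely on.
	listener, err := net.Listen("tcp", "localhost:0")
	if err != nil {
		log.Fatal(err)
	}
	defer listener.Close()

	fmt.Println("listening on", listener.Addr()) // e.g. 127.0.0.1:54123
}
```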
@@ -40,6 +40,9 @@
      "{{.}}",
    {{end}}]
    {{end}}
+    basicAuth = [{{range getBasicAuth .Attributes}}
+      "{{.}}",
+    {{end}}]
    [frontends."frontend-{{.ServiceName}}".routes."route-host-{{.ServiceName}}"]
    rule = "{{getFrontendRule .}}"
{{end}}
@@ -18,6 +18,9 @@
    priority = {{ getPriority .}}
    entryPoints = [{{range getEntryPoints .}}
      "{{.}}",
+    {{end}}]
+    basicAuth = [{{range getBasicAuth .}}
+      "{{.}}",
    {{end}}]
    [frontends.frontend-{{ $serviceName }}.routes.route-frontend-{{ $serviceName }}]
    rule = "{{getFrontendRule .}}"
@@ -26,9 +26,9 @@
}

table {
  table-layout: fixed;
}

td, th {
  word-wrap: break-word;
}