Revert "Merge v1.4.2 into master"
This commit is contained in:
parent
6fcab72ec7
commit
0c702b0b6b
18 changed files with 145 additions and 717 deletions
18
CHANGELOG.md
18
CHANGELOG.md
|
@@ -1,23 +1,5 @@
 # Change Log
 
-## [v1.4.3](https://github.com/containous/traefik/tree/v1.4.3) (2017-11-14)
-[All Commits](https://github.com/containous/traefik/compare/v1.4.2...v1.4.3)
-
-**Bug fixes:**
-
-- **[consulcatalog]** Fix Traefik reload if Consul Catalog tags change ([#2389](https://github.com/containous/traefik/pull/2389) by [mmatur](https://github.com/mmatur))
-- **[kv]** Add Traefik prefix to the KV key ([#2400](https://github.com/containous/traefik/pull/2400) by [nmengin](https://github.com/nmengin))
-- **[middleware]** Flush and Status code ([#2403](https://github.com/containous/traefik/pull/2403) by [ldez](https://github.com/ldez))
-- **[middleware]** Exclude GRPC from compress ([#2391](https://github.com/containous/traefik/pull/2391) by [ldez](https://github.com/ldez))
-- **[middleware]** Keep status when stream mode and compress ([#2380](https://github.com/containous/traefik/pull/2380) by [Juliens](https://github.com/Juliens))
-
-**Documentation:**
-
-- **[acme]** Fix some typos ([#2363](https://github.com/containous/traefik/pull/2363) by [tomsaleeba](https://github.com/tomsaleeba))
-- **[docker]** Minor fix for docker volume vs created directory ([#2372](https://github.com/containous/traefik/pull/2372) by [visibilityspots](https://github.com/visibilityspots))
-- **[k8s]** Link corrected ([#2385](https://github.com/containous/traefik/pull/2385) by [xlazex](https://github.com/xlazex))
-
-**Misc:**
-
-- **[k8s]** Add secret creation to docs for kubernetes backend ([#2374](https://github.com/containous/traefik/pull/2374) by [shadycuz](https://github.com/shadycuz))
-
 ## [v1.4.2](https://github.com/containous/traefik/tree/v1.4.2) (2017-11-02)
 [All Commits](https://github.com/containous/traefik/compare/v1.4.1...v1.4.2)
@@ -2,7 +2,7 @@
 Træfik can be configured to use Kubernetes Ingress as a backend configuration.
 
-See also [Kubernetes user guide](/user-guide/kubernetes).
+See also [Kubernetes user guide](/docs/user-guide/kubernetes).
 
 
 ## Configuration
@@ -118,10 +118,10 @@ If one of the Net-Specifications are invalid, the whole list is invalid and allo
 ### Authentication
 
 Is possible to add additional authentication annotations in the Ingress rule.
-The source of the authentication is a secret that contains usernames and passwords inside the key auth.
+The source of the authentication is a secret that contains usernames and passwords inside the the key auth.
 
 - `ingress.kubernetes.io/auth-type`: `basic`
-- `ingress.kubernetes.io/auth-secret`: `mysecret`
+- `ingress.kubernetes.io/auth-secret`
 Contains the usernames and passwords with access to the paths defined in the Ingress Rule.
 
 The secret must be created in the same namespace as the Ingress rule.
@@ -59,8 +59,8 @@ services:
       - web
     volumes:
       - /var/run/docker.sock:/var/run/docker.sock
-      - /opt/traefik/traefik.toml:/traefik.toml
-      - /opt/traefik/acme.json:/acme.json
+      - /srv/traefik/traefik.toml:/traefik.toml
+      - /srv/traefik/acme.json:/acme.json
     container_name: traefik
 
 networks:
@@ -140,7 +140,7 @@ This configuration allows generating a Let's Encrypt certificate during the firs
 * TLS handshakes will be slow when requesting a hostname certificate for the first time, this can leads to DDoS attacks.
 * Let's Encrypt have rate limiting: https://letsencrypt.org/docs/rate-limits
 
-That's why, it's better to use the `onHostRule` option if possible.
+That's why, it's better to use the `onHostRule` optin if possible.
 
 ### DNS challenge
 
@@ -173,7 +173,7 @@ entryPoint = "https"
 DNS challenge needs environment variables to be executed.
 This variables have to be set on the machine/container which host Traefik.
 
-These variables are described [in this section](/configuration/acme/#dnsprovider).
+These variables has described [in this section](/configuration/acme/#dnsprovider).
 
 ### OnHostRule option and provided certificates
 
@@ -201,7 +201,7 @@ Traefik will only try to generate a Let's encrypt certificate if the domain cann
 
 #### Prerequisites
 
-Before you use Let's Encrypt in a Traefik cluster, take a look to [the key-value store explanations](/user-guide/kv-config) and more precisely at [this section](/user-guide/kv-config/#store-configuration-in-key-value-store), which will describe how to migrate from a acme local storage *(acme.json file)* to a key-value store configuration.
+Before to use Let's Encrypt in a Traefik cluster, take a look to [the key-value store explanations](/user-guide/kv-config) and more precisely to [this section](/user-guide/kv-config/#store-configuration-in-key-value-store) in the way to know how to migrate from a acme local storage *(acme.json file)* to a key-value store configuration.
 
 #### Configuration
 
@@ -82,7 +82,7 @@ It is possible to use Træfik with a [Deployment](https://kubernetes.io/docs/con
 
 The Deployment objects looks like this:
 
-```yaml
+```yml
 ---
 apiVersion: v1
 kind: ServiceAccount
@@ -330,72 +330,6 @@ echo "$(minikube ip) traefik-ui.minikube" | sudo tee -a /etc/hosts
 
 We should now be able to visit [traefik-ui.minikube](http://traefik-ui.minikube) in the browser and view the Træfik Web UI.
 
-## Basic Authentication
-
-It's possible to add additional authentication annotations in the Ingress rule.
-The source of the authentication is a secret that contains usernames and passwords inside the key auth.
-To read about basic auth limitations see the [Kubernetes Ingress](/configuration/backends/kubernetes) configuration page.
-
-#### Creating the Secret
-
-A. Use `htpasswd` to create a file containing the username and the base64-encoded password:
-
-```shell
-htpasswd -c ./auth myusername
-```
-
-You will be prompted for a password which you will have to enter twice.
-`htpasswd` will create a file with the following:
-
-```shell
-cat auth
-```
-```
-myusername:$apr1$78Jyn/1K$ERHKVRPPlzAX8eBtLuvRZ0
-```
-
-B. Now use `kubectl` to create a secret in the monitoring namespace using the file created by `htpasswd`.
-
-```shell
-kubectl create secret generic mysecret --from-file auth --namespace=monitoring
-```
-
-!!! note
-    Secret must be in same namespace as the ingress rule.
-
-C. Create the ingress using the following annotations to specify basic auth and that the username and password is stored in `mysecret`.
-
-- `ingress.kubernetes.io/auth-type: "basic"`
-- `ingress.kubernetes.io/auth-secret: "mysecret"`
-
-Following is a full ingress example based on Prometheus:
-
-```yaml
-apiVersion: extensions/v1beta1
-kind: Ingress
-metadata:
-  name: prometheus-dashboard
-  namespace: monitoring
-  annotations:
-    kubernetes.io/ingress.class: traefik
-    ingress.kubernetes.io/auth-type: "basic"
-    ingress.kubernetes.io/auth-secret: "mysecret"
-spec:
-  rules:
-  - host: dashboard.prometheus.example.com
-    http:
-      paths:
-      - backend:
-          serviceName: prometheus
-          servicePort: 9090
-```
-
-You can apply the example ingress as following:
-
-```shell
-kubectl create -f prometheus-ingress.yaml -n monitoring
-```
-
 ## Name based routing
 
 In this example we are going to setup websites for 3 of the United Kingdoms best loved cheeses, Cheddar, Stilton and Wensleydale.
@@ -148,37 +148,6 @@ This variable must be initialized with the ACL token value.
 
 If Traefik is launched into a Docker container, the variable `CONSUL_HTTP_TOKEN` can be initialized with the `-e` Docker option : `-e "CONSUL_HTTP_TOKEN=[consul-acl-token-value]"`
 
-If a Consul ACL is used to restrict Træfik read/write access, one of the following configurations is needed.
-
-- HCL format :
-
-```
-    key "traefik" {
-        policy = "write"
-    },
-
-    session "" {
-        policy = "write"
-    }
-```
-
-- JSON format :
-
-```json
-{
-    "key": {
-        "traefik": {
-            "policy": "write"
-        }
-    },
-    "session": {
-        "": {
-            "policy": "write"
-        }
-    }
-}
-```
-
 ### TLS support
 
 To connect to a Consul endpoint using SSL, simply specify `https://` in the `consul.endpoint` property
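Aside: for context on the ACL policy removed above, here is a small sketch of a client presenting such a token when writing under the traefik prefix. It uses the public github.com/hashicorp/consul/api package as I understand it; treat the exact field names and the token value as assumptions, not something this diff establishes.

```go
// Illustrative sketch only, not part of this commit.
package main

import (
	"fmt"

	consulapi "github.com/hashicorp/consul/api"
)

func main() {
	cfg := consulapi.DefaultConfig()
	cfg.Address = "127.0.0.1:8500"
	cfg.Token = "consul-acl-token-value" // hypothetical token granted write on the "traefik" key prefix

	client, err := consulapi.NewClient(cfg)
	if err != nil {
		panic(err)
	}

	// Write under the traefik/ prefix, which the ACL policy shown above allows.
	_, err = client.KV().Put(&consulapi.KVPair{Key: "traefik/alias", Value: []byte("traefik")}, nil)
	fmt.Println(err)
}
```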
glide.lock (generated): 8 changes
@@ -1,5 +1,5 @@
-hash: 5aef5628a880e04fac9cd9db2a33f1f4716680b0c338f3aa803d9786f253405a
-updated: 2017-11-14T16:23:20.438135-04:00
+hash: 7fd36649e80749e16bbfa69777e0f90a017fbc2f67d7efd46373716a16b1a60a
+updated: 2017-11-02T11:39:20.438135-04:00
 imports:
 - name: cloud.google.com/go
   version: 2e6a95edb1071d750f6d7db777bf66cd2997af6c
@@ -398,9 +398,7 @@ imports:
   repo: https://github.com/ijc25/Gotty.git
   vcs: git
 - name: github.com/NYTimes/gziphandler
-  version: 26a3f68265200656f31940bc15b191f7d10b5bbd
-  repo: https://github.com/containous/gziphandler.git
-  vcs: git
+  version: 97ae7fbaf81620fe97840685304a78a306a39c64
 - name: github.com/ogier/pflag
   version: 45c278ab3607870051a2ea9040bb85fcb8557481
 - name: github.com/opencontainers/go-digest
@@ -80,9 +80,6 @@ import:
   vcs: git
 - package: github.com/abbot/go-http-auth
 - package: github.com/NYTimes/gziphandler
-  version: fork-containous
-  repo: https://github.com/containous/gziphandler.git
-  vcs: git
 - package: github.com/docker/leadership
 - package: github.com/satori/go.uuid
   version: ^1.1.0
@@ -36,7 +36,7 @@ func (s *ConsulCatalogSuite) SetUpSuite(c *check.C) {
 }
 
 func (s *ConsulCatalogSuite) waitToElectConsulLeader() error {
-    return try.Do(15*time.Second, func() error {
+    return try.Do(3*time.Second, func() error {
         leader, err := s.consulClient.Status().Leader()
 
         if err != nil || len(leader) == 0 {
@@ -344,82 +344,8 @@ func (s *ConsulCatalogSuite) TestBasicAuthSimpleService(c *check.C) {
     c.Assert(err, checker.IsNil)
 }
 
-func (s *ConsulCatalogSuite) TestRefreshConfigTagChange(c *check.C) {
-    cmd, display := s.traefikCmd(
-        withConfigFile("fixtures/consul_catalog/simple.toml"),
-        "--consulCatalog",
-        "--consulCatalog.exposedByDefault=false",
-        "--consulCatalog.watch=true",
-        "--consulCatalog.endpoint="+s.consulIP+":8500",
-        "--consulCatalog.domain=consul.localhost")
-    defer display(c)
-    err := cmd.Start()
-    c.Assert(err, checker.IsNil)
-    defer cmd.Process.Kill()
-
-    nginx := s.composeProject.Container(c, "nginx1")
-
-    err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1", "traefik.enable=false", "traefik.backend.circuitbreaker=NetworkErrorRatio() > 0.5"})
-    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
-    defer s.deregisterService("test", nginx.NetworkSettings.IPAddress)
-
-    err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 5*time.Second, try.BodyContains("nginx1"))
-    c.Assert(err, checker.NotNil)
-
-    err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1", "traefik.enable=true", "traefik.backend.circuitbreaker=ResponseCodeRatio(500, 600, 0, 600) > 0.5"})
-    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
-
-    req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
-    c.Assert(err, checker.IsNil)
-    req.Host = "test.consul.localhost"
-
-    err = try.Request(req, 20*time.Second, try.StatusCodeIs(http.StatusOK), try.HasBody())
-    c.Assert(err, checker.IsNil)
-
-    err = try.GetRequest("http://127.0.0.1:8080/api/providers/consul_catalog/backends", 60*time.Second, try.BodyContains("nginx1"))
-    c.Assert(err, checker.IsNil)
-}
-
-func (s *ConsulCatalogSuite) TestCircuitBreaker(c *check.C) {
-    cmd, display := s.traefikCmd(
-        withConfigFile("fixtures/consul_catalog/simple.toml"),
-        "--retry",
-        "--retry.attempts=1",
-        "--forwardingTimeouts.dialTimeout=5s",
-        "--forwardingTimeouts.responseHeaderTimeout=10s",
-        "--consulCatalog",
-        "--consulCatalog.exposedByDefault=false",
-        "--consulCatalog.watch=true",
-        "--consulCatalog.endpoint="+s.consulIP+":8500",
-        "--consulCatalog.domain=consul.localhost")
-    defer display(c)
-    err := cmd.Start()
-    c.Assert(err, checker.IsNil)
-    defer cmd.Process.Kill()
-
-    nginx := s.composeProject.Container(c, "nginx1")
-    nginx2 := s.composeProject.Container(c, "nginx2")
-    nginx3 := s.composeProject.Container(c, "nginx3")
-
-    err = s.registerService("test", nginx.NetworkSettings.IPAddress, 80, []string{"name=nginx1", "traefik.enable=true", "traefik.backend.circuitbreaker=NetworkErrorRatio() > 0.5"})
-    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
-    defer s.deregisterService("test", nginx.NetworkSettings.IPAddress)
-    err = s.registerService("test", nginx2.NetworkSettings.IPAddress, 42, []string{"name=nginx2", "traefik.enable=true", "traefik.backend.circuitbreaker=NetworkErrorRatio() > 0.5"})
-    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
-    defer s.deregisterService("test", nginx2.NetworkSettings.IPAddress)
-    err = s.registerService("test", nginx3.NetworkSettings.IPAddress, 42, []string{"name=nginx3", "traefik.enable=true", "traefik.backend.circuitbreaker=NetworkErrorRatio() > 0.5"})
-    c.Assert(err, checker.IsNil, check.Commentf("Error registering service"))
-    defer s.deregisterService("test", nginx3.NetworkSettings.IPAddress)
-
-    req, err := http.NewRequest(http.MethodGet, "http://127.0.0.1:8000/", nil)
-    c.Assert(err, checker.IsNil)
-    req.Host = "test.consul.localhost"
-
-    err = try.Request(req, 20*time.Second, try.StatusCodeIs(http.StatusServiceUnavailable), try.HasBody())
-    c.Assert(err, checker.IsNil)
-}
-
 func (s *ConsulCatalogSuite) TestRetryWithConsulServer(c *check.C) {
     //Scale consul to 0 to be able to start traefik before and test retry
     s.composeProject.Scale(c, "consul", 0)
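Aside: the integration tests removed above poll Traefik's API with the repository's internal try helpers until a backend shows up. A rough, self-contained sketch of that polling idea in plain Go follows; every name and URL in it is illustrative, not Traefik code.

```go
// Minimal polling sketch, assuming a plain HTTP endpoint to watch.
package main

import (
	"errors"
	"fmt"
	"io"
	"net/http"
	"strings"
	"time"
)

// waitForBody polls url until the response body contains want or the timeout expires.
func waitForBody(url, want string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := http.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if strings.Contains(string(body), want) {
				return nil
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return errors.New("timed out waiting for " + want)
}

func main() {
	// Hypothetical use mirroring the API URL polled in the removed tests.
	err := waitForBody("http://127.0.0.1:8080/api/providers/consul_catalog/backends", "nginx1", 5*time.Second)
	fmt.Println(err)
}
```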
@@ -1,6 +1,6 @@
 consul:
-  image: consul
-  command: agent -server -bootstrap-expect 1 -client 0.0.0.0 -log-level debug -ui
+  image: progrium/consul
+  command: -server -bootstrap -log-level debug -ui-dir /ui
   ports:
     - "8400:8400"
     - "8500:8500"
@@ -15,5 +15,3 @@ nginx1:
   image: nginx:alpine
 nginx2:
   image: nginx:alpine
-nginx3:
-  image: nginx:alpine
@@ -3,7 +3,6 @@ package middlewares
 import (
     "compress/gzip"
     "net/http"
-    "strings"
 
     "github.com/NYTimes/gziphandler"
     "github.com/containous/traefik/log"
@@ -14,13 +13,8 @@ type Compress struct{}
 
 // ServerHTTP is a function used by Negroni
 func (c *Compress) ServeHTTP(rw http.ResponseWriter, r *http.Request, next http.HandlerFunc) {
-    contentType := r.Header.Get("Content-Type")
-    if strings.HasPrefix(contentType, "application/grpc") {
-        next.ServeHTTP(rw, r)
-    } else {
-        gzipHandler(next).ServeHTTP(rw, r)
-    }
+    gzipHandler(next).ServeHTTP(rw, r)
 }
 
 func gzipHandler(h http.Handler) http.Handler {
     wrapper, err := gziphandler.GzipHandlerWithOpts(
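Aside: the block removed above skipped gzip for gRPC by checking the request Content-Type prefix. A minimal standalone sketch of that idea using only net/http follows; the handlers and port are placeholders, not the Traefik middleware.

```go
// Sketch of bypassing compression for gRPC traffic, assuming a generic gzip-wrapped handler.
package main

import (
	"net/http"
	"strings"
)

// skipCompressionForGRPC sends gRPC requests to the plain handler and everything
// else to the (hypothetical) gzip-wrapped handler, mirroring the removed check.
func skipCompressionForGRPC(gzipped, plain http.Handler) http.Handler {
	return http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
		if strings.HasPrefix(r.Header.Get("Content-Type"), "application/grpc") {
			plain.ServeHTTP(rw, r)
			return
		}
		gzipped.ServeHTTP(rw, r)
	})
}

func main() {
	plain := http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
		rw.Write([]byte("ok"))
	})
	// The real middleware wraps plain with a gzip handler; reusing plain keeps the sketch self-contained.
	http.ListenAndServe(":8081", skipCompressionForGRPC(plain, plain))
}
```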
@@ -16,7 +16,6 @@ import (
 const (
     acceptEncodingHeader  = "Accept-Encoding"
     contentEncodingHeader = "Content-Encoding"
-    contentTypeHeader     = "Content-Type"
     varyHeader            = "Vary"
     gzipValue             = "gzip"
 )
@@ -82,26 +81,6 @@ func TestShouldNotCompressWhenNoAcceptEncodingHeader(t *testing.T) {
     assert.EqualValues(t, rw.Body.Bytes(), fakeBody)
 }
 
-func TestShouldNotCompressWhenGRPC(t *testing.T) {
-    handler := &Compress{}
-
-    req := testhelpers.MustNewRequest(http.MethodGet, "http://localhost", nil)
-    req.Header.Add(acceptEncodingHeader, gzipValue)
-    req.Header.Add(contentTypeHeader, "application/grpc")
-
-    baseBody := generateBytes(gziphandler.DefaultMinSize)
-    next := func(rw http.ResponseWriter, r *http.Request) {
-        rw.Write(baseBody)
-    }
-
-    rw := httptest.NewRecorder()
-    handler.ServeHTTP(rw, req, next)
-
-    assert.Empty(t, rw.Header().Get(acceptEncodingHeader))
-    assert.Empty(t, rw.Header().Get(contentEncodingHeader))
-    assert.EqualValues(t, rw.Body.Bytes(), baseBody)
-}
-
 func TestIntegrationShouldNotCompress(t *testing.T) {
     fakeCompressedBody := generateBytes(100000)
     comp := &Compress{}
@@ -158,31 +137,6 @@ func TestIntegrationShouldNotCompress(t *testing.T) {
     }
 }
 
-func TestShouldWriteHeaderWhenFlush(t *testing.T) {
-    comp := &Compress{}
-    negro := negroni.New(comp)
-    negro.UseHandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
-        rw.Header().Add(contentEncodingHeader, gzipValue)
-        rw.Header().Add(varyHeader, acceptEncodingHeader)
-        rw.WriteHeader(http.StatusUnauthorized)
-        rw.(http.Flusher).Flush()
-        rw.Write([]byte("short"))
-    })
-    ts := httptest.NewServer(negro)
-    defer ts.Close()
-
-    req := testhelpers.MustNewRequest(http.MethodGet, ts.URL, nil)
-    req.Header.Add(acceptEncodingHeader, gzipValue)
-
-    resp, err := http.DefaultClient.Do(req)
-    require.NoError(t, err)
-
-    assert.Equal(t, http.StatusUnauthorized, resp.StatusCode)
-
-    assert.Equal(t, gzipValue, resp.Header.Get(contentEncodingHeader))
-    assert.Equal(t, acceptEncodingHeader, resp.Header.Get(varyHeader))
-}
-
 func TestIntegrationShouldCompress(t *testing.T) {
     fakeBody := generateBytes(100000)
 
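Aside: the removed TestShouldWriteHeaderWhenFlush checks that a status code written before Flush still reaches the client. A minimal sketch of that behaviour using only net/http/httptest follows; it contains no Traefik or negroni code.

```go
// Sketch: an explicit status set before Flush should survive to the client.
package main

import (
	"fmt"
	"net/http"
	"net/http/httptest"
)

func main() {
	// Handler that sets an explicit status, flushes, then writes a short body.
	ts := httptest.NewServer(http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
		rw.WriteHeader(http.StatusUnauthorized)
		if f, ok := rw.(http.Flusher); ok {
			f.Flush()
		}
		rw.Write([]byte("short"))
	}))
	defer ts.Close()

	resp, err := http.Get(ts.URL)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()
	fmt.Println(resp.StatusCode) // expected to print 401, not 200
}
```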
@@ -77,14 +77,9 @@ func (a nodeSorter) Less(i int, j int) bool {
     return lentr.Service.Port < rentr.Service.Port
 }
 
-func hasChanged(current map[string]Service, previous map[string]Service) bool {
-    addedServiceKeys, removedServiceKeys := getChangedServiceKeys(current, previous)
-    return len(removedServiceKeys) > 0 || len(addedServiceKeys) > 0 || hasNodeOrTagsChanged(current, previous)
-}
-
-func getChangedServiceKeys(current map[string]Service, previous map[string]Service) ([]string, []string) {
-    currKeySet := fun.Set(fun.Keys(current).([]string)).(map[string]bool)
-    prevKeySet := fun.Set(fun.Keys(previous).([]string)).(map[string]bool)
+func getChangedServiceKeys(currState map[string]Service, prevState map[string]Service) ([]string, []string) {
+    currKeySet := fun.Set(fun.Keys(currState).([]string)).(map[string]bool)
+    prevKeySet := fun.Set(fun.Keys(prevState).([]string)).(map[string]bool)
 
     addedKeys := fun.Difference(currKeySet, prevKeySet).(map[string]bool)
     removedKeys := fun.Difference(prevKeySet, currKeySet).(map[string]bool)
@@ -92,23 +87,20 @@ func getChangedServiceKeys(current map[string]Service, previous map[string]Servi
     return fun.Keys(addedKeys).([]string), fun.Keys(removedKeys).([]string)
 }
 
-func hasNodeOrTagsChanged(current map[string]Service, previous map[string]Service) bool {
-    var added []string
-    var removed []string
-    for key, value := range current {
-        if prevValue, ok := previous[key]; ok {
-            addedNodesKeys, removedNodesKeys := getChangedStringKeys(value.Nodes, prevValue.Nodes)
-            added = append(added, addedNodesKeys...)
-            removed = append(removed, removedNodesKeys...)
-            addedTagsKeys, removedTagsKeys := getChangedStringKeys(value.Tags, prevValue.Tags)
-            added = append(added, addedTagsKeys...)
-            removed = append(removed, removedTagsKeys...)
+func getChangedServiceNodeKeys(currState map[string]Service, prevState map[string]Service) ([]string, []string) {
+    var addedNodeKeys []string
+    var removedNodeKeys []string
+    for key, value := range currState {
+        if prevValue, ok := prevState[key]; ok {
+            addedKeys, removedKeys := getChangedHealthyKeys(value.Nodes, prevValue.Nodes)
+            addedNodeKeys = append(addedKeys)
+            removedNodeKeys = append(removedKeys)
         }
     }
-    return len(added) > 0 || len(removed) > 0
+    return addedNodeKeys, removedNodeKeys
 }
 
-func getChangedStringKeys(currState []string, prevState []string) ([]string, []string) {
+func getChangedHealthyKeys(currState []string, prevState []string) ([]string, []string) {
     currKeySet := fun.Set(currState).(map[string]bool)
     prevKeySet := fun.Set(prevState).(map[string]bool)
 
@@ -171,7 +163,7 @@ func (p *CatalogProvider) watchHealthState(stopCh <-chan struct{}, watchCh chan<
         // A critical note is that the return of a blocking request is no guarantee of a change.
         // It is possible that there was an idempotent write that does not affect the result of the query.
         // Thus it is required to do extra check for changes...
-        addedKeys, removedKeys := getChangedStringKeys(current, flashback)
+        addedKeys, removedKeys := getChangedHealthyKeys(current, flashback)
 
         if len(addedKeys) > 0 {
             log.WithField("DiscoveredServices", addedKeys).Debug("Health State change detected.")
@@ -250,7 +242,12 @@ func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh c
         // A critical note is that the return of a blocking request is no guarantee of a change.
         // It is possible that there was an idempotent write that does not affect the result of the query.
         // Thus it is required to do extra check for changes...
-        if hasChanged(current, flashback) {
+        addedServiceKeys, removedServiceKeys := getChangedServiceKeys(current, flashback)
+
+        addedServiceNodeKeys, removedServiceNodeKeys := getChangedServiceNodeKeys(current, flashback)
+
+        if len(removedServiceKeys) > 0 || len(removedServiceNodeKeys) > 0 || len(addedServiceKeys) > 0 || len(addedServiceNodeKeys) > 0 {
+            log.WithField("MissingServices", removedServiceKeys).WithField("DiscoveredServices", addedServiceKeys).Debug("Catalog Services change detected.")
             watchCh <- data
             flashback = current
         }
@@ -258,7 +255,6 @@ func (p *CatalogProvider) watchCatalogServices(stopCh <-chan struct{}, watchCh c
         }
     })
 }
-
 
 func getServiceIds(services []*api.CatalogService) []string {
     var serviceIds []string
     for _, service := range services {
@@ -275,6 +271,7 @@ func (p *CatalogProvider) healthyNodes(service string) (catalogUpdate, error) {
         log.WithError(err).Errorf("Failed to fetch details of %s", service)
         return catalogUpdate{}, err
     }
+
     nodes := fun.Filter(func(node *api.ServiceEntry) bool {
         return p.nodeFilter(service, node)
     }, data).([]*api.ServiceEntry)
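Aside: both versions of the change detection above reduce to set differences over string keys (the code in the diff does this through the fun package). A plain-map sketch of the same idea follows; the function and variable names are illustrative only.

```go
// Set-difference sketch, assuming keys are modelled as map[string]struct{}.
package main

import "fmt"

// diffKeys returns the keys present in curr but not prev, and the keys present in prev but not curr.
func diffKeys(curr, prev map[string]struct{}) (added, removed []string) {
	for k := range curr {
		if _, ok := prev[k]; !ok {
			added = append(added, k)
		}
	}
	for k := range prev {
		if _, ok := curr[k]; !ok {
			removed = append(removed, k)
		}
	}
	return added, removed
}

func main() {
	curr := map[string]struct{}{"foo-service": {}, "bar-service": {}}
	prev := map[string]struct{}{"foo-service": {}, "baz-service": {}}
	added, removed := diffKeys(curr, prev)
	fmt.Println(added, removed) // [bar-service] [baz-service]
}
```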
@@ -1,6 +1,7 @@
 package consul
 
 import (
+    "reflect"
     "sort"
     "testing"
     "text/template"
@@ -20,13 +21,11 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
     }
     provider.setupFrontEndTemplate()
 
-    testCases := []struct {
-        desc     string
+    services := []struct {
         service  serviceUpdate
         expected string
     }{
         {
-            desc: "Should return default host foo.localhost",
             service: serviceUpdate{
                 ServiceName: "foo",
                 Attributes:  []string{},
@@ -34,7 +33,6 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
             expected: "Host:foo.localhost",
         },
         {
-            desc: "Should return host *.example.com",
             service: serviceUpdate{
                 ServiceName: "foo",
                 Attributes: []string{
@@ -44,7 +42,6 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
             expected: "Host:*.example.com",
         },
         {
-            desc: "Should return host foo.example.com",
             service: serviceUpdate{
                 ServiceName: "foo",
                 Attributes: []string{
@@ -54,7 +51,6 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
             expected: "Host:foo.example.com",
         },
         {
-            desc: "Should return path prefix /bar",
             service: serviceUpdate{
                 ServiceName: "foo",
                 Attributes: []string{
@@ -66,14 +62,11 @@ func TestConsulCatalogGetFrontendRule(t *testing.T) {
         },
     }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            actual := provider.getFrontendRule(test.service)
-            assert.Equal(t, test.expected, actual)
-        })
+    for _, e := range services {
+        actual := provider.getFrontendRule(e.service)
+        if actual != e.expected {
+            t.Fatalf("expected %s, got %s", e.expected, actual)
+        }
     }
 }
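Aside: the hunks in this file repeatedly swap the t.Run/t.Parallel table-test style back to plain loops with t.Fatalf. For reference, here is a generic sketch of the subtest pattern being removed; the function under test is a placeholder, not Traefik code, and the snippet would live in a *_test.go file.

```go
// Generic table-driven subtest sketch with a hypothetical function under test.
package consulcatalog_example

import "testing"

// double stands in for the provider method that a real test would exercise.
func double(n int) int { return n * 2 }

func TestDouble(t *testing.T) {
	testCases := []struct {
		desc string
		in   int
		want int
	}{
		{desc: "zero", in: 0, want: 0},
		{desc: "two", in: 2, want: 4},
	}

	for _, test := range testCases {
		test := test // capture the range variable for the parallel subtest
		t.Run(test.desc, func(t *testing.T) {
			t.Parallel()
			if got := double(test.in); got != test.want {
				t.Fatalf("expected %d, got %d", test.want, got)
			}
		})
	}
}
```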
@@ -83,15 +76,13 @@ func TestConsulCatalogGetTag(t *testing.T) {
         Prefix: "traefik",
     }
 
-    testCases := []struct {
-        desc         string
+    services := []struct {
         tags         []string
         key          string
         defaultValue string
         expected     string
     }{
         {
-            desc: "Should return value of foo.bar key",
             tags: []string{
                 "foo.bar=random",
                 "traefik.backend.weight=42",
@@ -103,17 +94,21 @@ func TestConsulCatalogGetTag(t *testing.T) {
         },
     }
 
-    assert.Equal(t, true, provider.hasTag("management", []string{"management"}))
-    assert.Equal(t, true, provider.hasTag("management", []string{"management=yes"}))
+    actual := provider.hasTag("management", []string{"management"})
+    if !actual {
+        t.Fatalf("expected %v, got %v", true, actual)
+    }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            actual := provider.getTag(test.key, test.tags, test.defaultValue)
-            assert.Equal(t, test.expected, actual)
-        })
+    actual = provider.hasTag("management", []string{"management=yes"})
+    if !actual {
+        t.Fatalf("expected %v, got %v", true, actual)
+    }
+
+    for _, e := range services {
+        actual := provider.getTag(e.key, e.tags, e.defaultValue)
+        if actual != e.expected {
+            t.Fatalf("expected %s, got %s", e.expected, actual)
+        }
     }
 }
@@ -123,15 +118,13 @@ func TestConsulCatalogGetAttribute(t *testing.T) {
         Prefix: "traefik",
     }
 
-    testCases := []struct {
-        desc         string
+    services := []struct {
         tags         []string
         key          string
         defaultValue string
         expected     string
     }{
         {
-            desc: "Should return tag value 42",
             tags: []string{
                 "foo.bar=ramdom",
                 "traefik.backend.weight=42",
@@ -141,7 +134,6 @@ func TestConsulCatalogGetAttribute(t *testing.T) {
             expected: "42",
         },
         {
-            desc: "Should return tag default value 0",
             tags: []string{
                 "foo.bar=ramdom",
                 "traefik.backend.wei=42",
@@ -152,16 +144,17 @@ func TestConsulCatalogGetAttribute(t *testing.T) {
         },
     }
 
-    assert.Equal(t, provider.Prefix+".foo", provider.getPrefixedName("foo"))
+    expected := provider.Prefix + ".foo"
+    actual := provider.getPrefixedName("foo")
+    if actual != expected {
+        t.Fatalf("expected %s, got %s", expected, actual)
+    }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            actual := provider.getAttribute(test.key, test.tags, test.defaultValue)
-            assert.Equal(t, test.expected, actual)
-        })
+    for _, e := range services {
+        actual := provider.getAttribute(e.key, e.tags, e.defaultValue)
+        if actual != e.expected {
+            t.Fatalf("expected %s, got %s", e.expected, actual)
+        }
     }
 }
@@ -171,15 +164,13 @@ func TestConsulCatalogGetAttributeWithEmptyPrefix(t *testing.T) {
         Prefix: "",
     }
 
-    testCases := []struct {
-        desc         string
+    services := []struct {
         tags         []string
         key          string
         defaultValue string
         expected     string
     }{
         {
-            desc: "Should return tag value 42",
             tags: []string{
                 "foo.bar=ramdom",
                 "backend.weight=42",
@@ -189,7 +180,6 @@ func TestConsulCatalogGetAttributeWithEmptyPrefix(t *testing.T) {
             expected: "42",
         },
         {
-            desc: "Should return default value 0",
             tags: []string{
                 "foo.bar=ramdom",
                 "backend.wei=42",
@@ -199,7 +189,6 func TestConsulCatalogGetAttributeWithEmptyPrefix(t *testing.T) {
             expected: "0",
         },
         {
-            desc: "Should return for.bar key value random",
             tags: []string{
                 "foo.bar=ramdom",
                 "backend.wei=42",
@@ -210,16 +199,17 @@ func TestConsulCatalogGetAttributeWithEmptyPrefix(t *testing.T) {
         },
     }
 
-    assert.Equal(t, "foo", provider.getPrefixedName("foo"))
+    expected := "foo"
+    actual := provider.getPrefixedName("foo")
+    if actual != expected {
+        t.Fatalf("expected %s, got %s", expected, actual)
+    }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            actual := provider.getAttribute(test.key, test.tags, test.defaultValue)
-            assert.Equal(t, test.expected, actual)
-        })
+    for _, e := range services {
+        actual := provider.getAttribute(e.key, e.tags, e.defaultValue)
+        if actual != e.expected {
+            t.Fatalf("expected %s, got %s", e.expected, actual)
+        }
     }
 }
@@ -229,13 +219,11 @@ func TestConsulCatalogGetBackendAddress(t *testing.T) {
         Prefix: "traefik",
     }
 
-    testCases := []struct {
-        desc     string
+    services := []struct {
         node     *api.ServiceEntry
         expected string
     }{
         {
-            desc: "Should return the address of the service",
             node: &api.ServiceEntry{
                 Node: &api.Node{
                     Address: "10.1.0.1",
@@ -247,7 +235,6 @@ func TestConsulCatalogGetBackendAddress(t *testing.T) {
             expected: "10.2.0.1",
         },
         {
-            desc: "Should return the address of the node",
             node: &api.ServiceEntry{
                 Node: &api.Node{
                     Address: "10.1.0.1",
@@ -260,14 +247,11 @@ func TestConsulCatalogGetBackendAddress(t *testing.T) {
         },
     }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            actual := provider.getBackendAddress(test.node)
-            assert.Equal(t, test.expected, actual)
-        })
+    for _, e := range services {
+        actual := provider.getBackendAddress(e.node)
+        if actual != e.expected {
+            t.Fatalf("expected %s, got %s", e.expected, actual)
+        }
     }
 }
@@ -277,13 +261,11 @@ func TestConsulCatalogGetBackendName(t *testing.T) {
         Prefix: "traefik",
     }
 
-    testCases := []struct {
-        desc     string
+    services := []struct {
         node     *api.ServiceEntry
         expected string
     }{
         {
-            desc: "Should create backend name without tags",
             node: &api.ServiceEntry{
                 Service: &api.AgentService{
                     Service: "api",
@@ -295,7 +277,6 @@ func TestConsulCatalogGetBackendName(t *testing.T) {
             expected: "api--10-0-0-1--80--0",
         },
         {
-            desc: "Should create backend name with multiple tags",
             node: &api.ServiceEntry{
                 Service: &api.AgentService{
                     Service: "api",
@@ -307,7 +288,6 @@ func TestConsulCatalogGetBackendName(t *testing.T) {
             expected: "api--10-0-0-1--80--traefik-weight-42--traefik-enable-true--1",
         },
         {
-            desc: "Should create backend name with one tag",
             node: &api.ServiceEntry{
                 Service: &api.AgentService{
                     Service: "api",
@@ -320,15 +300,11 @@ func TestConsulCatalogGetBackendName(t *testing.T) {
         },
     }
 
-    for i, test := range testCases {
-        test := test
-        i := i
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            actual := provider.getBackendName(test.node, i)
-            assert.Equal(t, test.expected, actual)
-        })
+    for i, e := range services {
+        actual := provider.getBackendName(e.node, i)
+        if actual != e.expected {
+            t.Fatalf("expected %s, got %s", e.expected, actual)
+        }
     }
 }
@@ -341,20 +317,17 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
         frontEndRuleTemplate: template.New("consul catalog frontend rule"),
     }
 
-    testCases := []struct {
-        desc              string
+    cases := []struct {
         nodes             []catalogUpdate
         expectedFrontends map[string]*types.Frontend
         expectedBackends  map[string]*types.Backend
     }{
         {
-            desc:              "Should build config of nothing",
             nodes:             []catalogUpdate{},
             expectedFrontends: map[string]*types.Frontend{},
             expectedBackends:  map[string]*types.Backend{},
         },
         {
-            desc: "Should build config with no frontend and backend",
             nodes: []catalogUpdate{
                 {
                     Service: &serviceUpdate{
@@ -366,7 +339,6 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
             expectedBackends: map[string]*types.Backend{},
         },
         {
-            desc: "Should build config who contains one frontend and one backend",
             nodes: []catalogUpdate{
                 {
                     Service: &serviceUpdate{
@@ -436,31 +408,28 @@ func TestConsulCatalogBuildConfig(t *testing.T) {
         },
     }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            actualConfig := provider.buildConfig(test.nodes)
-            assert.Equal(t, test.expectedBackends, actualConfig.Backends)
-            assert.Equal(t, test.expectedFrontends, actualConfig.Frontends)
-        })
+    for _, c := range cases {
+        actualConfig := provider.buildConfig(c.nodes)
+        if !reflect.DeepEqual(actualConfig.Backends, c.expectedBackends) {
+            t.Fatalf("expected %#v, got %#v", c.expectedBackends, actualConfig.Backends)
+        }
+        if !reflect.DeepEqual(actualConfig.Frontends, c.expectedFrontends) {
+            t.Fatalf("expected %#v, got %#v", c.expectedFrontends["frontend-test"].BasicAuth, actualConfig.Frontends["frontend-test"].BasicAuth)
+            t.Fatalf("expected %#v, got %#v", c.expectedFrontends, actualConfig.Frontends)
+        }
     }
 }
 
 func TestConsulCatalogNodeSorter(t *testing.T) {
-    testCases := []struct {
-        desc     string
+    cases := []struct {
         nodes    []*api.ServiceEntry
         expected []*api.ServiceEntry
     }{
         {
-            desc:     "Should sort nothing",
             nodes:    []*api.ServiceEntry{},
             expected: []*api.ServiceEntry{},
         },
         {
-            desc: "Should sort by node address",
             nodes: []*api.ServiceEntry{
                 {
                     Service: &api.AgentService{
@@ -489,7 +458,6 @@ func TestConsulCatalogNodeSorter(t *testing.T) {
             },
         },
         {
-            desc: "Should sort by service name",
             nodes: []*api.ServiceEntry{
                 {
                     Service: &api.AgentService{
@@ -584,7 +552,6 @@ func TestConsulCatalogNodeSorter(t *testing.T) {
             },
         },
         {
-            desc: "Should sort by node address",
             nodes: []*api.ServiceEntry{
                 {
                     Service: &api.AgentService{
@@ -636,15 +603,12 @@ func TestConsulCatalogNodeSorter(t *testing.T) {
         },
     }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            sort.Sort(nodeSorter(test.nodes))
-            actual := test.nodes
-            assert.Equal(t, test.expected, actual)
-        })
+    for _, c := range cases {
+        sort.Sort(nodeSorter(c.nodes))
+        actual := c.nodes
+        if !reflect.DeepEqual(actual, c.expected) {
+            t.Fatalf("expected %q, got %q", c.expected, actual)
+        }
     }
 }
@@ -659,13 +623,11 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
         removedKeys []string
     }
 
-    testCases := []struct {
-        desc   string
+    cases := []struct {
         input  Input
         output Output
     }{
         {
-            desc: "Should add 0 services and removed 0",
             input: Input{
                 currState: map[string]Service{
                     "foo-service": {Name: "v1"},
@@ -706,7 +668,6 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
             },
         },
         {
-            desc: "Should add 3 services and removed 0",
             input: Input{
                 currState: map[string]Service{
                     "foo-service": {Name: "v1"},
@@ -744,7 +705,6 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
             },
         },
         {
-            desc: "Should add 2 services and removed 2",
             input: Input{
                 currState: map[string]Service{
                     "foo-service": {Name: "v1"},
@@ -782,20 +742,21 @@ func TestConsulCatalogGetChangedKeys(t *testing.T) {
         },
     }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
-            t.Parallel()
-
-            addedKeys, removedKeys := getChangedServiceKeys(test.input.currState, test.input.prevState)
-            assert.Equal(t, fun.Set(test.output.addedKeys), fun.Set(addedKeys), "Added keys comparison results: got %q, want %q", addedKeys, test.output.addedKeys)
-            assert.Equal(t, fun.Set(test.output.removedKeys), fun.Set(removedKeys), "Removed keys comparison results: got %q, want %q", removedKeys, test.output.removedKeys)
-        })
+    for _, c := range cases {
+        addedKeys, removedKeys := getChangedServiceKeys(c.input.currState, c.input.prevState)
+
+        if !reflect.DeepEqual(fun.Set(addedKeys), fun.Set(c.output.addedKeys)) {
+            t.Fatalf("Added keys comparison results: got %q, want %q", addedKeys, c.output.addedKeys)
+        }
+
+        if !reflect.DeepEqual(fun.Set(removedKeys), fun.Set(c.output.removedKeys)) {
+            t.Fatalf("Removed keys comparison results: got %q, want %q", removedKeys, c.output.removedKeys)
+        }
     }
 }
 
 func TestConsulCatalogFilterEnabled(t *testing.T) {
-    testCases := []struct {
+    cases := []struct {
         desc             string
         exposedByDefault bool
         node             *api.ServiceEntry
@@ -881,23 +842,24 @@ func TestConsulCatalogFilterEnabled(t *testing.T) {
         },
     }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
+    for _, c := range cases {
+        c := c
+        t.Run(c.desc, func(t *testing.T) {
             t.Parallel()
             provider := &CatalogProvider{
                 Domain:           "localhost",
                 Prefix:           "traefik",
-                ExposedByDefault: test.exposedByDefault,
+                ExposedByDefault: c.exposedByDefault,
+            }
+            if provider.nodeFilter("test", c.node) != c.expected {
+                t.Errorf("got unexpected filtering = %t", !c.expected)
             }
-            actual := provider.nodeFilter("test", test.node)
-            assert.Equal(t, test.expected, actual)
         })
     }
 }
 
 func TestConsulCatalogGetBasicAuth(t *testing.T) {
-    testCases := []struct {
+    cases := []struct {
         desc     string
         tags     []string
         expected []string
@@ -916,15 +878,17 @@ func TestConsulCatalogGetBasicAuth(t *testing.T) {
         },
     }
 
-    for _, test := range testCases {
-        test := test
-        t.Run(test.desc, func(t *testing.T) {
+    for _, c := range cases {
+        c := c
+        t.Run(c.desc, func(t *testing.T) {
             t.Parallel()
             provider := &CatalogProvider{
                 Prefix: "traefik",
             }
-            actual := provider.getBasicAuth(test.tags)
-            assert.Equal(t, test.expected, actual)
+            actual := provider.getBasicAuth(c.tags)
+            if !reflect.DeepEqual(actual, c.expected) {
+                t.Errorf("actual %q, expected %q", actual, c.expected)
+            }
         })
     }
 }
@ -966,276 +930,7 @@ func TestConsulCatalogHasStickinessLabel(t *testing.T) {
|
||||||
t.Parallel()
|
t.Parallel()
|
||||||
|
|
||||||
actual := provider.hasStickinessLabel(test.tags)
|
actual := provider.hasStickinessLabel(test.tags)
|
||||||
assert.Equal(t, test.expected, actual)
|
assert.Equal(t, actual, test.expected)
|
||||||
})
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestConsulCatalogGetChangedStringKeys(t *testing.T) {
|
|
||||||
testCases := []struct {
|
|
||||||
desc string
|
|
||||||
current []string
|
|
||||||
previous []string
|
|
||||||
expectedAdded []string
|
|
||||||
expectedRemoved []string
|
|
||||||
}{
|
|
||||||
{
|
|
||||||
desc: "1 element added, 0 removed",
|
|
||||||
current: []string{"chou"},
|
|
||||||
previous: []string{},
|
|
||||||
expectedAdded: []string{"chou"},
|
|
||||||
expectedRemoved: []string{},
|
|
||||||
}, {
|
|
||||||
desc: "0 element added, 0 removed",
|
|
||||||
current: []string{"chou"},
|
|
||||||
previous: []string{"chou"},
|
|
||||||
expectedAdded: []string{},
|
|
||||||
expectedRemoved: []string{},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
desc: "0 element added, 1 removed",
|
|
||||||
current: []string{},
|
|
||||||
previous: []string{"chou"},
|
|
||||||
expectedAdded: []string{},
|
|
||||||
expectedRemoved: []string{"chou"},
|
|
||||||
},
|
|
||||||
{
|
|
||||||
desc: "1 element added, 1 removed",
|
|
||||||
current: []string{"carotte"},
|
|
||||||
previous: []string{"chou"},
|
|
||||||
expectedAdded: []string{"carotte"},
|
|
||||||
expectedRemoved: []string{"chou"},
|
|
||||||
},
|
|
||||||
}
|
|
||||||
|
|
||||||
for _, test := range testCases {
|
|
||||||
test := test
|
|
||||||
t.Run(test.desc, func(t *testing.T) {
|
|
||||||
t.Parallel()
|
|
||||||
|
|
||||||
actualAdded, actualRemoved := getChangedStringKeys(test.current, test.previous)
|
|
||||||
assert.Equal(t, test.expectedAdded, actualAdded)
|
|
||||||
assert.Equal(t, test.expectedRemoved, actualRemoved)
|
|
||||||
})
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func TestConsulCatalogHasNodeOrTagschanged(t *testing.T) {
	testCases := []struct {
		desc     string
		current  map[string]Service
		previous map[string]Service
		expected bool
	}{
		{
			desc: "Change detected due to change of nodes",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node2"},
					Tags:  []string{},
				},
			},
			expected: true,
		},
		{
			desc:    "No change, missing current service",
			current: make(map[string]Service),
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			expected: false,
		},
		{
			desc: "No change on nodes",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			expected: false,
		},
		{
			desc: "No change on nodes and tags",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{"foo=bar"},
				},
			},
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{"foo=bar"},
				},
			},
			expected: false,
		},
		{
			desc: "Change detected on tags",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{"foo=bar"},
				},
			},
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{"foo"},
				},
			},
			expected: true,
		},
	}

	for _, test := range testCases {
		test := test
		t.Run(test.desc, func(t *testing.T) {
			t.Parallel()

			actual := hasNodeOrTagsChanged(test.current, test.previous)
			assert.Equal(t, test.expected, actual)
		})
	}
}

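Likewise, hasNodeOrTagsChanged is visible here only through its tests. A hedged sketch consistent with those cases — services missing from either snapshot are ignored, and only node or tag differences on services present in both snapshots count — could be the following; it reuses the hypothetical getChangedStringKeysSketch above and assumes the provider's Service struct with Name, Nodes, and Tags fields, as implied by the test fixtures.

// hasNodeOrTagsChangedSketch reports whether any service present in both
// snapshots has a different set of nodes or tags. Added or removed services
// are deliberately ignored here; that case is handled by hasChanged.
func hasNodeOrTagsChangedSketch(current, previous map[string]Service) bool {
	for key, currentService := range current {
		previousService, ok := previous[key]
		if !ok {
			continue
		}
		addedNodes, removedNodes := getChangedStringKeysSketch(currentService.Nodes, previousService.Nodes)
		addedTags, removedTags := getChangedStringKeysSketch(currentService.Tags, previousService.Tags)
		if len(addedNodes) > 0 || len(removedNodes) > 0 || len(addedTags) > 0 || len(removedTags) > 0 {
			return true
		}
	}
	return false
}
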
func TestConsulCatalogHasChanged(t *testing.T) {
	testCases := []struct {
		desc     string
		current  map[string]Service
		previous map[string]Service
		expected bool
	}{
		{
			desc: "Change detected due to new service",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			previous: make(map[string]Service),
			expected: true,
		},
		{
			desc:    "Change detected due to service removed",
			current: make(map[string]Service),
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			expected: true,
		},
		{
			desc: "Change detected due to change of nodes",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node2"},
					Tags:  []string{},
				},
			},
			expected: true,
		},
		{
			desc: "No change on nodes",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{},
				},
			},
			expected: false,
		},
		{
			desc: "No change on nodes and tags",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{"foo=bar"},
				},
			},
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{"foo=bar"},
				},
			},
			expected: false,
		},
		{
			desc: "Change detected on tags",
			current: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{"foo=bar"},
				},
			},
			previous: map[string]Service{
				"foo-service": {
					Name:  "foo",
					Nodes: []string{"node1"},
					Tags:  []string{"foo"},
				},
			},
			expected: true,
		},
	}

	for _, test := range testCases {
		test := test
		t.Run(test.desc, func(t *testing.T) {
			t.Parallel()

			actual := hasChanged(test.current, test.previous)
			assert.Equal(t, test.expected, actual)
		})
	}
}

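Finally, hasChanged combines both signals: any added or removed service key, or any node/tag change on a service present in both snapshots, counts as a change. A hedged sketch consistent with the cases above might be the following; getChangedServiceKeysSketch is a hypothetical helper introduced only for this illustration.

// hasChangedSketch reports whether the catalog snapshot changed at all.
func hasChangedSketch(current, previous map[string]Service) bool {
	addedKeys, removedKeys := getChangedServiceKeysSketch(current, previous)
	return len(addedKeys) > 0 || len(removedKeys) > 0 || hasNodeOrTagsChangedSketch(current, previous)
}

// getChangedServiceKeysSketch compares only the service names (map keys) of
// the two snapshots, reusing the string-slice diff sketch above.
func getChangedServiceKeysSketch(current, previous map[string]Service) ([]string, []string) {
	currentKeys := make([]string, 0, len(current))
	for key := range current {
		currentKeys = append(currentKeys, key)
	}
	previousKeys := make([]string, 0, len(previous))
	for key := range previous {
		previousKeys = append(previousKeys, key)
	}
	return getChangedStringKeysSketch(currentKeys, previousKeys)
}
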
@@ -102,7 +102,7 @@ func (p *Provider) watchKv(configurationChan chan<- types.ConfigMessage, prefix
 func (p *Provider) Provide(configurationChan chan<- types.ConfigMessage, pool *safe.Pool, constraints types.Constraints) error {
 	p.Constraints = append(p.Constraints, constraints...)
 	operation := func() error {
-		if _, err := p.kvclient.Exists(p.Prefix + "/qmslkjdfmqlskdjfmqlksjazçueznbvbwzlkajzebvkwjdcqmlsfj"); err != nil {
+		if _, err := p.kvclient.Exists("qmslkjdfmqlskdjfmqlksjazçueznbvbwzlkajzebvkwjdcqmlsfj"); err != nil {
 			return fmt.Errorf("Failed to test KV store connection: %v", err)
 		}
 		if p.Watch {

@@ -24,17 +24,13 @@ GIT_REPO_URL='github.com/containous/traefik/version'
 GO_BUILD_CMD="go build -ldflags"
 GO_BUILD_OPT="-s -w -X ${GIT_REPO_URL}.Version=${VERSION} -X ${GIT_REPO_URL}.Codename=${CODENAME} -X ${GIT_REPO_URL}.BuildDate=${DATE}"
 
-# Build amd64 binaries
+# Build 386 amd64 binaries
 OS_PLATFORM_ARG=(linux windows darwin)
 OS_ARCH_ARG=(amd64)
 for OS in ${OS_PLATFORM_ARG[@]}; do
-  BIN_EXT=''
-  if [ "$OS" == "windows" ]; then
-    BIN_EXT='.exe'
-  fi
   for ARCH in ${OS_ARCH_ARG[@]}; do
     echo "Building binary for ${OS}/${ARCH}..."
-    GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "${GO_BUILD_OPT}" -o "dist/traefik_${OS}-${ARCH}${BIN_EXT}" ./cmd/traefik/
+    GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "${GO_BUILD_OPT}" -o "dist/traefik_${OS}-${ARCH}" ./cmd/traefik/
   done
 done

@@ -28,13 +28,9 @@ GO_BUILD_OPT="-s -w -X ${GIT_REPO_URL}.Version=${VERSION} -X ${GIT_REPO_URL}.Cod
 OS_PLATFORM_ARG=(linux windows darwin)
 OS_ARCH_ARG=(386)
 for OS in ${OS_PLATFORM_ARG[@]}; do
-  BIN_EXT=''
-  if [ "$OS" == "windows" ]; then
-    BIN_EXT='.exe'
-  fi
   for ARCH in ${OS_ARCH_ARG[@]}; do
-    echo "Building binary for ${OS}/${ARCH}..."
-    GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "${GO_BUILD_OPT}" -o "dist/traefik_${OS}-${ARCH}${BIN_EXT}" ./cmd/traefik/
+    echo "Building binary for $OS/$ARCH..."
+    GOARCH=${ARCH} GOOS=${OS} CGO_ENABLED=0 ${GO_BUILD_CMD} "$GO_BUILD_OPT" -o "dist/traefik_$OS-$ARCH" ./cmd/traefik/
   done
 done

12
vendor/github.com/NYTimes/gziphandler/gzip.go
generated
vendored
@@ -150,10 +150,8 @@ func (w *GzipResponseWriter) startGzip() error {
 
 // WriteHeader just saves the response code until close or GZIP effective writes.
 func (w *GzipResponseWriter) WriteHeader(code int) {
-	if w.code == 0 {
-		w.code = code
-	}
+	w.code = code
 }
 
 // init graps a new gzip writer from the gzipWriterPool and writes the correct
 // content encoding header.
@@ -192,15 +190,9 @@ func (w *GzipResponseWriter) Close() error {
 // http.ResponseWriter if it is an http.Flusher. This makes GzipResponseWriter
 // an http.Flusher.
 func (w *GzipResponseWriter) Flush() {
-	if w.gw == nil {
-		// Only flush once startGzip has been called.
-		//
-		// Flush is thus a no-op until the written body
-		// exceeds minSize.
-		return
-	}
-
-	w.gw.Flush()
+	if w.gw != nil {
+		w.gw.Flush()
+	}
 
 	if fw, ok := w.ResponseWriter.(http.Flusher); ok {
 		fw.Flush()