Currently with a kv tree like:
/traefik/backends/b1/servers/web1
/traefik/backends/b1/servers/web2
/traefik/backends/b1/servers/web2/url
Traefik would try to forward traffic to web1, which is impossible as
Traefik doesn't know the url of web1.
This commit solves that by ignoring backend servers with no "url" key
when generating the config.
This is very useful for people who use etcd's TTL feature: they can
just "renew" the url key every X seconds, and if the server goes down,
it is automatically removed from Traefik once the TTL expires.
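For example, assuming the etcd v2 API and etcdctl, the url key can be written with a TTL and refreshed periodically (the address, TTL, and interval below are purely illustrative):

    # re-register web2 every 20 seconds with a 30-second TTL
    while true; do
      etcdctl set --ttl 30 /traefik/backends/b1/servers/web2/url "http://10.0.0.2:80"
      sleep 20
    done

If the process stops refreshing the key, etcd expires it and, with this change, the server simply drops out of the generated configuration instead of leaving behind a url-less entry.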
Signed-off-by: Taylor Skinner <tskinn12@gmail.com>
add some comments
Signed-off-by: Taylor Skinner <tskinn12@gmail.com>
update readmes
make test runnable
Signed-off-by: Taylor Skinner <tskinn12@gmail.com>
make test
squash! add dynamo
add glide.lock
format imports
gofmt
update glide.lock
fixes for review
golint
clean up and reorganize tests
add dynamodb integration test
remove default region. clean up tests. consistent docs
forgot the region is required
DRY
make validate
update readme and commit dependencies
- traefik.mycustomservice.port=443
- traefik.mycustomservice.frontend.rule=Path:/mycustomservice
- traefik.anothercustomservice.port=8080
- traefik.anothercustomservice.frontend.rule=Path:/anotherservice
All traffic to the /mycustomservice frontend is routed to port 443 of the container, while requests to /anotherservice are routed to port 8080 of the same Docker container.
More documentation is available in the docs/toml.md file.
Change-Id: Ifaa3bb00ef0a0f38aa189e0ca1586fde8c5ed862
Signed-off-by: Florent BENOIT <fbenoit@codenvy.com>
Detect whether the in-cluster or a cluster-external Kubernetes client
should be used, based on the KUBERNETES_SERVICE_{HOST,PORT} environment
variables.
Adds bearer token and CA certificate file path parameters.
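A minimal Go sketch of the selection logic (the helper name is hypothetical, not the actual Traefik code):

    import "os"

    // useInClusterClient reports whether the in-cluster client should be used,
    // based on the service-account environment variables Kubernetes injects
    // into every pod.
    func useInClusterClient() bool {
        return os.Getenv("KUBERNETES_SERVICE_HOST") != "" &&
            os.Getenv("KUBERNETES_SERVICE_PORT") != ""
    }

When running outside the cluster, the bearer token and CA certificate file path parameters are presumably what the cluster-external client is built from.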
In Swarm mode with Docker Swarm's load balancer disabled (traefik.backend.loadbalancer.swarm=false),
the service name will be the name of the Docker service and the name will be the container task name
(e.g. whoami0.1). When generating backend and frontend rules, we use the service name instead of the
name if a rule is not provided.
Initialize dockerData.ServiceName to dockerData.Name to support non-Swarm mode.
Swarm mode has its own built-in load balancer, so if we want to leverage sticky sessions,
or if we would just prefer to bypass it and go directly to the containers (aka tasks), via
--label traefik.backend.disable.swarm.loadbalancer=true
then we need to let Traefik know about the underlying tasks and register them as
services within its backend.
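As an illustration, a hypothetical service definition carrying that label (service name, network, and image are placeholders):

    docker service create \
      --name whoami0 \
      --network traefik-net \
      --label traefik.backend.disable.swarm.loadbalancer=true \
      emilevauge/whoami

With the Swarm load balancer bypassed this way, Traefik registers each task's address individually, which is what makes per-task routing and sticky sessions possible.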
The IP-Per-Task PR introduced a bug in the use of the Marathon application
port mapping: that port should be used only by the proxy server, while the
downstream connection should always be made to the task port.
This commit fixes the regression and adds a unit test to prevent new
problems in this setup.
Only use one channel for all watches
Re-use stop channel from the provider
Skip events that have already been handled by the provider, builds on 007f8cc48ea9504bb7754c5e3244124be422f47d
On a reasonably sized cluster:
63 nodes
87 services
90 endpoints
The initialization of the k8s provider would hang.
I tracked this down to the ResourceEventHandlerFuncs. Once you reach the
channel buffer size (10), the k8s Informer gets stuck: you can't read or
write messages to the channel anymore. I think this is probably a lock
issue somewhere in k8s, but the more reasonable solution for the Traefik
use case is to just drop events when the queue is full. Since we only use
the events for signalling, not for their content, dropping an event
doesn't matter.
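The "drop when full" behaviour boils down to a non-blocking channel send; a minimal Go sketch (identifiers are illustrative, not the provider's actual code):

    // notify forwards an event to the buffered signalling channel, dropping it
    // when the buffer is full: the event only signals "something changed", so
    // the next event will trigger the same reload anyway.
    func notify(events chan<- interface{}, event interface{}) {
        select {
        case events <- event:
        default:
            // buffer full: drop the event instead of blocking the Informer callback
        }
    }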
Add compatibility with labels: `HAPROXY_GROUP` and `HAPROXY_0_VHOST`.
* `HAPROXY_GROUP` becomes a new tag
* `HAPROXY_0_VHOST` becomes a `Host:` rule
https://github.com/mesosphere/marathon-lb
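For example, an application definition carrying those labels (values illustrative):

    "labels": {
      "HAPROXY_GROUP": "external",
      "HAPROXY_0_VHOST": "example.com"
    }

With this change, "external" is picked up as a tag and the vhost is translated into the rule `Host:example.com`.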
- React to health_status events
- Filter containers that have a health status *and* that are not healthy
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This change adds sticky session support, by using the new
oxy/rr/StickySession feature.
To use it, set traefik.backend.sticky to true.
This is currently only implemented in the wrr load balancer and against
the Marathon backend, but lifting that restriction should be very doable.
In the wrr load balancer, a cookie called _TRAEFIK_SERVERNAME will be
set with the backend to use. If the cookie is altered to an invalid
backend server, or the server is removed from the load balancer, the
next server will be used instead.
Otherwise, the cookie will be checked in Oxy's rr on access and if valid
the connection will be wired through to it.
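For a Marathon application, enabling this amounts to adding the label to the app definition (value illustrative):

    "labels": {
      "traefik.backend.sticky": "true"
    }

After that, the _TRAEFIK_SERVERNAME cookie pins subsequent requests from the same client to one server, as described above.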
Initial implementation: Force both to be present to trigger behavior.
add ability to see rendered template in debug
add support for loadbalancer and circuit breaker specification
add documentation for new configuration