Only use one channel for all watches
Re-use stop channel from the provider
Skip events that have already been handled by the provider, builds on 007f8cc48ea9504bb7754c5e3244124be422f47d
On a reasonably sized cluster:
63 nodes
87 services
90 endpoints
The initialization of the k8s provider would hang.
I tracked this down to the ResourceEventHandlerFuncs. Once you reach the
channel buffer size (10), the k8s Informer gets stuck: you can't read or
write messages to the channel anymore. I think this is probably a lock
issue somewhere in k8s, but the more reasonable solution for the Traefik
use case is to simply drop events when the queue is full. Since we only
use the events for signalling, not for their content, dropping an event
doesn't matter.
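A minimal sketch of that drop-when-full idea, using Go's non-blocking send; the handler and channel names here are illustrative, not the actual Traefik handler code:

```go
package main

import "fmt"

// eventHandler forwards a notification on watchCh but never blocks:
// if the buffered channel is already full, the event is dropped.
// Dropping is safe here because the event is only a signal to re-read
// the cluster state, not a carrier of data.
func eventHandler(watchCh chan<- interface{}, obj interface{}) {
	select {
	case watchCh <- obj:
		// event queued for the provider to pick up
	default:
		// queue full: drop the event instead of blocking the Informer
	}
}

func main() {
	watchCh := make(chan interface{}, 10) // same buffer size as mentioned above

	// Simulate a burst of events larger than the buffer.
	for i := 0; i < 15; i++ {
		eventHandler(watchCh, i)
	}
	fmt.Println("queued events:", len(watchCh)) // prints 10; the rest were dropped
}
```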
Add compatibility with labels: `HAPROXY_GROUP` and `HAPROXY_0_VHOST`.
* `HAPROXY_GROUP` becomes a new tag
* `HAPROXY_0_VHOST` becomes a `Host:` rule
https://github.com/mesosphere/marathon-lb
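A rough sketch of how those two marathon-lb labels could be mapped onto Traefik concepts; the function and variable names below are illustrative, not the actual provider code:

```go
package main

import "fmt"

// translateMarathonLBLabels maps the two marathon-lb labels described above:
// HAPROXY_GROUP is turned into a plain tag, HAPROXY_0_VHOST into a "Host:" rule.
func translateMarathonLBLabels(labels map[string]string) (tags []string, rule string) {
	if group, ok := labels["HAPROXY_GROUP"]; ok {
		tags = append(tags, group)
	}
	if vhost, ok := labels["HAPROXY_0_VHOST"]; ok {
		rule = "Host:" + vhost
	}
	return tags, rule
}

func main() {
	labels := map[string]string{
		"HAPROXY_GROUP":   "external",
		"HAPROXY_0_VHOST": "example.com",
	}
	tags, rule := translateMarathonLBLabels(labels)
	fmt.Println(tags, rule) // [external] Host:example.com
}
```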
- React to health_status events
- Filter containers that have a health status *and* are not healthy
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
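A small sketch of the filter rule above: a container is excluded only when it reports a health status and that status is not "healthy"; containers without a health check are kept. The `container` type and field names are illustrative, not the Docker provider's actual types:

```go
package main

import "fmt"

// container is an illustrative stand-in for what the Docker provider sees;
// only the health field matters for this filter.
type container struct {
	Name   string
	Health string // "", "healthy", "unhealthy", "starting", ...
}

// isFilteredOut drops a container only when it exposes a health status
// *and* that status is not "healthy".
func isFilteredOut(c container) bool {
	return c.Health != "" && c.Health != "healthy"
}

func main() {
	for _, c := range []container{
		{Name: "no-healthcheck", Health: ""},
		{Name: "ok", Health: "healthy"},
		{Name: "broken", Health: "unhealthy"},
	} {
		fmt.Println(c.Name, "filtered:", isFilteredOut(c))
	}
}
```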
This change adds sticky session support, by using the new
oxy/rr/StickySession feature.
To use it, set traefik.backend.sticky to true.
This is currently only implemented in the wrr load balancer, and against
the Marathon backend, but lifting that restriction should be very doable.
In the wrr load balancer, a cookie called _TRAEFIK_SERVERNAME will be
set with the backend to use. If the cookie is altered to an invalid
backend server, or the server is removed from the load balancer, the
next server will be used instead.
Otherwise, the cookie will be checked in Oxy's rr on access and, if
valid, the connection will be wired through to that server.
Initial implementation: Force both to be present to trigger behavior.
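A minimal sketch of the cookie behaviour described above, for a request reaching the wrr load balancer with `traefik.backend.sticky` enabled: reuse the server named in `_TRAEFIK_SERVERNAME` if it is still registered, otherwise fall back to the next server in the rotation. This is illustrative only, not the oxy/rr implementation:

```go
package main

import (
	"fmt"
	"net/http"
	"net/url"
)

const stickyCookieName = "_TRAEFIK_SERVERNAME"

// pickServer reuses the backend named in the sticky cookie when it is still
// registered; if the cookie is missing, altered to an invalid value, or the
// server has been removed, the next server from the rotation is used instead.
func pickServer(req *http.Request, servers map[string]*url.URL, next *url.URL) *url.URL {
	if cookie, err := req.Cookie(stickyCookieName); err == nil {
		if srv, ok := servers[cookie.Value]; ok {
			return srv // valid cookie: wire the connection through to it
		}
	}
	return next
}

func main() {
	servers := map[string]*url.URL{
		"http://10.0.0.1:80": {Scheme: "http", Host: "10.0.0.1:80"},
		"http://10.0.0.2:80": {Scheme: "http", Host: "10.0.0.2:80"},
	}
	req, _ := http.NewRequest("GET", "http://example.com", nil)
	req.AddCookie(&http.Cookie{Name: stickyCookieName, Value: "http://10.0.0.2:80"})

	next := servers["http://10.0.0.1:80"]
	fmt.Println(pickServer(req, servers, next)) // http://10.0.0.2:80
}
```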
add ability to see rendered template in debug
add support for loadbalancer and circuit breaker specification
add documentation for new configuration
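A rough sketch of what the new per-backend configuration covers: a load-balancing method plus a circuit-breaker expression. The type and field names below are assumptions for illustration and may differ from Traefik's actual configuration schema:

```go
package main

import "fmt"

// Illustrative configuration types for the load balancer and circuit
// breaker specification mentioned above.
type LoadBalancer struct {
	Method string // e.g. "wrr" or "drr"
}

type CircuitBreaker struct {
	Expression string // e.g. "NetworkErrorRatio() > 0.5"
}

type Backend struct {
	LoadBalancer   *LoadBalancer
	CircuitBreaker *CircuitBreaker
}

func main() {
	b := Backend{
		LoadBalancer:   &LoadBalancer{Method: "drr"},
		CircuitBreaker: &CircuitBreaker{Expression: "NetworkErrorRatio() > 0.5"},
	}
	fmt.Printf("%+v %+v\n", *b.LoadBalancer, *b.CircuitBreaker)
}
```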