Swarm mode has its own built-in load balancer, so if we want to leverage sticky sessions,
or if we would just prefer to bypass it and go directly to the containers (aka tasks) via
--label traefik.backend.disable.swarm.loadbalancer=true
then we need to let Traefik know about the underlying tasks and register them as
services within its backend.
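A minimal sketch of what discovering those underlying tasks involves, assuming the docker/docker Go client; the `listTaskAddresses` helper and the exact import paths are illustrative, not the provider's actual code:

```go
package sketch

import (
	"context"
	"strings"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/api/types/filters"
	"github.com/docker/docker/client"
)

// listTaskAddresses enumerates the running tasks behind a swarm service and
// returns their network addresses, so each task can be registered as its own
// server in Traefik's backend.
func listTaskAddresses(ctx context.Context, cli client.APIClient, serviceID string) ([]string, error) {
	args := filters.NewArgs()
	args.Add("service", serviceID)
	args.Add("desired-state", "running")

	tasks, err := cli.TaskList(ctx, types.TaskListOptions{Filters: args})
	if err != nil {
		return nil, err
	}

	var addrs []string
	for _, task := range tasks {
		for _, attachment := range task.NetworksAttachments {
			for _, cidr := range attachment.Addresses {
				// Task addresses are CIDR-formatted (e.g. "10.0.0.5/24"); keep the IP only.
				addrs = append(addrs, strings.Split(cidr, "/")[0])
			}
		}
	}
	return addrs, nil
}
```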
The IP-Per-Task PR introduced a bug in the use of the Marathon application
port mapping. That port should be used only by the proxy server; the
downstream connection should always be made with the task port.
This commit fixes the regression and adds a unit test to prevent new
problems in this setup.
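A hedged illustration of the port-selection rule described here; the types are stripped-down, hypothetical stand-ins, not the real Marathon client structures:

```go
package sketch

// task and application are minimal stand-ins for the Marathon client types;
// only the fields needed to illustrate the rule are included.
type task struct {
	Host  string
	Ports []int
}

type application struct {
	// ServicePorts is the application-level port mapping, used only by the proxy server.
	ServicePorts []int
}

// backendPort returns the port the downstream connection should use: always
// the task's own port, never the application's port mapping.
func backendPort(_ application, t task, portIndex int) int {
	if portIndex < len(t.Ports) {
		return t.Ports[portIndex]
	}
	return 0
}
```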
Only use one channel for all watches
Re-use stop channel from the provider
Skip events that have already been handled by the provider, builds on 007f8cc48ea9504bb7754c5e3244124be422f47d
On a reasonably sized cluster:
63 nodes
87 services
90 endpoints
The initialization of the k8s provider would hang.
I tracked this down to the ResourceEventHandlerFuncs. Once you reach the
channel buffer size (10), the k8s Informer gets stuck: you can't read or
write messages to the channel anymore. I think this is probably a lock
issue somewhere in k8s, but the more reasonable solution for the Traefik
use case is to just drop events when the queue is full. Since we only use
the events for signalling, not their content, dropping an event
doesn't matter.
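The "drop when full" signalling pattern this describes looks roughly like the following; the channel and function names are illustrative:

```go
package sketch

// notify performs a non-blocking send: the informer callbacks only need to
// signal "something changed", so when the buffered channel is full the event
// is simply dropped instead of blocking the k8s Informer.
func notify(events chan<- interface{}, obj interface{}) {
	select {
	case events <- obj:
		// Queued; the watch loop will rebuild the configuration.
	default:
		// Buffer full: a rebuild is already pending, so this wake-up is redundant.
	}
}
```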
Add compatibility with labels: `HAPROXY_GROUP` and `HAPROXY_0_VHOST`.
* `HAPROXY_GROUP` becomes a new tag
* `HAPROXY_0_VHOST` becomes a `Host:` rule
https://github.com/mesosphere/marathon-lb
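A sketch of the label translation; the label names come from above, while the helper itself is hypothetical:

```go
package sketch

// marathonLBCompat maps marathon-lb labels onto Traefik notions:
// HAPROXY_GROUP is exposed as a tag, HAPROXY_0_VHOST becomes a Host: rule.
func marathonLBCompat(labels map[string]string) (tag, rule string) {
	if group, ok := labels["HAPROXY_GROUP"]; ok {
		tag = group
	}
	if vhost, ok := labels["HAPROXY_0_VHOST"]; ok {
		rule = "Host:" + vhost
	}
	return tag, rule
}
```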
- React to health_status events
- Filter out containers that have a health status *and* that are not healthy
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
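A minimal sketch of that filter, assuming the docker/docker API types; the helper name is made up:

```go
package sketch

import "github.com/docker/docker/api/types"

// keepContainer reports whether a container may be used as a backend server:
// containers without a healthcheck are kept, containers with one must be healthy.
func keepContainer(c types.ContainerJSON) bool {
	if c.ContainerJSONBase == nil || c.State == nil || c.State.Health == nil || c.State.Health.Status == "" {
		return true // no health status reported
	}
	return c.State.Health.Status == "healthy"
}
```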
This change adds sticky session support by using the new
oxy/rr/StickySession feature.
To use it, set traefik.backend.sticky to true.
This is currently only implemented in the wrr load balancer, and only against
the Marathon backend, but lifting that restriction should be very doable.
In the wrr load balancer, a cookie called _TRAEFIK_SERVERNAME will be
set to the backend server to use. If the cookie is altered to point at an
invalid backend server, or that server is removed from the load balancer,
the next server will be used instead.
Otherwise, the cookie is checked in Oxy's rr on access and, if valid,
the connection is wired through to that server.
Initial implementation: Force both to be present to trigger behavior.
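For reference, a bare oxy setup along these lines; a sketch assuming the vulcand/oxy forward and roundrobin packages as vendored at the time, with error handling elided and server URLs made up:

```go
package main

import (
	"net/http"
	"net/url"

	"github.com/vulcand/oxy/forward"
	"github.com/vulcand/oxy/roundrobin"
)

func main() {
	fwd, _ := forward.New()

	// The sticky session pins a client to a server via the cookie named above.
	sticky := roundrobin.NewStickySession("_TRAEFIK_SERVERNAME")
	lb, _ := roundrobin.New(fwd, roundrobin.EnableStickySession(sticky))

	s1, _ := url.Parse("http://10.0.0.1:80")
	s2, _ := url.Parse("http://10.0.0.2:80")
	lb.UpsertServer(s1)
	lb.UpsertServer(s2)

	// If the cookie points at a server that is no longer registered, the
	// load balancer falls through to the next server.
	http.ListenAndServe(":8080", lb)
}
```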
add ability to see rendered template in debug
add support for loadbalancer and circuit breaker specification
add documentation for new configuration
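The debug rendering boils down to something like this; a sketch in which the function name, logger, and data are placeholders:

```go
package sketch

import (
	"bytes"
	"log"
	"text/template"
)

// renderAndLog executes the provider template into a buffer so the rendered
// configuration can be inspected at debug level before it is parsed.
func renderAndLog(tmpl *template.Template, data interface{}) (string, error) {
	var buf bytes.Buffer
	if err := tmpl.Execute(&buf, data); err != nil {
		return "", err
	}
	log.Printf("DEBUG: rendered template:\n%s", buf.String())
	return buf.String(), nil
}
```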
Healthchecks are not mandatory, so if a result is not present, assume it
is OK to continue. Fixes the case where a new leader is elected and
doesn't have any healthcheck results yet, returning 404 to all requests.
https://github.com/containous/traefik/issues/653
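Roughly, the check behaves like this; a sketch against the hashicorp/consul API types, with the helper name assumed:

```go
package sketch

import "github.com/hashicorp/consul/api"

// usable reports whether a catalog entry should be kept as a backend server.
// With no healthcheck results registered, the node is assumed to be fine.
func usable(entry *api.ServiceEntry) bool {
	if len(entry.Checks) == 0 {
		return true // healthchecks are not mandatory
	}
	for _, check := range entry.Checks {
		if check.Status == api.HealthCritical {
			return false
		}
	}
	return true
}
```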
We added the ability to filter the Ingresses used by Traefik based
on a label selector, but we shouldn't need to have matching
labels on every other resource; Ingress already has a way
to explicitly choose which pods end up in the load balancer
(by referring to the membership of a particular service).
The TargetRef contains information about the object the endpoint
address refers to (normally a pod); unless the service has been set up
with bare endpoints, i.e. not pointing at pods, this information
will be present.
It just makes the information that we show in the web-ui
a little more consistent with that shown in kubectl
and the Kubernetes dashboard.
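The name selection amounts to something like the following; a sketch against the Kubernetes core/v1 types, where the import path and helper name are assumptions (the client vendored at the time lived under a different path):

```go
package sketch

import v1 "k8s.io/api/core/v1"

// serverName prefers the name of the object referenced by the endpoint
// address (normally the pod); for bare endpoints with no TargetRef it falls
// back to the raw IP.
func serverName(addr v1.EndpointAddress) string {
	if addr.TargetRef != nil && addr.TargetRef.Name != "" {
		return addr.TargetRef.Name
	}
	return addr.IP
}
```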
… making it possible to use it in other packages, and thus in the
User-Agent header for the Docker client.
Also removing the dockerversion hack as it's not required anymore.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
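Using the now-importable version package for the client's User-Agent would look roughly like this; a sketch in which the constructor signature, the API version string, and the assumption that the package exposes a Version variable all depend on the vendored code:

```go
package sketch

import (
	"github.com/containous/traefik/version"
	"github.com/docker/docker/client"
)

// newDockerClient builds a Docker client whose requests identify Traefik,
// using the version package that is now importable from other packages.
func newDockerClient(endpoint string) (*client.Client, error) {
	httpHeaders := map[string]string{
		"User-Agent": "Traefik " + version.Version,
	}
	return client.NewClient(endpoint, "1.24", nil, httpHeaders)
}
```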
The Consul watch can return for various reasons, and not all of
them require a reload of the config. The order of nodes provided by
Consul is not stable, so to ensure an identical config is generated for an
identical server set, the nodes need to be sorted before creating the
config.
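The sorting step is essentially the following; a sketch using the hashicorp/consul API types, with the function name made up:

```go
package sketch

import (
	"sort"

	"github.com/hashicorp/consul/api"
)

// sortNodes orders catalog entries by node name so that an identical server
// set always produces an identical configuration, whatever order Consul
// returned the nodes in.
func sortNodes(entries []*api.ServiceEntry) {
	sort.Slice(entries, func(i, j int) bool {
		return entries[i].Node.Node < entries[j].Node.Node
	})
}
```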
… and also share the context across API calls, as this is how it's meant to
be used (and it allows skipping some calls if `cancel` is called).
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
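The shared-context pattern looks like this; a sketch assuming the docker/docker Go client, with an illustrative function name:

```go
package sketch

import (
	"context"

	"github.com/docker/docker/api/types"
	"github.com/docker/docker/client"
)

// listWithSharedContext shares one context (and its cancel function) across
// several API calls, so cancelling it also skips any call not yet started.
func listWithSharedContext(cli client.APIClient) error {
	ctx, cancel := context.WithCancel(context.Background())
	defer cancel()

	if _, err := cli.ContainerList(ctx, types.ContainerListOptions{}); err != nil {
		return err
	}
	_, err := cli.NetworkList(ctx, types.NetworkListOptions{})
	return err
}
```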