Add compatibility with labels: `HAPROXY_GROUP` and `HAPROXY_0_VHOST`.
* `HAPROXY_GROUP` becomes a new tag
* `HAPROXY_0_VHOST` becomes a `Host:` rule
https://github.com/mesosphere/marathon-lb
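Roughly, the mapping looks like this (a sketch; `marathonLBCompat` is an illustrative name, not the actual provider function):

```go
// marathonLBCompat maps marathon-lb labels onto their traefik
// equivalents: HAPROXY_GROUP becomes an extra tag on the application,
// HAPROXY_0_VHOST becomes a "Host:" frontend rule.
func marathonLBCompat(labels map[string]string) (tags []string, rule string) {
	if group, ok := labels["HAPROXY_GROUP"]; ok {
		tags = append(tags, group)
	}
	if vhost, ok := labels["HAPROXY_0_VHOST"]; ok {
		rule = "Host:" + vhost
	}
	return tags, rule
}
```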
- React to health_status events
- Filter containers that have a health status *and* that are not healthy
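The filter itself is simple (sketch; the status string is assumed to follow Docker's "healthy"/"unhealthy" values):

```go
// a container with no health check configured is kept; a container
// that reports a health status is kept only when it is "healthy"
func isContainerHealthy(healthStatus string) bool {
	return healthStatus == "" || healthStatus == "healthy"
}
```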
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This change adds sticky session support, using the new
oxy/rr/StickySession feature.
To use it, set `traefik.backend.sticky` to true.
This is currently only implemented in the wrr load balancer, and against
the Marathon backend, but lifting this restriction should be very doable.
In the wrr load balancer, a cookie called _TRAEFIK_SERVERNAME will be
set with the backend to use. If the cookie is altered to an invalid
backend server, or the server is removed from the load balancer, the
next server will be used instead.
Otherwise, the cookie will be checked in Oxy's rr on access and if valid
the connection will be wired through to it.
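Wiring this up against oxy looks roughly like the following (a sketch assuming oxy's roundrobin package):

```go
import (
	"net/http"

	"github.com/vulcand/oxy/forward"
	"github.com/vulcand/oxy/roundrobin"
)

func newStickyLB() (http.Handler, error) {
	fwd, err := forward.New()
	if err != nil {
		return nil, err
	}
	sticky := roundrobin.NewStickySession("_TRAEFIK_SERVERNAME")
	// the load balancer sets the cookie on responses; when the cookie
	// names a server that is no longer registered, the next server in
	// the rotation is used and the cookie is re-set
	return roundrobin.New(fwd, roundrobin.EnableStickySession(sticky))
}
```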
Initial implementation: Force both to be present to trigger behavior.
add ability to see the rendered template in debug mode
add support for load balancer and circuit breaker specification
add documentation for the new configuration
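For example, a backend specification would end up looking something like this (sketch using traefik's `types` structs; the method and expression values are just examples):

```go
backend := types.Backend{
	LoadBalancer: &types.LoadBalancer{Method: "drr"},
	CircuitBreaker: &types.CircuitBreaker{
		Expression: "NetworkErrorRatio() > 0.5",
	},
}
```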
Health checks are not mandatory, so if a result is not present, assume it
is ok to continue. This fixes the case where a newly elected leader
doesn't have any health check results yet, returning 404 to all requests.
https://github.com/containous/traefik/issues/653
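The permissive check looks roughly like this (sketch assuming go-marathon-style task fields):

```go
// a task without any health check results is assumed to be fine;
// otherwise every recorded result must be alive
func taskHealthy(task marathon.Task) bool {
	if len(task.HealthCheckResults) == 0 {
		return true
	}
	for _, hc := range task.HealthCheckResults {
		if hc == nil || !hc.Alive {
			return false
		}
	}
	return true
}
```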
We added the ability to filter the ingresses used by traefik based
on a label selector, but we shouldn't need to have matching
labels on every other resource. Ingress already has a way
to explicitly choose which pods end up in the load balancer
(by referring to the membership of a particular service).
The TargetRef contains information about the object backing the
endpoint, typically a pod. Unless the service has been set up with bare
endpoints - i.e. not pointing at pods - this information
will be present.
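In practice that means preferring the TargetRef name when building server entries (sketch, assuming the upstream v1 endpoint types):

```go
// serverName picks the name shown for an endpoint address: the
// TargetRef name when available (e.g. the pod name), else the IP
func serverName(address v1.EndpointAddress) string {
	if address.TargetRef != nil && address.TargetRef.Name != "" {
		return address.TargetRef.Name
	}
	return address.IP
}
```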
It just makes the information that we show in the web UI
a little more consistent with that shown in kubectl
and the Kubernetes dashboard.
… making it possible to use it in other packages; and thus in the
User-Agent header for the docker client.
Also removing the dockerversion hack as it's not required anymore.
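The docker client can then be created along these lines (sketch against engine-api's `client.NewClient`; `version.Version` is traefik's own version string):

```go
httpHeaders := map[string]string{
	"User-Agent": "Traefik " + version.Version,
}
dockerClient, err := client.NewClient(endpoint, apiVersion, nil, httpHeaders)
```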
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
The watch of consul can return for various reasons, and not all of
them require a reload of the config. The order of nodes provided by
consul is not stable, so to ensure an identical config is generated for
an identical server set, the nodes need to be sorted before creating
the config.
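The sort is the interesting part (sketch; assumes consul's `api.ServiceEntry` catalog entries with a stable node name):

```go
// nodeSorter orders catalog entries by node name so that the same
// server set always yields the same generated config
type nodeSorter []*api.ServiceEntry

func (s nodeSorter) Len() int           { return len(s) }
func (s nodeSorter) Swap(i, j int)      { s[i], s[j] = s[j], s[i] }
func (s nodeSorter) Less(i, j int) bool { return s[i].Node.Node < s[j].Node.Node }

// before building the config:
sort.Sort(nodeSorter(nodes))
```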
… and also share the context across API calls, as this is how it's meant to
be used (and it allows skipping some calls if `cancel` is called).
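Shape of the change (sketch; `ContainerList` stands in for any of the docker API calls):

```go
ctx, cancel := context.WithCancel(context.Background())
defer cancel()

// the same ctx is shared by every call made while handling one event;
// calling cancel() skips whatever hasn't run yet
containers, err := dockerClient.ContainerList(ctx, types.ContainerListOptions{})
```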
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
add flag on Retry
set Retry.MaxMem to 2 by default
rm useless import
rm useless struct tag
add custom parser on []acme.Domain type
add comments + refactor
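The parser is a flag.Value-style type over `[]acme.Domain` (sketch; the `main.com,san1.com,san2.com` format is an assumption):

```go
// Domains is the parseable version of []acme.Domain: the first element
// is the main domain, the rest are SANs
type Domains []acme.Domain

func (ds *Domains) Set(str string) error {
	fargs := strings.Split(str, ",")
	if len(fargs) == 0 || fargs[0] == "" {
		return fmt.Errorf("bad Domain format: %q", str)
	}
	*ds = append(*ds, acme.Domain{Main: fargs[0], SANs: fargs[1:]})
	return nil
}

func (ds *Domains) String() string { return fmt.Sprintf("%+v", *ds) }
```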
Since we already know the name and namespace
of the service(s) we want, we can just get the
correct one back from the API without filtering
the results (see the sketch after this list).
* Potentially saves a network hop
* Ability to configure the LB algorithm (given some work to expose an
  annotation etc...)
* K8s config Watch is triggered far less often
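i.e. something along these lines instead of a list-and-filter (sketch; the client method name is illustrative):

```go
// ask the API for exactly the service the ingress backend names
service, err := k8sClient.GetService(ingress.Namespace, backend.ServiceName)
if err != nil {
	return err
}
```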
By design, k8s ingress only balances services from within
the namespace of the ingress.
This is discussed a little in
https://github.com/kubernetes/kubernetes/issues/17088.
For now traefik should only reference the services in the current
namespace. For me this was a confusing change of behaviour
from the reference implementations, as I have services
with the same name in each namespace.
Another fun thing consul lets you do is use spaces in your tags. This
means that when including tags in backend names it's possible to generate
invalid names.
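So tag-derived names need normalizing before use (sketch; replacing spaces with dashes is an assumption about the chosen scheme):

```go
// normalizeTag makes a consul tag safe to embed in a backend name
func normalizeTag(tag string) string {
	return strings.Replace(tag, " ", "-", -1)
}
```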
If the flag `kubernetes.namespaces` is set,
then we only select ingresses from that/those namespace(s).
This allows multiple instances of traefik to
independently load balance for each namespace.
This could be for logical or security reasons.
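The selection is a plain membership test (sketch; an empty flag value keeps the previous behaviour of watching all namespaces):

```go
// shouldProcessIngress reports whether an ingress lives in one of the
// namespaces passed via kubernetes.namespaces; an empty list means all
func shouldProcessIngress(namespaces []string, ingressNamespace string) bool {
	if len(namespaces) == 0 {
		return true
	}
	for _, ns := range namespaces {
		if ns == ingressNamespace {
			return true
		}
	}
	return false
}
```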
Addresses #336