On a reasonably sized cluster (63 nodes, 87 services, 90 endpoints), the initialization of the k8s provider would hang.
I tracked this down to the ResourceEventHandlerFuncs: once the channel buffer size (10) is reached, the k8s Informer gets stuck and messages can no longer be read from or written to the channel. This is probably a locking issue somewhere in k8s, but the more reasonable solution for the Traefik use case is to simply drop events when the queue is full. We only use the events for signalling, not for their content, so dropping an event doesn't matter.
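
A minimal sketch of the non-blocking send, using client-go's `cache` package; the function and channel names here are illustrative, not the actual Traefik code:

```go
package k8s

import "k8s.io/client-go/tools/cache"

// newEventHandler returns handlers that treat every event purely as a
// "something changed" signal. If the buffered channel is already full,
// the event is dropped instead of blocking the Informer.
func newEventHandler(watchCh chan<- interface{}) cache.ResourceEventHandlerFuncs {
	notify := func(obj interface{}) {
		select {
		case watchCh <- obj:
			// Buffer has room: enqueue the signal.
		default:
			// Buffer full (e.g. 10 pending events): drop the event.
		}
	}
	return cache.ResourceEventHandlerFuncs{
		AddFunc:    notify,
		UpdateFunc: func(oldObj, newObj interface{}) { notify(newObj) },
		DeleteFunc: notify,
	}
}
```
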
We added the ability to filter the ingresses used by Traefik based on a label selector, but we shouldn't need matching labels on every other resource: Ingress already has a way to explicitly choose which pods end up in the load balancer (by referring to the membership of a particular service).

Since we already know the name and namespace of the service(s) we want, we can fetch the correct one directly from the API without filtering the results (a short sketch follows the list below).
* Potentially saves a network hop
* Ability to configure the LB algorithm (given some work to expose an annotation, etc.)
* The k8s config watch is triggered far less often
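
A minimal sketch of the direct lookup with client-go; the function name and clientset wiring are illustrative, and the real provider code may use a different client:

```go
package k8s

import (
	"context"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// serviceForBackend fetches the service an Ingress backend points at by
// namespace and name, instead of listing every service and filtering the
// results by label selector.
func serviceForBackend(ctx context.Context, client kubernetes.Interface, namespace, name string) (*corev1.Service, error) {
	return client.CoreV1().Services(namespace).Get(ctx, name, metav1.GetOptions{})
}
```
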
If the flag kubernetes.namespaces is set, then we only select ingresses from that/those namespace(s). This allows multiple instances of Traefik to independently load balance for each namespace, which can be useful for logical or security reasons.
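
A minimal sketch of the namespace restriction; the helper name and the networking/v1 types are illustrative, and an empty list is assumed to mean "watch all namespaces":

```go
package k8s

import networkingv1 "k8s.io/api/networking/v1"

// filterIngresses keeps only the ingresses whose namespace appears in the
// configured kubernetes.namespaces list. An empty list preserves the old
// behaviour of considering every namespace.
func filterIngresses(ingresses []networkingv1.Ingress, namespaces []string) []networkingv1.Ingress {
	if len(namespaces) == 0 {
		return ingresses
	}
	allowed := make(map[string]struct{}, len(namespaces))
	for _, ns := range namespaces {
		allowed[ns] = struct{}{}
	}
	filtered := make([]networkingv1.Ingress, 0, len(ingresses))
	for _, ing := range ingresses {
		if _, ok := allowed[ing.Namespace]; ok {
			filtered = append(filtered, ing)
		}
	}
	return filtered
}
```
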
Addresses #336