This was likely just a copy-paste issue. The bug should be benign because the secret is cast to the correct type later, but the additional logging is a major annoyance and happens even when basic auth is not in use with Kubernetes.
We previously fell back to using ClusterIPs. However, the approach can
lead to all kinds of problems since Ingresses rely on being able to talk
to Endpoints directly. For instance, it can break stickiness and
retries.
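As a rough illustration of the Endpoints-based approach (using simplified stand-in types rather than the actual Kubernetes API structs), the provider can derive one backend URL per pod address instead of pointing at the service ClusterIP:

```go
package main

import "fmt"

// Simplified stand-ins for the relevant Kubernetes Endpoints fields; the real
// provider works with the full API types.
type endpointAddress struct{ IP string }
type endpointPort struct{ Port int32 }
type endpointSubset struct {
	Addresses []endpointAddress
	Ports     []endpointPort
}

// endpointURLs derives one backend URL per address/port pair so that traffic
// goes straight to the pods instead of through the service ClusterIP,
// which is what keeps stickiness and retries working.
func endpointURLs(protocol string, subsets []endpointSubset) []string {
	var urls []string
	for _, subset := range subsets {
		for _, address := range subset.Addresses {
			for _, port := range subset.Ports {
				urls = append(urls, fmt.Sprintf("%s://%s:%d", protocol, address.IP, port.Port))
			}
		}
	}
	return urls
}

func main() {
	subsets := []endpointSubset{{
		Addresses: []endpointAddress{{IP: "10.0.0.5"}, {IP: "10.0.0.6"}},
		Ports:     []endpointPort{{Port: 8080}},
	}}
	fmt.Println(endpointURLs("http", subsets))
	// [http://10.0.0.5:8080 http://10.0.0.6:8080]
}
```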
Instead of doing sanity checks in the Kubernetes provider, we just
accept any non-empty value from the annotation and rely on the server
part to filter out unknown rules.
This allows us to automatically stay in sync with the currently
supported Path matchers/modifiers.
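A minimal sketch of the pass-through behavior; the annotation key shown is illustrative, the provider defines the actual one it reads:

```go
package main

import "fmt"

// Illustrative annotation key.
const ruleTypeAnnotation = "traefik.frontend.rule.type"

// getRuleType passes any non-empty annotation value through unchanged.
// Unknown rule types are rejected later on the server side when routes are
// built, so the provider no longer needs its own list of valid Path
// matchers/modifiers.
func getRuleType(annotations map[string]string, defaultRule string) string {
	if value := annotations[ruleTypeAnnotation]; value != "" {
		return value
	}
	return defaultRule
}

func main() {
	fmt.Println(getRuleType(map[string]string{ruleTypeAnnotation: "PathPrefixStrip"}, "PathPrefix"))
	fmt.Println(getRuleType(nil, "PathPrefix"))
}
```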
A missing annotation would previously be handled in the default error
case, causing a noisy warning-level log message to be generated each
time.
We add another case to the switch statement so that an annotation missing
from the annotations map is ignored.
We also piggyback a minor improvement to the log message.
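A hedged sketch of the extra case, assuming a boolean-style annotation; the annotation name, default value, and log wording are illustrative:

```go
package main

import (
	"fmt"
	"log"
)

// getAnnotationBool reads a boolean-style annotation. The new first case
// handles a missing annotation silently; only genuinely unknown values still
// produce a log line, which now also names the annotation.
func getAnnotationBool(annotations map[string]string, name string, defaultValue bool) bool {
	value, ok := annotations[name]
	switch {
	case !ok:
		// Annotation absent from the annotations map: use the default quietly
		// instead of emitting a warning on every configuration refresh.
		return defaultValue
	case value == "true":
		return true
	case value == "false":
		return false
	default:
		log.Printf("Unknown value %q for annotation %q, falling back to %t", value, name, defaultValue)
		return defaultValue
	}
}

func main() {
	annotations := map[string]string{"example.io/enabled": "true"}
	fmt.Println(getAnnotationBool(annotations, "example.io/enabled", false)) // true
	fmt.Println(getAnnotationBool(annotations, "example.io/missing", false)) // false, no warning
}
```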
- Improves default filtering behavior to filter by container health/healthState (see the sketch after this list)
- Optionally allows filtering by service health/healthState
- Allows configuration of refresh interval
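The sketch below illustrates the new default container filtering under simplified stand-in types; the exact Rancher state and health-state values are assumptions:

```go
package main

import "fmt"

// Simplified stand-in for the container data the Rancher provider inspects.
type rancherContainer struct {
	Name        string
	State       string // e.g. "running"
	HealthState string // e.g. "healthy", "unhealthy"
}

// containerFilter mirrors the new default: only running containers whose
// health state is healthy (or not reported at all) are exposed. Filtering by
// service health/healthState follows the same pattern, gated by an option.
func containerFilter(c rancherContainer) bool {
	if c.State != "running" {
		return false
	}
	return c.HealthState == "" || c.HealthState == "healthy"
}

func main() {
	for _, c := range []rancherContainer{
		{Name: "web-1", State: "running", HealthState: "healthy"},
		{Name: "web-2", State: "running", HealthState: "unhealthy"},
		{Name: "web-3", State: "stopped", HealthState: "healthy"},
	} {
		fmt.Printf("%s exposed=%t\n", c.Name, containerFilter(c))
	}
}
```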
For the two existing health check parameters (path and interval), we add
support for Marathon labels (see the sketch after the change list).
Changes in detail:
- Extend the Marathon provider and template.
- Refactor Server.loadConfig to reduce duplication.
- Refactor the healthcheck package slightly to accommodate the changes
and allow extending by future parameters.
- Update documentation.
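As a sketch of how the labels might be read (the label keys follow Traefik's backend label convention but should be treated as illustrative here):

```go
package main

import "fmt"

// Illustrative label keys for the two supported health check parameters.
const (
	labelHealthCheckPath     = "traefik.backend.healthcheck.path"
	labelHealthCheckInterval = "traefik.backend.healthcheck.interval"
)

// healthCheckConfig holds the two parameters currently supported by the
// healthcheck package.
type healthCheckConfig struct {
	Path     string
	Interval string
}

// parseHealthCheck reads the health check settings from a Marathon
// application's labels; no path means no health check is configured.
func parseHealthCheck(labels map[string]string) *healthCheckConfig {
	path := labels[labelHealthCheckPath]
	if path == "" {
		return nil
	}
	return &healthCheckConfig{
		Path:     path,
		Interval: labels[labelHealthCheckInterval],
	}
}

func main() {
	labels := map[string]string{
		labelHealthCheckPath:     "/health",
		labelHealthCheckInterval: "30s",
	}
	fmt.Printf("%+v\n", parseHealthCheck(labels))
}
```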
The IP-Per-Task feature changed the behavior for
clients without this configuration (using the task IP instead
of the task hostname). This patch makes the new behavior available
only for Mesos installations with IP-Per-Task enabled. It also
makes it possible to force the use of the task's hostname.
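A minimal sketch of the address selection, assuming a simplified task struct and a hypothetical forceTaskHostname switch:

```go
package main

import "fmt"

// Simplified stand-in for a Mesos task as seen by the provider.
type mesosTask struct {
	Host string   // agent hostname the task runs on
	IPs  []string // per-task IPs, populated only when IP-Per-Task is enabled
}

// taskAddress keeps the old behavior for clusters without IP-Per-Task: the
// task IP is used only when one is actually present, and the (hypothetical)
// forceTaskHostname switch lets operators stick with the hostname regardless.
func taskAddress(task mesosTask, forceTaskHostname bool) string {
	if forceTaskHostname || len(task.IPs) == 0 {
		return task.Host
	}
	return task.IPs[0]
}

func main() {
	withIP := mesosTask{Host: "agent-1.example.com", IPs: []string{"172.16.0.9"}}
	withoutIP := mesosTask{Host: "agent-2.example.com"}
	fmt.Println(taskAddress(withIP, false))    // 172.16.0.9
	fmt.Println(taskAddress(withoutIP, false)) // agent-2.example.com
	fmt.Println(taskAddress(withIP, true))     // agent-1.example.com
}
```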
Previously, we did the check too late, resulting in the traefik.port
label not being effective.
The change comes with additional refactorings in production and test code.
This fix allows the Traefik Rancher provider to obtain a complete view
of the environments, services and containers being managed by the
Rancher deployment.
- Split the file into smaller ones (docker, swarm and service tests)
- Use a builder to reduce the noise of creating containers in tests (see the builder sketch below)
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
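For illustration, a small builder in the spirit of this refactoring might look like the following; the containerData type and its fields are simplified stand-ins for the fixtures used in the provider tests:

```go
package main

import "fmt"

// containerData is a simplified stand-in for the container fixture used in
// the provider tests.
type containerData struct {
	Name   string
	Labels map[string]string
	Ports  []int
}

// containerBuilder assembles test containers with a fluent API so that
// table-driven tests stay readable even with many fixtures.
type containerBuilder struct {
	c containerData
}

func newContainer(name string) *containerBuilder {
	return &containerBuilder{c: containerData{Name: name, Labels: map[string]string{}}}
}

func (b *containerBuilder) withLabel(key, value string) *containerBuilder {
	b.c.Labels[key] = value
	return b
}

func (b *containerBuilder) withPort(port int) *containerBuilder {
	b.c.Ports = append(b.c.Ports, port)
	return b
}

func (b *containerBuilder) build() containerData {
	return b.c
}

func main() {
	container := newContainer("web").
		withLabel("traefik.port", "8080").
		withPort(8080).
		build()
	fmt.Printf("%+v\n", container)
}
```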