Change the Marathon provider to make just one API call per configuration
update instead of two by specifying embedded resources, which allow
retrieving multiple response types from the API at once. Apart from the
obvious savings in API calls, we primarily gain a consistent view of
both applications and tasks, which allows us to drop a lot of
correlation logic. Additionally, it will serve as the basis for the
introduction of readiness checks, which require application/task
consistency to be leveraged correctly on the proxy end.
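For illustration, a minimal sketch of the single request this boils down
to, written against the plain Marathon REST API with net/http and
encoding/json rather than the provider's actual client wiring; fields
and error handling are trimmed down:

    package example

    import (
        "encoding/json"
        "fmt"
        "net/http"
    )

    // fetchApps retrieves all Marathon applications with their tasks embedded,
    // so applications and tasks come from one consistent snapshot.
    func fetchApps(marathonURL string) error {
        resp, err := http.Get(marathonURL + "/v2/apps?embed=apps.tasks")
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        var result struct {
            Apps []struct {
                ID     string            `json:"id"`
                Labels map[string]string `json:"labels"`
                Tasks  []struct {
                    Host  string `json:"host"`
                    Ports []int  `json:"ports"`
                } `json:"tasks"`
            } `json:"apps"`
        }
        if err := json.NewDecoder(resp.Body).Decode(&result); err != nil {
            return err
        }
        for _, app := range result.Apps {
            fmt.Printf("%s: %d task(s)\n", app.ID, len(app.Tasks))
        }
        return nil
    }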
Additional changes:
marathon.go:
- Filter on tasks now embedded inside the applications.
- Reduce/simplify the signatures of multiple template functions as we no
  longer need to check for proper application/task correlation.
- Remove getFrontendBackend in favor of just getBackend.
- Move filtering on enabled/exposed applications from `taskFilter` to
  `applicationFilter` (see the sketch after this list). The task filter
  just reached out to the applications anyway, so it never made sense to
  locate it with the tasks, where it was called once for every task even
  though the result would never change.
- Remove the duplicate constraints filter from the tasks, where it never
  made sense to keep since it operates on the application level only.
- Add context to rendering error.
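A rough sketch of the application-level enabled/exposed check mentioned
above (the function name is illustrative, not the exact provider code;
traefik.enable and the exposed-by-default setting are the existing
provider concepts):

    // isApplicationEnabled decides whether an application should be exposed,
    // based on the provider-wide default and an explicit traefik.enable label.
    func isApplicationEnabled(labels map[string]string, exposedByDefault bool) bool {
        if v, ok := labels["traefik.enable"]; ok {
            return v == "true"
        }
        return exposedByDefault
    }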
marathon_test.go:
- Simplify and reduce numerous tests.
- Convert tests with a high number of cases into parallelized sub-tests
  (see the sketch below).
- Improve readability/structure for several tests.
- Add missing test for enabled/exposed applications.
- Simplify the mocked Marathon server.
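The parallelized sub-test pattern referenced above follows the standard
t.Run/t.Parallel idiom, roughly like this (assuming the usual testing
import and reusing the illustrative isApplicationEnabled helper from the
sketch above):

    func TestApplicationFilter(t *testing.T) {
        cases := []struct {
            desc     string
            labels   map[string]string
            expected bool
        }{
            {desc: "explicitly enabled", labels: map[string]string{"traefik.enable": "true"}, expected: true},
            {desc: "explicitly disabled", labels: map[string]string{"traefik.enable": "false"}, expected: false},
        }

        for _, c := range cases {
            c := c // capture range variable for the parallel closure
            t.Run(c.desc, func(t *testing.T) {
                t.Parallel()
                if got := isApplicationEnabled(c.labels, false); got != c.expected {
                    t.Errorf("got %t, want %t", got, c.expected)
                }
            })
        }
    }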
marathon.tmpl:
- Update the application/task iteration (illustrated below).
- Replace `getFrontendBackend` by `getBackend`.
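Stripped down, the new iteration shape in the template looks roughly
like this (backend naming, frontend sections, and sanitization are
omitted; helper names other than getBackend are indicative only):

    [backends]
    {{range $app := .Applications}}
      {{range $task := $app.Tasks}}
      [backends."{{getBackend $app}}".servers."server-{{$task.ID}}"]
        url = "http://{{$task.Host}}:{{getPort $task $app}}"
      {{end}}
    {{end}}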
We previously fell back to using ClusterIPs. However, the approach can
lead to all kinds of problems since Ingresses rely on being able to talk
to Endpoints directly. For instance, it can break stickiness and
retries.
For the two existing health check parameters (path and interval), we add
support for Marathon labels (an example follows the list below).
Changes in detail:
- Extend the Marathon provider and template.
- Refactor Server.loadConfig to reduce duplication.
- Refactor the healthcheck package slightly to accommodate the changes
  and allow it to be extended with future parameters.
- Update documentation.
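For illustration, the label-based form might look like this on a
Marathon application (label names follow the
traefik.backend.healthcheck.* scheme; see the updated documentation for
the authoritative names and value formats):
- traefik.backend.healthcheck.path=/health
- traefik.backend.healthcheck.interval=10s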
- traefik.mycustomservice.port=443
- traefik.mycustomservice.frontend.rule=Path:/mycustomservice
- traefik.anothercustomservice.port=8080
- traefik.anothercustomservice.frontend.rule=Path:/anotherservice
All traffic to the /mycustomservice frontend is redirected to port 443
of the container, while /anotherservice redirects to port 8080 of the
Docker container (a parsing sketch follows below).
More documentation can be found in the docs/toml.md file.
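A purely hypothetical sketch of how such a label key could be split into
its service name and setting (the strings import is assumed; the actual
provider code differs and also has to skip reserved segments such as
"backend" or "frontend"):

    // parseServiceLabel splits a key such as "traefik.mycustomservice.port"
    // into the service name and the remaining setting ("mycustomservice", "port").
    func parseServiceLabel(key string) (service, setting string, ok bool) {
        rest := strings.TrimPrefix(key, "traefik.")
        if rest == key {
            return "", "", false // not a traefik label
        }
        parts := strings.SplitN(rest, ".", 2)
        if len(parts) != 2 {
            return "", "", false
        }
        return parts[0], parts[1], true
    }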
Change-Id: Ifaa3bb00ef0a0f38aa189e0ca1586fde8c5ed862
Signed-off-by: Florent BENOIT <fbenoit@codenvy.com>
This change adds sticky session support by using the new
oxy/rr/StickySession feature.
To use it, set traefik.backend.sticky to true.
This is currently only implemented in the wrr load balancer and against
the Marathon backend, but lifting those restrictions should be very doable.
In the wrr load balancer, a cookie called _TRAEFIK_SERVERNAME will be
set with the backend to use. If the cookie is altered to an invalid
backend server, or the server is removed from the load balancer, the
next server will be used instead.
Otherwise, the cookie is checked in Oxy's rr on access, and if it is
valid the connection is wired through to that server.
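A simplified sketch of that selection logic (names and signature are
illustrative and net/http plus net/url imports are assumed; the real
behavior lives in oxy's round-robin load balancer):

    // pickServer returns the server the _TRAEFIK_SERVERNAME cookie points to,
    // falling back to the next server in rotation when the cookie is missing,
    // invalid, or references a server that is no longer registered.
    func pickServer(req *http.Request, servers []*url.URL, next func() *url.URL) *url.URL {
        if cookie, err := req.Cookie("_TRAEFIK_SERVERNAME"); err == nil {
            for _, s := range servers {
                if s.String() == cookie.Value {
                    return s
                }
            }
        }
        return next() // cookie absent or stale: use the next server instead
    }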
Initial implementation: Force both to be present to trigger behavior.
add ability to see rendered template in debug
add support for loadbalancer and circuit breaker specification (example labels below)
add documentation for new configuration
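For illustration, the corresponding labels might look along these lines
(see the added documentation for the authoritative names and expression
syntax):
- traefik.backend.loadbalancer.method=drr
- traefik.backend.circuitbreaker.expression=NetworkErrorRatio() > 0.5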
I changed what I think is needed and I have done manual testing on this.
I tried to keep the changes to a minimum.
The changes are, roughly:
* HTML output now includes the provider name in parentheses.
* I'm not versed in Bootstrap; should the output group providers in a
  table?
* PUT is only enabled on /api/web.
* GET on /api returns a map containing all providers' configuration.
* GET on /api/{provider} will return the config as before for that
  provider (see the example below).
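For example, assuming the web provider listens on port 8080 and a
Marathon provider is configured, fetching a single provider's
configuration might look like this (illustrative only):

    package main

    import (
        "io"
        "log"
        "net/http"
        "os"
    )

    func main() {
        // GET /api would return all providers; GET /api/{provider} returns one.
        resp, err := http.Get("http://localhost:8080/api/marathon")
        if err != nil {
            log.Fatal(err)
        }
        defer resp.Body.Close()
        io.Copy(os.Stdout, resp.Body)
    }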