This change introduces the builder pattern to the Marathon unit tests in
order to simplify and reduce the amount of testing boilerplate.
Additional changes:
- Add missing unit tests.
- Make all tests look consistent.
- Use a dedicated type for task states for increased type safety.
- Remove the obsolete getApplication function.
Change Marathon provider to make just one API call instead of two per
configuration update by means of specifying embedded resources, which
enable retrieving multiple response types from the API at once. Apart
from the obvious savings in API calls, we primarily gain a consistent
view on both applications and tasks that allows us to drop a lot of
correlation logic. Additionally, it will serve as the basis for the
introduction of readiness checks, which require application/task
consistency to be applied correctly on the proxy end.
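For illustration, here is a minimal sketch of fetching applications with their tasks embedded in a single call, assuming the Marathon `embed=apps.tasks` query parameter and the go-marathon `Applications` helper (not the provider's actual code):

```go
package main

import (
	"fmt"
	"net/url"

	marathon "github.com/gambol99/go-marathon"
)

func main() {
	config := marathon.NewDefaultConfig()
	config.URL = "http://localhost:8080"

	client, err := marathon.NewClient(config)
	if err != nil {
		panic(err)
	}

	// One request returns applications together with their tasks,
	// giving a consistent snapshot of both.
	values := url.Values{}
	values.Set("embed", "apps.tasks")

	apps, err := client.Applications(values)
	if err != nil {
		panic(err)
	}

	for _, app := range apps.Apps {
		fmt.Printf("%s has %d task(s)\n", app.ID, len(app.Tasks))
	}
}
```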
Additional changes:
marathon.go:
- Filter on tasks now embedded inside the applications.
- Reduce/simplify the signatures of multiple template functions, as we
  no longer need to check for proper application/task correlation.
- Remove getFrontendBackend in favor of just getBackend.
- Move filtering on enabled/exposed applications from `taskFilter` to
`applicationFilter`. (The task filter just reached out to the
applications anyway, so it never made sense to locate it with the
tasks where the filter was called once for every task even though the
result would never change.)
- Remove the duplicate constraints filter on tasks, which made no sense
  to keep there as it operates on the application level only.
- Add context to rendering error.
marathon_test.go:
- Simplify and reduce numerous tests.
- Convert tests with high number of cases into parallelized sub-tests.
- Improve readability/structure for several tests.
- Add missing test for enabled/exposed applications.
- Simplify the mocked Marathon server.
marathon.tmpl:
- Update application/task iteration.
- Replace `getFrontendBackend` by `getBackend`.
When Secrets permissions have not been granted (which is likely to be
the case for users not needing the basic auth feature), the watch on the
Secrets API will never yield a response, thereby causing the controller
to never sync successfully, and in turn causing the check for all
controller synchronizations to fail consistently. Thus, no event will
ever be handled.
Update traefik dependencies (docker/docker and related)
- Update dependencies
- Fix compilation problems
- Remove vdemeester/docker-events (in docker api now)
- Remove `integration/vendor`
- Use `testImport`
- Update some dependencies
- Regenerate the lock from scratch (after a `glide cc`)
Introduces Rancher's metadata service as an optional provider source for
Traefik, enabled by setting `rancher.MetadataService`.
The provider uses a long polling technique to watch the metadata service and
obtain near instantaneous updates. Alternatively it can be configured to poll
the metadata service every `rancher.RefreshSeconds` by setting
`rancher.MetadataPoll`.
The refactor splits API and metadata service code into separate source
files respectively, and specific configuration is deferred to
sub-structs.
Incorporates bugfix #1414
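As a rough sketch of the long-polling idea (the version endpoint and the `wait`/`value` query parameters are assumptions about the metadata service API, not Traefik's actual code):

```go
package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func watchMetadata(base string, onChange func(version string)) {
	version := "init"
	for {
		// Block until the metadata version changes or the wait times out.
		url := fmt.Sprintf("%s/latest/version?wait=true&value=%s&maxWait=60", base, version)
		resp, err := http.Get(url)
		if err != nil {
			time.Sleep(5 * time.Second) // back off on errors
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()

		if next := string(body); next != version {
			version = next
			onChange(version) // trigger a configuration refresh
		}
	}
}

func main() {
	watchMetadata("http://rancher-metadata", func(v string) {
		fmt.Println("metadata changed, version:", v)
	})
}
```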
This was likely just a copy-paste issue. The bug should be benign because the secret is cast to the correct type later, but the additional logging is a major annoyance and happens even if basic auth is not in use with Kubernetes.
We previously fell back to using ClusterIPs. However, the approach can
lead to all kinds of problems since Ingresses rely on being able to talk
to Endpoints directly. For instance, it can break stickiness and
retries.
Instead of doing sanity checks in the Kubernetes provider, we just
accept any non-empty value from the annotation and rely on the server
part to filter out unknown rules.
This allows us to automatically stay in sync with the currently
supported Path matchers/modifiers.
A missing annotation would previously be handled in the default error
case, causing a noisy warning-level log message to be generated each
time.
We add another case statement to ignore the case where the annotation is
missing from the annotations map.
Also piggybacking a minor improvement to the log message.
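The shape of the change is roughly the following (the annotation key and defaults here are placeholders, not the exact ones from the provider):

```go
package main

import "log"

// passHostHeader shows the extra empty-string case: a missing annotation no
// longer falls through to the default branch and its warning log.
func passHostHeader(annotations map[string]string) bool {
	switch value := annotations["traefik.frontend.passHostHeader"]; value {
	case "true":
		return true
	case "false":
		return false
	case "":
		// Annotation not set: silently use the default.
		return true
	default:
		log.Printf("unknown passHostHeader value %q for ingress, using default", value)
		return true
	}
}

func main() {
	log.Println(passHostHeader(map[string]string{})) // no warning logged
}
```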
- Improves default filtering behavior to filter by container health/healthState
- Optionally allows filtering by service health/healthState
- Allows configuration of refresh interval
For the two existing health check parameters (path and interval), we add
support for Marathon labels.
Changes in detail:
- Extend the Marathon provider and template.
- Refactor Server.loadConfig to reduce duplication.
- Refactor the healthcheck package slightly to accommodate the changes
and allow extending by future parameters.
- Update documentation.
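A minimal sketch of reading the two parameters from application labels; the label keys follow the usual `traefik.backend.healthcheck.*` naming but should be treated as an assumption here:

```go
package main

import "fmt"

// healthCheckOptions extracts the optional health check parameters from an
// application's labels. No path means no health check is configured.
func healthCheckOptions(labels map[string]string) (path, interval string, ok bool) {
	path, ok = labels["traefik.backend.healthcheck.path"]
	if !ok || path == "" {
		return "", "", false
	}
	interval = labels["traefik.backend.healthcheck.interval"] // optional, server default if empty
	return path, interval, true
}

func main() {
	labels := map[string]string{
		"traefik.backend.healthcheck.path":     "/ping",
		"traefik.backend.healthcheck.interval": "10s",
	}
	fmt.Println(healthCheckOptions(labels))
}
```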
The IP-Per-Task feature changed the behavior for clients without this
configuration (using the task IP instead of the task hostname). This
patch makes the new behavior available only for Mesos installations
with IP-Per-Task enabled. It also makes it possible to force the use
of the task's hostname.
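A simplified sketch of the selection logic, with the Marathon application/task model reduced to local structs:

```go
package main

import "fmt"

type task struct {
	Host        string
	IPAddresses []string
}

type application struct {
	IPAddressPerTask bool // true when the app is deployed with IP-per-task
}

// backendHost returns the address to dial: the task IP only when IP-per-task
// is enabled (and not explicitly overridden), otherwise the task hostname.
func backendHost(app application, t task, forceTaskHostname bool) string {
	if app.IPAddressPerTask && !forceTaskHostname && len(t.IPAddresses) > 0 {
		return t.IPAddresses[0]
	}
	return t.Host
}

func main() {
	app := application{IPAddressPerTask: true}
	t := task{Host: "agent-1.example.com", IPAddresses: []string{"10.0.4.7"}}
	fmt.Println(backendHost(app, t, false)) // 10.0.4.7
	fmt.Println(backendHost(app, t, true))  // agent-1.example.com
}
```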
Previously, we did the check too late resulting in the traefik.port
label not being effective.
The change comes with additional refactorings in production and tests.
This fix allows the Traefik Rancher provider to obtain a complete view
of the environments, services and containers being managed by the
Rancher deployment.
- Split the file into smaller ones (docker, swarm and service tests)
- Use builders to reduce the noise of creating containers a little
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
* Abort Kubernetes Ingress update if Kubernetes API call fails
Currently, if a Kubernetes API call fails, we potentially remove a working service from Traefik. This changes it so that if a Kubernetes API call fails, we abort the ingress update and keep using the current working config. GitHub issue: #1240
Also added a test to cover the case where requested resources (services and endpoints) that the user has specified don't exist.
* Specifically capturing the tc range as documented here: https://blog.golang.org/subtests
* Updating service names in the mock data to be more clear
* Updated expected data to match what currently happens in the loadIngress
* Adding a blank Servers to the expected output so we compare against that instead of nil.
* Replacing the JSON test output with spew for the TestMissingResources test to help ensure we have useful output in case of failures
* Adding a temporary fix to the GetEndpoints mocked function so we can override the return value indicating whether the endpoints exist.
After the 1.2 release, the use of properExists should be removed and the GetEndpoints function should return false for the second value, indicating the endpoint doesn't exist. However, at this time that would break a lot of the tests.
* Adding quick TODO line about removing the properExists property
* Link to issue 1307 re: properExists flag.
If the ECS cluster has > 100 tasks, passing them to
ecs.DescribeTasksRequest() will result in the AWS API returning
errors.
This patch breaks them into chunks of at most 100, and calls
DescribeTasks for each chunk.
We also return early in case ListTasks returns no values; this
prevents DescribeTasks from throwing HTTP errors.
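The chunking itself boils down to something like the following sketch (AWS client plumbing omitted; the 100-item limit is the DescribeTasks constraint mentioned above):

```go
package main

import "fmt"

const describeTasksLimit = 100

// chunkTaskArns splits the task ARNs into slices no larger than the
// DescribeTasks limit so each API call stays within bounds.
func chunkTaskArns(arns []*string) [][]*string {
	var chunks [][]*string
	for i := 0; i < len(arns); i += describeTasksLimit {
		end := i + describeTasksLimit
		if end > len(arns) {
			end = len(arns)
		}
		chunks = append(chunks, arns[i:end])
	}
	return chunks
}

func main() {
	arns := make([]*string, 250)
	for i := range arns {
		s := fmt.Sprintf("arn:aws:ecs:task/%d", i)
		arns[i] = &s
	}
	fmt.Println(len(chunkTaskArns(arns))) // 3 chunks: 100, 100, 50
}
```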
Currently, with a kv tree like:
/traefik/backends/b1/servers/web1
/traefik/backends/b1/servers/web2
/traefik/backends/b1/servers/web2/url
Traefik would try to forward traffic to web1, which is impossible as
Traefik doesn't know the url of web1.
This commit solves that by ignoring backend servers with no url key
when generating the config.
This is very useful for people who use the etcd TTL feature. They can
then just "renew" the url key every X seconds, and if the server goes
down, it is automatically removed from Traefik after the TTL.
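Conceptually, the filter amounts to the sketch below, with the kv lookups reduced to a plain map for brevity:

```go
package main

import "fmt"

// validServers keeps only servers that expose a non-empty "url" key, so
// Traefik never tries to forward traffic to an address it does not know.
// serverKeys maps server names to their child keys (e.g. "url", "weight").
func validServers(serverKeys map[string]map[string]string) []string {
	var servers []string
	for name, keys := range serverKeys {
		if url, ok := keys["url"]; ok && url != "" {
			servers = append(servers, name)
		}
	}
	return servers
}

func main() {
	fmt.Println(validServers(map[string]map[string]string{
		"web1": {},                            // no url key: dropped
		"web2": {"url": "http://10.0.0.2:80"}, // kept
	}))
}
```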
Signed-off-by: Taylor Skinner <tskinn12@gmail.com>
- add some comments
- update readmes
- make test runnable
- make test
- squash! add dynamo
- add glide.lock
- format imports
- gofmt
- update glide.lock
- fixes for review
- golint
- clean up and reorganize tests
- add dynamodb integration test
- remove default region; clean up tests; consistent docs
- forgot the region is required
- DRY
- make validate
- update readme and commit dependencies
- traefik.mycustomservice.port=443
- traefik.mycustomservice.frontend.rule=Path:/mycustomservice
- traefik.anothercustomservice.port=8080
- traefik.anothercustomservice.frontend.rule=Path:/anotherservice
All traffic to the /mycustomservice frontend is forwarded to port 443 of the container, while /anotherservice forwards to port 8080 of the Docker container.
More documentation is available in the docs/toml.md file.
Change-Id: Ifaa3bb00ef0a0f38aa189e0ca1586fde8c5ed862
Signed-off-by: Florent BENOIT <fbenoit@codenvy.com>
Detect whether in-cluster or cluster-external Kubernetes client should
be used based on the KUBERNETES_SERVICE_{HOST,PORT} environment
variables.
Adds bearer token and CA certificate file path parameters.
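A minimal sketch of the detection rule, assuming nothing beyond the two environment variables named above:

```go
package main

import (
	"fmt"
	"os"
)

// useInClusterClient reports whether both in-cluster environment variables
// are set; otherwise the cluster-external client configuration is used.
func useInClusterClient() bool {
	return os.Getenv("KUBERNETES_SERVICE_HOST") != "" &&
		os.Getenv("KUBERNETES_SERVICE_PORT") != ""
}

func main() {
	if useInClusterClient() {
		fmt.Println("building in-cluster Kubernetes client")
	} else {
		fmt.Println("building cluster-external Kubernetes client (endpoint, bearer token, CA cert)")
	}
}
```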
In Swarm mode with Docker Swarm's load balancer disabled (traefik.backend.loadbalancer.swarm=false),
the service name will be the name of the Docker service and the name will be the container task name
(e.g. whoami0.1). When generating backend and frontend rules, we will use the service name instead of the name if a
rule is not provided.
Initialize dockerData.ServiceName to dockerData.Name to support non-Swarm mode.
Swarm mode has its own built-in load balancer, so if we want to leverage sticky sessions,
or if we would just prefer to bypass it and go directly to the containers (aka tasks), via
--label traefik.backend.disable.swarm.loadbalancer=true
then we need to let Traefik know about the underlying tasks and register them as
services within its backend.
The IP-Per-Task PR introduced a bug using the Marathon application
port mapping. This port should be used only in the proxy server; the
downstream connection should always be made with the task port.
This commit fixes the regression and adds a unit test to prevent new
problems in this setup.
Only use one channel for all watches
Re-use stop channel from the provider
Skip events that have already been handled by the provider, builds on 007f8cc48ea9504bb7754c5e3244124be422f47d
On a reasonably sized cluster:
63 nodes
87 services
90 endpoints
The initialization of the k8s provider would hang.
I tracked this down to the ResourceEventHandlerFuncs. Once you reach the
channel buffer size (10), the k8s Informer gets stuck. You can't read or
write messages to the channel anymore. I think this is probably a lock
issue somewhere in k8s, but the more reasonable solution for the Traefik
use case is to just drop events when the queue is full. Since we only use
the events for signalling, not their content, dropping an event
doesn't matter.
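The "drop when full" signalling pattern is essentially a non-blocking send, e.g.:

```go
package main

import "fmt"

// notify queues a refresh signal if there is room; otherwise a refresh is
// already pending, so the event is safely dropped.
func notify(events chan<- struct{}) {
	select {
	case events <- struct{}{}:
	default:
	}
}

func main() {
	events := make(chan struct{}, 1)
	for i := 0; i < 5; i++ {
		notify(events) // never blocks, even though only one signal fits
	}
	fmt.Println("pending signals:", len(events))
}
```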
Add compatibility with labels: `HAPROXY_GROUP` and `HAPROXY_0_VHOST`.
* `HAPROXY_GROUP` becomes a new tag
* `HAPROXY_0_VHOST` becomes a `Host:` rule
https://github.com/mesosphere/marathon-lb
- React to health_status events
- Filter container that have a health status *and* that are not healthy
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
This change adds sticky session support, by using the new
oxy/rr/StickySession feature.
To use it, set traefik.backend.sticky to true.
This is currently only implemented in the wrr load balancer, and against
the Marathon backend, but lifting this restriction should be very doable.
In the wrr load balancer, a cookie called _TRAEFIK_SERVERNAME will be
set with the backend to use. If the cookie is altered to an invalid
backend server, or the server is removed from the load balancer, the
next server will be used instead.
Otherwise, the cookie will be checked in Oxy's rr on access and if valid
the connection will be wired through to it.
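Wiring this up with oxy looks roughly like the sketch below (a simplified standalone example, not Traefik's server code):

```go
package main

import (
	"net/http"
	"net/url"

	"github.com/vulcand/oxy/forward"
	"github.com/vulcand/oxy/roundrobin"
)

func main() {
	fwd, _ := forward.New()

	// The cookie stores the chosen backend server; an invalid or missing
	// value simply falls back to normal round-robin selection.
	sticky := roundrobin.NewStickySession("_TRAEFIK_SERVERNAME")
	lb, _ := roundrobin.New(fwd, roundrobin.EnableStickySession(sticky))

	server, _ := url.Parse("http://10.0.0.2:80")
	lb.UpsertServer(server)

	http.ListenAndServe(":8080", lb)
}
```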
Initial implementation: Force both to be present to trigger behavior.
- Add the ability to see the rendered template in debug mode
- Add support for loadbalancer and circuit breaker specification
- Add documentation for the new configuration
Health checks are not mandatory, so if a result is not present, assume it
is OK to continue. This fixes the case where a newly elected leader doesn't
have any health check results yet, returning 404 to all requests.
https://github.com/containous/traefik/issues/653
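A simplified version of the intended filter behavior (types reduced to local structs):

```go
package main

import "fmt"

type healthCheckResult struct{ Alive bool }

type task struct {
	ID                 string
	HealthCheckResults []healthCheckResult
}

// healthy treats a task with no health check results as healthy instead of
// dropping it, and only filters tasks with a failing result.
func healthy(t task) bool {
	if len(t.HealthCheckResults) == 0 {
		// No results yet (e.g. right after a leader election): assume OK.
		return true
	}
	for _, result := range t.HealthCheckResults {
		if !result.Alive {
			return false
		}
	}
	return true
}

func main() {
	fmt.Println(healthy(task{ID: "t1"})) // true: no results yet
	fmt.Println(healthy(task{ID: "t2", HealthCheckResults: []healthCheckResult{{Alive: false}}}))
}
```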
We added the ability to filter the ingresses used by Traefik based
on a label selector, but we shouldn't need to have matching
labels on every other resource; Ingress already has a way
to explicitly choose which pods end up in the load balancer
(by referring to the membership of a particular service).
The TargetRef contains information from the referenced object (i.e. the
pod); unless the service has been set up with bare endpoints, i.e. not
pointing at pods, this information will be present.
It just makes the information that we show in the web UI
a little more consistent with that shown in kubectl
and the Kubernetes dashboard.
… making it possible to use it in other packages, and thus in the
User-Agent header for the Docker client.
Also removing the dockerversion hack as it's not required anymore.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
The Consul watch can return for various reasons, and not all of
them require a reload of the config. The order of nodes provided by
Consul is not stable, so to ensure an identical config is generated for
an identical server set, the nodes need to be sorted before creating the
config.
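The stabilization step is just a sort on a deterministic key before rendering, e.g.:

```go
package main

import (
	"fmt"
	"sort"
)

type node struct {
	Node    string
	Address string
}

// sortNodes orders the catalog nodes deterministically so the same server
// set always renders the same configuration, regardless of watch order.
func sortNodes(nodes []node) {
	sort.Slice(nodes, func(i, j int) bool {
		return nodes[i].Node < nodes[j].Node
	})
}

func main() {
	nodes := []node{{"web-2", "10.0.0.2"}, {"web-1", "10.0.0.1"}}
	sortNodes(nodes)
	fmt.Println(nodes) // always [{web-1 10.0.0.1} {web-2 10.0.0.2}]
}
```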
… and also share the context across API calls, as this is how it's meant to
be used (and it allows skipping some calls if `cancel` is called).
Signed-off-by: Vincent Demeester <vincent@sbr.pm>
- Add flag on Retry
- Set Retry.MaxMem to 2 by default
- Remove useless import
- Remove useless struct tag
- Add custom parser for the []acme.Domain type
- Add comments + refactor
Since we already know the name and namespace
of the service(s) we want we can just get the
correct one back from the API without filtering
the results.
* Potentially saves a network hop
* Ability to configure the LB algorithm (given some work to expose an
annotation etc...)
* K8s config Watch is triggered far less often
By design, a k8s Ingress is only meant to balance services from within
the namespace of the Ingress.
This is discussed a little in
https://github.com/kubernetes/kubernetes/issues/17088.
For now Traefik should only reference the services in the current
namespace. For me this was a confusing change of behaviour
from the reference implementations, as I have services
with the same name in each namespace.
Another fun thing Consul lets you do is use spaces in your tags. This
means that when including tags in the backend name, it's possible to
generate invalid names.
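Illustratively, the name built from tags needs sanitizing along these lines (the exact replacement rules in the provider may differ):

```go
package main

import (
	"fmt"
	"strings"
)

// backendName joins the service name and its tags, replacing spaces so the
// tags cannot produce an invalid backend name.
func backendName(service string, tags []string) string {
	name := service + "--" + strings.Join(tags, "--")
	return strings.Replace(name, " ", "-", -1)
}

func main() {
	fmt.Println(backendName("api", []string{"public zone", "v1"}))
	// api--public-zone--v1
}
```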
If the flag kubernetes.namespaces is set...
Then we only select ingresses from that/those namespace(s)
This allows multiple instances of traefik to
independently load balance for each namespace.
This could be for logical or security reasons.
Addresses #336
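Conceptually, the selection is a simple namespace filter, sketched below with placeholder types:

```go
package main

import "fmt"

type ingress struct {
	Namespace string
	Name      string
}

// filterIngresses keeps only ingresses from the configured namespaces; an
// empty namespace list means no restriction.
func filterIngresses(ingresses []ingress, namespaces []string) []ingress {
	if len(namespaces) == 0 {
		return ingresses
	}
	allowed := make(map[string]bool, len(namespaces))
	for _, ns := range namespaces {
		allowed[ns] = true
	}
	var out []ingress
	for _, ing := range ingresses {
		if allowed[ing.Namespace] {
			out = append(out, ing)
		}
	}
	return out
}

func main() {
	all := []ingress{{"team-a", "web"}, {"team-b", "api"}}
	fmt.Println(filterIngresses(all, []string{"team-a"})) // [{team-a web}]
}
```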
The docker provider now uses docker/engine-api and
vdemeester/docker-events instead of fsouza-dockerclient.
Signed-off-by: Vincent Demeester <vincent@sbr.pm>