Merge current v2.8 into master

kevinpollet 2022-07-25 17:31:51 +02:00
commit ab94bbaece
71 changed files with 1093 additions and 475 deletions


@@ -7,7 +7,7 @@ on:
env:
GO_VERSION: 1.17
GOLANGCI_LINT_VERSION: v1.46.2
GOLANGCI_LINT_VERSION: v1.47.1
MISSSPELL_VERSION: v0.3.4
IN_DOCKER: ""

.gitignore

@@ -18,3 +18,4 @@ vendor/
plugins-storage/
plugins-local/
traefik_changelog.md
integration/tailscale.secret


@@ -179,7 +179,7 @@
]
[[issues.exclude-rules]]
path = "(.+)_test.go"
linters = ["goconst", "funlen", "godot"]
linters = ["goconst", "funlen", "godot", "nosnakecase"]
[[issues.exclude-rules]]
path = "integration/.+_test.go"
text = "Error return value of `cmd\\.Process\\.Kill` is not checked"
@@ -222,3 +222,15 @@
[[issues.exclude-rules]]
path = "pkg/server/router/tcp/manager.go"
text = "Function 'buildEntryPointHandler' is too long (.+)"
[[issues.exclude-rules]]
path = "integration/fake_dns_server.go"
linters = ["nosnakecase"]
[[issues.exclude-rules]]
path = "pkg/plugins/providers.go"
linters = ["nosnakecase"]
[[issues.exclude-rules]]
path = "pkg/tls/cipher.go"
linters = ["nosnakecase"]
[[issues.exclude-rules]]
text = "O_WRONLY|O_RDWR|O_CREATE|O_TRUNC|O_APPEND"
linters = ["nosnakecase"]


@@ -1,3 +1,16 @@
## [v2.8.1](https://github.com/traefik/traefik/tree/v2.8.1) (2022-07-11)
[All Commits](https://github.com/traefik/traefik/compare/v2.8.0...v2.8.1)
**Bug fixes:**
- **[kv]** Upgrade valkeyrie to v0.4.1 ([#9161](https://github.com/traefik/traefik/pull/9161) by [moutoum](https://github.com/moutoum))
- **[middleware,metrics]** Improve performances when Prometheus metrics are enabled ([#9168](https://github.com/traefik/traefik/pull/9168) by [juliens](https://github.com/juliens))
- **[middleware]** Support forwarded websocket protocol in RedirectScheme ([#9159](https://github.com/traefik/traefik/pull/9159) by [moutoum](https://github.com/moutoum))
**Documentation:**
- Update the language for advocating page ([#9169](https://github.com/traefik/traefik/pull/9169) by [tfny](https://github.com/tfny))
- Add callout for anyone using Traefik to manage commercial applications ([#9152](https://github.com/traefik/traefik/pull/9152) by [tomatokoolaid](https://github.com/tomatokoolaid))
- Update deprecation notices ([#9149](https://github.com/traefik/traefik/pull/9149) by [ddtmachado](https://github.com/ddtmachado))
## [v2.8.0](https://github.com/traefik/traefik/tree/v2.8.0) (2022-06-29)
[All Commits](https://github.com/traefik/traefik/compare/v2.8.0-rc1...v2.8.0)
@@ -234,7 +247,7 @@ Release canceled.
- **[webui]** Add a link to service on router detail view ([#8821](https://github.com/traefik/traefik/pull/8821) by [Tchoupinax](https://github.com/Tchoupinax))
**Documentation:**
- Add a Feature Deprecation page ([#8868](https://github.com/traefik/traefik/pull/8868) by [ddtmachado](https://github.com/ddtmachado))
**Misc:**
- Merge current v2.6 into master ([#8877](https://github.com/traefik/traefik/pull/8877) by [rtribotte](https://github.com/rtribotte))
@@ -602,7 +615,6 @@ Release canceled.
- Merge current v2.4 into master ([#7748](https://github.com/traefik/traefik/pull/7748) by [kevinpollet](https://github.com/kevinpollet))
- Merge current v2.4 into master ([#7728](https://github.com/traefik/traefik/pull/7728) by [mmatur](https://github.com/mmatur))
## [v2.4.14](https://github.com/traefik/traefik/tree/v2.4.14) (2021-08-16)
[All Commits](https://github.com/traefik/traefik/compare/v2.4.13...v2.4.14)
@@ -3418,7 +3430,6 @@ Same changelog as v2.0.3.
## [v1.7.0-rc2](https://github.com/traefik/traefik/tree/v1.7.0-rc2) (2018-07-17)
[All Commits](https://github.com/traefik/traefik/compare/v1.7.0-rc1...v1.7.0-rc2)
**Bug fixes:**
- **[acme,provider]** Create init method on provider interface ([#3580](https://github.com/traefik/traefik/pull/3580) by [Juliens](https://github.com/Juliens))
- **[acme]** Serve TLS-Challenge certificate in first ([#3605](https://github.com/traefik/traefik/pull/3605) by [nmengin](https://github.com/nmengin))
@@ -4428,7 +4439,7 @@ Same changelog as v2.0.3.
- **[acme]** Dumpcerts.sh: fixed sed, extracted domain keys ([#2161](https://github.com/traefik/traefik/pull/2161) by [sjawhar](https://github.com/sjawhar))
- Merge current v1.4 into master ([#2469](https://github.com/traefik/traefik/pull/2469) by [ldez](https://github.com/ldez))
- Revert "Merge v1.4.2 into master" ([#2414](https://github.com/traefik/traefik/pull/2414) by [ldez](https://github.com/ldez))
- Merge v1.4.3 into master ([#2406](https://github.com/traefik/traefik/pull/2406) by [ldez](https://github.com/ldez))
- Merge v1.4.2 into master ([#2358](https://github.com/traefik/traefik/pull/2358) by [ldez](https://github.com/ldez))
- Merge v1.4.3 into master ([#2415](https://github.com/traefik/traefik/pull/2415) by [ldez](https://github.com/ldez))
- Merge v1.4.1 into master ([#2318](https://github.com/traefik/traefik/pull/2318) by [ldez](https://github.com/ldez))
@@ -5777,7 +5788,7 @@ Same changelog as v2.0.3.
- Fix k8s watch [\#573](https://github.com/traefik/traefik/pull/573) ([errm](https://github.com/errm))
- Add requirements.txt for netlify [\#567](https://github.com/traefik/traefik/pull/567) ([emilevauge](https://github.com/emilevauge))
- Merge v1.0.1 master [\#565](https://github.com/traefik/traefik/pull/565) ([emilevauge](https://github.com/emilevauge))
- Move webui to FountainJS with Webpack [\#558](https://github.com/traefik/traefik/pull/558) ([micaelmbagira](https://github.com/micaelmbagira))
- Add global InsecureSkipVerify option to disable certificate checking [\#557](https://github.com/traefik/traefik/pull/557) ([stuart-c](https://github.com/stuart-c))
- Move version.go in its own package… [\#553](https://github.com/traefik/traefik/pull/553) ([vdemeester](https://github.com/vdemeester))
- Upgrade libkermit and dependencies [\#552](https://github.com/traefik/traefik/pull/552) ([vdemeester](https://github.com/vdemeester))
@@ -6000,7 +6011,7 @@ Same changelog as v2.0.3.
- Fix k8s watch [\#573](https://github.com/traefik/traefik/pull/573) ([errm](https://github.com/errm))
- Add requirements.txt for netlify [\#567](https://github.com/traefik/traefik/pull/567) ([emilevauge](https://github.com/emilevauge))
- Merge v1.0.1 master [\#565](https://github.com/traefik/traefik/pull/565) ([emilevauge](https://github.com/emilevauge))
- Move webui to FountainJS with Webpack [\#558](https://github.com/traefik/traefik/pull/558) ([micaelmbagira](https://github.com/micaelmbagira))
- Add global InsecureSkipVerify option to disable certificate checking [\#557](https://github.com/traefik/traefik/pull/557) ([stuart-c](https://github.com/stuart-c))
- Move version.go in its own package… [\#553](https://github.com/traefik/traefik/pull/553) ([vdemeester](https://github.com/vdemeester))
- Upgrade libkermit and dependencies [\#552](https://github.com/traefik/traefik/pull/552) ([vdemeester](https://github.com/vdemeester))
@@ -6162,6 +6173,4 @@ Same changelog as v2.0.3.
- log info about TOML configuration file using [\#420](https://github.com/traefik/traefik/pull/420) ([cocap10](https://github.com/cocap10))
- Doc about skipping some integration tests with '-check.f ConsulCatalogSuite' [\#418](https://github.com/traefik/traefik/pull/418) ([samber](https://github.com/samber))
\* *This Change Log was automatically generated by [gcg](https://github.com/ldez/gcg)*


@@ -30,18 +30,18 @@ Project maintainers have the right and responsibility to remove, edit, or reject
## Scope
This Code of Conduct applies both within project spaces and in public spaces when an individual is representing the project or our community.
Examples of representing a project or community include using an official project e-mail address, posting via an official social media account, or acting as an appointed representative at an online or offline event.
Representation of a project may be further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be reported by contacting the project team at contact@traefik.io
All complaints will be reviewed and investigated and will result in a response that is deemed necessary and appropriate to the circumstances.
The project team is obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.


@@ -2,8 +2,10 @@
Here are some guidelines that should help to start contributing to the project.
- [Submitting pull Requests](https://github.com/traefik/contributors-guide/blob/master/pr_guidelines.md)
- [Submitting pull Requests](https://doc.traefik.io/traefik/contributing/submitting-pull-requests/)
- [Submitting issues](https://doc.traefik.io/traefik/contributing/submitting-issues/)
- [Submitting security issues](docs/content/contributing/submitting-security-issues.md)
- [Submitting security issues](https://doc.traefik.io/traefik/contributing/submitting-security-issues/)
- [Advocating for Traefik](https://doc.traefik.io/traefik/contributing/advocating/)
- [Triage Process](https://github.com/traefik/contributors-guide/blob/master/issue_triage.md)
If you are willing to become a maintainer of the project, please take a look at the [maintainers guidelines](docs/content/contributing/maintainers-guidelines.md).


@@ -14,6 +14,7 @@ TRAEFIK_IMAGE := $(if $(REPONAME),$(REPONAME),"traefik/traefik")
INTEGRATION_OPTS := $(if $(MAKE_DOCKER_HOST),-e "DOCKER_HOST=$(MAKE_DOCKER_HOST)",-v "/var/run/docker.sock:/var/run/docker.sock")
DOCKER_BUILD_ARGS := $(if $(DOCKER_VERSION), "--build-arg=DOCKER_VERSION=$(DOCKER_VERSION)",)
# only used when running in docker
TRAEFIK_ENVS := \
-e OS_ARCH_ARG \
-e OS_PLATFORM_ARG \
@@ -23,7 +24,7 @@ TRAEFIK_ENVS := \
-e CODENAME \
-e TESTDIRS \
-e CI \
-e CONTAINER=DOCKER # Indicator for integration tests that we are running inside a container.
-e IN_DOCKER=true # Indicator for integration tests that we are running inside a container.
TRAEFIK_MOUNT := -v "$(CURDIR)/dist:/go/src/github.com/traefik/traefik/dist"
DOCKER_RUN_OPTS := $(TRAEFIK_ENVS) $(TRAEFIK_MOUNT) "$(TRAEFIK_DEV_IMAGE)"
@@ -102,7 +103,7 @@ crossbinary-default-parallel:
test: build-dev-image
-docker network create traefik-test-network --driver bridge --subnet 172.31.42.0/24
trap 'docker network rm traefik-test-network' EXIT; \
$(if $(IN_DOCKER),$(DOCKER_RUN_TRAEFIK_TEST),) ./script/make.sh generate test-unit binary test-integration
$(if $(IN_DOCKER),$(DOCKER_RUN_TRAEFIK_TEST)) ./script/make.sh generate test-unit binary test-integration
## Run the unit tests
.PHONY: test-unit
@@ -116,7 +117,7 @@ test-unit: build-dev-image
test-integration: build-dev-image
-docker network create traefik-test-network --driver bridge --subnet 172.31.42.0/24
trap 'docker network rm traefik-test-network' EXIT; \
$(if $(IN_DOCKER),$(DOCKER_RUN_TRAEFIK_TEST),) ./script/make.sh generate binary test-integration
$(if $(IN_DOCKER),$(DOCKER_RUN_TRAEFIK_TEST)) ./script/make.sh generate binary test-integration
## Pull all images for integration tests
.PHONY: pull-images


@@ -10,7 +10,6 @@
[![Join the community support forum at https://community.traefik.io/](https://img.shields.io/badge/style-register-green.svg?style=social&label=Discourse)](https://community.traefik.io/)
[![Twitter](https://img.shields.io/twitter/follow/traefik.svg?style=social)](https://twitter.com/intent/follow?screen_name=traefik)
Traefik (pronounced _traffic_) is a modern HTTP reverse proxy and load balancer that makes deploying microservices easy.
Traefik integrates with your existing infrastructure components ([Docker](https://www.docker.com/), [Swarm mode](https://docs.docker.com/engine/swarm/), [Kubernetes](https://kubernetes.io), [Marathon](https://mesosphere.github.io/marathon/), [Consul](https://www.consul.io/), [Etcd](https://coreos.com/etcd/), [Rancher](https://rancher.com), [Amazon ECS](https://aws.amazon.com/ecs), ...) and configures itself automatically and dynamically.
Pointing Traefik at your orchestrator should be the _only_ configuration step you need.
@@ -65,7 +64,6 @@ _(But if you'd rather configure some of your routes manually, Traefik supports t
- Exposes a Rest API
- Packaged as a single binary file (made with :heart: with go) and available as an [official](https://hub.docker.com/r/_/traefik/) docker image
## Supported Backends
- [Docker](https://doc.traefik.io/traefik/providers/docker/) / [Swarm mode](https://doc.traefik.io/traefik/providers/docker/)
@@ -93,6 +91,7 @@ A collection of contributions around Traefik can be found at [https://awesome.tr
## Support
To get community support, you can:
- join the Traefik community forum: [![Join the chat at https://community.traefik.io/](https://img.shields.io/badge/style-register-green.svg?style=social&label=Discourse)](https://community.traefik.io/)
If you need commercial support, please contact [Traefik.io](https://traefik.io) by mail: <mailto:support@traefik.io>.
@@ -127,7 +126,6 @@ We are strongly promoting a philosophy of openness and sharing, and firmly stand
This [document](docs/content/contributing/maintainers-guidelines.md) describes how to be part of the core team as well as various responsibilities and guidelines for Traefik maintainers.
You can also find more information on our process to review pull requests and manage issues [in this document](docs/content/contributing/maintainers.md).
## Contributing
If you'd like to contribute to the project, refer to the [contributing documentation](CONTRIBUTING.md).


@@ -13,7 +13,7 @@ RUN mkdir -p /usr/local/bin \
| tar -xzC /usr/local/bin --transform 's#^.+/##x'
# Download golangci-lint binary to bin folder in $GOPATH
RUN curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | bash -s -- -b $GOPATH/bin v1.46.2
RUN curl -sfL https://raw.githubusercontent.com/golangci/golangci-lint/master/install.sh | bash -s -- -b $GOPATH/bin v1.47.1
# Download misspell binary to bin folder in $GOPATH
RUN curl -sfL https://raw.githubusercontent.com/client9/misspell/master/install-misspell.sh | bash -s -- -b $GOPATH/bin v0.3.4


@@ -8,8 +8,24 @@ description: "There are many ways to contribute to Traefik Proxy. If you're talk
Spread the Love & Tell Us about It
{: .subtitle }
There are many ways to contribute to the project, and there is one that always sparks joy: when we see/read about users talking about how Traefik helps them solve their problems.
Traefik Proxy was started by the community for the community.
You can contribute to the Traefik community in three main ways:
If you're talking about Traefik, [let us know](https://traefik.io/submit-my-contribution/) and we'll promote your enthusiasm!
**Spread the word!** Guides, videos, blog posts, how-to articles, and showing off your network design all help spread the word about Traefik Proxy
and teach others in the community how to best implement it.
It always sparks joy when users share how Traefik Proxy helps them solve their problems.
If you are talking about Traefik Proxy, [let us know](https://traefik.io/submit-my-contribution/) and we will promote your work and reward your enthusiasm!
If you are giving a talk that includes or is about Traefik Proxy, [let us know](https://traefik.io/submit-my-contribution/) and we will send you swag and stickers for your time at the conference.
If you have written about Traefik or shared useful information you would like to promote, feel free to add links to the [dedicated wiki page on GitHub](https://github.com/traefik/traefik/wiki/Awesome-Traefik).
Also, if you've written about Traefik or shared useful information you'd like to promote, feel free to add links in the [dedicated wiki page on GitHub](https://github.com/traefik/traefik/wiki/Awesome-Traefik).
**Help community members!** Everyone needs a place to share their cool innovations or get help with that pesky bug that only a different pair of eyes seems to be able to see.
Join our [Community Forum](https://community.traefik.io/) where you can ask questions, help out other users, and share your neat configuration examples or snippets.
Top contributors will be asked to join the Ambassador program and get unique swag to celebrate!
**Build cool solutions!** Traefik Proxy would be so much better if only it had…
We love all the wonderful ideas that our users come up with, but we can only build so much.
Luckily, as an open source community, our users can help by [building awesome features](https://github.com/orgs/traefik/projects/9/views/7), enhancements, or bug fixes.
We are a big community, so we do need to prioritize a bit.
That is why we use the tag `contributor/wanted` to let you know which pull requests will make it to the front of the queue for design support and review.
Feel free to grab one of these and run with it.
Top contributors get unique swag to celebrate.


@@ -3,12 +3,228 @@ title: "Traefik Pull Requests Documentation"
description: "Looking to contribute to Traefik Proxy? This guide will show you the guidelines for submitting a PR in our contributors guide repository."
---
# Submitting Pull Requests
# Before You Submit a Pull Request
A Quick Guide for Efficient Contributions
{: .subtitle }
This guide is for contributors who already have a pull request to submit.
If you are looking for information on setting up your developer environment
and creating code to contribute to Traefik Proxy or related projects,
see the [development guide](https://docs.traefik.io/contributing/building-testing/).
So you've decided to improve Traefik?
Thank You!
Looking for a way to contribute to Traefik Proxy?
Check out this list of [Priority Issues](https://github.com/traefik/traefik/labels/contributor%2Fwanted),
the [Good First Issue](https://github.com/traefik/traefik/labels/contributor%2Fgood-first-issue) list,
or the list of [confirmed bugs](https://github.com/traefik/traefik/labels/kind%2Fbug%2Fconfirmed) waiting to be remedied.
Please review the [guidelines on creating PRs](https://github.com/traefik/contributors-guide/blob/master/pr_guidelines.md) for Traefik in our [contributors guide repository](https://github.com/traefik/contributors-guide).
## How We Prioritize
We wish we could review every pull request right away.
Unfortunately, our team has to prioritize pull requests (PRs) for review
(but we are welcoming new [maintainers](https://github.com/traefik/traefik/blob/master/docs/content/contributing/maintainers-guidelines.md) to speed this up,
so if you are interested, check it out and apply).
The PRs we are able to handle fastest are:
* Documentation updates.
* Bug fixes.
* Enhancements and Features with a `contributor/wanted` tag.
PRs that take more time to address include:
* Enhancements or Features without the `contributor/wanted` tag.
If you have an idea for an enhancement or feature that you would like to build,
[create an issue](https://github.com/traefik/traefik/issues/new/choose) for it first
and tell us you are interested in writing the PR.
If an issue already exists, definitely comment on it to tell us you are interested in creating a PR.
This will allow us to communicate directly and let you know if it is something we would accept.
It also allows us to make sure you have all the information you need during the design phase
so that it can be reviewed and merged quickly.
If you have questions about the Triage process,
[read more here](https://github.com/traefik/contributors-guide/blob/master/issue_triage.md).
## The Pull Request Submit Process
A PR must complete the following steps before it can be merged automatically.
* Make sure your pull request adheres to our best practices. These include:
* [Following project conventions](https://github.com/traefik/traefik/blob/master/docs/content/contributing/maintainers-guidelines.md), including using the PR template.
* Make small pull requests.
* Solve only one problem at a time.
* Comment thoroughly.
* Do not open the PR from an organization repository.
* Keep "allows edit from maintainer" checked.
* Use semantic line breaks for documentation.
* Pass the validation check.
* Pass all tests.
* Receive 3 approving reviews from maintainers.
## Pull Request Review Cycle
You can read about our Triage Process [here](https://github.com/traefik/contributors-guide/blob/master/issue_triage.md),
but in short, it looks like this:
* We triage every new PR or comment before entering it into the review process.
* We ensure that all prerequisites for review have been met.
* We check to make sure the use case meets our needs.
* We assign reviewers.
* Design Review.
* This takes longer than other parts of the process.
* We review that there are no obvious conflicts with our codebase.
* Code Review.
* We review the code in-depth and run tests.
* We may ask for changes here.
* During code review, we ask that you be reasonably responsive;
if a PR languishes in code review, it is at risk of rejection,
or we may take ownership of the PR, in which case the contributor becomes a co-author.
* Merge.
* Success!
!!! note
Occasionally, we may freeze our codebase when working towards a specific feature or goal that could impact other development.
During this time, your pull request could remain unmerged while the release work is completed.
## Run Local Verifications
You must run these local verifications before you submit your pull request to predict the pass or failure of continuous integration.
Your PR will not be reviewed until these are green on the CI.
* `make validate`
* `make pull-images`
* `make test`
## The Testing and Merge Workflow
Pull Requests are managed by the bot [Myrmica Lobicornis](https://github.com/traefik/lobicornis).
This bot is responsible for verifying GitHub Checks (CI, tests, etc.), mergeability, and minimum reviews.
In addition, it rebases or merges with the base PR branch if needed.
It performs several other housekeeping items
and you can read more about those on the [README](https://github.com/traefik/lobicornis) for Lobicornis.
The maintainer giving the final LGTM must add the `status/3-needs-merge` label to trigger the merge bot.
By default, a squash-rebase merge will be carried out.
The status `status/4-merge-in-progress` is only used by the bot.
If the bot is not able to perform the merge, the label `bot/need-human-merge` is added.
In such a situation, solve the conflicts/CI/... and then remove the label `bot/need-human-merge`.
To prevent the bot from automatically merging a PR, add the label `bot/no-merge`.
The label `bot/light-review` decreases the number of required LGTM from 3 to 1.
This label can be used when:
* Updating a dependency.
* Merging branches back into the next version branch.
* Submitting minor documentation changes.
* Submitting changelog PRs.
## Why Was My Pull Request Closed?
Traefik Proxy is made by the community for the community,
as such, the goal is to engage the community to make Traefik the best reverse proxy available.
Part of this goal is maintaining a lean codebase and ensuring code velocity.
Unfortunately, this means that sometimes we will not be able to merge a pull request.
Because we respect the work you did, you will always be told why we are closing your pull request.
If you do not agree with our decision, do not worry; closed pull requests are easy to recreate,
and little work is lost by closing a pull request that subsequently needs to be reopened.
Your pull request might be closed if:
* Your PR's design conflicts with our existing codebase in such a way that merging is not an option
and the work needed to make your pull request usable is too high.
* To prevent this, make sure you created an issue first
and think about including Traefik Proxy maintainers in your design phase to minimize conflicts.
* Your PR is for an enhancement or feature that we will not use.
* Please remember to create an issue for any pull request **before** you create a PR
to ensure that your goal is something we can merge and that you have any design insight you might need from the team.
* Your PR has been waiting for feedback from the contributor for over 90 days.
## Why Is My Pull Request Not Getting Reviewed?
A few factors affect how long your pull request might wait for review.
We must prioritize which PRs we focus on.
Our first priority is PRs we have identified as having high community engagement and broad applicability.
We put our top priorities on our roadmap and you can identify them by the `contributor/wanted` tag.
These PRs will enter our review process the fastest.
Our second priority is bug fixes.
Especially for bugs that have already been tagged with `bug/confirmed`.
These reviews enter the process quickly.
If your PR does not meet the criteria above,
it will take longer for us to review as any PRs that do meet the criteria above will be prioritized.
Additionally, during the last few weeks of a milestone, we stop reviewing PRs to reduce churn and stabilize.
We will resume after the release.
The second major reason that we deprioritize your PR is that you are not following best practices.
The most common failures to follow best practices are:
* You did not create an issue for the PR you wish to make.
If you do not create an issue before submitting your PR,
we will not be able to answer any design questions and let you know how likely your PR is to be merged.
* You created pull requests that are too large to review.
* Break your pull requests up.
If you can extract whole ideas from your pull request and send those as pull requests of their own,
you should do that instead.
It is better to have many pull requests addressing one thing than one pull request addressing many things.
* Traefik Proxy is a fast-moving codebase — lock in your changes ASAP with your small pull request,
and make merges be someone else's problem.
We want every pull request to be useful on its own,
so use your best judgment on what should be a pull request vs. a commit.
* You did not comment well.
* Comment everything.
Please remember that we are working internationally, cross-culturally, and with different use-cases.
Your reviewer will not intuitively understand the problem the same way you do or solve it the same way you would.
This is why every change you make must be explained and your strategy for coding must also be explained.
* Your tests were inadequate or absent.
* If you do not know how to test your PR, please ask!
We will be happy to help you or suggest appropriate test cases.
If you have already followed the best practices and your PR still has not received a response,
here are some things you can do to move the process along:
* If you have fixed all the issues from a review,
remember to re-request a review (using the designated button) to let your reviewer know that you are ready.
You can choose to comment with the changes you made.
* Ping `@tfny` if you have not been assigned to a reviewer.
For more information on best practices, try these links:
* [How to Write a Git Commit Message - Chris Beams](https://chris.beams.io/posts/git-commit/)
* [Distributed Git - Contributing to a Project (Commit Guidelines)](https://git-scm.com/book/en/v2/Distributed-Git-Contributing-to-a-Project)
* [What's with the 50/72 rule? - Preslav Rachev](https://preslav.me/2015/02/21/what-s-with-the-50-72-rule/)
* [A Note About Git Commit Messages - Tim Pope](https://tbaggery.com/2008/04/19/a-note-about-git-commit-messages.html)
## It's OK to Push Back
Sometimes reviewers make mistakes.
It is OK to push back on changes your reviewer requested.
If you have a good reason for doing something a certain way, you are absolutely allowed to debate the merits of a requested change.
Both the reviewer and reviewee should strive to discuss these issues in a polite and respectful manner.
You might be overruled, but you might also prevail.
We are pretty reasonable people.
Another phenomenon of open-source projects (where anyone can comment on any issue) is the dog-pile -
your pull request gets so many comments from so many people it becomes hard to follow.
In this situation, you can ask the primary reviewer (assignee) whether they want you to fork a new pull request
to clear out all the comments.
You do not have to fix every issue raised by every person who feels like commenting,
but you should answer reasonable comments with an explanation.
## Common Sense and Courtesy
No document can take the place of common sense and good taste.
Use your best judgment, while you put a bit of thought into how your work can be made easier to review.
If you do these things your pull requests will get merged with less friction.


@@ -8,11 +8,11 @@ description: "Thank you to all those who have contributed! Traefik Proxy is an o
_You_ Made It
{: .subtitle}
Traefik truly is an [open-source project](https://github.com/traefik/traefik/),
Traefik Proxy truly is an [open-source project](https://github.com/traefik/traefik/),
and wouldn't have become what it is today without the help of our [many contributors](https://github.com/traefik/traefik/graphs/contributors) (at the time of writing this),
not accounting for people having helped with issues, tests, comments, articles, ... or just enjoying it and letting others know.
not accounting for people having helped with issues, tests, comments, articles, ... or just enjoying Traefik Proxy and sharing it with others.
So once again, thank you for your invaluable help on making Traefik such a good product.
So once again, thank you for your invaluable help in making Traefik such a good product!
!!! question "Where to Go Next?"
If you want to:


@@ -4,9 +4,9 @@ This page is maintained and updated periodically to reflect our roadmap and any
| Feature | Deprecated | End of Support | Removal |
|---------------------------------------------------------------|------------|----------------|---------|
| [Pilot Dashboard (Metrics)](#pilot-dashboard-metrics) | 2.7 | 2.8 | 2.9 |
| [Pilot Plugins](#pilot-plugins) | 2.7 | 2.8 | 2.9 |
| [Consul Enterprise Namespaces](#consul-enterprise-namespaces) | 2.8 | TBD | TBD |
| [Pilot Dashboard (Metrics)](#pilot-dashboard-metrics) | 2.7 | 2.8 | 3.0 |
| [Pilot Plugins](#pilot-plugins) | 2.7 | 2.8 | 3.0 |
| [Consul Enterprise Namespace](#consul-enterprise-namespace) | 2.8 | N/A | 3.0 |
## Impact
@@ -20,7 +20,7 @@ In 2.9, the Pilot platform and all Traefik integration code will be permanently
Starting on 2.7 the pilot token will not be a requirement anymore.
At 2.9, a new plugin catalog home should be available, decoupled from pilot.
### Consul Enterprise Namespaces
### Consul Enterprise Namespace
Starting with 2.8, the `namespace` option of the Consul and Consul Catalog providers is deprecated;
please use the `namespaces` option instead.
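For illustration, a minimal sketch of the migration for the Consul Catalog provider; the namespace names are placeholders:

```yaml
providers:
  consulCatalog:
    # namespace: ns1     # deprecated since 2.8
    namespaces:          # replacement option, accepts a list
      - ns1
      - ns2
```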


@@ -6,7 +6,9 @@ Below is a non-exhaustive list of versions and their maintenance status:
| Version | Release Date | Active Support | Security Support |
|---------|--------------|--------------------|------------------|
| 2.6 | Jan 24, 2022 | Yes | Yes |
| 2.8 | Jun 29, 2022 | Yes | Yes |
| 2.7 | May 24, 2022 | Ended Jun 29, 2022 | No |
| 2.6 | Jan 24, 2022 | Ended May 24, 2022 | No |
| 2.5 | Aug 17, 2021 | Ended Jan 24, 2022 | No |
| 2.4 | Jan 19, 2021 | Ended Aug 17, 2021 | No |
| 2.3 | Sep 23, 2020 | Ended Jan 19, 2021 | No |


@@ -93,3 +93,18 @@ All available environment variables can be found [here](../reference/static-conf
All the configuration options are documented in their related section.
You can browse the available features in the menu, the [providers](../providers/overview.md), or the [routing section](../routing/overview.md) to see them in action.
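For illustration, here is the same static configuration option expressed as a file entry, a CLI flag, and an environment variable; the entry point name and port are placeholders:

```yaml
# File provider (traefik.yml)
entryPoints:
  web:
    address: ":80"
# CLI flag:             --entryPoints.web.address=:80
# Environment variable: TRAEFIK_ENTRYPOINTS_WEB_ADDRESS=:80
```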
!!! question "Using Traefik for Business Applications?"
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -179,10 +179,17 @@ And run it:
All the details are available in the [Contributing Guide](../contributing/building-testing.md)
!!! question "Using Traefik for Business?"
!!! question "Using Traefik for Business Applications?"
If you're using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/) of Traefik as your [Kubernetes Ingress](https://traefik.io/solutions/kubernetes-ingress/),
your [Docker Swarm Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/),
or your [API gateway](https://traefik.io/solutions/api-gateway/).
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -113,4 +113,20 @@ IP: 172.27.0.4
```
!!! question "Where to Go Next?"
Now that you have a basic understanding of how Traefik can automatically create the routes to your services and load balance them, it is time to dive into [the documentation](/) and let Traefik work for you!
!!! question "Using Traefik for Business Applications?"
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -666,3 +666,18 @@ If Let's Encrypt is not reachable, the following certificates will apply:
!!! important
For new (sub)domains which need Let's Encrypt authentication, the default Traefik certificate will be used until Traefik is restarted.
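For context, a minimal certificates resolver sketch using the HTTP challenge; the resolver name, email, storage path, and entry point are placeholders:

```yaml
certificatesResolvers:
  letsencrypt:
    acme:
      email: "admin@example.com"
      storage: "acme.json"
      httpChallenge:
        entryPoint: web
```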
!!! question "Using Traefik for Business Applications?"
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -13,7 +13,6 @@ TODO: add schema
-->
The RedirectScheme middleware redirects the request if the request scheme is different from the configured scheme.
The middleware does not work for websocket requests.
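A minimal sketch of the middleware in dynamic configuration; the middleware name is arbitrary and `permanent` is optional:

```yaml
http:
  middlewares:
    redirect-to-https:
      redirectScheme:
        scheme: https
        permanent: true
```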
!!! warning "When behind another reverse-proxy"


@@ -1,54 +1,32 @@
---
title: "Traefik Plugins Documentation"
description: "Learn how to connect Traefik Proxy with Pilot, a SaaS platform that offers features for metrics, alerts, and plugins. Read the technical documentation."
description: "Learn how to use Traefik Plugins. Read the technical documentation."
---
# Plugins and Traefik Pilot
# Traefik Plugins and the Plugin Catalog
Traefik Pilot is a software-as-a-service (SaaS) platform that connects to Traefik to extend its capabilities.
It offers a number of features to enhance observability and control of Traefik through a global control plane and dashboard, including:
Plugins are a powerful feature for extending Traefik with custom features and behaviors.
The [Plugin Catalog](https://plugins.traefik.io/) is a software-as-a-service (SaaS) platform that provides an exhaustive list of the existing plugins.
* Metrics for network activity of Traefik proxies and groups of proxies
* Alerts for service health issues and security vulnerabilities
* Plugins that extend the functionality of Traefik
??? note "Plugin Catalog Access"
You can reach the [Plugin Catalog](https://plugins.traefik.io/) from the Traefik Dashboard using the `Plugins` menu entry.
!!! important "Learn More About Traefik Pilot"
This section is intended only as a brief overview for Traefik users who are not familiar with Traefik Pilot.
To explore all that Traefik Pilot has to offer, please consult the [Traefik Pilot Documentation](https://doc.traefik.io/traefik-pilot/)
To add a new plugin to a Traefik instance, you must change that instance's static configuration.
Each plugin's **Install** section provides a static configuration example.
Many plugins have their own section in the Traefik dynamic configuration.
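As a sketch of that shape (module name, version, and options are illustrative), a plugin is declared in the static configuration and then referenced as a middleware in the dynamic configuration:

```yaml
# Static configuration
experimental:
  plugins:
    example:
      moduleName: "github.com/traefik/plugindemo"
      version: "v0.2.1"
```

```yaml
# Dynamic configuration: use the declared plugin as a middleware
http:
  middlewares:
    my-plugin:
      plugin:
        example:
          headers:
            Foo: Bar
```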
!!! Note "Prerequisites"
Traefik Pilot is compatible with Traefik Proxy 2.3 or later.
## Connecting to Traefik Pilot
To connect your Traefik proxies to Traefik Pilot, login or create an account at the [Traefik Pilot homepage](https://pilot.traefik.io) and choose **Register New Traefik Instance**.
To complete the connection, Traefik Pilot will issue a token that must be added to your Traefik static configuration, according to the instructions provided by the Traefik Pilot dashboard.
For more information, consult the [Quick Start Guide](https://doc.traefik.io/traefik-pilot/connecting/)
Health and security alerts for registered Traefik instances can be enabled from the Preferences in your [Traefik Pilot Profile](https://pilot.traefik.io/profile).
## Plugins
Plugins are available to any Traefik proxies that are connected to Traefik Pilot.
They are a powerful feature for extending Traefik with custom features and behaviors.
You can browse community-contributed plugins from the catalog in the [Traefik Pilot Dashboard](https://pilot.traefik.io/plugins).
To add a new plugin to a Traefik instance, you must modify that instance's static configuration.
The code to be added is provided for you when you choose **Install the Plugin** from the Traefik Pilot dashboard.
To learn more about Traefik plugins, consult the [documentation](https://doc.traefik.io/traefik-pilot/plugins/overview/).
To learn more about Traefik plugins, consult the [documentation](https://plugins.traefik.io/install).
!!! danger "Experimental Features"
Plugins can potentially modify the behavior of Traefik in unforeseen ways.
Plugins can change the behavior of Traefik in unforeseen ways.
Exercise caution when adding new plugins to production Traefik instances.
## Build Your Own Plugins
Traefik users can create their own plugins and contribute them to the Traefik Pilot catalog to share them with the community.
Traefik users can create their own plugins and share them with the community using the Plugin Catalog.
Traefik plugins are loaded dynamically.
Traefik will load plugins dynamically.
They need not be compiled, and no complex toolchain is necessary to build them.
The experience of implementing a Traefik plugin is comparable to writing a web browser extension.
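For local development, a plugin can also be loaded from the `plugins-local` directory (the same directory ignored in `.gitignore` above) instead of the catalog; a minimal sketch, with an illustrative module name:

```yaml
experimental:
  localPlugins:
    example:
      moduleName: "github.com/example/demoplugin"
```

In this local mode, the plugin sources are read from `./plugins-local/src/<moduleName>`, relative to Traefik's working directory.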
To learn more and see code for example Traefik plugins, please see the [developer documentation](https://doc.traefik.io/traefik-pilot/plugins/plugin-dev/).
To learn more about Traefik plugin creation, please refer to the [developer documentation](https://plugins.traefik.io/create).


@@ -742,3 +742,18 @@ providers:
```bash tab="CLI"
--providers.docker.allowEmptyServices=true
```
!!! question "Using Traefik for Business Applications?"
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -501,3 +501,18 @@ providers:
To learn more about the various aspects of the Ingress specification that Traefik supports,
many examples of Ingresses definitions are located in the test [examples](https://github.com/traefik/traefik/tree/v2.8/pkg/provider/kubernetes/ingress/fixtures) of the Traefik repository.
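For orientation, a minimal Ingress that Traefik picks up; the host, service, and entry point annotation are illustrative:

```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: whoami
  annotations:
    traefik.ingress.kubernetes.io/router.entrypoints: websecure
spec:
  rules:
    - host: example.com
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: whoami
                port:
                  number: 80
```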
!!! question "Using Traefik for Business Applications?"
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -1904,8 +1904,8 @@ spec:
schema:
openAPIV3Schema:
description: 'TraefikService is the CRD implementation of a Traefik Service.
TraefikService object allows to: - Apply weight to Services on load-balancing -
Mirror traffic on services More info: https://doc.traefik.io/traefik/v2.8/routing/providers/kubernetes-crd/#kind-traefikservice'
TraefikService object allows to: - Apply weight to Services on load-balancing
- Mirror traffic on services More info: https://doc.traefik.io/traefik/v2.8/routing/providers/kubernetes-crd/#kind-traefikservice'
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation


@@ -20,8 +20,8 @@ spec:
schema:
openAPIV3Schema:
description: 'TraefikService is the CRD implementation of a Traefik Service.
TraefikService object allows to: - Apply weight to Services on load-balancing -
Mirror traffic on services More info: https://doc.traefik.io/traefik/v2.8/routing/providers/kubernetes-crd/#kind-traefikservice'
TraefikService object allows to: - Apply weight to Services on load-balancing
- Mirror traffic on services More info: https://doc.traefik.io/traefik/v2.8/routing/providers/kubernetes-crd/#kind-traefikservice'
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation


@@ -967,3 +967,18 @@ entryPoints:
entrypoints.foo.address=:8000/udp
entrypoints.foo.udp.timeout=10s
```
!!! question "Using Traefik for Business Applications?"
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -1329,3 +1329,18 @@ There must be one (and only one) UDP [service](../services/index.md) referenced
Services are the target for the router.
!!! important "UDP routers can only target UDP services (and not HTTP or TCP services)."
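A minimal sketch of a UDP router referencing exactly one UDP service; the entry point name and server address are placeholders:

```yaml
udp:
  routers:
    udp-router:
      entryPoints:
        - streaming
      service: udp-service
  services:
    udp-service:
      loadBalancer:
        servers:
          - address: "10.0.0.10:4242"
```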
!!! question "Using Traefik for Business Applications?"
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -1645,3 +1645,18 @@ udp:
[[udp.services.appv2.loadBalancer.servers]]
address = "private-ip-server-2:8080/"
```
!!! question "Using Traefik for Business Applications?"
If you are using Traefik for commercial applications,
consider the [Enterprise Edition](https://traefik.io/traefik-enterprise/).
You can use it as your:
- [Kubernetes Ingress Controller](https://traefik.io/solutions/kubernetes-ingress/)
- [Load Balancer](https://traefik.io/solutions/docker-swarm-ingress/)
- [API Gateway](https://traefik.io/solutions/api-gateway/)
Traefik Enterprise enables centralized access management,
distributed Let's Encrypt,
and other advanced capabilities.
Learn more in [this 15-minute technical walkthrough](https://info.traefik.io/watch-traefikee-demo).


@@ -138,7 +138,7 @@ nav:
- 'InFlightConn': 'middlewares/tcp/inflightconn.md'
- 'IpWhitelist': 'middlewares/tcp/ipwhitelist.md'
- 'Traefik Hub': 'traefik-hub/index.md'
- 'Plugins & Traefik Pilot': 'plugins/index.md'
- 'Plugins & Plugin Catalog': 'plugins/index.md'
- 'Operations':
- 'CLI': 'operations/cli.md'
- 'Dashboard' : 'operations/dashboard.md'


@@ -8,7 +8,6 @@
| mkdocs-material | [documentation][mkdocs-material] | [Sources][mkdocs-material-src] |
| pymdown-extensions| [documentation][pymdown-extensions] | [Sources][pymdown-extensions-src] |
[mkdocs]: https://www.mkdocs.org "Mkdocs"
[mkdocs-src]: https://github.com/mkdocs/mkdocs "Mkdocs - Sources"


@@ -1,16 +0,0 @@
#!/bin/bash
#
# This script is run in netlify environment to build and validate
# the website for documentation
CURRENT_DIR="$(cd "$(dirname "${0}")" && pwd -P)"
#### Build website
# Provide the URL for this deployment to Mkdocs
echo "${DEPLOY_PRIME_URL}" > "${CURRENT_DIR}/../CNAME"
sed -i "s#site_url:.*#site_url: ${DEPLOY_PRIME_URL}#" "${CURRENT_DIR}/../mkdocs.yml"
# Build
mkdocs build
exit 0

go.mod

@@ -36,7 +36,7 @@ require (
github.com/influxdata/influxdb1-client v0.0.0-20191209144304-8bf82d3c094d
github.com/instana/go-sensor v1.38.3
github.com/klauspost/compress v1.14.2
github.com/kvtools/valkeyrie v0.4.0
github.com/kvtools/valkeyrie v0.4.1
github.com/lucas-clemente/quic-go v0.28.0
github.com/mailgun/ttlmap v0.0.0-20170619185759-c1c17f74874f
github.com/miekg/dns v1.1.47
@@ -49,11 +49,11 @@ require (
github.com/patrickmn/go-cache v2.1.0+incompatible
github.com/pires/go-proxyproto v0.6.1
github.com/pmezard/go-difflib v1.0.0
github.com/prometheus/client_golang v1.11.0
github.com/prometheus/client_golang v1.12.2-0.20220704083116-e8f91604d835
github.com/prometheus/client_model v0.2.0
github.com/rancher/go-rancher-metadata v0.0.0-20200311180630-7f4c936a06ac
github.com/sirupsen/logrus v1.8.1
github.com/stretchr/testify v1.7.1
github.com/stretchr/testify v1.7.5
github.com/stvp/go-udp-testing v0.0.0-20191102171040-06b61409b154
github.com/traefik/paerser v0.1.5
github.com/traefik/yaegi v0.13.0
@@ -73,7 +73,7 @@ require (
google.golang.org/grpc v1.38.0
gopkg.in/DataDog/dd-trace-go.v1 v1.38.1
gopkg.in/fsnotify.v1 v1.4.7
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b
gopkg.in/yaml.v3 v3.0.1
k8s.io/api v0.22.1
k8s.io/apiextensions-apiserver v0.21.3
k8s.io/apimachinery v0.22.1
@@ -155,7 +155,7 @@ require (
github.com/fsnotify/fsnotify v1.5.1 // indirect
github.com/fvbommel/sortorder v1.0.1 // indirect
github.com/go-errors/errors v1.0.1 // indirect
github.com/go-logfmt/logfmt v0.5.0 // indirect
github.com/go-logfmt/logfmt v0.5.1 // indirect
github.com/go-logr/logr v0.4.0 // indirect
github.com/go-resty/resty/v2 v2.1.1-0.20191201195748-d7b97669fe48 // indirect
github.com/go-task/slim-sprig v0.0.0-20210107165309-348f09dbbbc0 // indirect
@@ -267,8 +267,8 @@ require (
github.com/philhofer/fwd v1.1.1 // indirect
github.com/pkg/errors v0.9.1 // indirect
github.com/pquerna/otp v1.3.0 // indirect
github.com/prometheus/common v0.26.0 // indirect
github.com/prometheus/procfs v0.6.0 // indirect
github.com/prometheus/common v0.35.0 // indirect
github.com/prometheus/procfs v0.7.3 // indirect
github.com/sacloud/libsacloud v1.36.2 // indirect
github.com/sanathkr/go-yaml v0.0.0-20170819195128-ed9d249f429b // indirect
github.com/santhosh-tekuri/jsonschema v1.2.4 // indirect
@@ -282,7 +282,7 @@ require (
github.com/spf13/cast v1.3.1 // indirect
github.com/spf13/cobra v1.2.1 // indirect
github.com/spf13/pflag v1.0.5 // indirect
github.com/stretchr/objx v0.3.0 // indirect
github.com/stretchr/objx v0.4.0 // indirect
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/common v1.0.287 // indirect
github.com/tencentcloud/tencentcloud-sdk-go/tencentcloud/dnspod v1.0.287 // indirect
github.com/theupdateframework/notary v0.6.1 // indirect
@@ -308,7 +308,7 @@ require (
go.uber.org/zap v1.18.1 // indirect
golang.org/x/crypto v0.0.0-20220427172511-eb4f295cb31f // indirect
golang.org/x/lint v0.0.0-20210508222113-6edffad5e616 // indirect
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602 // indirect
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b // indirect
golang.org/x/sync v0.0.0-20210220032951-036812b2e83c // indirect
golang.org/x/sys v0.0.0-20220520151302-bc2c85ada10a // indirect
golang.org/x/term v0.0.0-20210927222741-03fcf44c2211 // indirect
@@ -317,7 +317,7 @@ require (
google.golang.org/api v0.44.0 // indirect
google.golang.org/appengine v1.6.7 // indirect
google.golang.org/genproto v0.0.0-20210602131652-f16073e35f0c // indirect
google.golang.org/protobuf v1.27.1 // indirect
google.golang.org/protobuf v1.28.0 // indirect
gopkg.in/inf.v0 v0.9.1 // indirect
gopkg.in/ini.v1 v1.62.0 // indirect
gopkg.in/ns1/ns1-go.v2 v2.6.2 // indirect

go.sum

@@ -686,11 +686,13 @@ github.com/go-kit/kit v0.9.0/go.mod h1:xBxKIO96dXMWWy0MnWVtmwkA9/13aqxPnvrjFYMA2
github.com/go-kit/kit v0.10.1-0.20200915143503-439c4d2ed3ea h1:CnEQOUv4ilElSwFB9g/lVmz206oLE4aNZDYngIY1Gvg=
github.com/go-kit/kit v0.10.1-0.20200915143503-439c4d2ed3ea/go.mod h1:xUsJbQ/Fp4kEt7AFgCuvyX4a71u8h9jB8tj/ORgOZ7o=
github.com/go-kit/log v0.1.0/go.mod h1:zbhenjAZHb184qTLMA9ZjW7ThYL0H2mk7Q6pNt4vbaY=
github.com/go-kit/log v0.2.0/go.mod h1:NwTd00d/i8cPZ3xOwwiv2PO5MOcx78fFErGNcVmBjv0=
github.com/go-ldap/ldap/v3 v3.1.3/go.mod h1:3rbOH3jRS2u6jg2rJnKAMLE/xQyCKIveG2Sa/Cohzb8=
github.com/go-logfmt/logfmt v0.3.0/go.mod h1:Qt1PoO58o5twSAckw1HlFXLmHsOX5/0LbT9GBnD5lWE=
github.com/go-logfmt/logfmt v0.4.0/go.mod h1:3RMwSq7FuexP4Kalkev3ejPJsZTpXXBr9+V4qmtdjCk=
github.com/go-logfmt/logfmt v0.5.0 h1:TrB8swr/68K7m9CcGut2g3UOihhbcbiMAYiuTXdEih4=
github.com/go-logfmt/logfmt v0.5.0/go.mod h1:wCYkCAKZfumFQihp8CzCvQ3paCTfi41vtzG1KdI/P7A=
github.com/go-logfmt/logfmt v0.5.1 h1:otpy5pqBCBZ1ng9RQ0dPu4PN7ba75Y/aA+UpowDyNVA=
github.com/go-logfmt/logfmt v0.5.1/go.mod h1:WYhtIu8zTZfxdn5+rREduYbwxfcBr/Vr6KEVveWlfTs=
github.com/go-logr/logr v0.1.0/go.mod h1:ixOQHD9gLJUVQQ2ZOR7zLEifBX6tGkNJF4QyIY7sIas=
github.com/go-logr/logr v0.2.0/go.mod h1:z6/tIYblkpsD+a4lm/fGIIU9mZ+XfAiaFtq7xTgseGU=
github.com/go-logr/logr v0.4.0 h1:K7/B1jt6fIBQVd4Owv2MqGQClcgf0R266+7C/QjRcLc=
@@ -1275,8 +1277,8 @@ github.com/kr/pty v1.1.8/go.mod h1:O1sed60cT9XZ5uDucP5qwvh+TE3NnUj51EiZO/lmSfw=
github.com/kr/text v0.1.0/go.mod h1:4Jbv+DJW3UT/LiOwJeYQe1efqtUx/iVham/4vfdArNI=
github.com/kr/text v0.2.0 h1:5Nx0Ya0ZqY2ygV366QzturHI13Jq95ApcVaJBhpS+AY=
github.com/kr/text v0.2.0/go.mod h1:eLer722TekiGuMkidMxC/pM04lWEeraHUUmBw8l2grE=
github.com/kvtools/valkeyrie v0.4.0 h1:0lfG8XpxL28YCOUmSiFsyvgTSDxEQzQOtgvZrJ3sIm8=
github.com/kvtools/valkeyrie v0.4.0/go.mod h1:rNvw3wTLExfPgqcn+y6bpBZP8MYULZ4X1SAa2zEDg2o=
github.com/kvtools/valkeyrie v0.4.1 h1:S0lOF4OOmPFd1i37vHmgCVr7/2ygwL4grKbHT5vGQ6M=
github.com/kvtools/valkeyrie v0.4.1/go.mod h1:E34+bty7IqLoFkOqGD9AHDE4Bw4APJtoKmj0cqJJ7ug=
github.com/kylelemons/go-gypsy v0.0.0-20160905020020-08cad365cd28/go.mod h1:T/T7jsxVqf9k/zYOqbgNAsANsjxTd1Yq3htjDhQ1H0c=
github.com/kylelemons/godebug v0.0.0-20170820004349-d65d576e9348/go.mod h1:B69LEHPfb2qLo0BaaOLcbitczOKLWTsrBG9LczfCD4k=
github.com/labbsr0x/bindman-dns-webhook v1.0.2 h1:I7ITbmQPAVwrDdhd6dHKi+MYJTJqPCK0jE6YNBAevnk=
@@ -1655,8 +1657,10 @@ github.com/prometheus/client_golang v1.1.0/go.mod h1:I1FGZT9+L76gKKOs5djB6ezCbFQ
github.com/prometheus/client_golang v1.3.0/go.mod h1:hJaj2vgQTGQmVCsAACORcieXFeDPbaTKGT+JTgUa3og=
github.com/prometheus/client_golang v1.4.0/go.mod h1:e9GMxYsXl05ICDXkRhurwBS4Q3OK1iX/F2sw+iXX5zU=
github.com/prometheus/client_golang v1.7.1/go.mod h1:PY5Wy2awLA44sXw4AOSfFBetzPP4j5+D6mVACh+pe2M=
github.com/prometheus/client_golang v1.11.0 h1:HNkLOAEQMIDv/K+04rukrLx6ch7msSRwf3/SASFAGtQ=
github.com/prometheus/client_golang v1.11.0/go.mod h1:Z6t4BnS23TR94PD6BsDNk8yVqroYurpAkEiz0P2BEV0=
github.com/prometheus/client_golang v1.12.1/go.mod h1:3Z9XVyYiZYEO+YQWt3RD2R3jrbd179Rt297l4aS6nDY=
github.com/prometheus/client_golang v1.12.2-0.20220704083116-e8f91604d835 h1:sYuFGkrz0PtewSFk0Bg7p7jjiiklc6FUIWz+mFGQfD0=
github.com/prometheus/client_golang v1.12.2-0.20220704083116-e8f91604d835/go.mod h1:RjnYTcBFM8s+WRft6oBqj4p5OgXJASPw5UFiI7w+GSs=
github.com/prometheus/client_model v0.0.0-20171117100541-99fa1f4be8e5/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20180712105110-5c3871d89910/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
github.com/prometheus/client_model v0.0.0-20190115171406-56726106282f/go.mod h1:MbSGuTsp3dbXC40dX6PRTWyKYBIrTGTE9sqQNg2J8bo=
@@ -1676,8 +1680,10 @@ github.com/prometheus/common v0.6.0/go.mod h1:eBmuwkDJBwy6iBfxCBob6t6dR6ENT/y+J+
github.com/prometheus/common v0.7.0/go.mod h1:DjGbpBbp5NYNiECxcL/VnbXCCaQpKd3tt26CguLLsqA=
github.com/prometheus/common v0.9.1/go.mod h1:yhUN8i9wzaXS3w1O07YhxHEBxD+W35wd8bs7vj7HSQ4=
github.com/prometheus/common v0.10.0/go.mod h1:Tlit/dnDKsSWFlCLTWaA1cyBgKHSMdTB80sz/V91rCo=
github.com/prometheus/common v0.26.0 h1:iMAkS2TDoNWnKM+Kopnx/8tnEStIfpYA0ur0xQzzhMQ=
github.com/prometheus/common v0.26.0/go.mod h1:M7rCNAaPfAosfx8veZJCuw84e35h3Cfd9VFqTh1DIvc=
github.com/prometheus/common v0.32.1/go.mod h1:vu+V0TpY+O6vW9J44gczi3Ap/oXXR10b+M/gUGO4Hls=
github.com/prometheus/common v0.35.0 h1:Eyr+Pw2VymWejHqCugNaQXkAi6KayVNxaHeu6khmFBE=
github.com/prometheus/common v0.35.0/go.mod h1:phzohg0JFMnBEFGxTDbfu3QyL5GI8gTQJFhYO5B3mfA=
github.com/prometheus/procfs v0.0.0-20180125133057-cb4147076ac7/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20180725123919-05ee40e3a273/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
github.com/prometheus/procfs v0.0.0-20181005140218-185b4288413d/go.mod h1:c3At6R/oaqEKCNdg8wHV1ftS6bRYblBhIjjI8uT2IGk=
@@ -1692,8 +1698,9 @@ github.com/prometheus/procfs v0.0.5/go.mod h1:4A/X28fw3Fc593LaREMrKMqOKvUAntwMDa
github.com/prometheus/procfs v0.0.8/go.mod h1:7Qr8sr6344vo1JqZ6HhLceV9o3AJ1Ff+GxbHq6oeK9A=
github.com/prometheus/procfs v0.1.3/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.2.0/go.mod h1:lV6e/gmhEcM9IjHGsFOCxxuZ+z1YqCvr4OA4YeYWdaU=
github.com/prometheus/procfs v0.6.0 h1:mxy4L2jP6qMonqmq+aTtOx1ifVWUgG/TAmntgbh3xv4=
github.com/prometheus/procfs v0.6.0/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/procfs v0.7.3 h1:4jVXhlkAyzOScmCkXBTOLRLTz8EeU+eyjrwB/EPq0VU=
github.com/prometheus/procfs v0.7.3/go.mod h1:cz+aTbrPOrUb4q7XlbU9ygM+/jj0fzG6c1xBZuNvfVA=
github.com/prometheus/tsdb v0.7.1/go.mod h1:qhTCs0VvXwvX/y3TZrWD7rabWM+ijKTux40TwIPHuXU=
github.com/qri-io/jsonpointer v0.1.0/go.mod h1:DnJPaYgiKu56EuDp8TU5wFLdZIcAnb/uH9v37ZaMV64=
github.com/qri-io/jsonschema v0.1.1/go.mod h1:QpzJ6gBQ0GYgGmh7mDQ1YsvvhSgE4rYj0k8t5MBOmUY=
@ -1846,8 +1853,9 @@ github.com/stretchr/objx v0.0.0-20180129172003-8a3f7159479f/go.mod h1:HFkY916IF+
github.com/stretchr/objx v0.1.0/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.1.1/go.mod h1:HFkY916IF+rwdDfMAkV7OtwuqBVzrE8GR6GFx+wExME=
github.com/stretchr/objx v0.2.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.3.0 h1:NGXK3lHquSN08v5vWalVI/L8XU9hdzE/G6xsrze47As=
github.com/stretchr/objx v0.3.0/go.mod h1:qt09Ya8vawLte6SNmTgCsAVtYtaKzEcn8ATUoHMkEqE=
github.com/stretchr/objx v0.4.0 h1:M2gUjqZET1qApGOWNSnZ49BAIMX4F/1plDv3+l31EJ4=
github.com/stretchr/objx v0.4.0/go.mod h1:YvHI0jy2hoMjB+UWwv71VJQ9isScKT/TqJzVSSt89Yw=
github.com/stretchr/testify v0.0.0-20151208002404-e3a8ff8ce365/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v0.0.0-20180303142811-b89eecf5ca5d/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
github.com/stretchr/testify v1.2.1/go.mod h1:a8OnRcib4nhh0OaRAV+Yts87kKdq0PP7pXfy6kDkUVs=
@ -1857,8 +1865,9 @@ github.com/stretchr/testify v1.4.0/go.mod h1:j7eGeouHqKxXV5pUuKE4zz7dFj8WfuZ+81P
github.com/stretchr/testify v1.5.1/go.mod h1:5W2xD1RspED5o8YsWQXVCued0rvSQ+mT+I5cxcmMvtA=
github.com/stretchr/testify v1.6.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.0/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.1 h1:5TQK59W5E3v0r2duFAb7P95B6hEeOyEnHRa8MjYSMTY=
github.com/stretchr/testify v1.7.1/go.mod h1:6Fq8oRcR53rry900zMqJjRRixrwX3KX962/h/Wwjteg=
github.com/stretchr/testify v1.7.5 h1:s5PTfem8p8EbKQOctVV53k6jCJt3UX4IEJzwh+C324Q=
github.com/stretchr/testify v1.7.5/go.mod h1:yNjHg4UonilssWZ8iaSj1OCr/vHnekPRkoO+kdMU+MU=
github.com/stvp/go-udp-testing v0.0.0-20191102171040-06b61409b154 h1:XGopsea1Dw7ecQ8JscCNQXDGYAKDiWjDeXnpN/+BY9g=
github.com/stvp/go-udp-testing v0.0.0-20191102171040-06b61409b154/go.mod h1:7jxmlfBCDBXRzr0eAQJ48XC1hBu1np4CS5+cHEYfwpc=
github.com/subosito/gotenv v1.2.0 h1:Slr1R9HxAlEKefgq5jn9U+DnETlIUa6HfgEzj0g5d7s=
@ -2232,9 +2241,12 @@ golang.org/x/net v0.0.0-20210410081132-afb366fc7cd1/go.mod h1:9tjilg8BloeKEkVJvy
golang.org/x/net v0.0.0-20210428140749-89ef3d95e781/go.mod h1:OJAsFXCWl8Ukc7SiCT/9KSuxbyM7479/AVlXFRxuMCk=
golang.org/x/net v0.0.0-20210510120150-4163338589ed/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210520170846-37e1c6afe023/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210525063256-abc453219eb5/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20210726213435-c6fcb2dbf985/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211020060615-d418f374d309/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20211112202133-69e39bad7dc2/go.mod h1:9nx3DQGgdP8bBQD5qxJ1jj9UTztislL4KSBs9R2vV5Y=
golang.org/x/net v0.0.0-20220127200216-cd36cc0744dd/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220225172249-27dd8689420f/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220425223048-2871e0cb64e4/go.mod h1:CfG3xpIq0wQ8r1q4Su4UZFWDARRcnwPjda9FqA0JpMk=
golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e h1:TsQ7F31D3bUCLeqPT0u+yjp1guoArKaNKmCr22PYgTQ=
golang.org/x/net v0.0.0-20220624214902-1bab6f366d9e/go.mod h1:XRhObCWvk6IyKnWLug+ECip1KBveYUHfp+8e9klMJ9c=
@ -2252,8 +2264,10 @@ golang.org/x/oauth2 v0.0.0-20201208152858-08078c50e5b5/go.mod h1:KelEdhl1UZF7XfJ
golang.org/x/oauth2 v0.0.0-20210218202405-ba52d332ba99/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210220000619-9bb904979d93/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210313182246-cd4f82c27b84/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602 h1:0Ja1LBD+yisY6RWM/BH7TJVXWsSjs2VwBSmvSX4HdBc=
golang.org/x/oauth2 v0.0.0-20210402161424-2e8d93401602/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20210514164344-f6687ab2804c/go.mod h1:KelEdhl1UZF7XfJ4dDtk6s++YSgaE7mD/BuKKDLBl4A=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b h1:clP8eMhB30EHdc0bd2Twtq6kgU7yl5ub2cQLSdrv1Dg=
golang.org/x/oauth2 v0.0.0-20220223155221-ee480838109b/go.mod h1:DAh4E804XQdzx2j+YRIaUnCqCV2RuMz24cGBJ5QYIrc=
golang.org/x/perf v0.0.0-20180704124530-6e6d33e29852/go.mod h1:JLpeXjPJfIyPr5TlbXLkXWLhP8nz10XfvxElABhCtcw=
golang.org/x/sync v0.0.0-20180314180146-1d60e4601c6f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
golang.org/x/sync v0.0.0-20181108010431-42b317875d0f/go.mod h1:RxMgew5VJxzue5/jJTE5uejpjVlOe/izrB70Jof72aM=
@ -2392,6 +2406,7 @@ golang.org/x/sys v0.0.0-20210630005230-0f9fa26af87c/go.mod h1:oPkhp1MJrh7nUepCBc
golang.org/x/sys v0.0.0-20210927094055-39ccf1dd6fa6/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211103235746-7861aae1554b/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20211216021012-1d35b9e2eb4e/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220114195835-da31bd327af9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220209214540-3681064d5158/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220227234510-4e6760a101f9/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
golang.org/x/sys v0.0.0-20220307203707-22a9840ba4d7/go.mod h1:oPkhp1MJrh7nUepCBck5+mAzfO9JrbApNNgaTdGDITg=
@ -2660,8 +2675,9 @@ google.golang.org/protobuf v1.24.0/go.mod h1:r/3tXBNzIEhYS9I1OUVjXDlt8tc493IdKGj
google.golang.org/protobuf v1.25.0/go.mod h1:9JNX74DMeImyA3h4bdi1ymwjUzf21/xIlbajtzgsN7c=
google.golang.org/protobuf v1.26.0-rc.1/go.mod h1:jlhhOSvTdKEhbULTjvd4ARK9grFBp09yW+WbY/TyQbw=
google.golang.org/protobuf v1.26.0/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.27.1 h1:SnqbnDw1V7RiZcXPx5MEeqPv2s79L9i7BJUlG/+RurQ=
google.golang.org/protobuf v1.27.1/go.mod h1:9q0QmTI4eRPtz6boOQmLYwt+qCgq0jsYwAQnmE0givc=
google.golang.org/protobuf v1.28.0 h1:w43yiav+6bVFTBQFZX0r7ipe9JQ1QsbMgHwbBziscLw=
google.golang.org/protobuf v1.28.0/go.mod h1:HV8QOd/L58Z+nl8r43ehVNZIU/HEI6OcFqwMG9pJV4I=
gopkg.in/DataDog/dd-trace-go.v1 v1.38.1 h1:nAKgcpJLXRHF56cKCP3bN8gTTQmmNAZFEblbyGKhKTo=
gopkg.in/DataDog/dd-trace-go.v1 v1.38.1/go.mod h1:GBhK4yaMJ1h329ivtKAqRNe1EZ944UnZwtz5lh7CnJc=
gopkg.in/airbrake/gobrake.v2 v2.0.9/go.mod h1:/h5ZAUhDkGaJfjzjKLSjv6zCL6O0LLBxU4K+aSYdM/U=
@ -2733,8 +2749,9 @@ gopkg.in/yaml.v2 v2.4.0 h1:D8xgwECY7CYvx+Y2n4sBz93Jn9JRvxdiyyo8CTfuKaY=
gopkg.in/yaml.v2 v2.4.0/go.mod h1:RDklbk79AGWmwhnvt/jBztapEOGDOx6ZbXqjP6csGnQ=
gopkg.in/yaml.v3 v3.0.0-20200313102051-9f266ea9e77c/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20200615113413-eeeca48fe776/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b h1:h8qDotaEPuJATrMmW04NCwg7v22aHH28wwpauUhK9Oo=
gopkg.in/yaml.v3 v3.0.0-20210107192922-496545a6307b/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gopkg.in/yaml.v3 v3.0.1 h1:fxVm/GzAzEWqLHuvctI91KS9hhNmmWOoWu0XTYJS7CA=
gopkg.in/yaml.v3 v3.0.1/go.mod h1:K4uyk7z7BCEPqu6E+C64Yfv1cQ7kz7rIZviUmN+EgEM=
gorm.io/driver/mysql v1.0.1/go.mod h1:KtqSthtg55lFp3S5kUXqlGaelnWpKitn4k1xZTnoiPw=
gorm.io/driver/postgres v1.0.0/go.mod h1:wtMFcOzmuA5QigNsgEIb7O5lhvH1tHAF1RbWmLWV4to=
gorm.io/driver/sqlserver v1.0.4/go.mod h1:ciEo5btfITTBCj9BkoUVDvgQbUdLWQNqdFY5OGuGnRg=

View file

@ -1904,8 +1904,8 @@ spec:
schema:
openAPIV3Schema:
description: 'TraefikService is the CRD implementation of a Traefik Service.
TraefikService object allows to: - Apply weight to Services on load-balancing -
Mirror traffic on services More info: https://doc.traefik.io/traefik/v2.8/routing/providers/kubernetes-crd/#kind-traefikservice'
TraefikService object allows to: - Apply weight to Services on load-balancing
- Mirror traffic on services More info: https://doc.traefik.io/traefik/v2.8/routing/providers/kubernetes-crd/#kind-traefikservice'
properties:
apiVersion:
description: 'APIVersion defines the versioned schema of this representation

View file

@ -27,7 +27,7 @@
[tcp.services]
[tcp.services.whoami-no-tls.loadBalancer]
[[tcp.services.whoami-no-tls.loadBalancer.servers]]
address = "whoami-no-tls:8080"
address = "{{ .WhoamiNoTLSAddress }}"
[http]
[http.routers]
@ -40,4 +40,4 @@
[http.services]
[http.services.whoami.loadBalancer]
[[http.services.whoami.loadBalancer.servers]]
url = "http://whoami:80"
url = "{{ .WhoamiURL }}"

View file

@ -27,4 +27,4 @@
[tcp.services]
[tcp.services.whoami-no-tls.loadBalancer]
[[tcp.services.whoami-no-tls.loadBalancer.servers]]
address = "whoami-banner:8080"
address = "{{ .WhoamiBannerAddress }}"

View file

@ -38,11 +38,11 @@
[tcp.services]
[tcp.services.whoami-a.loadBalancer]
[[tcp.services.whoami-a.loadBalancer.servers]]
address = "whoami-a:8080"
address = "{{ .WhoamiA }}"
[tcp.services.whoami-b.loadBalancer]
[[tcp.services.whoami-b.loadBalancer.servers]]
address = "whoami-b:8080"
address = "{{ .WhoamiB }}"
[tcp.middlewares]
[tcp.middlewares.allowing-ipwhitelist.ipWhiteList]

View file

@ -37,7 +37,7 @@
[http.services]
[http.services.whoami.loadBalancer]
[[http.services.whoami.loadBalancer.servers]]
url = "http://whoami:80"
url = "{{ .Whoami }}"
[tcp]
[tcp.routers]
[tcp.routers.to-whoami-a]
@ -62,15 +62,15 @@
[tcp.services.whoami-a.loadBalancer]
[[tcp.services.whoami-a.loadBalancer.servers]]
address = "whoami-a:8080"
address = "{{ .WhoamiA }}"
[tcp.services.whoami-b.loadBalancer]
[[tcp.services.whoami-b.loadBalancer.servers]]
address = "whoami-b:8080"
address = "{{ .WhoamiB }}"
[tcp.services.whoami-no-cert.loadBalancer]
[[tcp.services.whoami-no-cert.loadBalancer.servers]]
address = "whoami-no-cert:8080"
address = "{{ .WhoamiNoCert }}"
[[tls.certificates]]
certFile = "fixtures/tcp/whoami-c.crt"

View file

@ -36,7 +36,7 @@
[tcp.services.whoami-no-cert]
[tcp.services.whoami-no-cert.loadBalancer]
[[tcp.services.whoami-no-cert.loadBalancer.servers]]
address = "whoami-no-cert:8080"
address = "{{ .WhoamiNoCert }}"
[tls.options]

View file

@ -47,17 +47,17 @@
[tcp.services]
[tcp.services.whoami-no-tls.loadBalancer]
[[tcp.services.whoami-no-tls.loadBalancer.servers]]
address = "whoami-no-tls:8080"
address = "{{ .WhoamiNoTLS }}"
[tcp.services.whoami-a.loadBalancer]
[[tcp.services.whoami-a.loadBalancer.servers]]
address = "whoami-a:8080"
address = "{{ .WhoamiA }}"
[tcp.services.whoami-b.loadBalancer]
[[tcp.services.whoami-b.loadBalancer.servers]]
address = "whoami-b:8080"
address = "{{ .WhoamiB }}"
[tcp.services.whoami-no-cert.loadBalancer]
[[tcp.services.whoami-no-cert.loadBalancer.servers]]
address = "whoami-no-cert:8080"
address = "{{ .WhoamiNoCert }}"

View file

@ -27,4 +27,4 @@
[tcp.services]
[tcp.services.whoami-no-tls.loadBalancer]
[[tcp.services.whoami-no-tls.loadBalancer.servers]]
address = "whoami-no-tls:8080"
address = "{{ .WhoamiNoTLS }}"

View file

@ -34,8 +34,8 @@
[tcp.services.whoami-b.loadBalancer]
[[tcp.services.whoami-b.loadBalancer.servers]]
address = "whoami-b:8080"
address = "{{ .WhoamiB }}"
[tcp.services.whoami-ab.loadBalancer]
[[tcp.services.whoami-ab.loadBalancer.servers]]
address = "whoami-ab:8080"
address = "{{ .WhoamiAB }}"

View file

@ -4,8 +4,11 @@ package integration
import (
"bytes"
"context"
"errors"
"flag"
"fmt"
"io/fs"
"io/ioutil"
"os"
"os/exec"
"path/filepath"
@ -40,8 +43,23 @@ func Test(t *testing.T) {
return
}
// TODO(mpl): very niche optimization: do not start tailscale if none of the
// wanted tests actually need it (e.g. KeepAliveSuite does not).
var (
vpn *tailscaleNotSuite
useVPN bool
)
if os.Getenv("IN_DOCKER") != "true" {
if vpn = setupVPN(nil, "tailscale.secret"); vpn != nil {
defer vpn.TearDownSuite(nil)
useVPN = true
}
}
check.Suite(&AccessLogSuite{})
check.Suite(&AcmeSuite{})
if !useVPN {
check.Suite(&AcmeSuite{})
}
check.Suite(&ConsulCatalogSuite{})
check.Suite(&ConsulSuite{})
check.Suite(&DockerComposeSuite{})
@ -55,12 +73,16 @@ func Test(t *testing.T) {
check.Suite(&HostResolverSuite{})
check.Suite(&HTTPSSuite{})
check.Suite(&HTTPSuite{})
check.Suite(&K8sSuite{})
if !useVPN {
check.Suite(&K8sSuite{})
}
check.Suite(&KeepAliveSuite{})
check.Suite(&LogRotationSuite{})
check.Suite(&MarathonSuite15{})
check.Suite(&MarathonSuite{})
check.Suite(&ProxyProtocolSuite{})
check.Suite(&MarathonSuite15{})
if !useVPN {
check.Suite(&ProxyProtocolSuite{})
}
check.Suite(&RateLimitSuite{})
check.Suite(&RedisSuite{})
check.Suite(&RestSuite{})
@ -125,6 +147,24 @@ func (s *BaseSuite) composeUp(c *check.C, services ...string) {
c.Assert(err, checker.IsNil)
}
// composeExec runs the command in the given args in the given compose service container.
// Already running services are not affected (i.e. not stopped).
func (s *BaseSuite) composeExec(c *check.C, service string, args ...string) {
c.Assert(s.composeProject, check.NotNil)
c.Assert(s.dockerComposeService, check.NotNil)
_, err := s.dockerComposeService.Exec(context.Background(), s.composeProject.Name, composeapi.RunOptions{
Service: service,
Stdin: os.Stdin,
Stdout: os.Stdout,
Stderr: os.Stderr,
Command: args,
Tty: false,
Index: 1,
})
c.Assert(err, checker.IsNil)
}
// composeStop stops the given services of the current docker compose project and removes the corresponding containers.
func (s *BaseSuite) composeStop(c *check.C, services ...string) {
c.Assert(s.dockerComposeService, check.NotNil)
@ -285,3 +325,45 @@ func (s *BaseSuite) getContainerIP(c *check.C, name string) string {
func withConfigFile(file string) string {
return "--configFile=" + file
}
// tailscaleNotSuite includes a BaseSuite out of convenience, so we can benefit
// from composeUp and co., but it is not meant to function as a TestSuite per se.
type tailscaleNotSuite struct{ BaseSuite }
// setupVPN starts Tailscale on the corresponding container, and makes it a subnet
// router for all the other containers (whoamis, etc.) subsequently started for the
// integration tests.
// It only does so if the file provided as an argument exists and contains a
// Tailscale auth key (an ephemeral, but reusable, one is recommended).
//
// Add this section to your tailscale ACLs to auto-approve the routes for the
// containers in the docker subnet:
//
// "autoApprovers": {
// // Allow myself to automatically advertise routes for docker networks
// "routes": {
// "172.0.0.0/8": ["your_tailscale_identity"],
// },
// },
//
// TODO(mpl): we could maybe even move this setup to the Makefile, to start it
// and let it run (forever, or until voluntarily stopped).
func setupVPN(c *check.C, keyFile string) *tailscaleNotSuite {
data, err := ioutil.ReadFile(keyFile)
if err != nil {
if !errors.Is(err, fs.ErrNotExist) {
log.Fatal(err)
}
return nil
}
authKey := strings.TrimSpace(string(data))
// TODO: copy and create versions that don't need a check.C?
vpn := &tailscaleNotSuite{}
vpn.createComposeProject(c, "tailscale")
vpn.composeUp(c)
time.Sleep(5 * time.Second)
// If we ever change the docker subnet in the Makefile,
// we need to change this one below correspondingly.
vpn.composeExec(c, "tailscaled", "tailscale", "up", "--authkey="+authKey, "--advertise-routes=172.31.42.0/24")
return vpn
}
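// Not part of this change: for illustration, the key file read above is just a
// plain text file containing a single Tailscale auth key, e.g. a hypothetical
// integration/tailscale.secret of the form:
//
//	tskey-0123456789abcdef
//
// An ephemeral but reusable key is recommended, as noted in the setupVPN comment.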

View file

@ -21,7 +21,7 @@ func (s *MarathonSuite15) SetUpSuite(c *check.C) {
s.createComposeProject(c, "marathon15")
s.composeUp(c)
s.marathonURL = "http://" + containerNameMarathon + ":8080"
s.marathonURL = "http://" + s.getComposeServiceIP(c, containerNameMarathon) + ":8080"
// Wait for Marathon readiness prior to creating the client so that we
// don't run into the "all cluster members down" state right from the

View file

@ -23,7 +23,7 @@ func (s *MarathonSuite) SetUpSuite(c *check.C) {
s.createComposeProject(c, "marathon")
s.composeUp(c)
s.marathonURL = "http://" + containerNameMarathon + ":8080"
s.marathonURL = "http://" + s.getComposeServiceIP(c, containerNameMarathon) + ":8080"
// Wait for Marathon readiness prior to creating the client so that we
// don't run into the "all cluster members down" state right from the
@ -45,6 +45,7 @@ func (s *MarathonSuite) TestConfigurationUpdate(c *check.C) {
MarathonURL string
}{s.marathonURL})
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))
defer display(c)
err := cmd.Start()

View file

@ -32,7 +32,7 @@ services:
-v /var/run/docker.sock:/var/run/docker.sock \
-v /cgroup:/cgroup -v /sys:/sys \
-v /usr/local/bin/docker:/usr/local/bin/docker \
-e MESOS_HOSTNAME=mesos-slave \
-e MESOS_HOSTNAME=$$(hostname -i) \
-e MESOS_CONTAINERIZERS=docker,mesos \
-e MESOS_ISOLATOR=cgroups/cpu,cgroups/mem \
-e MESOS_LOG_DIR=/var/log \

View file

@ -17,24 +17,32 @@ services:
MESOS_ZK: zk://zookeeper:2181/mesos
mesos-slave:
image: mesosphere/mesos-slave-dind:0.4.0_mesos-1.4.1_docker-17.05.0_ubuntu-16.04.3
image: docker:dind
privileged: true
# Uncomment published ports for interactive debugging.
# ports:
# - "5051:5051"
environment:
MESOS_HOSTNAME: mesos-slave
MESOS_CONTAINERIZERS: docker,mesos
MESOS_ISOLATOR: cgroups/cpu,cgroups/mem
MESOS_LOG_DIR: /var/log
MESOS_MASTER: zk://zookeeper:2181/mesos
MESOS_PORT: 5051
MESOS_WORK_DIR: /var/lib/mesos
MESOS_EXECUTOR_REGISTRATION_TIMEOUT: 5mins
MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD: 90secs
MESOS_DOCKER_STOP_TIMEOUT: 60secs
MESOS_RESOURCES: cpus:2;mem:2048;disk:20480;ports(*):[12000-12999]
MESOS_SYSTEMD_ENABLE_SUPPORT: false
command:
- "/bin/sh"
- "-c"
- "(/usr/local/bin/dockerd-entrypoint.sh &); sleep 10; set -x; \
docker -H unix:///var/run/docker.sock run -d --net=host --privileged \
-v /var/run/docker.sock:/var/run/docker.sock \
-v /cgroup:/cgroup -v /sys:/sys \
-v /usr/local/bin/docker:/usr/local/bin/docker \
-e MESOS_HOSTNAME=$$(hostname -i) \
-e MESOS_CONTAINERIZERS=docker,mesos \
-e MESOS_ISOLATOR=cgroups/cpu,cgroups/mem \
-e MESOS_LOG_DIR=/var/log \
-e MESOS_MASTER=zk://zookeeper:2181/mesos \
-e MESOS_PORT=5051 \
-e MESOS_WORK_DIR=/var/lib/mesos \
-e MESOS_EXECUTOR_REGISTRATION_TIMEOUT=5mins \
-e MESOS_EXECUTOR_SHUTDOWN_GRACE_PERIOD=90secs \
-e MESOS_DOCKER_STOP_TIMEOUT=60secs \
-e MESOS_RESOURCES='cpus:2;mem:2048;disk:20480;ports(*):[12000-12999]' \
-e MESOS_SYSTEMD_ENABLE_SUPPORT=false \
mesosphere/mesos-slave:1.4.1; sleep 600"
marathon:
image: mesosphere/marathon:v1.5.9

View file

@ -0,0 +1,17 @@
version: "3.8"
services:
tailscaled:
hostname: traefik-tests-gw # This will become the tailscale device name
image: tailscale/tailscale:v1.24.0
volumes:
# TODO: maybe mount the container's /var/lib to keep some state for tailscale?
- "/dev/net/tun:/dev/net/tun" # Required for tailscale to work
cap_add: # Required for tailscale to work
- net_admin
- sys_module
command: tailscaled
networks:
default:
name: traefik-test-network
external: true

View file

@ -3,6 +3,7 @@
## local.crt / local.key
Generate with
```bash
go run $GOROOT/src/crypto/tls/generate_cert.go --rsa-bits 1024 --host 127.0.0.1,::1,localhost --ca --start-date "Jan 1 00:00:00 1970" --duration=1000000h
mv cert.pem local.cert

View file

@ -25,7 +25,17 @@ func (s *TCPSuite) SetUpSuite(c *check.C) {
}
func (s *TCPSuite) TestMixed(c *check.C) {
file := s.adaptFile(c, "fixtures/tcp/mixed.toml", struct{}{})
file := s.adaptFile(c, "fixtures/tcp/mixed.toml", struct {
Whoami string
WhoamiA string
WhoamiB string
WhoamiNoCert string
}{
Whoami: "http://" + s.getComposeServiceIP(c, "whoami") + ":80",
WhoamiA: s.getComposeServiceIP(c, "whoami-a") + ":8080",
WhoamiB: s.getComposeServiceIP(c, "whoami-b") + ":8080",
WhoamiNoCert: s.getComposeServiceIP(c, "whoami-no-cert") + ":8080",
})
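	// Not part of this change: the {{ .Whoami* }} placeholders in the fixtures above
	// are Go template fields; adaptFile renders the fixture with this struct, so the
	// backends are now addressed by their compose service IP instead of their Docker
	// DNS name (presumably so the tests also pass when Traefik reaches them through
	// the Tailscale subnet router rather than the Docker network).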
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))
@ -75,7 +85,11 @@ func (s *TCPSuite) TestMixed(c *check.C) {
}
func (s *TCPSuite) TestTLSOptions(c *check.C) {
file := s.adaptFile(c, "fixtures/tcp/multi-tls-options.toml", struct{}{})
file := s.adaptFile(c, "fixtures/tcp/multi-tls-options.toml", struct {
WhoamiNoCert string
}{
WhoamiNoCert: s.getComposeServiceIP(c, "whoami-no-cert") + ":8080",
})
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))
@ -105,7 +119,17 @@ func (s *TCPSuite) TestTLSOptions(c *check.C) {
}
func (s *TCPSuite) TestNonTLSFallback(c *check.C) {
file := s.adaptFile(c, "fixtures/tcp/non-tls-fallback.toml", struct{}{})
file := s.adaptFile(c, "fixtures/tcp/non-tls-fallback.toml", struct {
WhoamiA string
WhoamiB string
WhoamiNoCert string
WhoamiNoTLS string
}{
WhoamiA: s.getComposeServiceIP(c, "whoami-a") + ":8080",
WhoamiB: s.getComposeServiceIP(c, "whoami-b") + ":8080",
WhoamiNoCert: s.getComposeServiceIP(c, "whoami-no-cert") + ":8080",
WhoamiNoTLS: s.getComposeServiceIP(c, "whoami-no-tls") + ":8080",
})
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))
@ -139,7 +163,11 @@ func (s *TCPSuite) TestNonTLSFallback(c *check.C) {
}
func (s *TCPSuite) TestNonTlsTcp(c *check.C) {
file := s.adaptFile(c, "fixtures/tcp/non-tls.toml", struct{}{})
file := s.adaptFile(c, "fixtures/tcp/non-tls.toml", struct {
WhoamiNoTLS string
}{
WhoamiNoTLS: s.getComposeServiceIP(c, "whoami-no-tls") + ":8080",
})
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))
@ -159,7 +187,11 @@ func (s *TCPSuite) TestNonTlsTcp(c *check.C) {
}
func (s *TCPSuite) TestCatchAllNoTLS(c *check.C) {
file := s.adaptFile(c, "fixtures/tcp/catch-all-no-tls.toml", struct{}{})
file := s.adaptFile(c, "fixtures/tcp/catch-all-no-tls.toml", struct {
WhoamiBannerAddress string
}{
WhoamiBannerAddress: s.getComposeServiceIP(c, "whoami-banner") + ":8080",
})
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))
@ -179,7 +211,13 @@ func (s *TCPSuite) TestCatchAllNoTLS(c *check.C) {
}
func (s *TCPSuite) TestCatchAllNoTLSWithHTTPS(c *check.C) {
file := s.adaptFile(c, "fixtures/tcp/catch-all-no-tls-with-https.toml", struct{}{})
file := s.adaptFile(c, "fixtures/tcp/catch-all-no-tls-with-https.toml", struct {
WhoamiNoTLSAddress string
WhoamiURL string
}{
WhoamiNoTLSAddress: s.getComposeServiceIP(c, "whoami-no-tls") + ":8080",
WhoamiURL: "http://" + s.getComposeServiceIP(c, "whoami") + ":80",
})
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))
@ -204,7 +242,13 @@ func (s *TCPSuite) TestCatchAllNoTLSWithHTTPS(c *check.C) {
}
func (s *TCPSuite) TestMiddlewareWhiteList(c *check.C) {
file := s.adaptFile(c, "fixtures/tcp/ip-whitelist.toml", struct{}{})
file := s.adaptFile(c, "fixtures/tcp/ip-whitelist.toml", struct {
WhoamiA string
WhoamiB string
}{
WhoamiA: s.getComposeServiceIP(c, "whoami-a") + ":8080",
WhoamiB: s.getComposeServiceIP(c, "whoami-b") + ":8080",
})
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))
@ -228,7 +272,13 @@ func (s *TCPSuite) TestMiddlewareWhiteList(c *check.C) {
}
func (s *TCPSuite) TestWRR(c *check.C) {
file := s.adaptFile(c, "fixtures/tcp/wrr.toml", struct{}{})
file := s.adaptFile(c, "fixtures/tcp/wrr.toml", struct {
WhoamiB string
WhoamiAB string
}{
WhoamiB: s.getComposeServiceIP(c, "whoami-b") + ":8080",
WhoamiAB: s.getComposeServiceIP(c, "whoami-ab") + ":8080",
})
defer os.Remove(file)
cmd, display := s.traefikCmd(withConfigFile(file))

View file

@ -1,7 +0,0 @@
[build]
# Path relative to the root of the repository
publish = "site"
base = "docs"
# Path relative to the "base" directory
command = "sh -x scripts/netlify-run.sh"

View file

@ -53,7 +53,7 @@ func RegisterDatadog(ctx context.Context, config *types.Datadog) Registry {
}
datadogClient = dogstatsd.New(config.Prefix+".", kitlog.LoggerFunc(func(keyvals ...interface{}) error {
log.WithoutContext().WithField(log.MetricsProviderName, "datadog").Info(keyvals)
log.WithoutContext().WithField(log.MetricsProviderName, "datadog").Info(keyvals...)
return nil
}))

View file

@ -136,7 +136,7 @@ func initInfluxDBClient(ctx context.Context, config *types.InfluxDB) *influx.Inf
RetentionPolicy: config.RetentionPolicy,
},
kitlog.LoggerFunc(func(keyvals ...interface{}) error {
log.WithoutContext().WithField(log.MetricsProviderName, "influxdb").Info(keyvals)
log.WithoutContext().WithField(log.MetricsProviderName, "influxdb").Info(keyvals...)
return nil
}))
}

View file

@ -38,7 +38,7 @@ func RegisterInfluxDB2(ctx context.Context, config *types.InfluxDB2) Registry {
config.AdditionalLabels,
influxdb.BatchPointsConfig{},
kitlog.LoggerFunc(func(kv ...interface{}) error {
log.FromContext(ctx).Error(kv)
log.FromContext(ctx).Error(kv...)
return nil
}),
)

View file

@ -4,8 +4,6 @@ import (
"context"
"errors"
"net/http"
"sort"
"strings"
"sync"
"time"
@ -15,7 +13,6 @@ import (
"github.com/prometheus/client_golang/prometheus/promhttp"
"github.com/traefik/traefik/v2/pkg/config/dynamic"
"github.com/traefik/traefik/v2/pkg/log"
"github.com/traefik/traefik/v2/pkg/safe"
"github.com/traefik/traefik/v2/pkg/types"
)
@ -111,37 +108,33 @@ func initStandardRegistry(config *types.Prometheus) Registry {
buckets = config.Buckets
}
safe.Go(func() {
promState.ListenValueUpdates()
})
configReloads := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
configReloads := newCounterFrom(stdprometheus.CounterOpts{
Name: configReloadsTotalName,
Help: "Config reloads",
}, []string{})
configReloadsFailures := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
configReloadsFailures := newCounterFrom(stdprometheus.CounterOpts{
Name: configReloadsFailuresTotalName,
Help: "Config failure reloads",
}, []string{})
lastConfigReloadSuccess := newGaugeFrom(promState.collectors, stdprometheus.GaugeOpts{
lastConfigReloadSuccess := newGaugeFrom(stdprometheus.GaugeOpts{
Name: configLastReloadSuccessName,
Help: "Last config reload success",
}, []string{})
lastConfigReloadFailure := newGaugeFrom(promState.collectors, stdprometheus.GaugeOpts{
lastConfigReloadFailure := newGaugeFrom(stdprometheus.GaugeOpts{
Name: configLastReloadFailureName,
Help: "Last config reload failure",
}, []string{})
tlsCertsNotAfterTimestamp := newGaugeFrom(promState.collectors, stdprometheus.GaugeOpts{
tlsCertsNotAfterTimestamp := newGaugeFrom(stdprometheus.GaugeOpts{
Name: tlsCertsNotAfterTimestamp,
Help: "Certificate expiration timestamp",
}, []string{"cn", "serial", "sans"})
promState.describers = []func(chan<- *stdprometheus.Desc){
configReloads.cv.Describe,
configReloadsFailures.cv.Describe,
lastConfigReloadSuccess.gv.Describe,
lastConfigReloadFailure.gv.Describe,
tlsCertsNotAfterTimestamp.gv.Describe,
promState.vectors = []vector{
configReloads.cv,
configReloadsFailures.cv,
lastConfigReloadSuccess.gv,
lastConfigReloadFailure.gv,
tlsCertsNotAfterTimestamp.gv,
}
reg := &standardRegistry{
@ -156,30 +149,30 @@ func initStandardRegistry(config *types.Prometheus) Registry {
}
if config.AddEntryPointsLabels {
entryPointReqs := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
entryPointReqs := newCounterFrom(stdprometheus.CounterOpts{
Name: entryPointReqsTotalName,
Help: "How many HTTP requests processed on an entrypoint, partitioned by status code, protocol, and method.",
}, []string{"code", "method", "protocol", "entrypoint"})
entryPointReqsTLS := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
entryPointReqsTLS := newCounterFrom(stdprometheus.CounterOpts{
Name: entryPointReqsTLSTotalName,
Help: "How many HTTP requests with TLS processed on an entrypoint, partitioned by TLS Version and TLS cipher Used.",
}, []string{"tls_version", "tls_cipher", "entrypoint"})
entryPointReqDurations := newHistogramFrom(promState.collectors, stdprometheus.HistogramOpts{
entryPointReqDurations := newHistogramFrom(stdprometheus.HistogramOpts{
Name: entryPointReqDurationName,
Help: "How long it took to process the request on an entrypoint, partitioned by status code, protocol, and method.",
Buckets: buckets,
}, []string{"code", "method", "protocol", "entrypoint"})
entryPointOpenConns := newGaugeFrom(promState.collectors, stdprometheus.GaugeOpts{
entryPointOpenConns := newGaugeFrom(stdprometheus.GaugeOpts{
Name: entryPointOpenConnsName,
Help: "How many open connections exist on an entrypoint, partitioned by method and protocol.",
}, []string{"method", "protocol", "entrypoint"})
promState.describers = append(promState.describers, []func(chan<- *stdprometheus.Desc){
entryPointReqs.cv.Describe,
entryPointReqsTLS.cv.Describe,
entryPointReqDurations.hv.Describe,
entryPointOpenConns.gv.Describe,
}...)
promState.vectors = append(promState.vectors,
entryPointReqs.cv,
entryPointReqsTLS.cv,
entryPointReqDurations.hv,
entryPointOpenConns.gv,
)
reg.entryPointReqsCounter = entryPointReqs
reg.entryPointReqsTLSCounter = entryPointReqsTLS
@ -188,30 +181,30 @@ func initStandardRegistry(config *types.Prometheus) Registry {
}
if config.AddRoutersLabels {
routerReqs := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
routerReqs := newCounterFrom(stdprometheus.CounterOpts{
Name: routerReqsTotalName,
Help: "How many HTTP requests are processed on a router, partitioned by service, status code, protocol, and method.",
}, []string{"code", "method", "protocol", "router", "service"})
routerReqsTLS := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
routerReqsTLS := newCounterFrom(stdprometheus.CounterOpts{
Name: routerReqsTLSTotalName,
Help: "How many HTTP requests with TLS are processed on a router, partitioned by service, TLS Version, and TLS cipher Used.",
}, []string{"tls_version", "tls_cipher", "router", "service"})
routerReqDurations := newHistogramFrom(promState.collectors, stdprometheus.HistogramOpts{
routerReqDurations := newHistogramFrom(stdprometheus.HistogramOpts{
Name: routerReqDurationName,
Help: "How long it took to process the request on a router, partitioned by service, status code, protocol, and method.",
Buckets: buckets,
}, []string{"code", "method", "protocol", "router", "service"})
routerOpenConns := newGaugeFrom(promState.collectors, stdprometheus.GaugeOpts{
routerOpenConns := newGaugeFrom(stdprometheus.GaugeOpts{
Name: routerOpenConnsName,
Help: "How many open connections exist on a router, partitioned by service, method, and protocol.",
}, []string{"method", "protocol", "router", "service"})
promState.describers = append(promState.describers, []func(chan<- *stdprometheus.Desc){
routerReqs.cv.Describe,
routerReqsTLS.cv.Describe,
routerReqDurations.hv.Describe,
routerOpenConns.gv.Describe,
}...)
promState.vectors = append(promState.vectors,
routerReqs.cv,
routerReqsTLS.cv,
routerReqDurations.hv,
routerOpenConns.gv,
)
reg.routerReqsCounter = routerReqs
reg.routerReqsTLSCounter = routerReqsTLS
reg.routerReqDurationHistogram, _ = NewHistogramWithScale(routerReqDurations, time.Second)
@ -219,40 +212,40 @@ func initStandardRegistry(config *types.Prometheus) Registry {
}
if config.AddServicesLabels {
serviceReqs := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
serviceReqs := newCounterFrom(stdprometheus.CounterOpts{
Name: serviceReqsTotalName,
Help: "How many HTTP requests processed on a service, partitioned by status code, protocol, and method.",
}, []string{"code", "method", "protocol", "service"})
serviceReqsTLS := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
serviceReqsTLS := newCounterFrom(stdprometheus.CounterOpts{
Name: serviceReqsTLSTotalName,
Help: "How many HTTP requests with TLS processed on a service, partitioned by TLS version and TLS cipher.",
}, []string{"tls_version", "tls_cipher", "service"})
serviceReqDurations := newHistogramFrom(promState.collectors, stdprometheus.HistogramOpts{
serviceReqDurations := newHistogramFrom(stdprometheus.HistogramOpts{
Name: serviceReqDurationName,
Help: "How long it took to process the request on a service, partitioned by status code, protocol, and method.",
Buckets: buckets,
}, []string{"code", "method", "protocol", "service"})
serviceOpenConns := newGaugeFrom(promState.collectors, stdprometheus.GaugeOpts{
serviceOpenConns := newGaugeFrom(stdprometheus.GaugeOpts{
Name: serviceOpenConnsName,
Help: "How many open connections exist on a service, partitioned by method and protocol.",
}, []string{"method", "protocol", "service"})
serviceRetries := newCounterFrom(promState.collectors, stdprometheus.CounterOpts{
serviceRetries := newCounterFrom(stdprometheus.CounterOpts{
Name: serviceRetriesTotalName,
Help: "How many request retries happened on a service.",
}, []string{"service"})
serviceServerUp := newGaugeFrom(promState.collectors, stdprometheus.GaugeOpts{
serviceServerUp := newGaugeFrom(stdprometheus.GaugeOpts{
Name: serviceServerUpName,
Help: "service server is up, described by gauge value of 0 or 1.",
}, []string{"service", "url"})
promState.describers = append(promState.describers, []func(chan<- *stdprometheus.Desc){
serviceReqs.cv.Describe,
serviceReqsTLS.cv.Describe,
serviceReqDurations.hv.Describe,
serviceOpenConns.gv.Describe,
serviceRetries.cv.Describe,
serviceServerUp.gv.Describe,
}...)
promState.vectors = append(promState.vectors,
serviceReqs.cv,
serviceReqsTLS.cv,
serviceReqDurations.hv,
serviceOpenConns.gv,
serviceRetries.cv,
serviceServerUp.gv,
)
reg.serviceReqsCounter = serviceReqs
reg.serviceReqsTLSCounter = serviceReqsTLS
@ -287,64 +280,92 @@ func registerPromState(ctx context.Context) bool {
// It then converts the configuration to the optimized package internal format
// and sets it to the promState.
func OnConfigurationUpdate(conf dynamic.Configuration, entryPoints []string) {
dynamicConfig := newDynamicConfig()
dynCfg := newDynamicConfig()
for _, value := range entryPoints {
dynamicConfig.entryPoints[value] = true
dynCfg.entryPoints[value] = true
}
if conf.HTTP == nil {
promState.SetDynamicConfig(dynCfg)
return
}
for name := range conf.HTTP.Routers {
dynamicConfig.routers[name] = true
dynCfg.routers[name] = true
}
for serviceName, service := range conf.HTTP.Services {
dynamicConfig.services[serviceName] = make(map[string]bool)
dynCfg.services[serviceName] = make(map[string]bool)
if service.LoadBalancer != nil {
for _, server := range service.LoadBalancer.Servers {
dynamicConfig.services[serviceName][server.URL] = true
dynCfg.services[serviceName][server.URL] = true
}
}
}
promState.SetDynamicConfig(dynamicConfig)
promState.SetDynamicConfig(dynCfg)
}
func newPrometheusState() *prometheusState {
return &prometheusState{
collectors: make(chan *collector),
dynamicConfig: newDynamicConfig(),
state: make(map[string]*collector),
deletedURLs: make(map[string][]string),
}
}
type prometheusState struct {
collectors chan *collector
describers []func(ch chan<- *stdprometheus.Desc)
type vector interface {
stdprometheus.Collector
DeletePartialMatch(labels stdprometheus.Labels) int
}
mtx sync.Mutex
dynamicConfig *dynamicConfig
state map[string]*collector
type prometheusState struct {
vectors []vector
mtx sync.Mutex
dynamicConfig *dynamicConfig
deletedEP []string
deletedRouters []string
deletedServices []string
deletedURLs map[string][]string
}
func (ps *prometheusState) SetDynamicConfig(dynamicConfig *dynamicConfig) {
ps.mtx.Lock()
defer ps.mtx.Unlock()
ps.dynamicConfig = dynamicConfig
}
func (ps *prometheusState) ListenValueUpdates() {
for collector := range ps.collectors {
ps.mtx.Lock()
ps.state[collector.id] = collector
ps.mtx.Unlock()
for ep := range ps.dynamicConfig.entryPoints {
if _, ok := dynamicConfig.entryPoints[ep]; !ok {
ps.deletedEP = append(ps.deletedEP, ep)
}
}
for router := range ps.dynamicConfig.routers {
if _, ok := dynamicConfig.routers[router]; !ok {
ps.deletedRouters = append(ps.deletedRouters, router)
}
}
for service, serV := range ps.dynamicConfig.services {
actualService, ok := dynamicConfig.services[service]
if !ok {
ps.deletedServices = append(ps.deletedServices, service)
}
for url := range serV {
if _, ok := actualService[url]; !ok {
ps.deletedURLs[service] = append(ps.deletedURLs[service], url)
}
}
}
ps.dynamicConfig = dynamicConfig
}
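// Not part of this change; a minimal sketch of how the bookkeeping above is
// driven (the entrypoint names are assumptions, chosen for illustration):
//
//	// First update: entrypoints ep1 and ep2 are active.
//	OnConfigurationUpdate(dynamic.Configuration{}, []string{"ep1", "ep2"})
//	// Second update drops ep2: SetDynamicConfig records it in deletedEP.
//	OnConfigurationUpdate(dynamic.Configuration{}, []string{"ep1"})
//	// The next Collect run sees that ep2 is still absent and calls
//	// promState.DeletePartialMatch(map[string]string{"entrypoint": "ep2"}),
//	// removing every series labelled entrypoint="ep2".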
// Describe implements prometheus.Collector and simply calls
// the registered describer functions.
func (ps *prometheusState) Describe(ch chan<- *stdprometheus.Desc) {
for _, desc := range ps.describers {
desc(ch)
for _, v := range ps.vectors {
v.Describe(ch)
}
}
@ -354,49 +375,54 @@ func (ps *prometheusState) Describe(ch chan<- *stdprometheus.Desc) {
// The removal happens only after their Collect method was called to ensure that
// also those metrics will be exported on the current scrape.
func (ps *prometheusState) Collect(ch chan<- stdprometheus.Metric) {
for _, v := range ps.vectors {
v.Collect(ch)
}
ps.mtx.Lock()
defer ps.mtx.Unlock()
var outdatedKeys []string
for key, cs := range ps.state {
cs.collector.Collect(ch)
if ps.isOutdated(cs) {
outdatedKeys = append(outdatedKeys, key)
for _, ep := range ps.deletedEP {
if !ps.dynamicConfig.hasEntryPoint(ep) {
ps.DeletePartialMatch(map[string]string{"entrypoint": ep})
}
}
for _, key := range outdatedKeys {
ps.state[key].delete()
delete(ps.state, key)
for _, router := range ps.deletedRouters {
if !ps.dynamicConfig.hasRouter(router) {
ps.DeletePartialMatch(map[string]string{"router": router})
}
}
for _, service := range ps.deletedServices {
if !ps.dynamicConfig.hasService(service) {
ps.DeletePartialMatch(map[string]string{"service": service})
}
}
for service, urls := range ps.deletedURLs {
for _, url := range urls {
if !ps.dynamicConfig.hasServerURL(service, url) {
ps.DeletePartialMatch(map[string]string{"service": service, "url": url})
}
}
}
ps.deletedEP = nil
ps.deletedRouters = nil
ps.deletedServices = nil
ps.deletedURLs = make(map[string][]string)
}
// isOutdated checks whether the passed collector has labels that mark
// it as belonging to an outdated configuration of Traefik.
func (ps *prometheusState) isOutdated(collector *collector) bool {
labels := collector.labels
if entrypointName, ok := labels["entrypoint"]; ok && !ps.dynamicConfig.hasEntryPoint(entrypointName) {
return true
// DeletePartialMatch deletes all metrics where the variable labels contain all of those passed in as labels.
// The order of the labels does not matter.
// It returns the number of metrics deleted.
func (ps *prometheusState) DeletePartialMatch(labels stdprometheus.Labels) int {
var count int
for _, elem := range ps.vectors {
count += elem.DeletePartialMatch(labels)
}
if routerName, ok := labels["router"]; ok {
if !ps.dynamicConfig.hasRouter(routerName) {
return true
}
}
if serviceName, ok := labels["service"]; ok {
if !ps.dynamicConfig.hasService(serviceName) {
return true
}
if url, ok := labels["url"]; ok && !ps.dynamicConfig.hasServerURL(serviceName, url) {
return true
}
}
return false
return count
}
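// Not part of this change; a minimal sketch (reusing the stdprometheus alias
// from this file) of the DeletePartialMatch behaviour the vector interface
// relies on, assuming the client_golang version pinned above:
//
//	reqs := stdprometheus.NewCounterVec(
//		stdprometheus.CounterOpts{Name: "demo_requests_total"},
//		[]string{"service", "url"},
//	)
//	reqs.With(stdprometheus.Labels{"service": "svc1", "url": "http://localhost:9000"}).Inc()
//	reqs.With(stdprometheus.Labels{"service": "svc1", "url": "http://localhost:9001"}).Inc()
//
//	// Deletes every series whose labels include service="svc1", whatever the
//	// url label is, and returns the number of series removed (2 here).
//	deleted := reqs.DeletePartialMatch(stdprometheus.Labels{"service": "svc1"})
//	_ = deleted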
func newDynamicConfig() *dynamicConfig {
@ -440,42 +466,15 @@ func (d *dynamicConfig) hasServerURL(serviceName, serverURL string) bool {
return false
}
func newCollector(metricName string, labels stdprometheus.Labels, c stdprometheus.Collector, deleteFn func()) *collector {
return &collector{
id: buildMetricID(metricName, labels),
labels: labels,
collector: c,
delete: deleteFn,
}
}
// collector wraps a Collector object from the Prometheus client library.
// It adds information on how many generations this metric should be present
// in the /metrics output, relative to the time it was last tracked.
type collector struct {
id string
labels stdprometheus.Labels
collector stdprometheus.Collector
delete func()
}
func buildMetricID(metricName string, labels stdprometheus.Labels) string {
var labelNamesValues []string
for name, value := range labels {
labelNamesValues = append(labelNamesValues, name, value)
}
sort.Strings(labelNamesValues)
return metricName + ":" + strings.Join(labelNamesValues, "|")
}
func newCounterFrom(collectors chan<- *collector, opts stdprometheus.CounterOpts, labelNames []string) *counter {
func newCounterFrom(opts stdprometheus.CounterOpts, labelNames []string) *counter {
cv := stdprometheus.NewCounterVec(opts, labelNames)
c := &counter{
name: opts.Name,
cv: cv,
collectors: collectors,
name: opts.Name,
cv: cv,
labelNamesValues: make([]string, 0, 16),
}
if len(labelNames) == 0 {
c.collector = cv.WithLabelValues()
c.Add(0)
}
return c
@ -485,39 +484,37 @@ type counter struct {
name string
cv *stdprometheus.CounterVec
labelNamesValues labelNamesValues
collectors chan<- *collector
collector stdprometheus.Counter
}
func (c *counter) With(labelValues ...string) metrics.Counter {
lnv := c.labelNamesValues.With(labelValues...)
return &counter{
name: c.name,
cv: c.cv,
labelNamesValues: c.labelNamesValues.With(labelValues...),
collectors: c.collectors,
labelNamesValues: lnv,
collector: c.cv.With(lnv.ToLabels()),
}
}
func (c *counter) Add(delta float64) {
labels := c.labelNamesValues.ToLabels()
collector := c.cv.With(labels)
collector.Add(delta)
c.collectors <- newCollector(c.name, labels, collector, func() {
c.cv.Delete(labels)
})
c.collector.Add(delta)
}
func (c *counter) Describe(ch chan<- *stdprometheus.Desc) {
c.cv.Describe(ch)
}
func newGaugeFrom(collectors chan<- *collector, opts stdprometheus.GaugeOpts, labelNames []string) *gauge {
func newGaugeFrom(opts stdprometheus.GaugeOpts, labelNames []string) *gauge {
gv := stdprometheus.NewGaugeVec(opts, labelNames)
g := &gauge{
name: opts.Name,
gv: gv,
collectors: collectors,
name: opts.Name,
gv: gv,
labelNamesValues: make([]string, 0, 16),
}
if len(labelNames) == 0 {
g.collector = gv.WithLabelValues()
g.Set(0)
}
return g
@ -527,46 +524,37 @@ type gauge struct {
name string
gv *stdprometheus.GaugeVec
labelNamesValues labelNamesValues
collectors chan<- *collector
collector stdprometheus.Gauge
}
func (g *gauge) With(labelValues ...string) metrics.Gauge {
lnv := g.labelNamesValues.With(labelValues...)
return &gauge{
name: g.name,
gv: g.gv,
labelNamesValues: g.labelNamesValues.With(labelValues...),
collectors: g.collectors,
labelNamesValues: lnv,
collector: g.gv.With(lnv.ToLabels()),
}
}
func (g *gauge) Add(delta float64) {
labels := g.labelNamesValues.ToLabels()
collector := g.gv.With(labels)
collector.Add(delta)
g.collectors <- newCollector(g.name, labels, collector, func() {
g.gv.Delete(labels)
})
g.collector.Add(delta)
}
func (g *gauge) Set(value float64) {
labels := g.labelNamesValues.ToLabels()
collector := g.gv.With(labels)
collector.Set(value)
g.collectors <- newCollector(g.name, labels, collector, func() {
g.gv.Delete(labels)
})
g.collector.Set(value)
}
func (g *gauge) Describe(ch chan<- *stdprometheus.Desc) {
g.gv.Describe(ch)
}
func newHistogramFrom(collectors chan<- *collector, opts stdprometheus.HistogramOpts, labelNames []string) *histogram {
func newHistogramFrom(opts stdprometheus.HistogramOpts, labelNames []string) *histogram {
hv := stdprometheus.NewHistogramVec(opts, labelNames)
return &histogram{
name: opts.Name,
hv: hv,
collectors: collectors,
name: opts.Name,
hv: hv,
labelNamesValues: make([]string, 0, 16),
}
}
@ -574,28 +562,21 @@ type histogram struct {
name string
hv *stdprometheus.HistogramVec
labelNamesValues labelNamesValues
collectors chan<- *collector
collector stdprometheus.Observer
}
func (h *histogram) With(labelValues ...string) metrics.Histogram {
lnv := h.labelNamesValues.With(labelValues...)
return &histogram{
name: h.name,
hv: h.hv,
labelNamesValues: h.labelNamesValues.With(labelValues...),
collectors: h.collectors,
labelNamesValues: lnv,
collector: h.hv.With(lnv.ToLabels()),
}
}
func (h *histogram) Observe(value float64) {
labels := h.labelNamesValues.ToLabels()
observer := h.hv.With(labels)
observer.Observe(value)
// Do a type assertion to be sure that prometheus will be able to call the Collect method.
if collector, ok := observer.(stdprometheus.Histogram); ok {
h.collectors <- newCollector(h.name, labels, collector, func() {
h.hv.Delete(labels)
})
}
h.collector.Observe(value)
}
func (h *histogram) Describe(ch chan<- *stdprometheus.Desc) {
@ -618,7 +599,7 @@ func (lvs labelNamesValues) With(labelValues ...string) labelNamesValues {
// ToLabels is a convenience method to convert a labelNamesValues
// to the native prometheus.Labels.
func (lvs labelNamesValues) ToLabels() stdprometheus.Labels {
labels := stdprometheus.Labels{}
labels := make(map[string]string, len(lvs)/2)
for i := 0; i < len(lvs); i += 2 {
labels[lvs[i]] = lvs[i+1]
}

View file

@ -17,8 +17,7 @@ import (
)
func TestRegisterPromState(t *testing.T) {
// Reset state of global promState.
defer promState.reset()
t.Cleanup(promState.reset)
testCases := []struct {
desc string
@ -88,21 +87,10 @@ func TestRegisterPromState(t *testing.T) {
}
}
// reset is a utility method for unit testing. It should be called after each
// test run that changes promState internally in order to avoid dependencies
// between unit tests.
func (ps *prometheusState) reset() {
ps.collectors = make(chan *collector)
ps.describers = []func(ch chan<- *prometheus.Desc){}
ps.dynamicConfig = newDynamicConfig()
ps.state = make(map[string]*collector)
}
func TestPrometheus(t *testing.T) {
promState = newPrometheusState()
promRegistry = prometheus.NewRegistry()
// Reset state of global promState.
defer promState.reset()
t.Cleanup(promState.reset)
prometheusRegistry := RegisterPrometheus(context.Background(), &types.Prometheus{AddEntryPointsLabels: true, AddRoutersLabels: true, AddServicesLabels: true})
defer promRegistry.Unregister(promState)
@ -361,30 +349,41 @@ func TestPrometheus(t *testing.T) {
func TestPrometheusMetricRemoval(t *testing.T) {
promState = newPrometheusState()
promRegistry = prometheus.NewRegistry()
// Reset state of global promState.
defer promState.reset()
t.Cleanup(promState.reset)
prometheusRegistry := RegisterPrometheus(context.Background(), &types.Prometheus{AddEntryPointsLabels: true, AddServicesLabels: true, AddRoutersLabels: true})
defer promRegistry.Unregister(promState)
conf := dynamic.Configuration{
conf1 := dynamic.Configuration{
HTTP: th.BuildConfiguration(
th.WithRouters(
th.WithRouter("foo@providerName",
th.WithServiceName("bar")),
th.WithRouter("foo@providerName", th.WithServiceName("bar")),
th.WithRouter("router2", th.WithServiceName("bar@providerName")),
),
th.WithLoadBalancerServices(th.WithService("bar@providerName",
th.WithServers(th.WithServer("http://localhost:9000"))),
th.WithLoadBalancerServices(
th.WithService("bar@providerName", th.WithServers(
th.WithServer("http://localhost:9000"),
th.WithServer("http://localhost:9999"),
th.WithServer("http://localhost:9998"),
)),
th.WithService("service1", th.WithServers(th.WithServer("http://localhost:9000"))),
),
func(cfg *dynamic.HTTPConfiguration) {
cfg.Services["fii"] = &dynamic.Service{
Weighted: &dynamic.WeightedRoundRobin{},
}
},
),
}
OnConfigurationUpdate(conf, []string{"entrypoint1"})
conf2 := dynamic.Configuration{
HTTP: th.BuildConfiguration(
th.WithRouters(
th.WithRouter("foo@providerName", th.WithServiceName("bar")),
),
th.WithLoadBalancerServices(
th.WithService("bar@providerName", th.WithServers(th.WithServer("http://localhost:9000"))),
),
),
}
OnConfigurationUpdate(conf1, []string{"entrypoint1", "entrypoint2"})
OnConfigurationUpdate(conf2, []string{"entrypoint1"})
// Register some metrics manually that are not part of the active configuration.
// Those metrics should be part of the /metrics output on the first scrape but
@ -393,22 +392,25 @@ func TestPrometheusMetricRemoval(t *testing.T) {
EntryPointReqsCounter().
With("entrypoint", "entrypoint2", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
Add(1)
prometheusRegistry.
RouterReqsCounter().
With("router", "router2", "service", "bar@providerName", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
Add(1)
prometheusRegistry.
ServiceReqsCounter().
With("service", "service2", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
With("service", "service1", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
Add(1)
prometheusRegistry.
ServiceServerUpGauge().
With("service", "service1", "url", "http://localhost:9999").
With("service", "bar@providerName", "url", "http://localhost:9999").
Set(1)
prometheusRegistry.
RouterReqsCounter().
With("router", "router2", "service", "service2", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
Add(1)
ServiceServerUpGauge().
With("service", "bar@providerName", "url", "http://localhost:9998").
Set(1)
assertMetricsExist(t, mustScrape(), entryPointReqsTotalName, serviceReqsTotalName, serviceServerUpName)
assertMetricsAbsent(t, mustScrape(), entryPointReqsTotalName, serviceReqsTotalName, serviceServerUpName)
assertMetricsAbsent(t, mustScrape(), routerReqsTotalName, routerReqDurationName, routerOpenConnsName)
assertMetricsExist(t, mustScrape(), entryPointReqsTotalName, routerReqsTotalName, serviceReqsTotalName, serviceServerUpName)
assertMetricsAbsent(t, mustScrape(), entryPointReqsTotalName, routerReqsTotalName, serviceReqsTotalName, serviceServerUpName)
// To verify that metrics belonging to active configurations are not removed
// here the counter examples.
@ -418,24 +420,80 @@ func TestPrometheusMetricRemoval(t *testing.T) {
Add(1)
prometheusRegistry.
RouterReqsCounter().
With("router", "foo@providerName", "service", "bar@providerName", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
With("router", "foo@providerName", "service", "bar", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
Add(1)
prometheusRegistry.
ServiceReqsCounter().
With("service", "bar@providerName", "code", strconv.Itoa(http.StatusOK), "method", http.MethodGet, "protocol", "http").
Add(1)
prometheusRegistry.
ServiceServerUpGauge().
With("service", "bar@providerName", "url", "http://localhost:9000").
Set(1)
delayForTrackingCompletion()
assertMetricsExist(t, mustScrape(), entryPointReqsTotalName)
assertMetricsExist(t, mustScrape(), entryPointReqsTotalName)
assertMetricsExist(t, mustScrape(), routerReqsTotalName)
assertMetricsExist(t, mustScrape(), routerReqsTotalName)
assertMetricsExist(t, mustScrape(), entryPointReqsTotalName, serviceReqsTotalName, serviceServerUpName, routerReqsTotalName)
assertMetricsExist(t, mustScrape(), entryPointReqsTotalName, serviceReqsTotalName, serviceServerUpName, routerReqsTotalName)
}
func TestPrometheusMetricRemoveEndpointForRecoveredService(t *testing.T) {
promState = newPrometheusState()
promRegistry = prometheus.NewRegistry()
t.Cleanup(promState.reset)
prometheusRegistry := RegisterPrometheus(context.Background(), &types.Prometheus{AddServicesLabels: true})
defer promRegistry.Unregister(promState)
conf1 := dynamic.Configuration{
HTTP: th.BuildConfiguration(
th.WithLoadBalancerServices(
th.WithService("service1", th.WithServers(th.WithServer("http://localhost:9000"))),
),
),
}
conf2 := dynamic.Configuration{
HTTP: th.BuildConfiguration(),
}
conf3 := dynamic.Configuration{
HTTP: th.BuildConfiguration(
th.WithLoadBalancerServices(
th.WithService("service1", th.WithServers(th.WithServer("http://localhost:9001"))),
),
),
}
OnConfigurationUpdate(conf1, []string{})
OnConfigurationUpdate(conf2, []string{})
OnConfigurationUpdate(conf3, []string{})
prometheusRegistry.
ServiceServerUpGauge().
With("service", "service1", "url", "http://localhost:9000").
Add(1)
assertMetricsExist(t, mustScrape(), serviceServerUpName)
assertMetricsAbsent(t, mustScrape(), serviceServerUpName)
}
func TestPrometheusRemovedMetricsReset(t *testing.T) {
// Reset state of global promState.
defer promState.reset()
t.Cleanup(promState.reset)
prometheusRegistry := RegisterPrometheus(context.Background(), &types.Prometheus{AddEntryPointsLabels: true, AddServicesLabels: true})
defer promRegistry.Unregister(promState)
conf1 := dynamic.Configuration{
HTTP: th.BuildConfiguration(
th.WithLoadBalancerServices(th.WithService("service",
th.WithServers(th.WithServer("http://localhost:9000"))),
),
),
}
OnConfigurationUpdate(conf1, []string{"entrypoint1", "entrypoint2"})
OnConfigurationUpdate(dynamic.Configuration{}, nil)
labelNamesValues := []string{
"service", "service",
"code", strconv.Itoa(http.StatusOK),
@ -467,12 +525,24 @@ func TestPrometheusRemovedMetricsReset(t *testing.T) {
assertCounterValue(t, 1, findMetricFamily(serviceReqsTotalName, metricsFamilies), labelNamesValues...)
}
// reset is a utility method for unit testing.
// It should be called after each test run that changes promState internally
// in order to avoid dependencies between unit tests.
func (ps *prometheusState) reset() {
ps.dynamicConfig = newDynamicConfig()
ps.vectors = nil
ps.deletedEP = nil
ps.deletedRouters = nil
ps.deletedServices = nil
ps.deletedURLs = make(map[string][]string)
}
// Tracking and gathering the metrics happens concurrently.
// In practice this is no problem, because in case a tracked metric would miss
// the current scrape, it would just be there in the next one.
// That we can test reliably the tracking of all metrics here, we sleep
// for a short amount of time, to make sure the metric will be present
// in the next scrape.
// In practice this is no problem, because if a tracked metric misses the current scrape,
// it will simply be there in the next one.
// So that we can reliably test the tracking of all metrics here,
// we sleep for a short amount of time
// to make sure the metric will be present in the next scrape.
func delayForTrackingCompletion() {
time.Sleep(250 * time.Millisecond)
}

View file

@ -50,7 +50,7 @@ func RegisterStatsd(ctx context.Context, config *types.Statsd) Registry {
}
statsdClient = statsd.New(config.Prefix+".", kitlog.LoggerFunc(func(keyvals ...interface{}) error {
log.WithoutContext().WithField(log.MetricsProviderName, "statsd").Info(keyvals)
log.WithoutContext().WithField(log.MetricsProviderName, "statsd").Info(keyvals...)
return nil
}))

View file

@ -142,6 +142,9 @@ func (x *XForwarded) rewrite(outreq *http.Request) {
xfProto := unsafeHeader(outreq.Header).Get(xForwardedProto)
if xfProto == "" {
// TODO: is this expected to set the X-Forwarded-Proto header value to
// ws(s) as the underlying request used to upgrade the connection is
// made over HTTP(S)?
if isWebsocketRequest(outreq) {
if outreq.TLS != nil {
unsafeHeader(outreq.Header).Set(xForwardedProto, "wss")

View file

@ -103,8 +103,9 @@ func (m *metricsMiddleware) ServeHTTP(rw http.ResponseWriter, req *http.Request)
labels = append(labels, m.baseLabels...)
labels = append(labels, "method", getMethod(req), "protocol", getRequestProtocol(req))
m.openConnsGauge.With(labels...).Add(1)
defer m.openConnsGauge.With(labels...).Add(-1)
openConnsGauge := m.openConnsGauge.With(labels...)
openConnsGauge.Add(1)
defer openConnsGauge.Add(-1)
// TLS metrics
if req.TLS != nil {
@ -122,8 +123,7 @@ func (m *metricsMiddleware) ServeHTTP(rw http.ResponseWriter, req *http.Request)
labels = append(labels, "code", strconv.Itoa(recorder.getCode()))
histograms := m.reqDurationHistogram.With(labels...)
histograms.ObserveFromStart(start)
m.reqDurationHistogram.With(labels...).ObserveFromStart(start)
m.reqsCounter.With(labels...).Add(1)
}

View file

@ -33,10 +33,10 @@ func NewRedirectScheme(ctx context.Context, next http.Handler, conf dynamic.Redi
port = ":" + conf.Port
}
return newRedirect(next, uriPattern, conf.Scheme+"://${2}"+port+"${4}", conf.Permanent, rawURLScheme, name)
return newRedirect(next, uriPattern, conf.Scheme+"://${2}"+port+"${4}", conf.Permanent, clientRequestURL, name)
}
func rawURLScheme(req *http.Request) string {
func clientRequestURL(req *http.Request) string {
scheme := schemeHTTP
host, port, err := net.SplitHostPort(req.Host)
if err != nil {
@ -64,8 +64,20 @@ func rawURLScheme(req *http.Request) string {
scheme = schemeHTTPS
}
if value := req.Header.Get(xForwardedProto); value != "" {
scheme = value
if xProto := req.Header.Get(xForwardedProto); xProto != "" {
// When the initial request is a connection upgrade request,
// X-Forwarded-Proto header might have been set by a previous hop to ws(s),
// even though the actual protocol used so far is HTTP(s).
// Given that we're in a middleware that is only used in the context of HTTP(s) requests,
// the only possible valid schemes are one of "http" or "https", so we convert back to them.
switch {
case strings.EqualFold(xProto, "ws"):
scheme = schemeHTTP
case strings.EqualFold(xProto, "wss"):
scheme = schemeHTTPS
default:
scheme = xProto
}
}
if scheme == schemeHTTP && port == ":80" || scheme == schemeHTTPS && port == ":443" {

View file

@ -63,6 +63,41 @@ func TestRedirectSchemeHandler(t *testing.T) {
},
expectedStatus: http.StatusOK,
},
{
desc: "HTTP to HTTPS, with X-Forwarded-Proto to unknown value",
config: dynamic.RedirectScheme{
Scheme: "https",
},
url: "http://foo",
headers: map[string]string{
"X-Forwarded-Proto": "bar",
},
expectedURL: "https://bar://foo",
expectedStatus: http.StatusFound,
},
{
desc: "HTTP to HTTPS, with X-Forwarded-Proto to ws",
config: dynamic.RedirectScheme{
Scheme: "https",
},
url: "http://foo",
headers: map[string]string{
"X-Forwarded-Proto": "ws",
},
expectedURL: "https://foo",
expectedStatus: http.StatusFound,
},
{
desc: "HTTP to HTTPS, with X-Forwarded-Proto to wss",
config: dynamic.RedirectScheme{
Scheme: "https",
},
url: "http://foo",
headers: map[string]string{
"X-Forwarded-Proto": "wss",
},
expectedStatus: http.StatusOK,
},
{
desc: "HTTP with port to HTTPS without port",
config: dynamic.RedirectScheme{

View file

@ -46,7 +46,7 @@ type itemData struct {
// ProviderBuilder is responsible for constructing namespaced instances of the Consul Catalog provider.
type ProviderBuilder struct {
Configuration `export:"true"`
Configuration `yaml:",inline" export:"true"`
// Deprecated: use Namespaces option instead.
Namespace string `description:"Sets the namespace used to discover services (Consul Enterprise only)." json:"namespace,omitempty" toml:"namespace,omitempty" yaml:"namespace,omitempty"`
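
The yaml:",inline" tag added here (and on the Consul KV ProviderBuilder further down) matters because, as far as I recall, the gopkg.in/yaml unmarshaler does not promote embedded struct fields the way encoding/json does: without the tag, the embedded Configuration fields would be expected under a nested key rather than at the provider level. Rough illustration with made-up fields:

package sketch

// Configuration stands in for the embedded provider configuration.
type Configuration struct {
	Endpoint string `yaml:"endpoint"`
}

// ProviderBuilder embeds Configuration; with the inline tag its fields are
// read and written at the same level as Namespace, e.g.
//
//	endpoint: "127.0.0.1:8500"
//	namespace: "ns1"
//
// rather than under a nested "configuration:" key.
type ProviderBuilder struct {
	Configuration `yaml:",inline"`
	Namespace     string `yaml:"namespace,omitempty"`
}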

View file

@ -11,8 +11,8 @@ import (
// TraefikService is the CRD implementation of a Traefik Service.
// TraefikService object allows to:
// - Apply weight to Services on load-balancing
// - Mirror traffic on services
// - Apply weight to Services on load-balancing
// - Mirror traffic on services
// More info: https://doc.traefik.io/traefik/v2.8/routing/providers/kubernetes-crd/#kind-traefikservice
type TraefikService struct {
metav1.TypeMeta `json:",inline"`

View file

@ -570,7 +570,7 @@ func filterIngressClassByName(ingressClassName string, ics []*networkingv1.Ingre
return ingressClasses
}
// Ingress in networking.k8s.io/v1 is supported starting 1.19.
// Ingress in networking.k8s.io/v1 is supported starting 1.19.
// thus, we query it in K8s starting 1.19.
func supportsNetworkingV1Ingress(serverVersion *version.Version) bool {
ingressNetworkingVersion := version.Must(version.NewVersion("1.19"))
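
The function cut off above gates the networking.k8s.io/v1 Ingress code path on the cluster version, since that API group is only served from Kubernetes 1.19 onward. With github.com/hashicorp/go-version, which the hunk appears to use, the check presumably reduces to something like:

package sketch

import "github.com/hashicorp/go-version"

// supportsNetworkingV1 reports whether the Kubernetes server is at least 1.19,
// the first release that serves Ingress under networking.k8s.io/v1.
func supportsNetworkingV1(serverVersion *version.Version) bool {
	minVersion := version.Must(version.NewVersion("1.19"))
	return serverVersion.GreaterThanOrEqual(minVersion)
}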

View file

@ -16,7 +16,7 @@ var _ provider.Provider = (*Provider)(nil)
// ProviderBuilder is responsible for constructing namespaced instances of the Consul provider.
type ProviderBuilder struct {
kv.Provider `export:"true"`
kv.Provider `yaml:",inline" export:"true"`
// Deprecated: use Namespaces instead.
Namespace string `description:"Sets the namespace used to discover the configuration (Consul Enterprise only)." json:"namespace,omitempty" toml:"namespace,omitempty" yaml:"namespace,omitempty"`

View file

@ -94,7 +94,6 @@ func (h *httpForwarder) Accept() (net.Conn, error) {
//
// - TCP-TLS HostSNI(`foobar`) and HTTPS PathPrefix(`/`)
// - On v2.6 and v2.7, the TCP-TLS one takes precedence.
//
func Test_Routing(t *testing.T) {
// This listener simulates the backend service.
// It is capable of switching into server first communication mode,

View file

@ -55,13 +55,13 @@ QPZ6VGR7+w1jB5BQXqEZcpHQIPSzeQJBAIy9tZJ/AYNlNbcegxEnsSjy/6VdlLsY
rqPRSAtd/h6oZbs=
-----END PRIVATE KEY-----`)
// openssl req -newkey rsa:2048 \
// -new -nodes -x509 \
// -days 3650 \
// -out cert.pem \
// -keyout key.pem \
// -subj "/CN=example.com"
// -addext "subjectAltName = DNS:example.com"
// openssl req -newkey rsa:2048 \
// -new -nodes -x509 \
// -days 3650 \
// -out cert.pem \
// -keyout key.pem \
// -subj "/CN=example.com"
// -addext "subjectAltName = DNS:example.com"
var mTLSCert = []byte(`-----BEGIN CERTIFICATE-----
MIIDJTCCAg2gAwIBAgIUYKnGcLnmMosOSKqTn4ydAMURE4gwDQYJKoZIhvcNAQEL
BQAwFjEUMBIGA1UEAwwLZXhhbXBsZS5jb20wHhcNMjAwODEzMDkyNzIwWhcNMzAw

View file

@ -48,10 +48,10 @@ func (c Chain) Then(h Handler) (Handler, error) {
//
// Append returns a new chain, leaving the original one untouched.
//
// stdChain := tcp.NewChain(m1, m2)
// extChain := stdChain.Append(m3, m4)
// // requests in stdChain go m1 -> m2
// // requests in extChain go m1 -> m2 -> m3 -> m4
// stdChain := tcp.NewChain(m1, m2)
// extChain := stdChain.Append(m3, m4)
// // requests in stdChain go m1 -> m2
// // requests in extChain go m1 -> m2 -> m3 -> m4
func (c Chain) Append(constructors ...Constructor) Chain {
newCons := make([]Constructor, 0, len(c.constructors)+len(constructors))
newCons = append(newCons, c.constructors...)
@ -65,22 +65,23 @@ func (c Chain) Append(constructors ...Constructor) Chain {
//
// Extend returns a new chain, leaving the original one untouched.
//
// stdChain := tcp.NewChain(m1, m2)
// ext1Chain := tcp.NewChain(m3, m4)
// ext2Chain := stdChain.Extend(ext1Chain)
// // requests in stdChain go m1 -> m2
// // requests in ext1Chain go m3 -> m4
// // requests in ext2Chain go m1 -> m2 -> m3 -> m4
// stdChain := tcp.NewChain(m1, m2)
// ext1Chain := tcp.NewChain(m3, m4)
// ext2Chain := stdChain.Extend(ext1Chain)
// // requests in stdChain go m1 -> m2
// // requests in ext1Chain go m3 -> m4
// // requests in ext2Chain go m1 -> m2 -> m3 -> m4
//
// Another example:
// aHtmlAfterNosurf := tcp.NewChain(m2)
// aHtml := tcp.NewChain(m1, func(h tcp.Handler) tcp.Handler {
// csrf := nosurf.New(h)
// csrf.SetFailureHandler(aHtmlAfterNosurf.ThenFunc(csrfFail))
// return csrf
// }).Extend(aHtmlAfterNosurf)
// // requests to aHtml hitting nosurfs success handler go m1 -> nosurf -> m2 -> target-handler
// // requests to aHtml hitting nosurfs failure handler go m1 -> nosurf -> m2 -> csrfFail
//
// aHtmlAfterNosurf := tcp.NewChain(m2)
// aHtml := tcp.NewChain(m1, func(h tcp.Handler) tcp.Handler {
// csrf := nosurf.New(h)
// csrf.SetFailureHandler(aHtmlAfterNosurf.ThenFunc(csrfFail))
// return csrf
// }).Extend(aHtmlAfterNosurf)
// // requests to aHtml hitting nosurfs success handler go m1 -> nosurf -> m2 -> target-handler
// // requests to aHtml hitting nosurfs failure handler go m1 -> nosurf -> m2 -> csrfFail
func (c Chain) Extend(chain Chain) Chain {
return c.Append(chain.constructors...)
}
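
The Append and Extend hunks above, like the TraefikService and openssl comment hunks earlier, change no behavior; they only re-indent the examples and lists inside doc comments, presumably to match the Go 1.19 gofmt doc comment rules, under which code samples in comments are indented with a single tab so that go doc renders them as code blocks:

// Package sketch exists only to show the reformatted doc comment layout:
// an example inside a comment is indented with one tab.
//
//	stdChain := tcp.NewChain(m1, m2)
//	extChain := stdChain.Append(m3, m4)
package sketch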

View file

@ -73,17 +73,17 @@ func (c *CertificateStore) GetBestCertificate(clientHello *tls.ClientHelloInfo)
if c == nil {
return nil
}
domainToCheck := strings.ToLower(strings.TrimSpace(clientHello.ServerName))
if len(domainToCheck) == 0 {
serverName := strings.ToLower(strings.TrimSpace(clientHello.ServerName))
if len(serverName) == 0 {
// If no ServerName is provided, Check for local IP address matches
host, _, err := net.SplitHostPort(clientHello.Conn.LocalAddr().String())
if err != nil {
log.Debugf("Could not split host/port: %v", err)
log.WithoutContext().Debugf("Could not split host/port: %v", err)
}
domainToCheck = strings.TrimSpace(host)
serverName = strings.TrimSpace(host)
}
if cert, ok := c.CertCache.Get(domainToCheck); ok {
if cert, ok := c.CertCache.Get(serverName); ok {
return cert.(*tls.Certificate)
}
@ -91,7 +91,7 @@ func (c *CertificateStore) GetBestCertificate(clientHello *tls.ClientHelloInfo)
if c.DynamicCerts != nil && c.DynamicCerts.Get() != nil {
for domains, cert := range c.DynamicCerts.Get().(map[string]*tls.Certificate) {
for _, certDomain := range strings.Split(domains, ",") {
if MatchDomain(domainToCheck, certDomain) {
if matchDomain(serverName, certDomain) {
matchedCerts[certDomain] = cert
}
}
@ -107,7 +107,7 @@ func (c *CertificateStore) GetBestCertificate(clientHello *tls.ClientHelloInfo)
sort.Strings(keys)
// cache best match
c.CertCache.SetDefault(domainToCheck, matchedCerts[keys[len(keys)-1]])
c.CertCache.SetDefault(serverName, matchedCerts[keys[len(keys)-1]])
return matchedCerts[keys[len(keys)-1]]
}
@ -121,9 +121,12 @@ func (c CertificateStore) ResetCache() {
}
}
// MatchDomain return true if a domain match the cert domain.
func MatchDomain(domain, certDomain string) bool {
if domain == certDomain {
// matchDomain returns whether the server name matches the cert domain.
// The server name, from TLS SNI, must not have trailing dots (https://datatracker.ietf.org/doc/html/rfc6066#section-3).
// This is enforced by https://github.com/golang/go/blob/d3d7998756c33f69706488cade1cd2b9b10a4c7f/src/crypto/tls/handshake_messages.go#L423-L427.
func matchDomain(serverName, certDomain string) bool {
// TODO: assert equality after removing the trailing dots?
if serverName == certDomain {
return true
}
@ -131,7 +134,7 @@ func MatchDomain(domain, certDomain string) bool {
certDomain = certDomain[:len(certDomain)-1]
}
labels := strings.Split(domain, ".")
labels := strings.Split(serverName, ".")
for i := range labels {
labels[i] = "*"
candidate := strings.Join(labels, ".")
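
The loop cut off above progressively widens the SNI server name into wildcard candidates (foo.bar.example, then *.bar.example, *.*.example, *.*.*) and compares each against the certificate domain; the comparison itself is outside the hunk. A standalone sketch of that walk, assuming a plain string comparison at each step:

package sketch

import "strings"

// matchWildcard reports whether certDomain matches serverName once the
// server name's labels are progressively replaced by "*".
func matchWildcard(serverName, certDomain string) bool {
	labels := strings.Split(serverName, ".")
	for i := range labels {
		labels[i] = "*" // foo.bar.example -> *.bar.example -> *.*.example -> *.*.*
		if strings.Join(labels, ".") == certDomain {
			return true
		}
	}
	return false
}

// matchWildcard("sub.example.com", "*.example.com") is true,
// matchWildcard("sub.example.com", "*.other.com") is false.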

View file

@ -10,15 +10,15 @@ type haystackLogger struct {
// Error prints the error message.
func (l haystackLogger) Error(format string, v ...interface{}) {
l.logger.Errorf(format, v)
l.logger.Errorf(format, v...)
}
// Info prints the info message.
func (l haystackLogger) Info(format string, v ...interface{}) {
l.logger.Infof(format, v)
l.logger.Infof(format, v...)
}
// Debug prints the info message.
func (l haystackLogger) Debug(format string, v ...interface{}) {
l.logger.Debug(format, v)
l.logger.Debugf(format, v...)
}
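
Same family of bug as the statsd hunk earlier, but with printf-style calls the symptom is uglier: the whole slice is consumed by the first verb and the remaining verbs report missing arguments. A quick illustration with fmt (not the tracing logger itself):

package main

import "fmt"

func main() {
	v := []interface{}{"abc123", "timeout"}

	fmt.Printf("span %s dropped: %s\n", v)    // span [abc123 timeout] dropped: %!s(MISSING)
	fmt.Printf("span %s dropped: %s\n", v...) // span abc123 dropped: timeout
}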

View file

@ -26,4 +26,4 @@ CGO_ENABLED=0 GOGC=off go build ${FLAGS[*]} -ldflags "-s -w \
-X github.com/traefik/traefik/v2/pkg/version.Version=$VERSION \
-X github.com/traefik/traefik/v2/pkg/version.Codename=$CODENAME \
-X github.com/traefik/traefik/v2/pkg/version.BuildDate=$DATE" \
-a -installsuffix nocgo -o dist/traefik ./cmd/traefik
-installsuffix nocgo -o dist/traefik ./cmd/traefik

View file

@ -4,11 +4,11 @@ RepositoryName = "traefik"
OutputType = "file"
FileName = "traefik_changelog.md"
# example new bugfix v2.7.3
CurrentRef = "v2.7"
PreviousRef = "v2.7.2"
BaseBranch = "v2.7"
FutureCurrentRefName = "v2.7.3"
# example new bugfix v2.8.1
CurrentRef = "v2.8"
PreviousRef = "v2.8.0"
BaseBranch = "v2.8"
FutureCurrentRefName = "v2.8.1"
ThresholdPreviousRef = 10
ThresholdCurrentRef = 10