Add support for Brotli
Co-authored-by: Mathieu Lonjaret <mathieu.lonjaret@gmail.com>
Co-authored-by: Tom Moulard <tom.moulard@traefik.io>
Co-authored-by: Romain <rtribotte@users.noreply.github.com>
Co-authored-by: Kevin Pollet <pollet.kevin@gmail.com>
This commit is contained in:
parent
1a1cfd1adc
commit
67d9c8da0b
11 changed files with 1201 additions and 39 deletions
@@ -5,23 +5,24 @@ description: "Traefik Proxy's HTTP middleware lets you compress responses before
 
 # Compress
 
-Compress Responses before Sending them to the Client
+Compress Allows Compressing Responses before Sending them to the Client
 {: .subtitle }
 
 ![Compress](../../assets/img/middleware/compress.png)
 
-The Compress middleware uses gzip compression.
+The Compress middleware supports gzip and Brotli compression.
+The activation of compression, and the compression method choice rely (among other things) on the request's `Accept-Encoding` header.
 
 ## Configuration Examples
 
 ```yaml tab="Docker"
-# Enable gzip compression
+# Enable compression
 labels:
   - "traefik.http.middlewares.test-compress.compress=true"
 ```
 
 ```yaml tab="Kubernetes"
-# Enable gzip compression
+# Enable compression
 apiVersion: traefik.containo.us/v1alpha1
 kind: Middleware
 metadata:
@@ -31,7 +32,7 @@ spec:
 ```
 
 ```yaml tab="Consul Catalog"
-# Enable gzip compression
+# Enable compression
 - "traefik.http.middlewares.test-compress.compress=true"
 ```
 
@@ -42,13 +43,13 @@ spec:
 ```
 
 ```yaml tab="Rancher"
-# Enable gzip compression
+# Enable compression
 labels:
   - "traefik.http.middlewares.test-compress.compress=true"
 ```
 
 ```yaml tab="File (YAML)"
-# Enable gzip compression
+# Enable compression
 http:
   middlewares:
     test-compress:
@@ -56,7 +57,7 @@ http:
 ```
 
 ```toml tab="File (TOML)"
-# Enable gzip compression
+# Enable compression
 [http.middlewares]
   [http.middlewares.test-compress.compress]
 ```
@@ -65,23 +66,34 @@ http:
 
 Responses are compressed when the following criteria are all met:
 
-* The response body is larger than the configured minimum amount of bytes (default is `1024`).
-* The `Accept-Encoding` request header contains `gzip`.
+* The `Accept-Encoding` request header contains `gzip`, `*`, and/or `br` with or without [quality values](https://developer.mozilla.org/en-US/docs/Glossary/Quality_values).
+  If the `Accept-Encoding` request header is absent, br compression is assumed to be requested.
+  If it is present, but its value is the empty string, then compression is disabled.
 * The response is not already compressed, i.e. the `Content-Encoding` response header is not already set.
-  If the `Content-Type` header is not defined, or empty, the compress middleware will automatically [detect](https://mimesniff.spec.whatwg.org/) a content type.
-  It will also set the `Content-Type` header according to the detected MIME type.
+* The response `Content-Type` header is not one among the [excludedContentTypes options](#excludedcontenttypes).
+* The response body is larger than the [configured minimum amount of bytes](#minresponsebodybytes) (default is `1024`).
 
 ## Configuration Options
 
 ### `excludedContentTypes`
 
+_Optional, Default=""_
+
 `excludedContentTypes` specifies a list of content types to compare the `Content-Type` header of the incoming requests and responses before compressing.
 
 The responses with content types defined in `excludedContentTypes` are not compressed.
 
 Content types are compared in a case-insensitive, whitespace-ignored manner.
 
+!!! info "In the case of gzip"
+
+    If the `Content-Type` header is not defined, or empty, the compress middleware will automatically [detect](https://mimesniff.spec.whatwg.org/) a content type.
+    It will also set the `Content-Type` header according to the detected MIME type.
+
+!!! info "gRPC"
+
+    Note that `application/grpc` is never compressed.
+
 ```yaml tab="Docker"
 labels:
   - "traefik.http.middlewares.test-compress.compress.excludedcontenttypes=text/event-stream"
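The compression criteria above hinge on parsing the `Accept-Encoding` header with its optional quality values. The sketch below illustrates that negotiation in isolation; the function name `acceptsEncoding` is illustrative and is not part of Traefik's code.

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// acceptsEncoding reports whether the Accept-Encoding header value allows the
// given encoding, honoring the "*" wildcard and q-values (q=0 forbids).
func acceptsEncoding(header, encoding string) bool {
	for _, part := range strings.Split(header, ",") {
		fields := strings.Split(strings.TrimSpace(part), ";")
		name := strings.TrimSpace(fields[0])
		if name != encoding && name != "*" {
			continue
		}
		q := 1.0 // absent q-value means q=1
		for _, f := range fields[1:] {
			f = strings.TrimSpace(f)
			if strings.HasPrefix(f, "q=") {
				if v, err := strconv.ParseFloat(f[2:], 64); err == nil {
					q = v
				}
			}
		}
		return q > 0
	}
	return false
}

func main() {
	fmt.Println(acceptsEncoding("gzip, br;q=0.8", "br"))      // explicit br
	fmt.Println(acceptsEncoding("gzip;q=1.0, *;q=0.5", "br")) // matched via wildcard
	fmt.Println(acceptsEncoding("br;q=0", "br"))              // q=0 disables br
}
```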
@@ -130,9 +142,9 @@ http:
 
 ### `minResponseBodyBytes`
 
-`minResponseBodyBytes` specifies the minimum amount of bytes a response body must have to be compressed.
+_Optional, Default=1024_
 
-The default value is `1024`, which should be a reasonable value for most cases.
+`minResponseBodyBytes` specifies the minimum amount of bytes a response body must have to be compressed.
 
 Responses smaller than the specified value will not be compressed.
 
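The `minResponseBodyBytes` threshold works by buffering the response body until enough bytes have been seen, and only then committing to compression (as the `brotli.go` writer added by this commit does). A minimal sketch of that decision, with an illustrative `minSizeWriter` type that is not Traefik's API:

```go
package main

import "fmt"

// minSizeWriter buffers body bytes until minSize is reached; only then is
// compression committed to, so short responses are passed through untouched.
type minSizeWriter struct {
	minSize    int
	buf        []byte
	compressed bool
}

func (w *minSizeWriter) Write(p []byte) (int, error) {
	if !w.compressed && len(w.buf)+len(p) < w.minSize {
		w.buf = append(w.buf, p...) // still undecided: keep buffering
		return len(p), nil
	}
	w.compressed = true // threshold reached: compression starts
	return len(p), nil
}

func main() {
	w := &minSizeWriter{minSize: 1024}
	w.Write(make([]byte, 512))
	fmt.Println(w.compressed) // false: below the 1024-byte default
	w.Write(make([]byte, 512))
	fmt.Println(w.compressed) // true: threshold reached
}
```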
@@ -749,7 +749,8 @@ spec:
             excludedContentTypes:
               description: ExcludedContentTypes defines the list of content
                 types to compare the Content-Type header of the incoming requests
-                and responses before compressing.
+                and responses before compressing. `application/grpc` is always
+                excluded.
               items:
                 type: string
               type: array
@@ -172,7 +172,8 @@ spec:
             excludedContentTypes:
               description: ExcludedContentTypes defines the list of content
                 types to compare the Content-Type header of the incoming requests
-                and responses before compressing.
+                and responses before compressing. `application/grpc` is always
+                excluded.
               items:
                 type: string
               type: array
1 go.mod
@@ -7,6 +7,7 @@ require (
 	github.com/ExpediaDotCom/haystack-client-go v0.0.0-20190315171017-e7edbdf53a61
 	github.com/Masterminds/sprig/v3 v3.2.2
 	github.com/abbot/go-http-auth v0.0.0-00010101000000-000000000000
+	github.com/andybalholm/brotli v1.0.4
 	github.com/aws/aws-sdk-go v1.44.47
 	github.com/cenkalti/backoff/v4 v4.1.3
 	github.com/compose-spec/compose-go v1.0.3
2 go.sum
@@ -212,6 +212,8 @@ github.com/aliyun/alibaba-cloud-sdk-go v1.61.1755/go.mod h1:RcDobYh8k5VP6TNybz9m
 github.com/andres-erbsen/clock v0.0.0-20160526145045-9e14626cd129 h1:MzBOUgng9orim59UnfUTLRjMpd09C5uEVQ6RPGeCaVI=
 github.com/andres-erbsen/clock v0.0.0-20160526145045-9e14626cd129/go.mod h1:rFgpPQZYZ8vdbc+48xibu8ALc3yeyd64IhHS+PU6Yyg=
 github.com/andybalholm/brotli v1.0.2/go.mod h1:loMXtMfwqflxFJPmdbJO0a3KNoPuLBgiu3qAvBg8x/Y=
+github.com/andybalholm/brotli v1.0.4 h1:V7DdXeJtZscaqfNuAdSRuRFzuiKlHSC/Zh3zl9qY3JY=
+github.com/andybalholm/brotli v1.0.4/go.mod h1:fO7iG3H7G2nSZ7m0zPUDn85XEX2GTukHGRSepvi9Eig=
 github.com/anmitsu/go-shlex v0.0.0-20161002113705-648efa622239/go.mod h1:2FmKhYUyUczH0OGQWaF5ceTx0UBShxjsH6f8oGKYe2c=
 github.com/antihax/optional v1.0.0/go.mod h1:uupD/76wgC+ih3iEmQUL+0Ugr19nfwCT1kdvxnR2qWY=
 github.com/apache/thrift v0.12.0/go.mod h1:cp2SuWMxlEZw2r+iP2GNCdIi4C1qmUzdZFSVb+bacwQ=
@@ -749,7 +749,8 @@ spec:
             excludedContentTypes:
               description: ExcludedContentTypes defines the list of content
                 types to compare the Content-Type header of the incoming requests
-                and responses before compressing.
+                and responses before compressing. `application/grpc` is always
+                excluded.
               items:
                 type: string
               type: array
@@ -161,6 +161,7 @@ func (c *CircuitBreaker) SetDefaults() {
 // More info: https://doc.traefik.io/traefik/v2.9/middlewares/http/compress/
 type Compress struct {
 	// ExcludedContentTypes defines the list of content types to compare the Content-Type header of the incoming requests and responses before compressing.
+	// `application/grpc` is always excluded.
 	ExcludedContentTypes []string `json:"excludedContentTypes,omitempty" toml:"excludedContentTypes,omitempty" yaml:"excludedContentTypes,omitempty" export:"true"`
 	// MinResponseBodyBytes defines the minimum amount of bytes a response body must have to be compressed.
 	// Default: 1024.
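The docs describe the `excludedContentTypes` comparison as case-insensitive and whitespace-ignoring; the new brotli writer achieves this by normalizing both sides through `mime.ParseMediaType` before comparing. A standalone sketch of that normalization (the `isExcluded` helper is illustrative, not the middleware's actual code, and it ignores media-type parameters for brevity):

```go
package main

import (
	"fmt"
	"mime"
)

// isExcluded reports whether the response Content-Type matches one of the
// excluded content types, after normalizing case and whitespace via
// mime.ParseMediaType.
func isExcluded(responseCT string, excluded []string) bool {
	mediaType, _, err := mime.ParseMediaType(responseCT)
	if err != nil {
		return false
	}
	for _, e := range excluded {
		if ex, _, err := mime.ParseMediaType(e); err == nil && ex == mediaType {
			return true
		}
	}
	return false
}

func main() {
	excluded := []string{"text/event-stream"}
	fmt.Println(isExcluded("Text/Event-Stream ; charset=utf-8", excluded)) // true
	fmt.Println(isExcluded("application/json", excluded))                  // false
}
```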
338 pkg/middlewares/compress/brotli/brotli.go Normal file
@@ -0,0 +1,338 @@
package brotli

import (
	"bufio"
	"fmt"
	"io"
	"mime"
	"net"
	"net/http"

	"github.com/andybalholm/brotli"
)

const (
	vary            = "Vary"
	acceptEncoding  = "Accept-Encoding"
	contentEncoding = "Content-Encoding"
	contentLength   = "Content-Length"
	contentType     = "Content-Type"
)

// Config is the Brotli handler configuration.
type Config struct {
	// ExcludedContentTypes is the list of content types for which we should not compress.
	ExcludedContentTypes []string
	// MinSize is the minimum size (in bytes) required to enable compression.
	MinSize int
}

// NewWrapper returns a new Brotli compressing wrapper.
func NewWrapper(cfg Config) (func(http.Handler) http.HandlerFunc, error) {
	if cfg.MinSize < 0 {
		return nil, fmt.Errorf("minimum size must be greater than or equal to zero")
	}

	var contentTypes []parsedContentType
	for _, v := range cfg.ExcludedContentTypes {
		mediaType, params, err := mime.ParseMediaType(v)
		if err != nil {
			return nil, fmt.Errorf("parsing media type: %w", err)
		}

		contentTypes = append(contentTypes, parsedContentType{mediaType, params})
	}

	return func(h http.Handler) http.HandlerFunc {
		return func(rw http.ResponseWriter, r *http.Request) {
			rw.Header().Add(vary, acceptEncoding)

			brw := &responseWriter{
				rw:                   rw,
				bw:                   brotli.NewWriter(rw),
				minSize:              cfg.MinSize,
				statusCode:           http.StatusOK,
				excludedContentTypes: contentTypes,
			}
			defer brw.close()

			h.ServeHTTP(brw, r)
		}
	}, nil
}

// TODO: check whether we want to implement content-type sniffing (as gzip does)
// TODO: check whether we should support Accept-Ranges (as gzip does, see https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Ranges)
type responseWriter struct {
	rw http.ResponseWriter
	bw *brotli.Writer

	minSize              int
	excludedContentTypes []parsedContentType

	buf                 []byte
	hijacked            bool
	compressionStarted  bool
	compressionDisabled bool
	headersSent         bool

	// Mostly needed to avoid calling bw.Flush/bw.Close when no data was
	// written in bw.
	seenData bool

	statusCodeSet bool
	statusCode    int
}

func (r *responseWriter) Header() http.Header {
	return r.rw.Header()
}

func (r *responseWriter) WriteHeader(statusCode int) {
	if r.statusCodeSet {
		return
	}

	r.statusCode = statusCode
	r.statusCodeSet = true
}

func (r *responseWriter) Write(p []byte) (int, error) {
	// i.e. has Write ever been called at least once with non-nil data.
	if !r.seenData && len(p) > 0 {
		r.seenData = true
	}

	// We do not compress, either for contentEncoding or contentType reasons.
	if r.compressionDisabled {
		return r.rw.Write(p)
	}

	// We have already buffered more than minSize,
	// we are now in compression cruise mode until the end of times.
	if r.compressionStarted {
		// If compressionStarted we assume we have sent headers already.
		return r.bw.Write(p)
	}

	// If we detect a contentEncoding, we know we are never going to compress.
	if r.rw.Header().Get(contentEncoding) != "" {
		r.compressionDisabled = true
		return r.rw.Write(p)
	}

	// Disable compression according to user wishes in excludedContentTypes.
	if ct := r.rw.Header().Get(contentType); ct != "" {
		mediaType, params, err := mime.ParseMediaType(ct)
		if err != nil {
			return 0, fmt.Errorf("parsing media type: %w", err)
		}

		for _, excludedContentType := range r.excludedContentTypes {
			if excludedContentType.equals(mediaType, params) {
				r.compressionDisabled = true
				return r.rw.Write(p)
			}
		}
	}

	// We buffer until we know whether to compress (i.e. when we reach minSize received).
	if len(r.buf)+len(p) < r.minSize {
		r.buf = append(r.buf, p...)
		return len(p), nil
	}

	// If we ever make it here, we have received at least minSize, which means we want to compress,
	// and we are going to send headers right away.
	r.compressionStarted = true

	// Since we know we are going to compress we will never be able to know the actual length.
	r.rw.Header().Del(contentLength)

	r.rw.Header().Set(contentEncoding, "br")
	r.rw.WriteHeader(r.statusCode)
	r.headersSent = true

	// Start with sending what we have previously buffered, before actually writing
	// the bytes in argument.
	n, err := r.bw.Write(r.buf)
	if err != nil {
		r.buf = r.buf[n:]
		// Return zero because we haven't taken care of the bytes in argument yet.
		return 0, err
	}

	// If we wrote less than what we wanted, we need to reclaim the leftovers + the bytes in argument,
	// and keep them for a subsequent Write.
	if n < len(r.buf) {
		r.buf = r.buf[n:]
		r.buf = append(r.buf, p...)
		return len(p), nil
	}

	// Otherwise just reset the buffer.
	r.buf = r.buf[:0]

	// Now that we emptied the buffer, we can actually write the given bytes.
	return r.bw.Write(p)
}

// Flush flushes data to the appropriate underlying writer(s), although it does
// not guarantee that all buffered data will be sent.
// If not enough bytes have been written to determine whether to enable compression,
// no flushing will take place.
func (r *responseWriter) Flush() {
	if !r.seenData {
		// We should not flush if there never was any data, because flushing the bw
		// (just like closing) would send some extra end of compression stream bytes.
		return
	}

	// It was already established by Write that compression is disabled, we only
	// have to flush the uncompressed writer.
	if r.compressionDisabled {
		if rw, ok := r.rw.(http.Flusher); ok {
			rw.Flush()
		}

		return
	}

	// Here, nothing was ever written either to rw or to bw (since we're still
	// waiting to decide whether to compress), so we do not need to flush anything.
	// Note that we diverge with klauspost's gzip behavior, where they instead
	// force compression and flush whatever was in the buffer in this case.
	if !r.compressionStarted {
		return
	}

	// Conversely, we here know that something was already written to bw (or is
	// going to be written right after anyway), so bw will have to be flushed.
	// Also, since we know that bw writes to rw, but (apparently) never flushes it,
	// we have to do it ourselves.
	defer func() {
		// because we also ignore the error returned by Write anyway
		_ = r.bw.Flush()

		if rw, ok := r.rw.(http.Flusher); ok {
			rw.Flush()
		}
	}()

	// We empty whatever is left of the buffer that Write never took care of.
	n, err := r.bw.Write(r.buf)
	if err != nil {
		return
	}

	// And just like in Write we also handle "short writes".
	if n < len(r.buf) {
		r.buf = r.buf[n:]
		return
	}

	r.buf = r.buf[:0]
}

func (r *responseWriter) Hijack() (net.Conn, *bufio.ReadWriter, error) {
	if hijacker, ok := r.rw.(http.Hijacker); ok {
		// We only make use of r.hijacked in close (and not in Write/WriteHeader)
		// because we want to let the stdlib catch the error on writes, as
		// they already do a good job of logging it.
		r.hijacked = true
		return hijacker.Hijack()
	}

	return nil, nil, fmt.Errorf("%T is not a http.Hijacker", r.rw)
}

// close closes the underlying writers if/when appropriate.
// Note that the compressed writer should not be closed if we never used it,
// as it would otherwise send some extra "end of compression" bytes.
// Close also makes sure to flush whatever was left to write from the buffer.
func (r *responseWriter) close() error {
	if r.hijacked {
		return nil
	}

	// We have to take care of statusCode ourselves (in case there was never any
	// call to Write or WriteHeader before us) as it's the only header we buffer.
	if !r.headersSent {
		r.rw.WriteHeader(r.statusCode)
		r.headersSent = true
	}

	// Nothing was ever written anywhere, nothing to flush.
	if !r.seenData {
		return nil
	}

	// If compression was disabled, there never was anything in the buffer to flush,
	// and nothing was ever written to bw.
	if r.compressionDisabled {
		return nil
	}

	if len(r.buf) == 0 {
		// If we got here we know compression has started, so we can safely flush on bw.
		return r.bw.Close()
	}

	// There is still data in the buffer, because we never reached minSize (to
	// determine whether to compress). We therefore flush it uncompressed.
	if !r.compressionStarted {
		n, err := r.rw.Write(r.buf)
		if err != nil {
			return err
		}
		if n < len(r.buf) {
			return io.ErrShortWrite
		}
		return nil
	}

	// There is still data in the buffer, simply because Write did not take care of it all.
	// We flush it to the compressed writer.
	n, err := r.bw.Write(r.buf)
	if err != nil {
		r.bw.Close()
		return err
	}
	if n < len(r.buf) {
		r.bw.Close()
		return io.ErrShortWrite
	}
	return r.bw.Close()
}

// parsedContentType is the parsed representation of one of the inputs to ContentTypes.
// From https://github.com/klauspost/compress/blob/master/gzhttp/compress.go#L401.
type parsedContentType struct {
	mediaType string
	params    map[string]string
}

// equals returns whether this content type matches another content type.
func (p parsedContentType) equals(mediaType string, params map[string]string) bool {
	if p.mediaType != mediaType {
		return false
	}

	// if p has no params, don't care about other's params
	if len(p.params) == 0 {
		return true
	}

	// if p has any params, they must be identical to other's.
	if len(p.params) != len(params) {
		return false
	}

	for k, v := range p.params {
		if w, ok := params[k]; !ok || v != w {
			return false
		}
	}

	return true
}
618 pkg/middlewares/compress/brotli/brotli_test.go Normal file
@@ -0,0 +1,618 @@
package brotli

import (
	"bytes"
	"io"
	"net/http"
	"net/http/httptest"
	"strings"
	"testing"

	"github.com/andybalholm/brotli"
	"github.com/stretchr/testify/assert"
	"github.com/stretchr/testify/require"
)

var (
	smallTestBody = []byte("aaabbc" + strings.Repeat("aaabbbccc", 9) + "aaabbbc")
	bigTestBody   = []byte(strings.Repeat(strings.Repeat("aaabbbccc", 66)+" ", 6) + strings.Repeat("aaabbbccc", 66))
)

func Test_Vary(t *testing.T) {
	h := newTestHandler(t, smallTestBody)

	req, _ := http.NewRequest(http.MethodGet, "/whatever", nil)
	req.Header.Set(acceptEncoding, "br")

	rw := httptest.NewRecorder()
	h.ServeHTTP(rw, req)

	assert.Equal(t, http.StatusOK, rw.Code)
	assert.Equal(t, acceptEncoding, rw.Header().Get(vary))
}

func Test_SmallBodyNoCompression(t *testing.T) {
	h := newTestHandler(t, smallTestBody)

	req, _ := http.NewRequest(http.MethodGet, "/whatever", nil)
	req.Header.Set(acceptEncoding, "br")

	rw := httptest.NewRecorder()
	h.ServeHTTP(rw, req)

	// With less than 1024 bytes the response should not be compressed.
	assert.Equal(t, http.StatusOK, rw.Code)
	assert.Empty(t, rw.Header().Get(contentEncoding))
	assert.Equal(t, smallTestBody, rw.Body.Bytes())
}

func Test_AlreadyCompressed(t *testing.T) {
	h := newTestHandler(t, bigTestBody)

	req, _ := http.NewRequest(http.MethodGet, "/compressed", nil)
	req.Header.Set(acceptEncoding, "br")

	rw := httptest.NewRecorder()
	h.ServeHTTP(rw, req)

	assert.Equal(t, bigTestBody, rw.Body.Bytes())
}

func Test_NoBody(t *testing.T) {
	testCases := []struct {
		desc       string
		statusCode int
		body       []byte
	}{
		{
			desc:       "status no content",
			statusCode: http.StatusNoContent,
			body:       nil,
		},
		{
			desc:       "status not modified",
			statusCode: http.StatusNotModified,
			body:       nil,
		},
		{
			desc:       "status OK with empty body",
			statusCode: http.StatusOK,
			body:       []byte{},
		},
		{
			desc:       "status OK with nil body",
			statusCode: http.StatusOK,
			body:       nil,
		},
	}

	for _, test := range testCases {
		test := test
		t.Run(test.desc, func(t *testing.T) {
			t.Parallel()

			h := mustNewWrapper(t, Config{MinSize: 1024})(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
				rw.WriteHeader(test.statusCode)

				_, err := rw.Write(test.body)
				require.NoError(t, err)
			}))

			req := httptest.NewRequest(http.MethodGet, "/", nil)
			req.Header.Set(acceptEncoding, "br")

			rw := httptest.NewRecorder()
			h.ServeHTTP(rw, req)

			body, err := io.ReadAll(rw.Body)
			require.NoError(t, err)

			assert.Empty(t, rw.Header().Get(contentEncoding))
			assert.Empty(t, body)
		})
	}
}

func Test_MinSize(t *testing.T) {
	cfg := Config{
		MinSize: 128,
	}

	var bodySize int
	h := mustNewWrapper(t, cfg)(http.HandlerFunc(
		func(rw http.ResponseWriter, req *http.Request) {
			for i := 0; i < bodySize; i++ {
				// We make sure to Write at least once less than minSize so that both
				// cases below go through the same algo: i.e. they start buffering
				// because they haven't reached minSize.
				_, err := rw.Write([]byte{'x'})
				require.NoError(t, err)
			}
		},
	))

	req, _ := http.NewRequest(http.MethodGet, "/whatever", &bytes.Buffer{})
	req.Header.Add(acceptEncoding, "br")

	// Short response is not compressed
	bodySize = cfg.MinSize - 1
	rw := httptest.NewRecorder()
	h.ServeHTTP(rw, req)

	assert.Empty(t, rw.Result().Header.Get(contentEncoding))

	// Long response is compressed
	bodySize = cfg.MinSize
	rw = httptest.NewRecorder()
	h.ServeHTTP(rw, req)

	assert.Equal(t, "br", rw.Result().Header.Get(contentEncoding))
}

func Test_MultipleWriteHeader(t *testing.T) {
	h := mustNewWrapper(t, Config{MinSize: 1024})(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
		// We ensure that the subsequent call to WriteHeader is a noop.
		rw.WriteHeader(http.StatusInternalServerError)
		rw.WriteHeader(http.StatusNotFound)
	}))

	req := httptest.NewRequest(http.MethodGet, "/", nil)
	req.Header.Set(acceptEncoding, "br")

	rw := httptest.NewRecorder()
	h.ServeHTTP(rw, req)

	assert.Equal(t, http.StatusInternalServerError, rw.Code)
}

func Test_FlushBeforeWrite(t *testing.T) {
	srv := httptest.NewServer(mustNewWrapper(t, Config{MinSize: 1024})(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
		rw.WriteHeader(http.StatusOK)
		rw.(http.Flusher).Flush()

		_, err := rw.Write(bigTestBody)
		require.NoError(t, err)
	})))
	defer srv.Close()

	req, err := http.NewRequest(http.MethodGet, srv.URL, http.NoBody)
	require.NoError(t, err)

	req.Header.Set(acceptEncoding, "br")

	res, err := http.DefaultClient.Do(req)
	require.NoError(t, err)

	defer res.Body.Close()

	assert.Equal(t, http.StatusOK, res.StatusCode)
	assert.Equal(t, "br", res.Header.Get(contentEncoding))

	got, err := io.ReadAll(brotli.NewReader(res.Body))
	require.NoError(t, err)
	assert.Equal(t, bigTestBody, got)
}

func Test_FlushAfterWrite(t *testing.T) {
	srv := httptest.NewServer(mustNewWrapper(t, Config{MinSize: 1024})(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
		rw.WriteHeader(http.StatusOK)

		_, err := rw.Write(bigTestBody[0:1])
		require.NoError(t, err)

		rw.(http.Flusher).Flush()
		for _, b := range bigTestBody[1:] {
			_, err := rw.Write([]byte{b})
			require.NoError(t, err)
		}
	})))
	defer srv.Close()

	req, err := http.NewRequest(http.MethodGet, srv.URL, http.NoBody)
	require.NoError(t, err)

	req.Header.Set(acceptEncoding, "br")

	res, err := http.DefaultClient.Do(req)
	require.NoError(t, err)

	defer res.Body.Close()

	assert.Equal(t, http.StatusOK, res.StatusCode)
	assert.Equal(t, "br", res.Header.Get(contentEncoding))

	got, err := io.ReadAll(brotli.NewReader(res.Body))
	require.NoError(t, err)
	assert.Equal(t, bigTestBody, got)
}

func Test_FlushAfterWriteNil(t *testing.T) {
	srv := httptest.NewServer(mustNewWrapper(t, Config{MinSize: 1024})(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
		rw.WriteHeader(http.StatusOK)

		_, err := rw.Write(nil)
		require.NoError(t, err)

		rw.(http.Flusher).Flush()
	})))
	defer srv.Close()

	req, err := http.NewRequest(http.MethodGet, srv.URL, http.NoBody)
	require.NoError(t, err)

	req.Header.Set(acceptEncoding, "br")

	res, err := http.DefaultClient.Do(req)
	require.NoError(t, err)

	defer res.Body.Close()

	assert.Equal(t, http.StatusOK, res.StatusCode)
	assert.Empty(t, res.Header.Get(contentEncoding))

	got, err := io.ReadAll(brotli.NewReader(res.Body))
	require.NoError(t, err)
	assert.Empty(t, got)
}
||||||
|
}
|
||||||
|
|
||||||
|
func Test_FlushAfterAllWrites(t *testing.T) {
|
||||||
|
srv := httptest.NewServer(mustNewWrapper(t, Config{MinSize: 1024})(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
|
||||||
|
for i := range bigTestBody {
|
||||||
|
_, err := rw.Write(bigTestBody[i : i+1])
|
||||||
|
require.NoError(t, err)
|
||||||
|
}
|
||||||
|
rw.(http.Flusher).Flush()
|
||||||
|
})))
|
||||||
|
defer srv.Close()
|
||||||
|
|
||||||
|
req, err := http.NewRequest(http.MethodGet, srv.URL, http.NoBody)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
req.Header.Set(acceptEncoding, "br")
|
||||||
|
|
||||||
|
res, err := http.DefaultClient.Do(req)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
defer res.Body.Close()
|
||||||
|
|
||||||
|
assert.Equal(t, http.StatusOK, res.StatusCode)
|
||||||
|
assert.Equal(t, "br", res.Header.Get(contentEncoding))
|
||||||
|
|
||||||
|
got, err := io.ReadAll(brotli.NewReader(res.Body))
|
||||||
|
require.NoError(t, err)
|
||||||
|
assert.Equal(t, bigTestBody, got)
|
||||||
|
}
|
||||||
|
|
||||||
|
func Test_ExcludedContentTypes(t *testing.T) {
|
||||||
|
testCases := []struct {
|
||||||
|
desc string
|
||||||
|
contentType string
|
||||||
|
excludedContentTypes []string
|
||||||
|
expCompression bool
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
desc: "Always compress when content types are empty",
|
||||||
|
contentType: "",
|
||||||
|
excludedContentTypes: []string{},
|
||||||
|
expCompression: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match",
|
||||||
|
contentType: "application/json",
|
||||||
|
excludedContentTypes: []string{"application/json"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME no match",
|
||||||
|
contentType: "text/xml",
|
||||||
|
excludedContentTypes: []string{"application/json"},
|
||||||
|
expCompression: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match with no other directive ignores non-MIME directives",
|
||||||
|
contentType: "application/json; charset=utf-8",
|
||||||
|
excludedContentTypes: []string{"application/json"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match with other directives requires all directives be equal, different charset",
|
||||||
|
contentType: "application/json; charset=ascii",
|
||||||
|
excludedContentTypes: []string{"application/json; charset=utf-8"},
|
||||||
|
expCompression: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match with other directives requires all directives be equal, same charset",
|
||||||
|
contentType: "application/json; charset=utf-8",
|
||||||
|
excludedContentTypes: []string{"application/json; charset=utf-8"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match with other directives requires all directives be equal, missing charset",
|
||||||
|
contentType: "application/json",
|
||||||
|
excludedContentTypes: []string{"application/json; charset=ascii"},
|
||||||
|
expCompression: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match case insensitive",
|
||||||
|
contentType: "Application/Json",
|
||||||
|
excludedContentTypes: []string{"application/json"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match ignore whitespace",
|
||||||
|
contentType: "application/json;charset=utf-8",
|
||||||
|
excludedContentTypes: []string{"application/json; charset=utf-8"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, test := range testCases {
|
||||||
|
test := test
|
||||||
|
t.Run(test.desc, func(t *testing.T) {
|
||||||
|
t.Parallel()
|
||||||
|
|
||||||
|
cfg := Config{
|
||||||
|
MinSize: 1024,
|
||||||
|
ExcludedContentTypes: test.excludedContentTypes,
|
||||||
|
}
|
||||||
|
h := mustNewWrapper(t, cfg)(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
|
||||||
|
rw.Header().Set(contentType, test.contentType)
|
||||||
|
|
||||||
|
rw.WriteHeader(http.StatusOK)
|
||||||
|
|
||||||
|
_, err := rw.Write(bigTestBody)
|
||||||
|
require.NoError(t, err)
|
||||||
|
}))
|
||||||
|
|
||||||
|
req, _ := http.NewRequest(http.MethodGet, "/whatever", nil)
|
||||||
|
req.Header.Set(acceptEncoding, "br")
|
||||||
|
|
||||||
|
rw := httptest.NewRecorder()
|
||||||
|
h.ServeHTTP(rw, req)
|
||||||
|
|
||||||
|
assert.Equal(t, http.StatusOK, rw.Code)
|
||||||
|
|
||||||
|
if test.expCompression {
|
||||||
|
assert.Equal(t, "br", rw.Header().Get(contentEncoding))
|
||||||
|
|
||||||
|
got, err := io.ReadAll(brotli.NewReader(rw.Body))
|
||||||
|
assert.Nil(t, err)
|
||||||
|
assert.Equal(t, bigTestBody, got)
|
||||||
|
} else {
|
||||||
|
assert.NotEqual(t, "br", rw.Header().Get("Content-Encoding"))
|
||||||
|
|
||||||
|
got, err := io.ReadAll(rw.Body)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
assert.Equal(t, bigTestBody, got)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func Test_FlushExcludedContentTypes(t *testing.T) {
|
||||||
|
testCases := []struct {
|
||||||
|
desc string
|
||||||
|
contentType string
|
||||||
|
excludedContentTypes []string
|
||||||
|
expCompression bool
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
desc: "Always compress when content types are empty",
|
||||||
|
contentType: "",
|
||||||
|
excludedContentTypes: []string{},
|
||||||
|
expCompression: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match",
|
||||||
|
contentType: "application/json",
|
||||||
|
excludedContentTypes: []string{"application/json"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME no match",
|
||||||
|
contentType: "text/xml",
|
||||||
|
excludedContentTypes: []string{"application/json"},
|
||||||
|
expCompression: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match with no other directive ignores non-MIME directives",
|
||||||
|
contentType: "application/json; charset=utf-8",
|
||||||
|
excludedContentTypes: []string{"application/json"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match with other directives requires all directives be equal, different charset",
|
||||||
|
contentType: "application/json; charset=ascii",
|
||||||
|
excludedContentTypes: []string{"application/json; charset=utf-8"},
|
||||||
|
expCompression: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match with other directives requires all directives be equal, same charset",
|
||||||
|
contentType: "application/json; charset=utf-8",
|
||||||
|
excludedContentTypes: []string{"application/json; charset=utf-8"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match with other directives requires all directives be equal, missing charset",
|
||||||
|
contentType: "application/json",
|
||||||
|
excludedContentTypes: []string{"application/json; charset=ascii"},
|
||||||
|
expCompression: true,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match case insensitive",
|
||||||
|
contentType: "Application/Json",
|
||||||
|
excludedContentTypes: []string{"application/json"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "MIME match ignore whitespace",
|
||||||
|
contentType: "application/json;charset=utf-8",
|
||||||
|
excludedContentTypes: []string{"application/json; charset=utf-8"},
|
||||||
|
expCompression: false,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, test := range testCases {
|
||||||
|
test := test
|
||||||
|
t.Run(test.desc, func(t *testing.T) {
|
||||||
|
t.Parallel()
|
||||||
|
|
||||||
|
cfg := Config{
|
||||||
|
MinSize: 1024,
|
||||||
|
ExcludedContentTypes: test.excludedContentTypes,
|
||||||
|
}
|
||||||
|
h := mustNewWrapper(t, cfg)(http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
|
||||||
|
rw.Header().Set(contentType, test.contentType)
|
||||||
|
rw.WriteHeader(http.StatusOK)
|
||||||
|
|
||||||
|
tb := bigTestBody
|
||||||
|
for len(tb) > 0 {
|
||||||
|
// Write 100 bytes per run
|
||||||
|
// Detection should not be affected (we send 100 bytes)
|
||||||
|
toWrite := 100
|
||||||
|
if toWrite > len(tb) {
|
||||||
|
toWrite = len(tb)
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err := rw.Write(tb[:toWrite])
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
// Flush between each write
|
||||||
|
rw.(http.Flusher).Flush()
|
||||||
|
tb = tb[toWrite:]
|
||||||
|
}
|
||||||
|
}))
|
||||||
|
|
||||||
|
req, _ := http.NewRequest(http.MethodGet, "/whatever", nil)
|
||||||
|
req.Header.Set(acceptEncoding, "br")
|
||||||
|
|
||||||
|
// This doesn't allow checking flushes, but we validate if content is correct.
|
||||||
|
rw := httptest.NewRecorder()
|
||||||
|
h.ServeHTTP(rw, req)
|
||||||
|
|
||||||
|
assert.Equal(t, http.StatusOK, rw.Code)
|
||||||
|
|
||||||
|
if test.expCompression {
|
||||||
|
assert.Equal(t, "br", rw.Header().Get(contentEncoding))
|
||||||
|
|
||||||
|
got, err := io.ReadAll(brotli.NewReader(rw.Body))
|
||||||
|
assert.Nil(t, err)
|
||||||
|
assert.Equal(t, bigTestBody, got)
|
||||||
|
} else {
|
||||||
|
assert.NotEqual(t, "br", rw.Header().Get(contentEncoding))
|
||||||
|
|
||||||
|
got, err := io.ReadAll(rw.Body)
|
||||||
|
assert.Nil(t, err)
|
||||||
|
assert.Equal(t, bigTestBody, got)
|
||||||
|
}
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
|
||||||
|
|
||||||
|
func mustNewWrapper(t *testing.T, cfg Config) func(http.Handler) http.HandlerFunc {
|
||||||
|
t.Helper()
|
||||||
|
|
||||||
|
w, err := NewWrapper(cfg)
|
||||||
|
require.NoError(t, err)
|
||||||
|
|
||||||
|
return w
|
||||||
|
}
|
||||||
|
|
||||||
|
func newTestHandler(t *testing.T, body []byte) http.Handler {
|
||||||
|
t.Helper()
|
||||||
|
|
||||||
|
return mustNewWrapper(t, Config{MinSize: 1024})(
|
||||||
|
http.HandlerFunc(func(rw http.ResponseWriter, req *http.Request) {
|
||||||
|
if req.URL.Path == "/compressed" {
|
||||||
|
rw.Header().Set("Content-Encoding", "br")
|
||||||
|
}
|
||||||
|
|
||||||
|
_, err := rw.Write(body)
|
||||||
|
require.NoError(t, err)
|
||||||
|
}),
|
||||||
|
)
|
||||||
|
}
|
||||||
|
|
||||||
|
func TestParseContentType_equals(t *testing.T) {
|
||||||
|
testCases := []struct {
|
||||||
|
desc string
|
||||||
|
pct parsedContentType
|
||||||
|
mediaType string
|
||||||
|
params map[string]string
|
||||||
|
expect assert.BoolAssertionFunc
|
||||||
|
}{
|
||||||
|
{
|
||||||
|
desc: "empty parsed content type",
|
||||||
|
expect: assert.True,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "simple content type",
|
||||||
|
pct: parsedContentType{
|
||||||
|
mediaType: "plain/text",
|
||||||
|
},
|
||||||
|
mediaType: "plain/text",
|
||||||
|
expect: assert.True,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "content type with params",
|
||||||
|
pct: parsedContentType{
|
||||||
|
mediaType: "plain/text",
|
||||||
|
params: map[string]string{
|
||||||
|
"charset": "utf8",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
mediaType: "plain/text",
|
||||||
|
params: map[string]string{
|
||||||
|
"charset": "utf8",
|
||||||
|
},
|
||||||
|
expect: assert.True,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "different content type",
|
||||||
|
pct: parsedContentType{
|
||||||
|
mediaType: "plain/text",
|
||||||
|
},
|
||||||
|
mediaType: "application/json",
|
||||||
|
expect: assert.False,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "content type with params",
|
||||||
|
pct: parsedContentType{
|
||||||
|
mediaType: "plain/text",
|
||||||
|
params: map[string]string{
|
||||||
|
"charset": "utf8",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
mediaType: "plain/text",
|
||||||
|
params: map[string]string{
|
||||||
|
"charset": "latin-1",
|
||||||
|
},
|
||||||
|
expect: assert.False,
|
||||||
|
},
|
||||||
|
{
|
||||||
|
desc: "different number of parameters",
|
||||||
|
pct: parsedContentType{
|
||||||
|
mediaType: "plain/text",
|
||||||
|
params: map[string]string{
|
||||||
|
"charset": "utf8",
|
||||||
|
},
|
||||||
|
},
|
||||||
|
mediaType: "plain/text",
|
||||||
|
params: map[string]string{
|
||||||
|
"charset": "utf8",
|
||||||
|
"q": "0.8",
|
||||||
|
},
|
||||||
|
expect: assert.False,
|
||||||
|
},
|
||||||
|
}
|
||||||
|
|
||||||
|
for _, test := range testCases {
|
||||||
|
test := test
|
||||||
|
|
||||||
|
t.Run(test.desc, func(t *testing.T) {
|
||||||
|
t.Parallel()
|
||||||
|
|
||||||
|
test.expect(t, test.pct.equals(test.mediaType, test.params))
|
||||||
|
})
|
||||||
|
}
|
||||||
|
}
@@ -1,22 +1,26 @@
 package compress

 import (
-	"compress/gzip"
 	"context"
+	"fmt"
 	"mime"
 	"net/http"
+	"strings"

 	"github.com/klauspost/compress/gzhttp"
 	"github.com/opentracing/opentracing-go/ext"
 	"github.com/traefik/traefik/v2/pkg/config/dynamic"
 	"github.com/traefik/traefik/v2/pkg/log"
 	"github.com/traefik/traefik/v2/pkg/middlewares"
+	"github.com/traefik/traefik/v2/pkg/middlewares/compress/brotli"
 	"github.com/traefik/traefik/v2/pkg/tracing"
 )

-const (
-	typeName = "Compress"
-)
+const typeName = "Compress"
+
+// DefaultMinSize is the default minimum size (in bytes) required to enable compression.
+// See https://github.com/klauspost/compress/blob/9559b037e79ad673c71f6ef7c732c00949014cd2/gzhttp/compress.go#L47.
+const DefaultMinSize = 1024

 // Compress is a middleware that allows to compress the response.
 type compress struct {
@@ -24,6 +28,9 @@ type compress struct {
 	name     string
 	excludes []string
 	minSize  int
+
+	brotliHandler http.Handler
+	gzipHandler   http.Handler
 }

 // New creates a new compress middleware.
@@ -40,42 +47,117 @@ func New(ctx context.Context, next http.Handler, conf dynamic.Compress, name str
 		excludes = append(excludes, mediaType)
 	}

-	minSize := gzhttp.DefaultMinSize
+	minSize := DefaultMinSize
 	if conf.MinResponseBodyBytes > 0 {
 		minSize = conf.MinResponseBodyBytes
 	}

-	return &compress{next: next, name: name, excludes: excludes, minSize: minSize}, nil
+	c := &compress{
+		next:     next,
+		name:     name,
+		excludes: excludes,
+		minSize:  minSize,
+	}
+
+	var err error
+	c.brotliHandler, err = c.newBrotliHandler()
+	if err != nil {
+		return nil, err
+	}
+
+	c.gzipHandler, err = c.newGzipHandler()
+	if err != nil {
+		return nil, err
+	}
+
+	return c, nil
 }

 func (c *compress) ServeHTTP(rw http.ResponseWriter, req *http.Request) {
-	mediaType, _, err := mime.ParseMediaType(req.Header.Get("Content-Type"))
-	if err != nil {
-		log.FromContext(middlewares.GetLoggerCtx(context.Background(), c.name, typeName)).Debug(err)
+	logger := log.FromContext(middlewares.GetLoggerCtx(req.Context(), c.name, typeName))
+
+	if req.Method == http.MethodHead {
+		c.next.ServeHTTP(rw, req)
+		return
 	}

+	mediaType, _, err := mime.ParseMediaType(req.Header.Get("Content-Type"))
+	if err != nil {
+		logger.WithError(err).Debug("Unable to parse MIME type")
+	}
+
+	// Notably for text/event-stream requests the response should not be compressed.
+	// See https://github.com/traefik/traefik/issues/2576
 	if contains(c.excludes, mediaType) {
 		c.next.ServeHTTP(rw, req)
-	} else {
-		ctx := middlewares.GetLoggerCtx(req.Context(), c.name, typeName)
-		c.gzipHandler(ctx).ServeHTTP(rw, req)
+		return
 	}
+
+	// Client allows us to do whatever we want, so we br compress.
+	// See https://www.rfc-editor.org/rfc/rfc9110.html#section-12.5.3
+	acceptEncoding, ok := req.Header["Accept-Encoding"]
+	if !ok {
+		c.brotliHandler.ServeHTTP(rw, req)
+		return
+	}
+
+	if encodingAccepts(acceptEncoding, "br") {
+		c.brotliHandler.ServeHTTP(rw, req)
+		return
+	}
+
+	if encodingAccepts(acceptEncoding, "gzip") {
+		c.gzipHandler.ServeHTTP(rw, req)
+		return
+	}
+
+	c.next.ServeHTTP(rw, req)
 }

 func (c *compress) GetTracingInformation() (string, ext.SpanKindEnum) {
 	return c.name, tracing.SpanKindNoneEnum
 }

-func (c *compress) gzipHandler(ctx context.Context) http.Handler {
+func (c *compress) newGzipHandler() (http.Handler, error) {
 	wrapper, err := gzhttp.NewWrapper(
 		gzhttp.ExceptContentTypes(c.excludes),
-		gzhttp.CompressionLevel(gzip.DefaultCompression),
-		gzhttp.MinSize(c.minSize))
+		gzhttp.MinSize(c.minSize),
+	)
 	if err != nil {
-		log.FromContext(ctx).Error(err)
+		return nil, fmt.Errorf("new gzip wrapper: %w", err)
 	}

-	return wrapper(c.next)
+	return wrapper(c.next), nil
+}
+
+func (c *compress) newBrotliHandler() (http.Handler, error) {
+	cfg := brotli.Config{
+		ExcludedContentTypes: c.excludes,
+		MinSize:              c.minSize,
+	}
+
+	wrapper, err := brotli.NewWrapper(cfg)
+	if err != nil {
+		return nil, fmt.Errorf("new brotli wrapper: %w", err)
+	}
+
+	return wrapper(c.next), nil
+}
+
+func encodingAccepts(acceptEncoding []string, typ string) bool {
+	for _, ae := range acceptEncoding {
+		for _, e := range strings.Split(ae, ",") {
+			parsed := strings.Split(strings.TrimSpace(e), ";")
+			if len(parsed) == 0 {
+				continue
+			}
+			if parsed[0] == typ || parsed[0] == "*" {
+				return true
+			}
+		}
+	}
+
+	return false
 }

 func contains(values []string, val string) bool {
@@ -84,5 +166,6 @@ func contains(values []string, val string) bool {
 			return true
 		}
 	}
+
 	return false
 }
@@ -1,12 +1,14 @@
 package compress

 import (
+	"compress/gzip"
 	"context"
 	"io"
 	"net/http"
 	"net/http/httptest"
 	"testing"

+	"github.com/andybalholm/brotli"
 	"github.com/klauspost/compress/gzhttp"
 	"github.com/stretchr/testify/assert"
 	"github.com/stretchr/testify/require"
@@ -20,8 +22,81 @@ const (
 	contentTypeHeader = "Content-Type"
 	varyHeader        = "Vary"
 	gzipValue         = "gzip"
+	brotliValue       = "br"
 )

+func TestNegotiation(t *testing.T) {
+	testCases := []struct {
+		desc            string
+		acceptEncHeader string
+		expEncoding     string
+	}{
+		{
+			desc:        "no accept header",
+			expEncoding: "br",
+		},
+		{
+			desc:            "unsupported accept header",
+			acceptEncHeader: "notreal",
+			expEncoding:     "",
+		},
+		{
+			desc:            "accept any header",
+			acceptEncHeader: "*",
+			expEncoding:     "br",
+		},
+		{
+			desc:            "gzip accept header",
+			acceptEncHeader: "gzip",
+			expEncoding:     "gzip",
+		},
+		{
+			desc:            "br accept header",
+			acceptEncHeader: "br",
+			expEncoding:     "br",
+		},
+		{
+			desc:            "multi accept header, prefer br",
+			acceptEncHeader: "br;q=0.8, gzip;q=0.6",
+			expEncoding:     "br",
+		},
+		{
+			desc:            "multi accept header, prefer br",
+			acceptEncHeader: "gzip;q=1.0, br;q=0.8",
+			expEncoding:     "br",
+		},
+		{
+			desc:            "multi accept header list, prefer br",
+			acceptEncHeader: "gzip, br",
+			expEncoding:     "br",
+		},
+	}
+
+	for _, test := range testCases {
+		test := test
+
+		t.Run(test.desc, func(t *testing.T) {
+			t.Parallel()
+
+			req := testhelpers.MustNewRequest(http.MethodGet, "http://localhost", nil)
+			if test.acceptEncHeader != "" {
+				req.Header.Add(acceptEncodingHeader, test.acceptEncHeader)
+			}
+
+			next := http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
+				_, _ = rw.Write(generateBytes(10))
+			})
+			handler, err := New(context.Background(), next, dynamic.Compress{MinResponseBodyBytes: 1}, "testing")
+			require.NoError(t, err)
+
+			rw := httptest.NewRecorder()
+			handler.ServeHTTP(rw, req)
+
+			assert.Equal(t, test.expEncoding, rw.Header().Get(contentEncodingHeader))
+		})
+	}
+}
+
 func TestShouldCompressWhenNoContentEncodingHeader(t *testing.T) {
 	req := testhelpers.MustNewRequest(http.MethodGet, "http://localhost", nil)
 	req.Header.Add(acceptEncodingHeader, gzipValue)
@@ -41,9 +116,12 @@ func TestShouldCompressWhenNoContentEncodingHeader(t *testing.T) {
 	assert.Equal(t, gzipValue, rw.Header().Get(contentEncodingHeader))
 	assert.Equal(t, acceptEncodingHeader, rw.Header().Get(varyHeader))

-	if assert.ObjectsAreEqualValues(rw.Body.Bytes(), baseBody) {
-		assert.Fail(t, "expected a compressed body", "got %v", rw.Body.Bytes())
-	}
+	gr, err := gzip.NewReader(rw.Body)
+	require.NoError(t, err)
+
+	got, err := io.ReadAll(gr)
+	require.NoError(t, err)
+	assert.Equal(t, got, baseBody)
 }

 func TestShouldNotCompressWhenContentEncodingHeader(t *testing.T) {
@@ -71,7 +149,7 @@ func TestShouldNotCompressWhenContentEncodingHeader(t *testing.T) {
 	assert.EqualValues(t, rw.Body.Bytes(), fakeCompressedBody)
 }

-func TestShouldNotCompressWhenNoAcceptEncodingHeader(t *testing.T) {
+func TestShouldCompressWhenNoAcceptEncodingHeader(t *testing.T) {
 	req := testhelpers.MustNewRequest(http.MethodGet, "http://localhost", nil)

 	fakeBody := generateBytes(gzhttp.DefaultMinSize)
@@ -87,7 +165,33 @@ func TestShouldCompressWhenNoAcceptEncodingHeader(t *testing.T) {
 	rw := httptest.NewRecorder()
 	handler.ServeHTTP(rw, req)

+	assert.Equal(t, brotliValue, rw.Header().Get(contentEncodingHeader))
+	assert.Equal(t, acceptEncodingHeader, rw.Header().Get(varyHeader))
+
+	got, err := io.ReadAll(brotli.NewReader(rw.Body))
+	require.NoError(t, err)
+	assert.Equal(t, got, fakeBody)
+}
+
+func TestShouldNotCompressHeadRequest(t *testing.T) {
+	req := testhelpers.MustNewRequest(http.MethodHead, "http://localhost", nil)
+	req.Header.Add(acceptEncodingHeader, gzipValue)
+
+	fakeBody := generateBytes(gzhttp.DefaultMinSize)
+	next := http.HandlerFunc(func(rw http.ResponseWriter, r *http.Request) {
+		_, err := rw.Write(fakeBody)
+		if err != nil {
+			http.Error(rw, err.Error(), http.StatusInternalServerError)
+		}
+	})
+	handler, err := New(context.Background(), next, dynamic.Compress{}, "testing")
+	require.NoError(t, err)
+
+	rw := httptest.NewRecorder()
+	handler.ServeHTTP(rw, req)
+
 	assert.Empty(t, rw.Header().Get(contentEncodingHeader))
+	assert.Empty(t, rw.Header().Get(varyHeader))
 	assert.EqualValues(t, rw.Body.Bytes(), fakeBody)
 }