Merge pull request #1144 from timoreimann/vendor-dependencies

Vendor dependencies.
Timo Reimann 2017-03-09 16:23:26 +01:00 committed by GitHub
commit 91bf627275
5202 changed files with 1777496 additions and 42 deletions


@ -1,5 +1,3 @@
-dist/
-vendor/
-!dist/traefik
-site/
-**/*.test
+/dist/
+!/dist/traefik
+/site/


@ -2,7 +2,7 @@
 ### Building
-You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` and `glide` (Method 2) in order to build traefik.
+You need either [Docker](https://github.com/docker/docker) and `make` (Method 1), or `go` (Method 2) in order to build traefik. For changes to its dependencies, the `glide` dependency management tool and `glide-vc` plugin are required.
 #### Method 1: Using `Docker` and `Makefile`
@ -26,7 +26,7 @@ $ ls dist/
 traefik*
 ```
-#### Method 2: Using `go` and `glide`
+#### Method 2: Using `go`
 ###### Setting up your `go` environment
@ -42,20 +42,25 @@ This can be verified via `$ go env`
 - You will want to add those 2 export lines to your `.bashrc` or `.bash_profile`
 - You need `go-bindata` to be able to use `go generate` command (needed to build) : `$ go get github.com/jteeuwen/go-bindata/...` (Please note, the ellipses are required)
-###### Setting up your `glide` environment
+#### Setting up `glide` and `glide-vc` for dependency management
+- Glide is not required for building; however, it is necessary to modify dependencies (i.e., add, update, or remove third-party packages)
 - Glide can be installed either via homebrew: `$ brew install glide` or via the official glide script: `$ curl https://glide.sh/get | sh`
+- The glide plugin `glide-vc` must be installed from source: `go get github.com/sgotti/glide-vc`
-The idea behind `glide` is the following :
+If you want to add a dependency, use `$ glide get` to have glide put it into the vendor folder and update the glide manifest/lock files (`glide.yaml` and `glide.lock`, respectively). A following `glide-vc` run should be triggered to trim down the size of the vendor folder. The final result must be committed into VCS.
-- when checkout(ing) a project, run `$ glide install -v` from the cloned directory to install
-(`go get …`) the dependencies in your `GOPATH`.
+Dependencies for the integration tests in the `integration` folder are managed in a separate `integration/glide.yaml` file using the same toolset.
-- if you need another dependency, import and use it in
-the source, and run `$ glide get github.com/Masterminds/cookoo` to save it in
-`vendor` and add it to your `glide.yaml`.
+Care must be taken to choose the right arguments to `glide` when dealing with either main or integration test dependencies, or otherwise risk ending up with a broken build. For that reason, the helper script `script/glide.sh` encapsulates the gory details and conveniently calls `glide-vc` as well. Call it without parameters for basic usage instructions.
+Here's a full example:
 ```bash
-$ glide install --strip-vendor
+# install the new main dependency github.com/foo/bar and minimize vendor size
+$ ./script/glide.sh get github.com/foo/bar
+# install another dependency, this time for the integration tests
+$ ( cd integration && ../script/glide.sh get github.com/baz/quuz )
 # generate (Only required to integrate other components such as web dashboard)
 $ go generate
 # Standard go build
@ -105,14 +110,12 @@ TESTFLAGS="-check.f MyTestSuite.*Test" make test-integration
 More: https://labix.org/gocheck
-##### Method 2: `go` and `glide`
+##### Method 2: `go`
 - Tests can be run from the cloned directory, by `$ go test ./...` which should return `ok` similar to:
 ```
 ok _/home/vincent/src/github/vdemeester/traefik 0.004s
 ```
-- Note that `$ go test ./...` will run all tests (including the ones in the vendor directory for the dependencies that glide have fetched). If you only want to run the tests for traefik use `$ go test $(glide novendor)` instead.
 ### Documentation

.gitignore vendored

@ -1,15 +1,13 @@
 /dist
-gen.go
+/autogen/gen.go
 .idea
 .intellij
 *.iml
-traefik
+/traefik
-traefik.toml
+/traefik.toml
-*.test
-vendor/
-static/
+/static/
 .vscode/
-site/
+/site/
 *.log
 *.exe
 .DS_Store


@ -45,7 +45,7 @@ test-integration: build ## run the integration tests
 	$(DOCKER_RUN_TRAEFIK) ./script/make.sh generate binary test-integration
 validate: build ## validate gofmt, golint and go vet
-	$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-glide validate-gofmt validate-govet validate-golint validate-misspell
+	$(DOCKER_RUN_TRAEFIK) ./script/make.sh validate-glide validate-gofmt validate-govet validate-golint validate-misspell validate-vendor
 build: dist
 	docker build $(DOCKER_BUILD_ARGS) -t "$(TRAEFIK_DEV_IMAGE)" -f build.Dockerfile .


@ -1,10 +1,18 @@
 FROM golang:1.7
+# Install a more recent version of mercurial to avoid mismatching results
+# between glide run on a decently updated host system and the build container.
+RUN awk '$1 ~ "^deb" { $3 = $3 "-backports"; print; exit }' /etc/apt/sources.list > /etc/apt/sources.list.d/backports.list && \
+DEBIAN_FRONTEND=noninteractive apt-get update && \
+DEBIAN_FRONTEND=noninteractive apt-get install -t jessie-backports --yes --no-install-recommends mercurial=3.9.1-1~bpo8+1 && \
+rm -fr /var/lib/apt/lists/
 RUN go get github.com/jteeuwen/go-bindata/... \
 && go get github.com/golang/lint/golint \
 && go get github.com/kisielk/errcheck \
 && go get github.com/client9/misspell/cmd/misspell \
-&& go get github.com/mattfarina/glide-hash
+&& go get github.com/mattfarina/glide-hash \
+&& go get github.com/sgotti/glide-vc
 # Which docker version to test on
 ARG DOCKER_VERSION=1.10.3
@ -24,13 +32,4 @@ RUN mkdir -p /usr/local/bin \
 | tar -xzC /usr/local/bin --transform 's#^.+/##x'
 WORKDIR /go/src/github.com/containous/traefik
-COPY glide.yaml glide.yaml
-COPY glide.lock glide.lock
-RUN glide install -v
-COPY integration/glide.yaml integration/glide.yaml
-COPY integration/glide.lock integration/glide.lock
-RUN cd integration && glide install
 COPY . /go/src/github.com/containous/traefik


@ -0,0 +1,357 @@
package eureka
import (
"crypto/tls"
"crypto/x509"
"encoding/json"
"errors"
"io"
"io/ioutil"
"net"
"net/http"
"net/url"
"os"
"path"
"time"
"strings"
)
const (
defaultBufferSize = 10
UP = "UP"
DOWN = "DOWN"
STARTING = "STARTING"
)
type Config struct {
CertFile string `json:"certFile"`
KeyFile string `json:"keyFile"`
CaCertFile []string `json:"caCertFiles"`
DialTimeout time.Duration `json:"timeout"`
Consistency string `json:"consistency"`
}
type Client struct {
Config Config `json:"config"`
Cluster *Cluster `json:"cluster"`
httpClient *http.Client
persistence io.Writer
cURLch chan string
// CheckRetry can be used to control the policy for failed requests
// and modify the cluster if needed.
// The client calls it before sending requests again, and
// stops retrying if CheckRetry returns some error. The cases that
// this function needs to handle include no response and unexpected
// http status code of response.
// If CheckRetry is nil, client will call the default one
// `DefaultCheckRetry`.
// Argument cluster is the eureka.Cluster object that these requests have been made on.
// Argument numReqs is the number of http.Requests that have been made so far.
// Argument lastResp is the http.Responses from the last request.
// Argument err is the reason of the failure.
CheckRetry func(cluster *Cluster, numReqs int,
lastResp http.Response, err error) error
}
// NewClient creates a basic client that is configured to be used
// with the given machine list.
func NewClient(machines []string) *Client {
config := Config{
// default timeout is one second
DialTimeout: time.Second,
}
client := &Client{
Cluster: NewCluster(machines),
Config: config,
}
client.initHTTPClient()
return client
}
// NewTLSClient creates a basic client with TLS configuration
func NewTLSClient(machines []string, cert string, key string, caCerts []string) (*Client, error) {
// overwrite the default machine to use https
if len(machines) == 0 {
machines = []string{"https://127.0.0.1:4001"}
}
config := Config{
// default timeout is one second
DialTimeout: time.Second,
CertFile: cert,
KeyFile: key,
CaCertFile: make([]string, 0),
}
client := &Client{
Cluster: NewCluster(machines),
Config: config,
}
err := client.initHTTPSClient(cert, key)
if err != nil {
return nil, err
}
for _, caCert := range caCerts {
if err := client.AddRootCA(caCert); err != nil {
return nil, err
}
}
return client, nil
}
// NewClientFromFile creates a client from a given file path.
// The given file is expected to use the JSON format.
func NewClientFromFile(fpath string) (*Client, error) {
fi, err := os.Open(fpath)
if err != nil {
return nil, err
}
defer func() {
if err := fi.Close(); err != nil {
panic(err)
}
}()
return NewClientFromReader(fi)
}
// NewClientFromReader creates a Client configured from a given reader.
// The configuration is expected to use the JSON format.
func NewClientFromReader(reader io.Reader) (*Client, error) {
c := new(Client)
b, err := ioutil.ReadAll(reader)
if err != nil {
return nil, err
}
err = json.Unmarshal(b, c)
if err != nil {
return nil, err
}
if c.Config.CertFile == "" {
c.initHTTPClient()
} else {
err = c.initHTTPSClient(c.Config.CertFile, c.Config.KeyFile)
}
if err != nil {
return nil, err
}
for _, caCert := range c.Config.CaCertFile {
if err := c.AddRootCA(caCert); err != nil {
return nil, err
}
}
return c, nil
}
// Override the Client's HTTP Transport object
func (c *Client) SetTransport(tr *http.Transport) {
c.httpClient.Transport = tr
}
// initHTTPClient initializes an HTTP client for the eureka client
func (c *Client) initHTTPClient() {
tr := &http.Transport{
Dial: c.dial,
TLSClientConfig: &tls.Config{
InsecureSkipVerify: true,
},
}
c.httpClient = &http.Client{Transport: tr}
}
// initHTTPSClient initializes an HTTPS client for the eureka client
func (c *Client) initHTTPSClient(cert, key string) error {
if cert == "" || key == "" {
return errors.New("Require both cert and key path")
}
tlsCert, err := tls.LoadX509KeyPair(cert, key)
if err != nil {
return err
}
tlsConfig := &tls.Config{
Certificates: []tls.Certificate{tlsCert},
InsecureSkipVerify: true,
}
tr := &http.Transport{
TLSClientConfig: tlsConfig,
Dial: c.dial,
}
c.httpClient = &http.Client{Transport: tr}
return nil
}
// Sets the DialTimeout value
func (c *Client) SetDialTimeout(d time.Duration) {
c.Config.DialTimeout = d
}
// AddRootCA adds a root CA cert for the eureka client
func (c *Client) AddRootCA(caCert string) error {
if c.httpClient == nil {
return errors.New("Client has not been initialized yet!")
}
certBytes, err := ioutil.ReadFile(caCert)
if err != nil {
return err
}
tr, ok := c.httpClient.Transport.(*http.Transport)
if !ok {
panic("AddRootCA(): Transport type assert should not fail")
}
if tr.TLSClientConfig.RootCAs == nil {
caCertPool := x509.NewCertPool()
ok = caCertPool.AppendCertsFromPEM(certBytes)
if ok {
tr.TLSClientConfig.RootCAs = caCertPool
}
tr.TLSClientConfig.InsecureSkipVerify = false
} else {
ok = tr.TLSClientConfig.RootCAs.AppendCertsFromPEM(certBytes)
}
if !ok {
err = errors.New("Unable to load caCert")
}
c.Config.CaCertFile = append(c.Config.CaCertFile, caCert)
return err
}
// SetCluster updates cluster information using the given machine list.
func (c *Client) SetCluster(machines []string) bool {
success := c.internalSyncCluster(machines)
return success
}
func (c *Client) GetCluster() []string {
return c.Cluster.Machines
}
// SyncCluster updates the cluster information using the internal machine list.
func (c *Client) SyncCluster() bool {
return c.internalSyncCluster(c.Cluster.Machines)
}
// internalSyncCluster syncs cluster information using the given machine list.
func (c *Client) internalSyncCluster(machines []string) bool {
for _, machine := range machines {
httpPath := c.createHttpPath(machine, "machines")
resp, err := c.httpClient.Get(httpPath)
if err != nil {
// try another machine in the cluster
continue
} else {
b, err := ioutil.ReadAll(resp.Body)
resp.Body.Close()
if err != nil {
// try another machine in the cluster
continue
}
// update Machines List
c.Cluster.updateFromStr(string(b))
// update leader
// the first one in the machine list is the leader
c.Cluster.switchLeader(0)
logger.Debug("sync.machines " + strings.Join(c.Cluster.Machines, ", "))
return true
}
}
return false
}
// createHttpPath creates a complete HTTP URL.
// serverName should contain both the host name and a port number, if any.
func (c *Client) createHttpPath(serverName string, _path string) string {
u, err := url.Parse(serverName)
if err != nil {
panic(err)
}
u.Path = path.Join(u.Path, _path)
if u.Scheme == "" {
u.Scheme = "http"
}
return u.String()
}
// dial attempts to open a TCP connection to the provided address, explicitly
// enabling keep-alives with a one-second interval.
func (c *Client) dial(network, addr string) (net.Conn, error) {
conn, err := net.DialTimeout(network, addr, c.Config.DialTimeout)
if err != nil {
return nil, err
}
tcpConn, ok := conn.(*net.TCPConn)
if !ok {
return nil, errors.New("Failed type-assertion of net.Conn as *net.TCPConn")
}
// Keep TCP alive to check whether or not the remote machine is down
if err = tcpConn.SetKeepAlive(true); err != nil {
return nil, err
}
if err = tcpConn.SetKeepAlivePeriod(time.Second); err != nil {
return nil, err
}
return tcpConn, nil
}
// MarshalJSON implements the Marshaller interface
// as defined by the standard JSON package.
func (c *Client) MarshalJSON() ([]byte, error) {
b, err := json.Marshal(struct {
Config Config `json:"config"`
Cluster *Cluster `json:"cluster"`
}{
Config: c.Config,
Cluster: c.Cluster,
})
if err != nil {
return nil, err
}
return b, nil
}
// UnmarshalJSON implements the Unmarshaller interface
// as defined by the standard JSON package.
func (c *Client) UnmarshalJSON(b []byte) error {
temp := struct {
Config Config `json:"config"`
Cluster *Cluster `json:"cluster"`
}{}
err := json.Unmarshal(b, &temp)
if err != nil {
return err
}
c.Cluster = temp.Cluster
c.Config = temp.Config
return nil
}
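
The `CheckRetry` hook above is the client's extension point for custom retry behaviour. A minimal usage sketch (illustrative only, not part of the vendored file; the import path and endpoint URLs are assumptions):

```go
package main

import (
	"fmt"
	"net/http"

	"github.com/ArthurHlt/go-eureka-client/eureka"
)

func main() {
	// Point the client at two (placeholder) Eureka endpoints.
	client := eureka.NewClient([]string{
		"http://eureka-1:8761/eureka",
		"http://eureka-2:8761/eureka",
	})

	// Custom retry policy: give up once every machine has been tried three times.
	client.CheckRetry = func(cluster *eureka.Cluster, numReqs int, lastResp http.Response, err error) error {
		if numReqs >= 3*len(cluster.Machines) {
			return fmt.Errorf("eureka unreachable after %d requests: %v", numReqs, err)
		}
		return nil // nil means: keep retrying against the next machine
	}

	apps, err := client.GetApplications()
	if err != nil {
		fmt.Println("lookup failed:", err)
		return
	}
	fmt.Println("applications registered:", len(apps.Applications))
}
```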


@ -0,0 +1,51 @@
package eureka
import (
"net/url"
"strings"
)
type Cluster struct {
Leader string `json:"leader"`
Machines []string `json:"machines"`
}
func NewCluster(machines []string) *Cluster {
// if an empty slice was sent in then just assume HTTP 4001 on localhost
if len(machines) == 0 {
machines = []string{"http://127.0.0.1:4001"}
}
// default leader and machines
return &Cluster{
Leader: machines[0],
Machines: machines,
}
}
// switchLeader switches the current leader to machines[num]
func (cl *Cluster) switchLeader(num int) {
logger.Debug("switch.leader[from %v to %v]",
cl.Leader, cl.Machines[num])
cl.Leader = cl.Machines[num]
}
func (cl *Cluster) updateFromStr(machines string) {
cl.Machines = strings.Split(machines, ", ")
}
func (cl *Cluster) updateLeader(leader string) {
logger.Debug("update.leader[%s,%s]", cl.Leader, leader)
cl.Leader = leader
}
func (cl *Cluster) updateLeaderFromURL(u *url.URL) {
var leader string
if u.Scheme == "" {
leader = "http://" + u.Host
} else {
leader = u.Scheme + "://" + u.Host
}
cl.updateLeader(leader)
}


@ -0,0 +1,21 @@
package eureka
import (
"github.com/ArthurHlt/gominlog"
"log"
)
var logger *gominlog.MinLog
func GetLogger() *log.Logger {
return logger.GetLogger()
}
func SetLogger(loggerLog *log.Logger) {
logger.SetLogger(loggerLog)
}
func init() {
// Default logger uses the go default log.
logger = gominlog.NewClassicMinLogWithPackageName("go-eureka-client")
}


@ -0,0 +1,10 @@
package eureka
import "strings"
func (c *Client) UnregisterInstance(appId, instanceId string) error {
values := []string{"apps", appId, instanceId}
path := strings.Join(values, "/")
_, err := c.Delete(path)
return err
}


@ -0,0 +1,48 @@
package eureka
import (
"encoding/json"
"fmt"
)
const (
ErrCodeEurekaNotReachable = 501
)
var (
errorMap = map[int]string{
ErrCodeEurekaNotReachable: "All the given peers are not reachable",
}
)
type EurekaError struct {
ErrorCode int `json:"errorCode"`
Message string `json:"message"`
Cause string `json:"cause,omitempty"`
Index uint64 `json:"index"`
}
func (e EurekaError) Error() string {
return fmt.Sprintf("%v: %v (%v) [%v]", e.ErrorCode, e.Message, e.Cause, e.Index)
}
func newError(errorCode int, cause string, index uint64) *EurekaError {
return &EurekaError{
ErrorCode: errorCode,
Message: errorMap[errorCode],
Cause: cause,
Index: index,
}
}
func handleError(b []byte) error {
eurekaErr := new(EurekaError)
err := json.Unmarshal(b, eurekaErr)
if err != nil {
logger.Warning("cannot unmarshal eureka error: %v", err)
return err
}
return eurekaErr
}


@ -0,0 +1,38 @@
package eureka
import (
"encoding/xml"
"strings"
)
func (c *Client) GetApplications() (*Applications, error) {
response, err := c.Get("apps");
if err != nil {
return nil, err
}
var applications *Applications = new(Applications)
err = xml.Unmarshal(response.Body, applications)
return applications, err
}
func (c *Client) GetApplication(appId string) (*Application, error) {
values := []string{"apps", appId}
path := strings.Join(values, "/")
response, err := c.Get(path);
if err != nil {
return nil, err
}
var application *Application = new(Application)
err = xml.Unmarshal(response.Body, application)
return application, err
}
func (c *Client) GetInstance(appId, instanceId string) (*InstanceInfo, error) {
values := []string{"apps", appId, instanceId}
path := strings.Join(values, "/")
response, err := c.Get(path);
if err != nil {
return nil, err
}
var instance *InstanceInfo = new(InstanceInfo)
err = xml.Unmarshal(response.Body, instance)
return instance, err
}


@ -0,0 +1,95 @@
package eureka
import (
"encoding/xml"
"encoding/json"
"regexp"
)
type MetaData struct {
Map map[string]string
Class string
}
type Vraw struct {
Content []byte `xml:",innerxml"`
Class string `xml:"class,attr" json:"@class"`
}
func (s *MetaData) MarshalXML(e *xml.Encoder, start xml.StartElement) error {
var attributes []xml.Attr = make([]xml.Attr, 0)
if s.Class != "" {
attributes = append(attributes, xml.Attr{
Name: xml.Name{
Local: "class",
},
Value: s.Class,
})
}
start.Attr = attributes
tokens := []xml.Token{start}
for key, value := range s.Map {
t := xml.StartElement{Name: xml.Name{"", key}}
tokens = append(tokens, t, xml.CharData(value), xml.EndElement{t.Name})
}
tokens = append(tokens, xml.EndElement{
Name: start.Name,
})
for _, t := range tokens {
err := e.EncodeToken(t)
if err != nil {
return err
}
}
// flush to ensure tokens are written
err := e.Flush()
if err != nil {
return err
}
return nil
}
func (s *MetaData) UnmarshalXML(d *xml.Decoder, start xml.StartElement) error {
s.Map = make(map[string]string)
vraw := &Vraw{}
d.DecodeElement(vraw, &start)
dataInString := string(vraw.Content)
regex, err := regexp.Compile("\\s*<([^<>]+)>([^<>]+)</[^<>]+>\\s*")
if err != nil {
return err
}
subMatches := regex.FindAllStringSubmatch(dataInString, -1)
for _, subMatch := range subMatches {
s.Map[subMatch[1]] = subMatch[2]
}
s.Class = vraw.Class
return nil
}
func (s *MetaData) MarshalJSON() ([]byte, error) {
mapIt := make(map[string]string)
for key, value := range s.Map {
mapIt[key] = value
}
if s.Class != "" {
mapIt["@class"] = s.Class
}
return json.Marshal(mapIt)
}
func (s *MetaData) UnmarshalJSON(data []byte) error {
dataUnmarshal := make(map[string]string)
err := json.Unmarshal(data, dataUnmarshal)
s.Map = dataUnmarshal
if val, ok := s.Map["@class"]; ok {
s.Class = val
delete(s.Map, "@class")
}
return err
}


@ -0,0 +1,21 @@
package eureka
import (
"encoding/json"
"strings"
)
func (c *Client) RegisterInstance(appId string, instanceInfo *InstanceInfo) error {
values := []string{"apps", appId}
path := strings.Join(values, "/")
instance := &Instance{
Instance: instanceInfo,
}
body, err := json.Marshal(instance)
if err != nil {
return err
}
_, err = c.Post(path, body)
return err
}


@ -0,0 +1,10 @@
package eureka
import "strings"
func (c *Client) SendHeartbeat(appId, instanceId string) error {
values := []string{"apps", appId, instanceId}
path := strings.Join(values, "/")
_, err := c.Put(path, nil)
return err
}


@ -0,0 +1,437 @@
package eureka
import (
"bytes"
"errors"
"fmt"
"io"
"io/ioutil"
"math/rand"
"net/http"
"net/url"
"sync"
"time"
"strconv"
)
// Errors introduced by handling requests
var (
ErrRequestCancelled = errors.New("sending request is cancelled")
)
type RawRequest struct {
method string
relativePath string
body []byte
cancel <-chan bool
}
type Applications struct {
VersionsDelta int `xml:"versions__delta"`
AppsHashcode string `xml:"apps__hashcode"`
Applications []Application `xml:"application,omitempty"`
}
type Application struct {
Name string `xml:"name"`
Instances []InstanceInfo `xml:"instance"`
}
type Instance struct {
Instance *InstanceInfo `xml:"instance" json:"instance"`
}
type Port struct {
Port int `xml:",chardata" json:"$"`
Enabled bool `xml:"enabled,attr" json:"@enabled"`
}
type InstanceInfo struct {
HostName string `xml:"hostName" json:"hostName"`
HomePageUrl string `xml:"homePageUrl,omitempty" json:"homePageUrl,omitempty"`
StatusPageUrl string `xml:"statusPageUrl" json:"statusPageUrl"`
HealthCheckUrl string `xml:"healthCheckUrl,omitempty" json:"healthCheckUrl,omitempty"`
App string `xml:"app" json:"app"`
IpAddr string `xml:"ipAddr" json:"ipAddr"`
VipAddress string `xml:"vipAddress" json:"vipAddress"`
secureVipAddress string `xml:"secureVipAddress,omitempty" json:"secureVipAddress,omitempty"`
Status string `xml:"status" json:"status"`
Port *Port `xml:"port,omitempty" json:"port,omitempty"`
SecurePort *Port `xml:"securePort,omitempty" json:"securePort,omitempty"`
DataCenterInfo *DataCenterInfo `xml:"dataCenterInfo" json:"dataCenterInfo"`
LeaseInfo *LeaseInfo `xml:"leaseInfo,omitempty" json:"leaseInfo,omitempty"`
Metadata *MetaData `xml:"metadata,omitempty" json:"metadata,omitempty"`
IsCoordinatingDiscoveryServer bool `xml:"isCoordinatingDiscoveryServer,omitempty" json:"isCoordinatingDiscoveryServer,omitempty"`
LastUpdatedTimestamp int `xml:"lastUpdatedTimestamp,omitempty" json:"lastUpdatedTimestamp,omitempty"`
LastDirtyTimestamp int `xml:"lastDirtyTimestamp,omitempty" json:"lastDirtyTimestamp,omitempty"`
ActionType string `xml:"actionType,omitempty" json:"actionType,omitempty"`
Overriddenstatus string `xml:"overriddenstatus,omitempty" json:"overriddenstatus,omitempty"`
CountryId int `xml:"countryId,omitempty" json:"countryId,omitempty"`
}
type DataCenterInfo struct {
Name string `xml:"name" json:"name"`
Class string `xml:"class,attr" json:"@class"`
Metadata DataCenterMetadata `xml:"metadata,omitempty" json:"metadata,omitempty"`
}
type DataCenterMetadata struct {
AmiLaunchIndex string `xml:"ami-launch-index,omitempty" json:"ami-launch-index,omitempty"`
LocalHostname string `xml:"local-hostname,omitempty" json:"local-hostname,omitempty"`
AvailabilityZone string `xml:"availability-zone,omitempty" json:"availability-zone,omitempty"`
InstanceId string `xml:"instance-id,omitempty" json:"instance-id,omitempty"`
PublicIpv4 string `xml:"public-ipv4,omitempty" json:"public-ipv4,omitempty"`
PublicHostname string `xml:"public-hostname,omitempty" json:"public-hostname,omitempty"`
AmiManifestPath string `xml:"ami-manifest-path,omitempty" json:"ami-manifest-path,omitempty"`
LocalIpv4 string `xml:"local-ipv4,omitempty" json:"local-ipv4,omitempty"`
Hostname string `xml:"hostname,omitempty" json:"hostname,omitempty"`
AmiId string `xml:"ami-id,omitempty" json:"ami-id,omitempty"`
InstanceType string `xml:"instance-type,omitempty" json:"instance-type,omitempty"`
}
type LeaseInfo struct {
EvictionDurationInSecs uint `xml:"evictionDurationInSecs,omitempty" json:"evictionDurationInSecs,omitempty"`
RenewalIntervalInSecs int `xml:"renewalIntervalInSecs,omitempty" json:"renewalIntervalInSecs,omitempty"`
DurationInSecs int `xml:"durationInSecs,omitempty" json:"durationInSecs,omitempty"`
RegistrationTimestamp int `xml:"registrationTimestamp,omitempty" json:"registrationTimestamp,omitempty"`
LastRenewalTimestamp int `xml:"lastRenewalTimestamp,omitempty" json:"lastRenewalTimestamp,omitempty"`
EvictionTimestamp int `xml:"evictionTimestamp,omitempty" json:"evictionTimestamp,omitempty"`
ServiceUpTimestamp int `xml:"serviceUpTimestamp,omitempty" json:"serviceUpTimestamp,omitempty"`
}
func NewRawRequest(method, relativePath string, body []byte, cancel <-chan bool) *RawRequest {
return &RawRequest{
method: method,
relativePath: relativePath,
body: body,
cancel: cancel,
}
}
func NewInstanceInfo(hostName, app, ip string, port int, ttl uint, isSsl bool) *InstanceInfo {
dataCenterInfo := &DataCenterInfo{
Name: "MyOwn",
}
leaseInfo := &LeaseInfo{
EvictionDurationInSecs: ttl,
}
instanceInfo := &InstanceInfo{
HostName: hostName,
App: app,
IpAddr: ip,
Status: UP,
DataCenterInfo: dataCenterInfo,
LeaseInfo: leaseInfo,
Metadata: nil,
}
stringPort := ""
if (port != 80 && port != 443) {
stringPort = ":" + strconv.Itoa(port)
}
var protocol string = "http"
if (isSsl) {
protocol = "https"
instanceInfo.secureVipAddress = protocol + "://" + hostName + stringPort
instanceInfo.SecurePort = &Port{
Port: port,
Enabled: true,
}
}else {
instanceInfo.VipAddress = protocol + "://" + hostName + stringPort
instanceInfo.Port = &Port{
Port: port,
Enabled: true,
}
}
instanceInfo.StatusPageUrl = protocol + "://" + hostName + stringPort + "/info"
return instanceInfo
}
// getCancelable issues a cancelable GET request
func (c *Client) getCancelable(endpoint string,
cancel <-chan bool) (*RawResponse, error) {
logger.Debug("get %s [%s]", endpoint, c.Cluster.Leader)
p := endpoint
req := NewRawRequest("GET", p, nil, cancel)
resp, err := c.SendRequest(req)
if err != nil {
return nil, err
}
return resp, nil
}
// get issues a GET request
func (c *Client) Get(endpoint string) (*RawResponse, error) {
return c.getCancelable(endpoint, nil)
}
// put issues a PUT request
func (c *Client) Put(endpoint string, body []byte) (*RawResponse, error) {
logger.Debug("put %s, %s, [%s]", endpoint, body, c.Cluster.Leader)
p := endpoint
req := NewRawRequest("PUT", p, body, nil)
resp, err := c.SendRequest(req)
if err != nil {
return nil, err
}
return resp, nil
}
// post issues a POST request
func (c *Client) Post(endpoint string, body []byte) (*RawResponse, error) {
logger.Debug("post %s, %s, [%s]", endpoint, body, c.Cluster.Leader)
p := endpoint
req := NewRawRequest("POST", p, body, nil)
resp, err := c.SendRequest(req)
if err != nil {
return nil, err
}
return resp, nil
}
// delete issues a DELETE request
func (c *Client) Delete(endpoint string) (*RawResponse, error) {
logger.Debug("delete %s [%s]", endpoint, c.Cluster.Leader)
p := endpoint
req := NewRawRequest("DELETE", p, nil, nil)
resp, err := c.SendRequest(req)
if err != nil {
return nil, err
}
return resp, nil
}
func (c *Client) SendRequest(rr *RawRequest) (*RawResponse, error) {
var req *http.Request
var resp *http.Response
var httpPath string
var err error
var respBody []byte
var numReqs = 1
checkRetry := c.CheckRetry
if checkRetry == nil {
checkRetry = DefaultCheckRetry
}
cancelled := make(chan bool, 1)
reqLock := new(sync.Mutex)
if rr.cancel != nil {
cancelRoutine := make(chan bool)
defer close(cancelRoutine)
go func() {
select {
case <-rr.cancel:
cancelled <- true
logger.Debug("send.request is cancelled")
case <-cancelRoutine:
return
}
// Repeat canceling request until this thread is stopped
// because we have no idea about whether it succeeds.
for {
reqLock.Lock()
c.httpClient.Transport.(*http.Transport).CancelRequest(req)
reqLock.Unlock()
select {
case <-time.After(100 * time.Millisecond):
case <-cancelRoutine:
return
}
}
}()
}
// If we connect to a follower and consistency is required, retry until
// we connect to a leader
sleep := 25 * time.Millisecond
maxSleep := time.Second
for attempt := 0;; attempt++ {
if attempt > 0 {
select {
case <-cancelled:
return nil, ErrRequestCancelled
case <-time.After(sleep):
sleep = sleep * 2
if sleep > maxSleep {
sleep = maxSleep
}
}
}
logger.Debug("Connecting to eureka: attempt %d for %s", attempt + 1, rr.relativePath)
httpPath = c.getHttpPath(false, rr.relativePath)
logger.Debug("send.request.to %s | method %s", httpPath, rr.method)
req, err := func() (*http.Request, error) {
reqLock.Lock()
defer reqLock.Unlock()
if req, err = http.NewRequest(rr.method, httpPath, bytes.NewReader(rr.body)); err != nil {
return nil, err
}
req.Header.Set("Content-Type",
"application/json")
return req, nil
}()
if err != nil {
return nil, err
}
resp, err = c.httpClient.Do(req)
defer func() {
if resp != nil {
resp.Body.Close()
}
}()
// If the request was cancelled, return ErrRequestCancelled directly
select {
case <-cancelled:
return nil, ErrRequestCancelled
default:
}
numReqs++
// network error, change a machine!
if err != nil {
logger.Error("network error: %v", err.Error())
lastResp := http.Response{}
if checkErr := checkRetry(c.Cluster, numReqs, lastResp, err); checkErr != nil {
return nil, checkErr
}
c.Cluster.switchLeader(attempt % len(c.Cluster.Machines))
continue
}
// if there is no error, it should receive response
logger.Debug("recv.response.from "+httpPath)
if validHttpStatusCode[resp.StatusCode] {
// try to read byte code and break the loop
respBody, err = ioutil.ReadAll(resp.Body)
if err == nil {
logger.Debug("recv.success "+ httpPath)
break
}
// ReadAll error may be caused due to cancel request
select {
case <-cancelled:
return nil, ErrRequestCancelled
default:
}
if err == io.ErrUnexpectedEOF {
// underlying connection was closed prematurely, probably by timeout
// TODO: empty body or unexpectedEOF can cause http.Transport to get hosed;
// this allows the client to detect that and take evasive action. Need
// to revisit once code.google.com/p/go/issues/detail?id=8648 gets fixed.
respBody = []byte{}
break
}
}
// if resp is TemporaryRedirect, set the new leader and retry
if resp.StatusCode == http.StatusTemporaryRedirect {
u, err := resp.Location()
if err != nil {
logger.Warning("%v", err)
} else {
// Update cluster leader based on redirect location
// because it should point to the leader address
c.Cluster.updateLeaderFromURL(u)
logger.Debug("recv.response.relocate "+ u.String())
}
resp.Body.Close()
continue
}
if checkErr := checkRetry(c.Cluster, numReqs, *resp,
errors.New("Unexpected HTTP status code")); checkErr != nil {
return nil, checkErr
}
resp.Body.Close()
}
r := &RawResponse{
StatusCode: resp.StatusCode,
Body: respBody,
Header: resp.Header,
}
return r, nil
}
// DefaultCheckRetry defines the retrying behaviour for bad HTTP requests
// If we have retried 2 * machine number, stop retrying.
// If status code is InternalServerError, sleep for 200ms.
func DefaultCheckRetry(cluster *Cluster, numReqs int, lastResp http.Response,
err error) error {
if numReqs >= 2 * len(cluster.Machines) {
return newError(ErrCodeEurekaNotReachable,
"Tried to connect to each peer twice and failed", 0)
}
code := lastResp.StatusCode
if code == http.StatusInternalServerError {
time.Sleep(time.Millisecond * 200)
}
logger.Warning("bad response status code %d", code)
return nil
}
func (c *Client) getHttpPath(random bool, s ...string) string {
var machine string
if random {
machine = c.Cluster.Machines[rand.Intn(len(c.Cluster.Machines))]
} else {
machine = c.Cluster.Leader
}
fullPath := machine
for _, seg := range s {
fullPath += "/" + seg
}
return fullPath
}
// buildValues builds a url.Values map according to the given value and ttl
func buildValues(value string, ttl uint64) url.Values {
v := url.Values{}
if value != "" {
v.Set("value", value)
}
if ttl > 0 {
v.Set("ttl", fmt.Sprintf("%v", ttl))
}
return v
}
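
The request helpers above combine into the usual register / heartbeat / deregister lifecycle. A sketch of that flow (illustrative only, not part of the vendored file; the import path, endpoint, and the use of the host name as instance id are assumptions):

```go
package main

import (
	"fmt"
	"time"

	"github.com/ArthurHlt/go-eureka-client/eureka"
)

func main() {
	client := eureka.NewClient([]string{"http://127.0.0.1:8761/eureka"}) // placeholder endpoint

	// Describe this instance: host, app name, IP, port, lease TTL in seconds, no TLS.
	instance := eureka.NewInstanceInfo("web-1.example.com", "my-app", "10.0.0.5", 8080, 30, false)

	// Register, then renew the lease periodically; the instance id in the REST
	// paths (apps/<app>/<id>) is the host name here.
	if err := client.RegisterInstance("my-app", instance); err != nil {
		fmt.Println("register failed:", err)
		return
	}
	for i := 0; i < 3; i++ {
		if err := client.SendHeartbeat("my-app", instance.HostName); err != nil {
			fmt.Println("heartbeat failed:", err)
		}
		time.Sleep(10 * time.Second)
	}

	// Deregister on shutdown.
	if err := client.UnregisterInstance("my-app", instance.HostName); err != nil {
		fmt.Println("unregister failed:", err)
	}
}
```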


@ -0,0 +1,21 @@
package eureka
import "net/http"
type RawResponse struct {
StatusCode int
Body []byte
Header http.Header
}
var (
validHttpStatusCode = map[int]bool{
http.StatusNoContent: true,
http.StatusCreated: true,
http.StatusOK: true,
http.StatusBadRequest: true,
http.StatusNotFound: true,
http.StatusPreconditionFailed: true,
http.StatusForbidden: true,
}
)


@ -0,0 +1,3 @@
package eureka
const version = "v2"


@ -0,0 +1,23 @@
The MIT License (MIT)
Copyright (c) 2015 Arthur Halet
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
The Software should rather be used for Good, not Evil.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@ -0,0 +1,185 @@
package gominlog
import (
"log"
"os"
"fmt"
"runtime"
"github.com/daviddengcn/go-colortext"
"regexp"
"strings"
"io"
)
type Level int
const (
Loff = Level(^uint(0) >> 1)
Lsevere = Level(1000)
Lerror = Level(900)
Lwarning = Level(800)
Linfo = Level(700)
Ldebug = Level(600)
Lall = Level(-Loff - 1)
)
type MinLog struct {
log *log.Logger
level Level
packageName string
isColorized bool
}
func NewClassicMinLog() *MinLog {
minLog := &MinLog{}
logWriter := os.Stdout
flags := log.Lshortfile | log.Ldate | log.Ltime
minLog.log = log.New(logWriter, "", flags)
minLog.isColorized = true
minLog.packageName = ""
minLog.level = Lall
return minLog
}
func NewClassicMinLogWithPackageName(packageName string) *MinLog {
minLog := NewClassicMinLog()
minLog.SetPackageName(packageName)
return minLog
}
func NewMinLog(appName string, level Level, withColor bool, flag int) *MinLog {
minLog := &MinLog{}
logWriter := os.Stdout
minLog.log = log.New(logWriter, "", flag)
minLog.isColorized = withColor
minLog.packageName = appName
minLog.level = level
return minLog
}
func NewMinLogWithLogger(packageName string, level Level, withColor bool, logger *log.Logger) *MinLog {
minLog := &MinLog{}
minLog.log = logger
minLog.isColorized = withColor
minLog.packageName = packageName
minLog.level = level
return minLog
}
func (this *MinLog) GetLevel() Level {
return Level(this.level)
}
func (this *MinLog) SetWriter(writer io.Writer) {
this.log.SetOutput(writer)
}
func (this *MinLog) SetLevel(level Level) {
this.level = level
}
func (this *MinLog) SetPackageName(newPackageName string) {
this.packageName = newPackageName
}
func (this *MinLog) GetPackageName() string {
return this.packageName
}
func (this *MinLog) SetLogger(l *log.Logger) {
this.log = l
}
func (this *MinLog) WithColor(isColorized bool) {
this.isColorized = isColorized
}
func (this *MinLog) IsColorized() bool {
return this.isColorized
}
func (this *MinLog) GetLogger() *log.Logger {
return this.log
}
func (this *MinLog) logMessage(typeLog string, colorFg ct.Color, colorBg ct.Color, args ...interface{}) {
var text string
msg := ""
flags := this.log.Flags()
if (log.Lshortfile | flags) == flags {
msg += this.trace()
this.log.SetFlags(flags - log.Lshortfile)
}
text, ok := args[0].(string);
if !ok {
panic("Firt argument should be a string")
}
if len(args) > 1 {
newArgs := args[1:]
msg += typeLog + ": " + fmt.Sprintf(text, newArgs...)
}else {
msg += typeLog + ": " + text
}
this.writeMsgInLogger(msg, colorFg, colorBg)
this.log.SetFlags(flags)
}
func (this *MinLog) writeMsgInLogger(msg string, colorFg ct.Color, colorBg ct.Color) {
if this.isColorized && colorFg > 0 {
ct.Foreground(colorFg, false)
}
if this.isColorized && colorBg > 0 {
ct.ChangeColor(colorFg, false, colorBg, false)
}
this.log.Print(msg)
if this.isColorized {
ct.ResetColor()
}
}
func (this *MinLog) Error(args ...interface{}) {
if this.level > Lerror {
return
}
this.logMessage("ERROR", ct.Red, 0, args...)
}
func (this *MinLog) Severe(args ...interface{}) {
if this.level > Lsevere {
return
}
this.logMessage("SEVERE", ct.Red, ct.Yellow, args...)
}
func (this *MinLog) Debug(args ...interface{}) {
if this.level > Ldebug {
return
}
this.logMessage("DEBUG", ct.Blue, 0, args...)
}
func (this *MinLog) Info(args ...interface{}) {
if this.level > Linfo {
return
}
this.logMessage("INFO", ct.Cyan, 0, args...)
}
func (this *MinLog) Warning(args ...interface{}) {
if this.level > Lwarning {
return
}
this.logMessage("WARNING", ct.Yellow, 0, args...)
}
func (this *MinLog) trace() string {
var shortFile string
pc := make([]uintptr, 10)
runtime.Callers(2, pc)
f := runtime.FuncForPC(pc[2])
file, line := f.FileLine(pc[2])
if this.packageName == "" {
execFileSplit := strings.Split(os.Args[0], "/")
this.packageName = execFileSplit[len(execFileSplit) - 1]
}
regex, err := regexp.Compile(regexp.QuoteMeta(this.packageName) + "/(.*)")
if err != nil {
panic(err)
}
subMatch := regex.FindStringSubmatch(file)
if len(subMatch) < 2 {
fileSplit := strings.Split(file, "/")
shortFile = fileSplit[len(fileSplit) - 1]
}else {
shortFile = subMatch[1]
}
return fmt.Sprintf("/%s/%s:%d ", this.packageName, shortFile, line)
}
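
For context, this is roughly how the logger is meant to be driven (an illustrative sketch, not part of the vendored file):

```go
package main

import (
	"log"
	"os"

	"github.com/ArthurHlt/gominlog"
)

func main() {
	// Classic logger: writes to stdout with colors, file/line prefix, level Lall.
	l := gominlog.NewClassicMinLogWithPackageName("my-app")

	// Raise the threshold so Debug (600) is filtered while Info (700) and above pass.
	l.SetLevel(gominlog.Linfo)

	l.Info("starting %s on port %d", "my-app", 8080)
	l.Debug("this message is suppressed at Linfo")
	l.Warning("disk usage at %d%%", 91)

	// The underlying *log.Logger can be swapped out, e.g. to log uncolored to stderr.
	l.WithColor(false)
	l.SetLogger(log.New(os.Stderr, "", log.LstdFlags))
	l.Error("something went wrong")
}
```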


@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2015 Microsoft Corporation
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@ -0,0 +1,188 @@
package ansiterm
const LogEnv = "DEBUG_TERMINAL"
// ANSI constants
// References:
// -- http://www.ecma-international.org/publications/standards/Ecma-048.htm
// -- http://man7.org/linux/man-pages/man4/console_codes.4.html
// -- http://manpages.ubuntu.com/manpages/intrepid/man4/console_codes.4.html
// -- http://en.wikipedia.org/wiki/ANSI_escape_code
// -- http://vt100.net/emu/dec_ansi_parser
// -- http://vt100.net/emu/vt500_parser.svg
// -- http://invisible-island.net/xterm/ctlseqs/ctlseqs.html
// -- http://www.inwap.com/pdp10/ansicode.txt
const (
// ECMA-48 Set Graphics Rendition
// Note:
// -- Constants leading with an underscore (e.g., _ANSI_xxx) are unsupported or reserved
// -- Fonts could possibly be supported via SetCurrentConsoleFontEx
// -- Windows does not expose the per-window cursor (i.e., caret) blink times
ANSI_SGR_RESET = 0
ANSI_SGR_BOLD = 1
ANSI_SGR_DIM = 2
_ANSI_SGR_ITALIC = 3
ANSI_SGR_UNDERLINE = 4
_ANSI_SGR_BLINKSLOW = 5
_ANSI_SGR_BLINKFAST = 6
ANSI_SGR_REVERSE = 7
_ANSI_SGR_INVISIBLE = 8
_ANSI_SGR_LINETHROUGH = 9
_ANSI_SGR_FONT_00 = 10
_ANSI_SGR_FONT_01 = 11
_ANSI_SGR_FONT_02 = 12
_ANSI_SGR_FONT_03 = 13
_ANSI_SGR_FONT_04 = 14
_ANSI_SGR_FONT_05 = 15
_ANSI_SGR_FONT_06 = 16
_ANSI_SGR_FONT_07 = 17
_ANSI_SGR_FONT_08 = 18
_ANSI_SGR_FONT_09 = 19
_ANSI_SGR_FONT_10 = 20
_ANSI_SGR_DOUBLEUNDERLINE = 21
ANSI_SGR_BOLD_DIM_OFF = 22
_ANSI_SGR_ITALIC_OFF = 23
ANSI_SGR_UNDERLINE_OFF = 24
_ANSI_SGR_BLINK_OFF = 25
_ANSI_SGR_RESERVED_00 = 26
ANSI_SGR_REVERSE_OFF = 27
_ANSI_SGR_INVISIBLE_OFF = 28
_ANSI_SGR_LINETHROUGH_OFF = 29
ANSI_SGR_FOREGROUND_BLACK = 30
ANSI_SGR_FOREGROUND_RED = 31
ANSI_SGR_FOREGROUND_GREEN = 32
ANSI_SGR_FOREGROUND_YELLOW = 33
ANSI_SGR_FOREGROUND_BLUE = 34
ANSI_SGR_FOREGROUND_MAGENTA = 35
ANSI_SGR_FOREGROUND_CYAN = 36
ANSI_SGR_FOREGROUND_WHITE = 37
_ANSI_SGR_RESERVED_01 = 38
ANSI_SGR_FOREGROUND_DEFAULT = 39
ANSI_SGR_BACKGROUND_BLACK = 40
ANSI_SGR_BACKGROUND_RED = 41
ANSI_SGR_BACKGROUND_GREEN = 42
ANSI_SGR_BACKGROUND_YELLOW = 43
ANSI_SGR_BACKGROUND_BLUE = 44
ANSI_SGR_BACKGROUND_MAGENTA = 45
ANSI_SGR_BACKGROUND_CYAN = 46
ANSI_SGR_BACKGROUND_WHITE = 47
_ANSI_SGR_RESERVED_02 = 48
ANSI_SGR_BACKGROUND_DEFAULT = 49
// 50 - 65: Unsupported
ANSI_MAX_CMD_LENGTH = 4096
MAX_INPUT_EVENTS = 128
DEFAULT_WIDTH = 80
DEFAULT_HEIGHT = 24
ANSI_BEL = 0x07
ANSI_BACKSPACE = 0x08
ANSI_TAB = 0x09
ANSI_LINE_FEED = 0x0A
ANSI_VERTICAL_TAB = 0x0B
ANSI_FORM_FEED = 0x0C
ANSI_CARRIAGE_RETURN = 0x0D
ANSI_ESCAPE_PRIMARY = 0x1B
ANSI_ESCAPE_SECONDARY = 0x5B
ANSI_OSC_STRING_ENTRY = 0x5D
ANSI_COMMAND_FIRST = 0x40
ANSI_COMMAND_LAST = 0x7E
DCS_ENTRY = 0x90
CSI_ENTRY = 0x9B
OSC_STRING = 0x9D
ANSI_PARAMETER_SEP = ";"
ANSI_CMD_G0 = '('
ANSI_CMD_G1 = ')'
ANSI_CMD_G2 = '*'
ANSI_CMD_G3 = '+'
ANSI_CMD_DECPNM = '>'
ANSI_CMD_DECPAM = '='
ANSI_CMD_OSC = ']'
ANSI_CMD_STR_TERM = '\\'
KEY_CONTROL_PARAM_2 = ";2"
KEY_CONTROL_PARAM_3 = ";3"
KEY_CONTROL_PARAM_4 = ";4"
KEY_CONTROL_PARAM_5 = ";5"
KEY_CONTROL_PARAM_6 = ";6"
KEY_CONTROL_PARAM_7 = ";7"
KEY_CONTROL_PARAM_8 = ";8"
KEY_ESC_CSI = "\x1B["
KEY_ESC_N = "\x1BN"
KEY_ESC_O = "\x1BO"
FILL_CHARACTER = ' '
)
func getByteRange(start byte, end byte) []byte {
bytes := make([]byte, 0, 32)
for i := start; i <= end; i++ {
bytes = append(bytes, byte(i))
}
return bytes
}
var toGroundBytes = getToGroundBytes()
var executors = getExecuteBytes()
// SPACE 20+A0 hex Always and everywhere a blank space
// Intermediate 20-2F hex !"#$%&'()*+,-./
var intermeds = getByteRange(0x20, 0x2F)
// Parameters 30-3F hex 0123456789:;<=>?
// CSI Parameters 30-39, 3B hex 0123456789;
var csiParams = getByteRange(0x30, 0x3F)
var csiCollectables = append(getByteRange(0x30, 0x39), getByteRange(0x3B, 0x3F)...)
// Uppercase 40-5F hex @ABCDEFGHIJKLMNOPQRSTUVWXYZ[\]^_
var upperCase = getByteRange(0x40, 0x5F)
// Lowercase 60-7E hex `abcdefghijlkmnopqrstuvwxyz{|}~
var lowerCase = getByteRange(0x60, 0x7E)
// Alphabetics 40-7E hex (all of upper and lower case)
var alphabetics = append(upperCase, lowerCase...)
var printables = getByteRange(0x20, 0x7F)
var escapeIntermediateToGroundBytes = getByteRange(0x30, 0x7E)
var escapeToGroundBytes = getEscapeToGroundBytes()
// See http://www.vt100.net/emu/vt500_parser.png for description of the complex
// byte ranges below
func getEscapeToGroundBytes() []byte {
escapeToGroundBytes := getByteRange(0x30, 0x4F)
escapeToGroundBytes = append(escapeToGroundBytes, getByteRange(0x51, 0x57)...)
escapeToGroundBytes = append(escapeToGroundBytes, 0x59)
escapeToGroundBytes = append(escapeToGroundBytes, 0x5A)
escapeToGroundBytes = append(escapeToGroundBytes, 0x5C)
escapeToGroundBytes = append(escapeToGroundBytes, getByteRange(0x60, 0x7E)...)
return escapeToGroundBytes
}
func getExecuteBytes() []byte {
executeBytes := getByteRange(0x00, 0x17)
executeBytes = append(executeBytes, 0x19)
executeBytes = append(executeBytes, getByteRange(0x1C, 0x1F)...)
return executeBytes
}
func getToGroundBytes() []byte {
groundBytes := []byte{0x18}
groundBytes = append(groundBytes, 0x1A)
groundBytes = append(groundBytes, getByteRange(0x80, 0x8F)...)
groundBytes = append(groundBytes, getByteRange(0x91, 0x97)...)
groundBytes = append(groundBytes, 0x99)
groundBytes = append(groundBytes, 0x9A)
groundBytes = append(groundBytes, 0x9C)
return groundBytes
}
// Delete 7F hex Always and everywhere ignored
// C1 Control 80-9F hex 32 additional control characters
// G1 Displayable A1-FE hex 94 additional displayable characters
// Special A0+FF hex Same as SPACE and DELETE


@ -0,0 +1,7 @@
package ansiterm
type ansiContext struct {
currentChar byte
paramBuffer []byte
interBuffer []byte
}


@ -0,0 +1,49 @@
package ansiterm
type csiEntryState struct {
baseState
}
func (csiState csiEntryState) Handle(b byte) (s state, e error) {
logger.Infof("CsiEntry::Handle %#x", b)
nextState, err := csiState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case sliceContains(alphabetics, b):
return csiState.parser.ground, nil
case sliceContains(csiCollectables, b):
return csiState.parser.csiParam, nil
case sliceContains(executors, b):
return csiState, csiState.parser.execute()
}
return csiState, nil
}
func (csiState csiEntryState) Transition(s state) error {
logger.Infof("CsiEntry::Transition %s --> %s", csiState.Name(), s.Name())
csiState.baseState.Transition(s)
switch s {
case csiState.parser.ground:
return csiState.parser.csiDispatch()
case csiState.parser.csiParam:
switch {
case sliceContains(csiParams, csiState.parser.context.currentChar):
csiState.parser.collectParam()
case sliceContains(intermeds, csiState.parser.context.currentChar):
csiState.parser.collectInter()
}
}
return nil
}
func (csiState csiEntryState) Enter() error {
csiState.parser.clear()
return nil
}


@ -0,0 +1,38 @@
package ansiterm
type csiParamState struct {
baseState
}
func (csiState csiParamState) Handle(b byte) (s state, e error) {
logger.Infof("CsiParam::Handle %#x", b)
nextState, err := csiState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case sliceContains(alphabetics, b):
return csiState.parser.ground, nil
case sliceContains(csiCollectables, b):
csiState.parser.collectParam()
return csiState, nil
case sliceContains(executors, b):
return csiState, csiState.parser.execute()
}
return csiState, nil
}
func (csiState csiParamState) Transition(s state) error {
logger.Infof("CsiParam::Transition %s --> %s", csiState.Name(), s.Name())
csiState.baseState.Transition(s)
switch s {
case csiState.parser.ground:
return csiState.parser.csiDispatch()
}
return nil
}


@ -0,0 +1,36 @@
package ansiterm
type escapeIntermediateState struct {
baseState
}
func (escState escapeIntermediateState) Handle(b byte) (s state, e error) {
logger.Infof("escapeIntermediateState::Handle %#x", b)
nextState, err := escState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case sliceContains(intermeds, b):
return escState, escState.parser.collectInter()
case sliceContains(executors, b):
return escState, escState.parser.execute()
case sliceContains(escapeIntermediateToGroundBytes, b):
return escState.parser.ground, nil
}
return escState, nil
}
func (escState escapeIntermediateState) Transition(s state) error {
logger.Infof("escapeIntermediateState::Transition %s --> %s", escState.Name(), s.Name())
escState.baseState.Transition(s)
switch s {
case escState.parser.ground:
return escState.parser.escDispatch()
}
return nil
}


@ -0,0 +1,47 @@
package ansiterm
type escapeState struct {
baseState
}
func (escState escapeState) Handle(b byte) (s state, e error) {
logger.Infof("escapeState::Handle %#x", b)
nextState, err := escState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case b == ANSI_ESCAPE_SECONDARY:
return escState.parser.csiEntry, nil
case b == ANSI_OSC_STRING_ENTRY:
return escState.parser.oscString, nil
case sliceContains(executors, b):
return escState, escState.parser.execute()
case sliceContains(escapeToGroundBytes, b):
return escState.parser.ground, nil
case sliceContains(intermeds, b):
return escState.parser.escapeIntermediate, nil
}
return escState, nil
}
func (escState escapeState) Transition(s state) error {
logger.Infof("Escape::Transition %s --> %s", escState.Name(), s.Name())
escState.baseState.Transition(s)
switch s {
case escState.parser.ground:
return escState.parser.escDispatch()
case escState.parser.escapeIntermediate:
return escState.parser.collectInter()
}
return nil
}
func (escState escapeState) Enter() error {
escState.parser.clear()
return nil
}


@ -0,0 +1,90 @@
package ansiterm
type AnsiEventHandler interface {
// Print
Print(b byte) error
// Execute C0 commands
Execute(b byte) error
// CUrsor Up
CUU(int) error
// CUrsor Down
CUD(int) error
// CUrsor Forward
CUF(int) error
// CUrsor Backward
CUB(int) error
// Cursor to Next Line
CNL(int) error
// Cursor to Previous Line
CPL(int) error
// Cursor Horizontal position Absolute
CHA(int) error
// Vertical line Position Absolute
VPA(int) error
// CUrsor Position
CUP(int, int) error
// Horizontal and Vertical Position (depends on PUM)
HVP(int, int) error
// Text Cursor Enable Mode
DECTCEM(bool) error
// Origin Mode
DECOM(bool) error
// 132 Column Mode
DECCOLM(bool) error
// Erase in Display
ED(int) error
// Erase in Line
EL(int) error
// Insert Line
IL(int) error
// Delete Line
DL(int) error
// Insert Character
ICH(int) error
// Delete Character
DCH(int) error
// Set Graphics Rendition
SGR([]int) error
// Pan Down
SU(int) error
// Pan Up
SD(int) error
// Device Attributes
DA([]string) error
// Set Top and Bottom Margins
DECSTBM(int, int) error
// Index
IND() error
// Reverse Index
RI() error
// Flush updates from previous commands
Flush() error
}


@ -0,0 +1,24 @@
package ansiterm
type groundState struct {
baseState
}
func (gs groundState) Handle(b byte) (s state, e error) {
gs.parser.context.currentChar = b
nextState, err := gs.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case sliceContains(printables, b):
return gs, gs.parser.print()
case sliceContains(executors, b):
return gs, gs.parser.execute()
}
return gs, nil
}


@ -0,0 +1,31 @@
package ansiterm
type oscStringState struct {
baseState
}
func (oscState oscStringState) Handle(b byte) (s state, e error) {
logger.Infof("OscString::Handle %#x", b)
nextState, err := oscState.baseState.Handle(b)
if nextState != nil || err != nil {
return nextState, err
}
switch {
case isOscStringTerminator(b):
return oscState.parser.ground, nil
}
return oscState, nil
}
// See below for OSC string terminators for linux
// http://man7.org/linux/man-pages/man4/console_codes.4.html
func isOscStringTerminator(b byte) bool {
if b == ANSI_BEL || b == 0x5C {
return true
}
return false
}


@ -0,0 +1,136 @@
package ansiterm
import (
"errors"
"io/ioutil"
"os"
"github.com/Sirupsen/logrus"
)
var logger *logrus.Logger
type AnsiParser struct {
currState state
eventHandler AnsiEventHandler
context *ansiContext
csiEntry state
csiParam state
dcsEntry state
escape state
escapeIntermediate state
error state
ground state
oscString state
stateMap []state
}
func CreateParser(initialState string, evtHandler AnsiEventHandler) *AnsiParser {
logFile := ioutil.Discard
if isDebugEnv := os.Getenv(LogEnv); isDebugEnv == "1" {
logFile, _ = os.Create("ansiParser.log")
}
logger = &logrus.Logger{
Out: logFile,
Formatter: new(logrus.TextFormatter),
Level: logrus.InfoLevel,
}
parser := &AnsiParser{
eventHandler: evtHandler,
context: &ansiContext{},
}
parser.csiEntry = csiEntryState{baseState{name: "CsiEntry", parser: parser}}
parser.csiParam = csiParamState{baseState{name: "CsiParam", parser: parser}}
parser.dcsEntry = dcsEntryState{baseState{name: "DcsEntry", parser: parser}}
parser.escape = escapeState{baseState{name: "Escape", parser: parser}}
parser.escapeIntermediate = escapeIntermediateState{baseState{name: "EscapeIntermediate", parser: parser}}
parser.error = errorState{baseState{name: "Error", parser: parser}}
parser.ground = groundState{baseState{name: "Ground", parser: parser}}
parser.oscString = oscStringState{baseState{name: "OscString", parser: parser}}
parser.stateMap = []state{
parser.csiEntry,
parser.csiParam,
parser.dcsEntry,
parser.escape,
parser.escapeIntermediate,
parser.error,
parser.ground,
parser.oscString,
}
parser.currState = getState(initialState, parser.stateMap)
logger.Infof("CreateParser: parser %p", parser)
return parser
}
func getState(name string, states []state) state {
for _, el := range states {
if el.Name() == name {
return el
}
}
return nil
}
func (ap *AnsiParser) Parse(bytes []byte) (int, error) {
for i, b := range bytes {
if err := ap.handle(b); err != nil {
return i, err
}
}
return len(bytes), ap.eventHandler.Flush()
}
func (ap *AnsiParser) handle(b byte) error {
ap.context.currentChar = b
newState, err := ap.currState.Handle(b)
if err != nil {
return err
}
if newState == nil {
logger.Warning("newState is nil")
return errors.New("New state of 'nil' is invalid.")
}
if newState != ap.currState {
if err := ap.changeState(newState); err != nil {
return err
}
}
return nil
}
func (ap *AnsiParser) changeState(newState state) error {
logger.Infof("ChangeState %s --> %s", ap.currState.Name(), newState.Name())
// Exit old state
if err := ap.currState.Exit(); err != nil {
logger.Infof("Exit state '%s' failed with : '%v'", ap.currState.Name(), err)
return err
}
// Perform transition action
if err := ap.currState.Transition(newState); err != nil {
logger.Infof("Transition from '%s' to '%s' failed with: '%v'", ap.currState.Name(), newState.Name, err)
return err
}
// Enter new state
if err := newState.Enter(); err != nil {
logger.Infof("Enter state '%s' failed with: '%v'", newState.Name(), err)
return err
}
ap.currState = newState
return nil
}


@ -0,0 +1,103 @@
package ansiterm
import (
"strconv"
)
func parseParams(bytes []byte) ([]string, error) {
paramBuff := make([]byte, 0, 0)
params := []string{}
for _, v := range bytes {
if v == ';' {
if len(paramBuff) > 0 {
// Completed parameter, append it to the list
s := string(paramBuff)
params = append(params, s)
paramBuff = make([]byte, 0, 0)
}
} else {
paramBuff = append(paramBuff, v)
}
}
// Last parameter may not be terminated with ';'
if len(paramBuff) > 0 {
s := string(paramBuff)
params = append(params, s)
}
logger.Infof("Parsed params: %v with length: %d", params, len(params))
return params, nil
}
func parseCmd(context ansiContext) (string, error) {
return string(context.currentChar), nil
}
func getInt(params []string, dflt int) int {
i := getInts(params, 1, dflt)[0]
logger.Infof("getInt: %v", i)
return i
}
func getInts(params []string, minCount int, dflt int) []int {
ints := []int{}
for _, v := range params {
i, _ := strconv.Atoi(v)
// Zero is mapped to the default value in VT100.
if i == 0 {
i = dflt
}
ints = append(ints, i)
}
if len(ints) < minCount {
remaining := minCount - len(ints)
for i := 0; i < remaining; i++ {
ints = append(ints, dflt)
}
}
logger.Infof("getInts: %v", ints)
return ints
}
func (ap *AnsiParser) modeDispatch(param string, set bool) error {
switch param {
case "?3":
return ap.eventHandler.DECCOLM(set)
case "?6":
return ap.eventHandler.DECOM(set)
case "?25":
return ap.eventHandler.DECTCEM(set)
}
return nil
}
func (ap *AnsiParser) hDispatch(params []string) error {
if len(params) == 1 {
return ap.modeDispatch(params[0], true)
}
return nil
}
func (ap *AnsiParser) lDispatch(params []string) error {
if len(params) == 1 {
return ap.modeDispatch(params[0], false)
}
return nil
}
func getEraseParam(params []string) int {
param := getInt(params, 0)
if param < 0 || 3 < param {
param = 0
}
return param
}


@ -0,0 +1,122 @@
package ansiterm
import (
"fmt"
)
func (ap *AnsiParser) collectParam() error {
currChar := ap.context.currentChar
logger.Infof("collectParam %#x", currChar)
ap.context.paramBuffer = append(ap.context.paramBuffer, currChar)
return nil
}
func (ap *AnsiParser) collectInter() error {
currChar := ap.context.currentChar
logger.Infof("collectInter %#x", currChar)
ap.context.interBuffer = append(ap.context.interBuffer, currChar)
return nil
}
func (ap *AnsiParser) escDispatch() error {
cmd, _ := parseCmd(*ap.context)
intermeds := ap.context.interBuffer
logger.Infof("escDispatch currentChar: %#x", ap.context.currentChar)
logger.Infof("escDispatch: %v(%v)", cmd, intermeds)
switch cmd {
case "D": // IND
return ap.eventHandler.IND()
case "E": // NEL, equivalent to CRLF
err := ap.eventHandler.Execute(ANSI_CARRIAGE_RETURN)
if err == nil {
err = ap.eventHandler.Execute(ANSI_LINE_FEED)
}
return err
case "M": // RI
return ap.eventHandler.RI()
}
return nil
}
func (ap *AnsiParser) csiDispatch() error {
cmd, _ := parseCmd(*ap.context)
params, _ := parseParams(ap.context.paramBuffer)
logger.Infof("csiDispatch: %v(%v)", cmd, params)
switch cmd {
case "@":
return ap.eventHandler.ICH(getInt(params, 1))
case "A":
return ap.eventHandler.CUU(getInt(params, 1))
case "B":
return ap.eventHandler.CUD(getInt(params, 1))
case "C":
return ap.eventHandler.CUF(getInt(params, 1))
case "D":
return ap.eventHandler.CUB(getInt(params, 1))
case "E":
return ap.eventHandler.CNL(getInt(params, 1))
case "F":
return ap.eventHandler.CPL(getInt(params, 1))
case "G":
return ap.eventHandler.CHA(getInt(params, 1))
case "H":
ints := getInts(params, 2, 1)
x, y := ints[0], ints[1]
return ap.eventHandler.CUP(x, y)
case "J":
param := getEraseParam(params)
return ap.eventHandler.ED(param)
case "K":
param := getEraseParam(params)
return ap.eventHandler.EL(param)
case "L":
return ap.eventHandler.IL(getInt(params, 1))
case "M":
return ap.eventHandler.DL(getInt(params, 1))
case "P":
return ap.eventHandler.DCH(getInt(params, 1))
case "S":
return ap.eventHandler.SU(getInt(params, 1))
case "T":
return ap.eventHandler.SD(getInt(params, 1))
case "c":
return ap.eventHandler.DA(params)
case "d":
return ap.eventHandler.VPA(getInt(params, 1))
case "f":
ints := getInts(params, 2, 1)
x, y := ints[0], ints[1]
return ap.eventHandler.HVP(x, y)
case "h":
return ap.hDispatch(params)
case "l":
return ap.lDispatch(params)
case "m":
return ap.eventHandler.SGR(getInts(params, 1, 0))
case "r":
ints := getInts(params, 2, 1)
top, bottom := ints[0], ints[1]
return ap.eventHandler.DECSTBM(top, bottom)
default:
logger.Errorf(fmt.Sprintf("Unsupported CSI command: '%s', with full context: %v", cmd, ap.context))
return nil
}
}
func (ap *AnsiParser) print() error {
return ap.eventHandler.Print(ap.context.currentChar)
}
func (ap *AnsiParser) clear() error {
ap.context = &ansiContext{}
return nil
}
func (ap *AnsiParser) execute() error {
return ap.eventHandler.Execute(ap.context.currentChar)
}

View file

@ -0,0 +1,71 @@
package ansiterm
type stateID int
type state interface {
Enter() error
Exit() error
Handle(byte) (state, error)
Name() string
Transition(state) error
}
type baseState struct {
name string
parser *AnsiParser
}
func (base baseState) Enter() error {
return nil
}
func (base baseState) Exit() error {
return nil
}
func (base baseState) Handle(b byte) (s state, e error) {
switch {
case b == CSI_ENTRY:
return base.parser.csiEntry, nil
case b == DCS_ENTRY:
return base.parser.dcsEntry, nil
case b == ANSI_ESCAPE_PRIMARY:
return base.parser.escape, nil
case b == OSC_STRING:
return base.parser.oscString, nil
case sliceContains(toGroundBytes, b):
return base.parser.ground, nil
}
return nil, nil
}
func (base baseState) Name() string {
return base.name
}
func (base baseState) Transition(s state) error {
if s == base.parser.ground {
execBytes := []byte{0x18}
execBytes = append(execBytes, 0x1A)
execBytes = append(execBytes, getByteRange(0x80, 0x8F)...)
execBytes = append(execBytes, getByteRange(0x91, 0x97)...)
execBytes = append(execBytes, 0x99)
execBytes = append(execBytes, 0x9A)
if sliceContains(execBytes, base.parser.context.currentChar) {
return base.parser.execute()
}
}
return nil
}
type dcsEntryState struct {
baseState
}
type errorState struct {
baseState
}

View file

@ -0,0 +1,21 @@
package ansiterm
import (
"strconv"
)
func sliceContains(bytes []byte, b byte) bool {
for _, v := range bytes {
if v == b {
return true
}
}
return false
}
func convertBytesToInteger(bytes []byte) int {
s := string(bytes)
i, _ := strconv.Atoi(s)
return i
}

View file

@ -0,0 +1,182 @@
// +build windows
package winterm
import (
"fmt"
"os"
"strconv"
"strings"
"syscall"
"github.com/Azure/go-ansiterm"
)
// Windows keyboard constants
// See https://msdn.microsoft.com/en-us/library/windows/desktop/dd375731(v=vs.85).aspx.
const (
VK_PRIOR = 0x21 // PAGE UP key
VK_NEXT = 0x22 // PAGE DOWN key
VK_END = 0x23 // END key
VK_HOME = 0x24 // HOME key
VK_LEFT = 0x25 // LEFT ARROW key
VK_UP = 0x26 // UP ARROW key
VK_RIGHT = 0x27 // RIGHT ARROW key
VK_DOWN = 0x28 // DOWN ARROW key
VK_SELECT = 0x29 // SELECT key
VK_PRINT = 0x2A // PRINT key
VK_EXECUTE = 0x2B // EXECUTE key
VK_SNAPSHOT = 0x2C // PRINT SCREEN key
VK_INSERT = 0x2D // INS key
VK_DELETE = 0x2E // DEL key
VK_HELP = 0x2F // HELP key
VK_F1 = 0x70 // F1 key
VK_F2 = 0x71 // F2 key
VK_F3 = 0x72 // F3 key
VK_F4 = 0x73 // F4 key
VK_F5 = 0x74 // F5 key
VK_F6 = 0x75 // F6 key
VK_F7 = 0x76 // F7 key
VK_F8 = 0x77 // F8 key
VK_F9 = 0x78 // F9 key
VK_F10 = 0x79 // F10 key
VK_F11 = 0x7A // F11 key
VK_F12 = 0x7B // F12 key
RIGHT_ALT_PRESSED = 0x0001
LEFT_ALT_PRESSED = 0x0002
RIGHT_CTRL_PRESSED = 0x0004
LEFT_CTRL_PRESSED = 0x0008
SHIFT_PRESSED = 0x0010
NUMLOCK_ON = 0x0020
SCROLLLOCK_ON = 0x0040
CAPSLOCK_ON = 0x0080
ENHANCED_KEY = 0x0100
)
type ansiCommand struct {
CommandBytes []byte
Command string
Parameters []string
IsSpecial bool
}
func newAnsiCommand(command []byte) *ansiCommand {
if isCharacterSelectionCmdChar(command[1]) {
// Character Set Selection command
return &ansiCommand{
CommandBytes: command,
Command: string(command),
IsSpecial: true,
}
}
// last char is command character
lastCharIndex := len(command) - 1
ac := &ansiCommand{
CommandBytes: command,
Command: string(command[lastCharIndex]),
IsSpecial: false,
}
// more than a single escape
if lastCharIndex != 0 {
start := 1
// skip if double char escape sequence
if command[0] == ansiterm.ANSI_ESCAPE_PRIMARY && command[1] == ansiterm.ANSI_ESCAPE_SECONDARY {
start++
}
// convert this to GetNextParam method
ac.Parameters = strings.Split(string(command[start:lastCharIndex]), ansiterm.ANSI_PARAMETER_SEP)
}
return ac
}
func (ac *ansiCommand) paramAsSHORT(index int, defaultValue int16) int16 {
if index < 0 || index >= len(ac.Parameters) {
return defaultValue
}
param, err := strconv.ParseInt(ac.Parameters[index], 10, 16)
if err != nil {
return defaultValue
}
return int16(param)
}
func (ac *ansiCommand) String() string {
return fmt.Sprintf("0x%v \"%v\" (\"%v\")",
bytesToHex(ac.CommandBytes),
ac.Command,
strings.Join(ac.Parameters, "\",\""))
}
// isAnsiCommandChar returns true if the passed byte falls within the range of ANSI commands.
// See http://manpages.ubuntu.com/manpages/intrepid/man4/console_codes.4.html.
func isAnsiCommandChar(b byte) bool {
switch {
case ansiterm.ANSI_COMMAND_FIRST <= b && b <= ansiterm.ANSI_COMMAND_LAST && b != ansiterm.ANSI_ESCAPE_SECONDARY:
return true
case b == ansiterm.ANSI_CMD_G1 || b == ansiterm.ANSI_CMD_OSC || b == ansiterm.ANSI_CMD_DECPAM || b == ansiterm.ANSI_CMD_DECPNM:
// non-CSI escape sequence terminator
return true
case b == ansiterm.ANSI_CMD_STR_TERM || b == ansiterm.ANSI_BEL:
// String escape sequence terminator
return true
}
return false
}
func isXtermOscSequence(command []byte, current byte) bool {
return (len(command) >= 2 && command[0] == ansiterm.ANSI_ESCAPE_PRIMARY && command[1] == ansiterm.ANSI_CMD_OSC && current != ansiterm.ANSI_BEL)
}
func isCharacterSelectionCmdChar(b byte) bool {
return (b == ansiterm.ANSI_CMD_G0 || b == ansiterm.ANSI_CMD_G1 || b == ansiterm.ANSI_CMD_G2 || b == ansiterm.ANSI_CMD_G3)
}
// bytesToHex converts a slice of bytes to a human-readable string.
func bytesToHex(b []byte) string {
hex := make([]string, len(b))
for i, ch := range b {
hex[i] = fmt.Sprintf("%X", ch)
}
return strings.Join(hex, "")
}
// ensureInRange adjusts the passed value, if necessary, to ensure it is within
// the passed min / max range.
func ensureInRange(n int16, min int16, max int16) int16 {
if n < min {
return min
} else if n > max {
return max
} else {
return n
}
}
func GetStdFile(nFile int) (*os.File, uintptr) {
var file *os.File
switch nFile {
case syscall.STD_INPUT_HANDLE:
file = os.Stdin
case syscall.STD_OUTPUT_HANDLE:
file = os.Stdout
case syscall.STD_ERROR_HANDLE:
file = os.Stderr
default:
panic(fmt.Errorf("Invalid standard handle identifier: %v", nFile))
}
fd, err := syscall.GetStdHandle(nFile)
if err != nil {
panic(fmt.Errorf("Invalid standard handle indentifier: %v -- %v", nFile, err))
}
return file, uintptr(fd)
}
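// exampleStdout is an illustrative sketch, not part of the vendored source. It
// shows how GetStdFile pairs one of the syscall standard-handle identifiers
// with the matching *os.File and raw Windows handle; any other identifier
// panics, as the switch above documents.
func exampleStdout() (*os.File, uintptr) {
    return GetStdFile(syscall.STD_OUTPUT_HANDLE)
}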

View file

@ -0,0 +1,322 @@
// +build windows
package winterm
import (
"fmt"
"syscall"
"unsafe"
)
//===========================================================================================================
// IMPORTANT NOTE:
//
// The methods below make extensive use of the "unsafe" package to obtain the required pointers.
// Beginning in Go 1.3, the garbage collector may release local variables (e.g., incoming arguments, stack
// variables) the pointers reference *before* the API completes.
//
// As a result, in those cases, the code must hint that the variables remain active by invoking the
// dummy method "use" (see below). Newer versions of Go are planned to change the mechanism to no longer
// require unsafe pointers.
//
// If you add or modify methods, ENSURE protection of local variables through the "use" function to inform
// the garbage collector the variables remain in use if:
//
// -- The value is not a pointer (e.g., int32, struct)
// -- The value is not referenced by the method after passing the pointer to Windows
//
// See http://golang.org/doc/go1.3.
//===========================================================================================================
var (
kernel32DLL = syscall.NewLazyDLL("kernel32.dll")
getConsoleCursorInfoProc = kernel32DLL.NewProc("GetConsoleCursorInfo")
setConsoleCursorInfoProc = kernel32DLL.NewProc("SetConsoleCursorInfo")
setConsoleCursorPositionProc = kernel32DLL.NewProc("SetConsoleCursorPosition")
setConsoleModeProc = kernel32DLL.NewProc("SetConsoleMode")
getConsoleScreenBufferInfoProc = kernel32DLL.NewProc("GetConsoleScreenBufferInfo")
setConsoleScreenBufferSizeProc = kernel32DLL.NewProc("SetConsoleScreenBufferSize")
scrollConsoleScreenBufferProc = kernel32DLL.NewProc("ScrollConsoleScreenBufferA")
setConsoleTextAttributeProc = kernel32DLL.NewProc("SetConsoleTextAttribute")
setConsoleWindowInfoProc = kernel32DLL.NewProc("SetConsoleWindowInfo")
writeConsoleOutputProc = kernel32DLL.NewProc("WriteConsoleOutputW")
readConsoleInputProc = kernel32DLL.NewProc("ReadConsoleInputW")
waitForSingleObjectProc = kernel32DLL.NewProc("WaitForSingleObject")
)
// Windows Console constants
const (
// Console modes
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx.
ENABLE_PROCESSED_INPUT = 0x0001
ENABLE_LINE_INPUT = 0x0002
ENABLE_ECHO_INPUT = 0x0004
ENABLE_WINDOW_INPUT = 0x0008
ENABLE_MOUSE_INPUT = 0x0010
ENABLE_INSERT_MODE = 0x0020
ENABLE_QUICK_EDIT_MODE = 0x0040
ENABLE_EXTENDED_FLAGS = 0x0080
ENABLE_PROCESSED_OUTPUT = 0x0001
ENABLE_WRAP_AT_EOL_OUTPUT = 0x0002
// Character attributes
// Note:
// -- The attributes are combined to produce various colors (e.g., Blue + Green will create Cyan).
// Clearing all foreground or background colors results in black; setting all creates white.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms682088(v=vs.85).aspx#_win32_character_attributes.
FOREGROUND_BLUE uint16 = 0x0001
FOREGROUND_GREEN uint16 = 0x0002
FOREGROUND_RED uint16 = 0x0004
FOREGROUND_INTENSITY uint16 = 0x0008
FOREGROUND_MASK uint16 = 0x000F
BACKGROUND_BLUE uint16 = 0x0010
BACKGROUND_GREEN uint16 = 0x0020
BACKGROUND_RED uint16 = 0x0040
BACKGROUND_INTENSITY uint16 = 0x0080
BACKGROUND_MASK uint16 = 0x00F0
COMMON_LVB_MASK uint16 = 0xFF00
COMMON_LVB_REVERSE_VIDEO uint16 = 0x4000
COMMON_LVB_UNDERSCORE uint16 = 0x8000
// Input event types
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683499(v=vs.85).aspx.
KEY_EVENT = 0x0001
MOUSE_EVENT = 0x0002
WINDOW_BUFFER_SIZE_EVENT = 0x0004
MENU_EVENT = 0x0008
FOCUS_EVENT = 0x0010
// WaitForSingleObject return codes
WAIT_ABANDONED = 0x00000080
WAIT_FAILED = 0xFFFFFFFF
WAIT_SIGNALED = 0x00000000
WAIT_TIMEOUT = 0x00000102
// WaitForSingleObject wait duration
WAIT_INFINITE = 0xFFFFFFFF
WAIT_ONE_SECOND = 1000
WAIT_HALF_SECOND = 500
WAIT_QUARTER_SECOND = 250
)
// Windows API Console types
// -- See https://msdn.microsoft.com/en-us/library/windows/desktop/ms682101(v=vs.85).aspx for Console specific types (e.g., COORD)
// -- See https://msdn.microsoft.com/en-us/library/aa296569(v=vs.60).aspx for comments on alignment
type (
CHAR_INFO struct {
UnicodeChar uint16
Attributes uint16
}
CONSOLE_CURSOR_INFO struct {
Size uint32
Visible int32
}
CONSOLE_SCREEN_BUFFER_INFO struct {
Size COORD
CursorPosition COORD
Attributes uint16
Window SMALL_RECT
MaximumWindowSize COORD
}
COORD struct {
X int16
Y int16
}
SMALL_RECT struct {
Left int16
Top int16
Right int16
Bottom int16
}
// INPUT_RECORD is a C/C++ union of which KEY_EVENT_RECORD is one case; it is also the largest.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683499(v=vs.85).aspx.
INPUT_RECORD struct {
EventType uint16
KeyEvent KEY_EVENT_RECORD
}
KEY_EVENT_RECORD struct {
KeyDown int32
RepeatCount uint16
VirtualKeyCode uint16
VirtualScanCode uint16
UnicodeChar uint16
ControlKeyState uint32
}
WINDOW_BUFFER_SIZE struct {
Size COORD
}
)
// boolToBOOL converts a Go bool into a Windows int32.
func boolToBOOL(f bool) int32 {
if f {
return int32(1)
} else {
return int32(0)
}
}
// GetConsoleCursorInfo retrieves information about the size and visibility of the console cursor.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms683163(v=vs.85).aspx.
func GetConsoleCursorInfo(handle uintptr, cursorInfo *CONSOLE_CURSOR_INFO) error {
r1, r2, err := getConsoleCursorInfoProc.Call(handle, uintptr(unsafe.Pointer(cursorInfo)), 0)
return checkError(r1, r2, err)
}
// SetConsoleCursorInfo sets the size and visibility of the console cursor.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686019(v=vs.85).aspx.
func SetConsoleCursorInfo(handle uintptr, cursorInfo *CONSOLE_CURSOR_INFO) error {
r1, r2, err := setConsoleCursorInfoProc.Call(handle, uintptr(unsafe.Pointer(cursorInfo)), 0)
return checkError(r1, r2, err)
}
// SetConsoleCursorPosition sets the location of the console cursor.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686025(v=vs.85).aspx.
func SetConsoleCursorPosition(handle uintptr, coord COORD) error {
r1, r2, err := setConsoleCursorPositionProc.Call(handle, coordToPointer(coord))
use(coord)
return checkError(r1, r2, err)
}
// GetConsoleMode gets the console mode for the given file descriptor.
// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms683167(v=vs.85).aspx.
func GetConsoleMode(handle uintptr) (mode uint32, err error) {
err = syscall.GetConsoleMode(syscall.Handle(handle), &mode)
return mode, err
}
// SetConsoleMode sets the console mode for the given file descriptor.
// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686033(v=vs.85).aspx.
func SetConsoleMode(handle uintptr, mode uint32) error {
r1, r2, err := setConsoleModeProc.Call(handle, uintptr(mode), 0)
use(mode)
return checkError(r1, r2, err)
}
// GetConsoleScreenBufferInfo retrieves information about the specified console screen buffer.
// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms683171(v=vs.85).aspx.
func GetConsoleScreenBufferInfo(handle uintptr) (*CONSOLE_SCREEN_BUFFER_INFO, error) {
info := CONSOLE_SCREEN_BUFFER_INFO{}
err := checkError(getConsoleScreenBufferInfoProc.Call(handle, uintptr(unsafe.Pointer(&info)), 0))
if err != nil {
return nil, err
}
return &info, nil
}
func ScrollConsoleScreenBuffer(handle uintptr, scrollRect SMALL_RECT, clipRect SMALL_RECT, destOrigin COORD, char CHAR_INFO) error {
r1, r2, err := scrollConsoleScreenBufferProc.Call(handle, uintptr(unsafe.Pointer(&scrollRect)), uintptr(unsafe.Pointer(&clipRect)), coordToPointer(destOrigin), uintptr(unsafe.Pointer(&char)))
use(scrollRect)
use(clipRect)
use(destOrigin)
use(char)
return checkError(r1, r2, err)
}
// SetConsoleScreenBufferSize sets the size of the console screen buffer.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686044(v=vs.85).aspx.
func SetConsoleScreenBufferSize(handle uintptr, coord COORD) error {
r1, r2, err := setConsoleScreenBufferSizeProc.Call(handle, coordToPointer(coord))
use(coord)
return checkError(r1, r2, err)
}
// SetConsoleTextAttribute sets the attributes of characters written to the
// console screen buffer by the WriteFile or WriteConsole function.
// See http://msdn.microsoft.com/en-us/library/windows/desktop/ms686047(v=vs.85).aspx.
func SetConsoleTextAttribute(handle uintptr, attribute uint16) error {
r1, r2, err := setConsoleTextAttributeProc.Call(handle, uintptr(attribute), 0)
use(attribute)
return checkError(r1, r2, err)
}
// SetConsoleWindowInfo sets the size and position of the console screen buffer's window.
// Note that the size and location must be within and no larger than the backing console screen buffer.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms686125(v=vs.85).aspx.
func SetConsoleWindowInfo(handle uintptr, isAbsolute bool, rect SMALL_RECT) error {
r1, r2, err := setConsoleWindowInfoProc.Call(handle, uintptr(boolToBOOL(isAbsolute)), uintptr(unsafe.Pointer(&rect)))
use(isAbsolute)
use(rect)
return checkError(r1, r2, err)
}
// WriteConsoleOutput writes the CHAR_INFOs from the provided buffer to the active console buffer.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms687404(v=vs.85).aspx.
func WriteConsoleOutput(handle uintptr, buffer []CHAR_INFO, bufferSize COORD, bufferCoord COORD, writeRegion *SMALL_RECT) error {
r1, r2, err := writeConsoleOutputProc.Call(handle, uintptr(unsafe.Pointer(&buffer[0])), coordToPointer(bufferSize), coordToPointer(bufferCoord), uintptr(unsafe.Pointer(writeRegion)))
use(buffer)
use(bufferSize)
use(bufferCoord)
return checkError(r1, r2, err)
}
// ReadConsoleInput reads (and removes) data from the console input buffer.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms684961(v=vs.85).aspx.
func ReadConsoleInput(handle uintptr, buffer []INPUT_RECORD, count *uint32) error {
r1, r2, err := readConsoleInputProc.Call(handle, uintptr(unsafe.Pointer(&buffer[0])), uintptr(len(buffer)), uintptr(unsafe.Pointer(count)))
use(buffer)
return checkError(r1, r2, err)
}
// WaitForSingleObject waits for the passed handle to be signaled.
// It returns true if the handle was signaled; false otherwise.
// See https://msdn.microsoft.com/en-us/library/windows/desktop/ms687032(v=vs.85).aspx.
func WaitForSingleObject(handle uintptr, msWait uint32) (bool, error) {
r1, _, err := waitForSingleObjectProc.Call(handle, uintptr(uint32(msWait)))
switch r1 {
case WAIT_ABANDONED, WAIT_TIMEOUT:
return false, nil
case WAIT_SIGNALED:
return true, nil
}
use(msWait)
return false, err
}
// String helpers
func (info CONSOLE_SCREEN_BUFFER_INFO) String() string {
return fmt.Sprintf("Size(%v) Cursor(%v) Window(%v) Max(%v)", info.Size, info.CursorPosition, info.Window, info.MaximumWindowSize)
}
func (coord COORD) String() string {
return fmt.Sprintf("%v,%v", coord.X, coord.Y)
}
func (rect SMALL_RECT) String() string {
return fmt.Sprintf("(%v,%v),(%v,%v)", rect.Left, rect.Top, rect.Right, rect.Bottom)
}
// checkError evaluates the results of a Windows API call and returns the error if it failed.
func checkError(r1, r2 uintptr, err error) error {
// Windows APIs return non-zero to indicate success
if r1 != 0 {
return nil
}
// Return the error if provided, otherwise default to EINVAL
if err != nil {
return err
}
return syscall.EINVAL
}
// coordToPointer converts a COORD into a uintptr (by fooling the type system).
func coordToPointer(c COORD) uintptr {
// Note: This code assumes the two SHORTs are correctly laid out; the "cast" to uint32 is just to get a pointer to pass.
return uintptr(*((*uint32)(unsafe.Pointer(&c))))
}
// use is a no-op, but the compiler cannot see that it is.
// Calling use(p) ensures that p is kept live until that point.
func use(p interface{}) {}

View file

@ -0,0 +1,100 @@
// +build windows
package winterm
import "github.com/Azure/go-ansiterm"
const (
FOREGROUND_COLOR_MASK = FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE
BACKGROUND_COLOR_MASK = BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE
)
// collectAnsiIntoWindowsAttributes modifies the passed Windows text mode flags to reflect the
// request represented by the passed ANSI mode.
func collectAnsiIntoWindowsAttributes(windowsMode uint16, inverted bool, baseMode uint16, ansiMode int16) (uint16, bool) {
switch ansiMode {
// Mode styles
case ansiterm.ANSI_SGR_BOLD:
windowsMode = windowsMode | FOREGROUND_INTENSITY
case ansiterm.ANSI_SGR_DIM, ansiterm.ANSI_SGR_BOLD_DIM_OFF:
windowsMode &^= FOREGROUND_INTENSITY
case ansiterm.ANSI_SGR_UNDERLINE:
windowsMode = windowsMode | COMMON_LVB_UNDERSCORE
case ansiterm.ANSI_SGR_REVERSE:
inverted = true
case ansiterm.ANSI_SGR_REVERSE_OFF:
inverted = false
case ansiterm.ANSI_SGR_UNDERLINE_OFF:
windowsMode &^= COMMON_LVB_UNDERSCORE
// Foreground colors
case ansiterm.ANSI_SGR_FOREGROUND_DEFAULT:
windowsMode = (windowsMode &^ FOREGROUND_MASK) | (baseMode & FOREGROUND_MASK)
case ansiterm.ANSI_SGR_FOREGROUND_BLACK:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK)
case ansiterm.ANSI_SGR_FOREGROUND_RED:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED
case ansiterm.ANSI_SGR_FOREGROUND_GREEN:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_GREEN
case ansiterm.ANSI_SGR_FOREGROUND_YELLOW:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_GREEN
case ansiterm.ANSI_SGR_FOREGROUND_BLUE:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_BLUE
case ansiterm.ANSI_SGR_FOREGROUND_MAGENTA:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_BLUE
case ansiterm.ANSI_SGR_FOREGROUND_CYAN:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_GREEN | FOREGROUND_BLUE
case ansiterm.ANSI_SGR_FOREGROUND_WHITE:
windowsMode = (windowsMode &^ FOREGROUND_COLOR_MASK) | FOREGROUND_RED | FOREGROUND_GREEN | FOREGROUND_BLUE
// Background colors
case ansiterm.ANSI_SGR_BACKGROUND_DEFAULT:
// Black with no intensity
windowsMode = (windowsMode &^ BACKGROUND_MASK) | (baseMode & BACKGROUND_MASK)
case ansiterm.ANSI_SGR_BACKGROUND_BLACK:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK)
case ansiterm.ANSI_SGR_BACKGROUND_RED:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED
case ansiterm.ANSI_SGR_BACKGROUND_GREEN:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_GREEN
case ansiterm.ANSI_SGR_BACKGROUND_YELLOW:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_GREEN
case ansiterm.ANSI_SGR_BACKGROUND_BLUE:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_BLUE
case ansiterm.ANSI_SGR_BACKGROUND_MAGENTA:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_BLUE
case ansiterm.ANSI_SGR_BACKGROUND_CYAN:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_GREEN | BACKGROUND_BLUE
case ansiterm.ANSI_SGR_BACKGROUND_WHITE:
windowsMode = (windowsMode &^ BACKGROUND_COLOR_MASK) | BACKGROUND_RED | BACKGROUND_GREEN | BACKGROUND_BLUE
}
return windowsMode, inverted
}
// invertAttributes inverts the foreground and background colors of a Windows attributes value
func invertAttributes(windowsMode uint16) uint16 {
return (COMMON_LVB_MASK & windowsMode) | ((FOREGROUND_MASK & windowsMode) << 4) | ((BACKGROUND_MASK & windowsMode) >> 4)
}
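// exampleRedOnBlue is an illustrative sketch, not part of the vendored source.
// It folds the ANSI SGR parameters for "red foreground, blue background"
// (ESC[31;44m) into a Windows attribute word, starting from a base attribute,
// and applies the reverse-video flag the same way the event handler does.
func exampleRedOnBlue(base uint16) uint16 {
    attrs, inverted := base, false
    for _, p := range []int16{ansiterm.ANSI_SGR_FOREGROUND_RED, ansiterm.ANSI_SGR_BACKGROUND_BLUE} {
        attrs, inverted = collectAnsiIntoWindowsAttributes(attrs, inverted, base, p)
    }
    if inverted {
        attrs = invertAttributes(attrs)
    }
    // Result: FOREGROUND_RED | BACKGROUND_BLUE plus whatever non-color bits base carried.
    return attrs
}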

View file

@ -0,0 +1,101 @@
// +build windows
package winterm
const (
horizontal = iota
vertical
)
func (h *windowsAnsiEventHandler) getCursorWindow(info *CONSOLE_SCREEN_BUFFER_INFO) SMALL_RECT {
if h.originMode {
sr := h.effectiveSr(info.Window)
return SMALL_RECT{
Top: sr.top,
Bottom: sr.bottom,
Left: 0,
Right: info.Size.X - 1,
}
} else {
return SMALL_RECT{
Top: info.Window.Top,
Bottom: info.Window.Bottom,
Left: 0,
Right: info.Size.X - 1,
}
}
}
// setCursorPosition sets the cursor to the specified position, bounded to the screen size
func (h *windowsAnsiEventHandler) setCursorPosition(position COORD, window SMALL_RECT) error {
position.X = ensureInRange(position.X, window.Left, window.Right)
position.Y = ensureInRange(position.Y, window.Top, window.Bottom)
err := SetConsoleCursorPosition(h.fd, position)
if err != nil {
return err
}
logger.Infof("Cursor position set: (%d, %d)", position.X, position.Y)
return err
}
func (h *windowsAnsiEventHandler) moveCursorVertical(param int) error {
return h.moveCursor(vertical, param)
}
func (h *windowsAnsiEventHandler) moveCursorHorizontal(param int) error {
return h.moveCursor(horizontal, param)
}
func (h *windowsAnsiEventHandler) moveCursor(moveMode int, param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
position := info.CursorPosition
switch moveMode {
case horizontal:
position.X += int16(param)
case vertical:
position.Y += int16(param)
}
if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) moveCursorLine(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
position := info.CursorPosition
position.X = 0
position.Y += int16(param)
if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) moveCursorColumn(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
position := info.CursorPosition
position.X = int16(param) - 1
if err = h.setCursorPosition(position, h.getCursorWindow(info)); err != nil {
return err
}
return nil
}

View file

@ -0,0 +1,84 @@
// +build windows
package winterm
import "github.com/Azure/go-ansiterm"
func (h *windowsAnsiEventHandler) clearRange(attributes uint16, fromCoord COORD, toCoord COORD) error {
// Ignore an invalid (negative area) request
if toCoord.Y < fromCoord.Y {
return nil
}
var err error
var coordStart = COORD{}
var coordEnd = COORD{}
xCurrent, yCurrent := fromCoord.X, fromCoord.Y
xEnd, yEnd := toCoord.X, toCoord.Y
// Clear any partial initial line
if xCurrent > 0 {
coordStart.X, coordStart.Y = xCurrent, yCurrent
coordEnd.X, coordEnd.Y = xEnd, yCurrent
err = h.clearRect(attributes, coordStart, coordEnd)
if err != nil {
return err
}
xCurrent = 0
yCurrent += 1
}
// Clear intervening rectangular section
if yCurrent < yEnd {
coordStart.X, coordStart.Y = xCurrent, yCurrent
coordEnd.X, coordEnd.Y = xEnd, yEnd-1
err = h.clearRect(attributes, coordStart, coordEnd)
if err != nil {
return err
}
xCurrent = 0
yCurrent = yEnd
}
// Clear remaining partial ending line
coordStart.X, coordStart.Y = xCurrent, yCurrent
coordEnd.X, coordEnd.Y = xEnd, yEnd
err = h.clearRect(attributes, coordStart, coordEnd)
if err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) clearRect(attributes uint16, fromCoord COORD, toCoord COORD) error {
region := SMALL_RECT{Top: fromCoord.Y, Left: fromCoord.X, Bottom: toCoord.Y, Right: toCoord.X}
width := toCoord.X - fromCoord.X + 1
height := toCoord.Y - fromCoord.Y + 1
size := uint32(width) * uint32(height)
if size <= 0 {
return nil
}
buffer := make([]CHAR_INFO, size)
char := CHAR_INFO{ansiterm.FILL_CHARACTER, attributes}
for i := 0; i < int(size); i++ {
buffer[i] = char
}
err := WriteConsoleOutput(h.fd, buffer, COORD{X: width, Y: height}, COORD{X: 0, Y: 0}, &region)
if err != nil {
return err
}
return nil
}

View file

@ -0,0 +1,118 @@
// +build windows
package winterm
// effectiveSr gets the current effective scroll region in buffer coordinates
func (h *windowsAnsiEventHandler) effectiveSr(window SMALL_RECT) scrollRegion {
top := addInRange(window.Top, h.sr.top, window.Top, window.Bottom)
bottom := addInRange(window.Top, h.sr.bottom, window.Top, window.Bottom)
if top >= bottom {
top = window.Top
bottom = window.Bottom
}
return scrollRegion{top: top, bottom: bottom}
}
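// exampleEffectiveSr is an illustrative sketch, not part of the vendored
// source. With a console window spanning buffer rows 10..34, a DECSTBM region
// of terminal rows 2..5 (stored zero-based in h.sr by the event handler) is
// translated by effectiveSr into buffer rows 11..14.
func exampleEffectiveSr(h *windowsAnsiEventHandler) scrollRegion {
    window := SMALL_RECT{Top: 10, Bottom: 34, Left: 0, Right: 79}
    h.sr = scrollRegion{top: 1, bottom: 4}
    return h.effectiveSr(window) // -> scrollRegion{top: 11, bottom: 14}
}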
func (h *windowsAnsiEventHandler) scrollUp(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
sr := h.effectiveSr(info.Window)
return h.scroll(param, sr, info)
}
func (h *windowsAnsiEventHandler) scrollDown(param int) error {
return h.scrollUp(-param)
}
func (h *windowsAnsiEventHandler) deleteLines(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
start := info.CursorPosition.Y
sr := h.effectiveSr(info.Window)
// Lines cannot be inserted or deleted outside the scrolling region.
if start >= sr.top && start <= sr.bottom {
sr.top = start
return h.scroll(param, sr, info)
} else {
return nil
}
}
func (h *windowsAnsiEventHandler) insertLines(param int) error {
return h.deleteLines(-param)
}
// scroll scrolls the provided scroll region by param lines. The scroll region is in buffer coordinates.
func (h *windowsAnsiEventHandler) scroll(param int, sr scrollRegion, info *CONSOLE_SCREEN_BUFFER_INFO) error {
logger.Infof("scroll: scrollTop: %d, scrollBottom: %d", sr.top, sr.bottom)
logger.Infof("scroll: windowTop: %d, windowBottom: %d", info.Window.Top, info.Window.Bottom)
// Copy from and clip to the scroll region (full buffer width)
scrollRect := SMALL_RECT{
Top: sr.top,
Bottom: sr.bottom,
Left: 0,
Right: info.Size.X - 1,
}
// Origin to which area should be copied
destOrigin := COORD{
X: 0,
Y: sr.top - int16(param),
}
char := CHAR_INFO{
UnicodeChar: ' ',
Attributes: h.attributes,
}
if err := ScrollConsoleScreenBuffer(h.fd, scrollRect, scrollRect, destOrigin, char); err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) deleteCharacters(param int) error {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
return h.scrollLine(param, info.CursorPosition, info)
}
func (h *windowsAnsiEventHandler) insertCharacters(param int) error {
return h.deleteCharacters(-param)
}
// scrollLine scrolls a line horizontally starting at the provided position by a number of columns.
func (h *windowsAnsiEventHandler) scrollLine(columns int, position COORD, info *CONSOLE_SCREEN_BUFFER_INFO) error {
// Copy from and clip to the scroll region (full buffer width)
scrollRect := SMALL_RECT{
Top: position.Y,
Bottom: position.Y,
Left: position.X,
Right: info.Size.X - 1,
}
// Origin to which area should be copied
destOrigin := COORD{
X: position.X - int16(columns),
Y: position.Y,
}
char := CHAR_INFO{
UnicodeChar: ' ',
Attributes: h.attributes,
}
if err := ScrollConsoleScreenBuffer(h.fd, scrollRect, scrollRect, destOrigin, char); err != nil {
return err
}
return nil
}

View file

@ -0,0 +1,9 @@
// +build windows
package winterm
// addInRange increments a value by the passed quantity while ensuring the result
// always remains within the supplied min / max range.
func addInRange(n int16, increment int16, min int16, max int16) int16 {
return ensureInRange(n+increment, min, max)
}

View file

@ -0,0 +1,726 @@
// +build windows
package winterm
import (
"bytes"
"io/ioutil"
"os"
"strconv"
"github.com/Azure/go-ansiterm"
"github.com/Sirupsen/logrus"
)
var logger *logrus.Logger
type windowsAnsiEventHandler struct {
fd uintptr
file *os.File
infoReset *CONSOLE_SCREEN_BUFFER_INFO
sr scrollRegion
buffer bytes.Buffer
attributes uint16
inverted bool
wrapNext bool
drewMarginByte bool
originMode bool
marginByte byte
curInfo *CONSOLE_SCREEN_BUFFER_INFO
curPos COORD
}
func CreateWinEventHandler(fd uintptr, file *os.File) ansiterm.AnsiEventHandler {
logFile := ioutil.Discard
if isDebugEnv := os.Getenv(ansiterm.LogEnv); isDebugEnv == "1" {
logFile, _ = os.Create("winEventHandler.log")
}
logger = &logrus.Logger{
Out: logFile,
Formatter: new(logrus.TextFormatter),
Level: logrus.DebugLevel,
}
infoReset, err := GetConsoleScreenBufferInfo(fd)
if err != nil {
return nil
}
return &windowsAnsiEventHandler{
fd: fd,
file: file,
infoReset: infoReset,
attributes: infoReset.Attributes,
}
}
type scrollRegion struct {
top int16
bottom int16
}
// simulateLF simulates a LF or CR+LF by scrolling if necessary to handle the
// current cursor position and scroll region settings, in which case it returns
// true. If no special handling is necessary, then it does nothing and returns
// false.
//
// In the false case, the caller should ensure that a carriage return
// and line feed are inserted or that the text is otherwise wrapped.
func (h *windowsAnsiEventHandler) simulateLF(includeCR bool) (bool, error) {
if h.wrapNext {
if err := h.Flush(); err != nil {
return false, err
}
h.clearWrap()
}
pos, info, err := h.getCurrentInfo()
if err != nil {
return false, err
}
sr := h.effectiveSr(info.Window)
if pos.Y == sr.bottom {
// Scrolling is necessary. Let Windows automatically scroll if the scrolling region
// is the full window.
if sr.top == info.Window.Top && sr.bottom == info.Window.Bottom {
if includeCR {
pos.X = 0
h.updatePos(pos)
}
return false, nil
}
// A custom scroll region is active. Scroll the window manually to simulate
// the LF.
if err := h.Flush(); err != nil {
return false, err
}
logger.Info("Simulating LF inside scroll region")
if err := h.scrollUp(1); err != nil {
return false, err
}
if includeCR {
pos.X = 0
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return false, err
}
}
return true, nil
} else if pos.Y < info.Window.Bottom {
// Let Windows handle the LF.
pos.Y++
if includeCR {
pos.X = 0
}
h.updatePos(pos)
return false, nil
} else {
// The cursor is at the bottom of the screen but outside the scroll
// region. Skip the LF.
logger.Info("Simulating LF outside scroll region")
if includeCR {
if err := h.Flush(); err != nil {
return false, err
}
pos.X = 0
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return false, err
}
}
return true, nil
}
}
// executeLF executes a LF without a CR.
func (h *windowsAnsiEventHandler) executeLF() error {
handled, err := h.simulateLF(false)
if err != nil {
return err
}
if !handled {
// Windows LF will reset the cursor column position. Write the LF
// and restore the cursor position.
pos, _, err := h.getCurrentInfo()
if err != nil {
return err
}
h.buffer.WriteByte(ansiterm.ANSI_LINE_FEED)
if pos.X != 0 {
if err := h.Flush(); err != nil {
return err
}
logger.Info("Resetting cursor position for LF without CR")
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return err
}
}
}
return nil
}
func (h *windowsAnsiEventHandler) Print(b byte) error {
if h.wrapNext {
h.buffer.WriteByte(h.marginByte)
h.clearWrap()
if _, err := h.simulateLF(true); err != nil {
return err
}
}
pos, info, err := h.getCurrentInfo()
if err != nil {
return err
}
if pos.X == info.Size.X-1 {
h.wrapNext = true
h.marginByte = b
} else {
pos.X++
h.updatePos(pos)
h.buffer.WriteByte(b)
}
return nil
}
func (h *windowsAnsiEventHandler) Execute(b byte) error {
switch b {
case ansiterm.ANSI_TAB:
logger.Info("Execute(TAB)")
// Move to the next tab stop, but preserve auto-wrap if already set.
if !h.wrapNext {
pos, info, err := h.getCurrentInfo()
if err != nil {
return err
}
pos.X = (pos.X + 8) - pos.X%8
if pos.X >= info.Size.X {
pos.X = info.Size.X - 1
}
if err := h.Flush(); err != nil {
return err
}
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return err
}
}
return nil
case ansiterm.ANSI_BEL:
h.buffer.WriteByte(ansiterm.ANSI_BEL)
return nil
case ansiterm.ANSI_BACKSPACE:
if h.wrapNext {
if err := h.Flush(); err != nil {
return err
}
h.clearWrap()
}
pos, _, err := h.getCurrentInfo()
if err != nil {
return err
}
if pos.X > 0 {
pos.X--
h.updatePos(pos)
h.buffer.WriteByte(ansiterm.ANSI_BACKSPACE)
}
return nil
case ansiterm.ANSI_VERTICAL_TAB, ansiterm.ANSI_FORM_FEED:
// Treat as true LF.
return h.executeLF()
case ansiterm.ANSI_LINE_FEED:
// Simulate a CR and LF for now since there is no way in go-ansiterm
// to tell if the LF should include CR (and more things break when it's
// missing than when it's incorrectly added).
handled, err := h.simulateLF(true)
if handled || err != nil {
return err
}
return h.buffer.WriteByte(ansiterm.ANSI_LINE_FEED)
case ansiterm.ANSI_CARRIAGE_RETURN:
if h.wrapNext {
if err := h.Flush(); err != nil {
return err
}
h.clearWrap()
}
pos, _, err := h.getCurrentInfo()
if err != nil {
return err
}
if pos.X != 0 {
pos.X = 0
h.updatePos(pos)
h.buffer.WriteByte(ansiterm.ANSI_CARRIAGE_RETURN)
}
return nil
default:
return nil
}
}
func (h *windowsAnsiEventHandler) CUU(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("CUU: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorVertical(-param)
}
func (h *windowsAnsiEventHandler) CUD(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("CUD: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorVertical(param)
}
func (h *windowsAnsiEventHandler) CUF(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("CUF: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorHorizontal(param)
}
func (h *windowsAnsiEventHandler) CUB(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("CUB: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorHorizontal(-param)
}
func (h *windowsAnsiEventHandler) CNL(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("CNL: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorLine(param)
}
func (h *windowsAnsiEventHandler) CPL(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("CPL: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorLine(-param)
}
func (h *windowsAnsiEventHandler) CHA(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("CHA: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.moveCursorColumn(param)
}
func (h *windowsAnsiEventHandler) VPA(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("VPA: [[%d]]", param)
h.clearWrap()
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
window := h.getCursorWindow(info)
position := info.CursorPosition
position.Y = window.Top + int16(param) - 1
return h.setCursorPosition(position, window)
}
func (h *windowsAnsiEventHandler) CUP(row int, col int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("CUP: [[%d %d]]", row, col)
h.clearWrap()
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
window := h.getCursorWindow(info)
position := COORD{window.Left + int16(col) - 1, window.Top + int16(row) - 1}
return h.setCursorPosition(position, window)
}
func (h *windowsAnsiEventHandler) HVP(row int, col int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("HVP: [[%d %d]]", row, col)
h.clearWrap()
return h.CUP(row, col)
}
func (h *windowsAnsiEventHandler) DECTCEM(visible bool) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("DECTCEM: [%v]", []string{strconv.FormatBool(visible)})
h.clearWrap()
return nil
}
func (h *windowsAnsiEventHandler) DECOM(enable bool) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("DECOM: [%v]", []string{strconv.FormatBool(enable)})
h.clearWrap()
h.originMode = enable
return h.CUP(1, 1)
}
func (h *windowsAnsiEventHandler) DECCOLM(use132 bool) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("DECCOLM: [%v]", []string{strconv.FormatBool(use132)})
h.clearWrap()
if err := h.ED(2); err != nil {
return err
}
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
targetWidth := int16(80)
if use132 {
targetWidth = 132
}
if info.Size.X < targetWidth {
if err := SetConsoleScreenBufferSize(h.fd, COORD{targetWidth, info.Size.Y}); err != nil {
logger.Info("set buffer failed:", err)
return err
}
}
window := info.Window
window.Left = 0
window.Right = targetWidth - 1
if err := SetConsoleWindowInfo(h.fd, true, window); err != nil {
logger.Info("set window failed:", err)
return err
}
if info.Size.X > targetWidth {
if err := SetConsoleScreenBufferSize(h.fd, COORD{targetWidth, info.Size.Y}); err != nil {
logger.Info("set buffer failed:", err)
return err
}
}
return SetConsoleCursorPosition(h.fd, COORD{0, 0})
}
func (h *windowsAnsiEventHandler) ED(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("ED: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
// [J -- Erases from the cursor to the end of the screen, including the cursor position.
// [1J -- Erases from the beginning of the screen to the cursor, including the cursor position.
// [2J -- Erases the complete display. The cursor does not move.
// Notes:
// -- Clearing the entire buffer, versus just the Window, works best for Windows Consoles
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
var start COORD
var end COORD
switch param {
case 0:
start = info.CursorPosition
end = COORD{info.Size.X - 1, info.Size.Y - 1}
case 1:
start = COORD{0, 0}
end = info.CursorPosition
case 2:
start = COORD{0, 0}
end = COORD{info.Size.X - 1, info.Size.Y - 1}
}
err = h.clearRange(h.attributes, start, end)
if err != nil {
return err
}
// If the whole buffer was cleared, move the window to the top while preserving
// the window-relative cursor position.
if param == 2 {
pos := info.CursorPosition
window := info.Window
pos.Y -= window.Top
window.Bottom -= window.Top
window.Top = 0
if err := SetConsoleCursorPosition(h.fd, pos); err != nil {
return err
}
if err := SetConsoleWindowInfo(h.fd, true, window); err != nil {
return err
}
}
return nil
}
func (h *windowsAnsiEventHandler) EL(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("EL: [%v]", strconv.Itoa(param))
h.clearWrap()
// [K -- Erases from the cursor to the end of the line, including the cursor position.
// [1K -- Erases from the beginning of the line to the cursor, including the cursor position.
// [2K -- Erases the complete line.
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
var start COORD
var end COORD
switch param {
case 0:
start = info.CursorPosition
end = COORD{info.Size.X, info.CursorPosition.Y}
case 1:
start = COORD{0, info.CursorPosition.Y}
end = info.CursorPosition
case 2:
start = COORD{0, info.CursorPosition.Y}
end = COORD{info.Size.X, info.CursorPosition.Y}
}
err = h.clearRange(h.attributes, start, end)
if err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) IL(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("IL: [%v]", strconv.Itoa(param))
h.clearWrap()
return h.insertLines(param)
}
func (h *windowsAnsiEventHandler) DL(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("DL: [%v]", strconv.Itoa(param))
h.clearWrap()
return h.deleteLines(param)
}
func (h *windowsAnsiEventHandler) ICH(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("ICH: [%v]", strconv.Itoa(param))
h.clearWrap()
return h.insertCharacters(param)
}
func (h *windowsAnsiEventHandler) DCH(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("DCH: [%v]", strconv.Itoa(param))
h.clearWrap()
return h.deleteCharacters(param)
}
func (h *windowsAnsiEventHandler) SGR(params []int) error {
if err := h.Flush(); err != nil {
return err
}
strings := []string{}
for _, v := range params {
strings = append(strings, strconv.Itoa(v))
}
logger.Infof("SGR: [%v]", strings)
if len(params) <= 0 {
h.attributes = h.infoReset.Attributes
h.inverted = false
} else {
for _, attr := range params {
if attr == ansiterm.ANSI_SGR_RESET {
h.attributes = h.infoReset.Attributes
h.inverted = false
continue
}
h.attributes, h.inverted = collectAnsiIntoWindowsAttributes(h.attributes, h.inverted, h.infoReset.Attributes, int16(attr))
}
}
attributes := h.attributes
if h.inverted {
attributes = invertAttributes(attributes)
}
err := SetConsoleTextAttribute(h.fd, attributes)
if err != nil {
return err
}
return nil
}
func (h *windowsAnsiEventHandler) SU(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("SU: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.scrollUp(param)
}
func (h *windowsAnsiEventHandler) SD(param int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("SD: [%v]", []string{strconv.Itoa(param)})
h.clearWrap()
return h.scrollDown(param)
}
func (h *windowsAnsiEventHandler) DA(params []string) error {
logger.Infof("DA: [%v]", params)
// DA cannot be implemented because it must send data on the VT100 input stream,
// which is not available to go-ansiterm.
return nil
}
func (h *windowsAnsiEventHandler) DECSTBM(top int, bottom int) error {
if err := h.Flush(); err != nil {
return err
}
logger.Infof("DECSTBM: [%d, %d]", top, bottom)
// Windows is 0-indexed; the ANSI sequence is 1-indexed.
h.sr.top = int16(top - 1)
h.sr.bottom = int16(bottom - 1)
// This command also moves the cursor to the origin.
h.clearWrap()
return h.CUP(1, 1)
}
func (h *windowsAnsiEventHandler) RI() error {
if err := h.Flush(); err != nil {
return err
}
logger.Info("RI: []")
h.clearWrap()
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
sr := h.effectiveSr(info.Window)
if info.CursorPosition.Y == sr.top {
return h.scrollDown(1)
}
return h.moveCursorVertical(-1)
}
func (h *windowsAnsiEventHandler) IND() error {
logger.Info("IND: []")
return h.executeLF()
}
func (h *windowsAnsiEventHandler) Flush() error {
h.curInfo = nil
if h.buffer.Len() > 0 {
logger.Infof("Flush: [%s]", h.buffer.Bytes())
if _, err := h.buffer.WriteTo(h.file); err != nil {
return err
}
}
if h.wrapNext && !h.drewMarginByte {
logger.Infof("Flush: drawing margin byte '%c'", h.marginByte)
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return err
}
charInfo := []CHAR_INFO{{UnicodeChar: uint16(h.marginByte), Attributes: info.Attributes}}
size := COORD{1, 1}
position := COORD{0, 0}
region := SMALL_RECT{Left: info.CursorPosition.X, Top: info.CursorPosition.Y, Right: info.CursorPosition.X, Bottom: info.CursorPosition.Y}
if err := WriteConsoleOutput(h.fd, charInfo, size, position, &region); err != nil {
return err
}
h.drewMarginByte = true
}
return nil
}
// getCurrentInfo ensures that the current console screen information has been queried
// since the last call to Flush(). It must be called before accessing h.curInfo or h.curPos.
func (h *windowsAnsiEventHandler) getCurrentInfo() (COORD, *CONSOLE_SCREEN_BUFFER_INFO, error) {
if h.curInfo == nil {
info, err := GetConsoleScreenBufferInfo(h.fd)
if err != nil {
return COORD{}, nil, err
}
h.curInfo = info
h.curPos = info.CursorPosition
}
return h.curPos, h.curInfo, nil
}
func (h *windowsAnsiEventHandler) updatePos(pos COORD) {
if h.curInfo == nil {
panic("failed to call getCurrentInfo before calling updatePos")
}
h.curPos = pos
}
// clearWrap clears the state where the cursor is in the margin
// waiting for the next character before wrapping the line. This must
// be done before most operations that act on the cursor.
func (h *windowsAnsiEventHandler) clearWrap() {
h.wrapNext = false
h.drewMarginByte = false
}

14
integration/vendor/github.com/BurntSushi/toml/COPYING generated vendored Normal file
View file

@ -0,0 +1,14 @@
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.

509
integration/vendor/github.com/BurntSushi/toml/decode.go generated vendored Normal file
View file

@ -0,0 +1,509 @@
package toml
import (
"fmt"
"io"
"io/ioutil"
"math"
"reflect"
"strings"
"time"
)
func e(format string, args ...interface{}) error {
return fmt.Errorf("toml: "+format, args...)
}
// Unmarshaler is the interface implemented by objects that can unmarshal a
// TOML description of themselves.
type Unmarshaler interface {
UnmarshalTOML(interface{}) error
}
// Unmarshal decodes the contents of `p` in TOML format into a pointer `v`.
func Unmarshal(p []byte, v interface{}) error {
_, err := Decode(string(p), v)
return err
}
// Primitive is a TOML value that hasn't been decoded into a Go value.
// When using the various `Decode*` functions, the type `Primitive` may
// be given to any value, and its decoding will be delayed.
//
// A `Primitive` value can be decoded using the `PrimitiveDecode` function.
//
// The underlying representation of a `Primitive` value is subject to change.
// Do not rely on it.
//
// N.B. Primitive values are still parsed, so using them will only avoid
// the overhead of reflection. They can be useful when you don't know the
// exact type of TOML data until run time.
type Primitive struct {
undecoded interface{}
context Key
}
// DEPRECATED!
//
// Use MetaData.PrimitiveDecode instead.
func PrimitiveDecode(primValue Primitive, v interface{}) error {
md := MetaData{decoded: make(map[string]bool)}
return md.unify(primValue.undecoded, rvalue(v))
}
// PrimitiveDecode is just like the other `Decode*` functions, except it
// decodes a TOML value that has already been parsed. Valid primitive values
// can *only* be obtained from values filled by the decoder functions,
// including this method. (i.e., `v` may contain more `Primitive`
// values.)
//
// Meta data for primitive values is included in the meta data returned by
// the `Decode*` functions with one exception: keys returned by the Undecoded
// method will only reflect keys that were decoded. Namely, any keys hidden
// behind a Primitive will be considered undecoded. Executing this method will
// update the undecoded keys in the meta data. (See the example.)
func (md *MetaData) PrimitiveDecode(primValue Primitive, v interface{}) error {
md.context = primValue.context
defer func() { md.context = nil }()
return md.unify(primValue.undecoded, rvalue(v))
}
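// examplePrimitive is an illustrative sketch, not part of the vendored
// BurntSushi/toml source. It defers decoding of one value via Primitive and
// completes it later with MetaData.PrimitiveDecode, as described above.
func examplePrimitive() ([]string, error) {
    var doc struct {
        Names Primitive
    }
    md, err := Decode("names = [\"alpha\", \"beta\"]\n", &doc)
    if err != nil {
        return nil, err
    }
    var names []string
    if err := md.PrimitiveDecode(doc.Names, &names); err != nil {
        return nil, err
    }
    return names, nil // -> ["alpha", "beta"]
}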
// Decode will decode the contents of `data` in TOML format into a pointer
// `v`.
//
// TOML hashes correspond to Go structs or maps. (Dealer's choice. They can be
// used interchangeably.)
//
// TOML arrays of tables correspond to either a slice of structs or a slice
// of maps.
//
// TOML datetimes correspond to Go `time.Time` values.
//
// All other TOML types (float, string, int, bool and array) correspond
// to the obvious Go types.
//
// An exception to the above rules is if a type implements the
// encoding.TextUnmarshaler interface. In this case, any primitive TOML value
// (floats, strings, integers, booleans and datetimes) will be converted to
// a byte string and given to the value's UnmarshalText method. See the
// Unmarshaler example for a demonstration with time duration strings.
//
// Key mapping
//
// TOML keys can map to either keys in a Go map or field names in a Go
// struct. The special `toml` struct tag may be used to map TOML keys to
// struct fields that don't match the key name exactly. (See the example.)
// A case insensitive match to struct names will be tried if an exact match
// can't be found.
//
// The mapping between TOML values and Go values is loose. That is, there
// may exist TOML values that cannot be placed into your representation, and
// there may be parts of your representation that do not correspond to
// TOML values. This loose mapping can be made stricter by using the IsDefined
// and/or Undecoded methods on the MetaData returned.
//
// This decoder will not handle cyclic types. If a cyclic type is passed,
// `Decode` will not terminate.
func Decode(data string, v interface{}) (MetaData, error) {
rv := reflect.ValueOf(v)
if rv.Kind() != reflect.Ptr {
return MetaData{}, e("Decode of non-pointer %s", reflect.TypeOf(v))
}
if rv.IsNil() {
return MetaData{}, e("Decode of nil %s", reflect.TypeOf(v))
}
p, err := parse(data)
if err != nil {
return MetaData{}, err
}
md := MetaData{
p.mapping, p.types, p.ordered,
make(map[string]bool, len(p.ordered)), nil,
}
return md, md.unify(p.mapping, indirect(rv))
}
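// exampleDecode is an illustrative sketch, not part of the vendored source. It
// decodes a small document into a struct, using the `toml` struct tag and the
// case-insensitive fallback described in the comment above.
func exampleDecode() (string, int, error) {
    type server struct {
        Host string `toml:"host"`
        Port int // matched case-insensitively against the "port" key
    }
    var cfg server
    if _, err := Decode("host = \"example.com\"\nport = 8080\n", &cfg); err != nil {
        return "", 0, err
    }
    return cfg.Host, cfg.Port, nil // -> "example.com", 8080
}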
// DecodeFile is just like Decode, except it will automatically read the
// contents of the file at `fpath` and decode it for you.
func DecodeFile(fpath string, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadFile(fpath)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// DecodeReader is just like Decode, except it will consume all bytes
// from the reader and decode it for you.
func DecodeReader(r io.Reader, v interface{}) (MetaData, error) {
bs, err := ioutil.ReadAll(r)
if err != nil {
return MetaData{}, err
}
return Decode(string(bs), v)
}
// unify performs a sort of type unification based on the structure of `rv`,
// which is the client representation.
//
// Any type mismatch produces an error. Finding a type that we don't know
// how to handle produces an unsupported type error.
func (md *MetaData) unify(data interface{}, rv reflect.Value) error {
// Special case. Look for a `Primitive` value.
if rv.Type() == reflect.TypeOf((*Primitive)(nil)).Elem() {
// Save the undecoded data and the key context into the primitive
// value.
context := make(Key, len(md.context))
copy(context, md.context)
rv.Set(reflect.ValueOf(Primitive{
undecoded: data,
context: context,
}))
return nil
}
// Special case. Unmarshaler Interface support.
if rv.CanAddr() {
if v, ok := rv.Addr().Interface().(Unmarshaler); ok {
return v.UnmarshalTOML(data)
}
}
// Special case. Handle time.Time values specifically.
// TODO: Remove this code when we decide to drop support for Go 1.1.
// This isn't necessary in Go 1.2 because time.Time satisfies the encoding
// interfaces.
if rv.Type().AssignableTo(rvalue(time.Time{}).Type()) {
return md.unifyDatetime(data, rv)
}
// Special case. Look for a value satisfying the TextUnmarshaler interface.
if v, ok := rv.Interface().(TextUnmarshaler); ok {
return md.unifyText(data, v)
}
// BUG(burntsushi)
// The behavior here is incorrect whenever a Go type satisfies the
// encoding.TextUnmarshaler interface but also corresponds to a TOML
// hash or array. In particular, the unmarshaler should only be applied
// to primitive TOML values. But at this point, it will be applied to
// all kinds of values and produce an incorrect error whenever those values
// are hashes or arrays (including arrays of tables).
k := rv.Kind()
// laziness
if k >= reflect.Int && k <= reflect.Uint64 {
return md.unifyInt(data, rv)
}
switch k {
case reflect.Ptr:
elem := reflect.New(rv.Type().Elem())
err := md.unify(data, reflect.Indirect(elem))
if err != nil {
return err
}
rv.Set(elem)
return nil
case reflect.Struct:
return md.unifyStruct(data, rv)
case reflect.Map:
return md.unifyMap(data, rv)
case reflect.Array:
return md.unifyArray(data, rv)
case reflect.Slice:
return md.unifySlice(data, rv)
case reflect.String:
return md.unifyString(data, rv)
case reflect.Bool:
return md.unifyBool(data, rv)
case reflect.Interface:
// we only support empty interfaces.
if rv.NumMethod() > 0 {
return e("unsupported type %s", rv.Type())
}
return md.unifyAnything(data, rv)
case reflect.Float32:
fallthrough
case reflect.Float64:
return md.unifyFloat64(data, rv)
}
return e("unsupported type %s", rv.Kind())
}
func (md *MetaData) unifyStruct(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return e("type mismatch for %s: expected table but found %T",
rv.Type().String(), mapping)
}
for key, datum := range tmap {
var f *field
fields := cachedTypeFields(rv.Type())
for i := range fields {
ff := &fields[i]
if ff.name == key {
f = ff
break
}
if f == nil && strings.EqualFold(ff.name, key) {
f = ff
}
}
if f != nil {
subv := rv
for _, i := range f.index {
subv = indirect(subv.Field(i))
}
if isUnifiable(subv) {
md.decoded[md.context.add(key).String()] = true
md.context = append(md.context, key)
if err := md.unify(datum, subv); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
} else if f.name != "" {
// Bad user! No soup for you!
return e("cannot write unexported field %s.%s",
rv.Type().String(), f.name)
}
}
}
return nil
}
func (md *MetaData) unifyMap(mapping interface{}, rv reflect.Value) error {
tmap, ok := mapping.(map[string]interface{})
if !ok {
if mapping == nil {
return nil
}
return badtype("map", mapping)
}
if rv.IsNil() {
rv.Set(reflect.MakeMap(rv.Type()))
}
for k, v := range tmap {
md.decoded[md.context.add(k).String()] = true
md.context = append(md.context, k)
rvkey := indirect(reflect.New(rv.Type().Key()))
rvval := reflect.Indirect(reflect.New(rv.Type().Elem()))
if err := md.unify(v, rvval); err != nil {
return err
}
md.context = md.context[0 : len(md.context)-1]
rvkey.SetString(k)
rv.SetMapIndex(rvkey, rvval)
}
return nil
}
func (md *MetaData) unifyArray(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
sliceLen := datav.Len()
if sliceLen != rv.Len() {
return e("expected array length %d; got TOML array of length %d",
rv.Len(), sliceLen)
}
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySlice(data interface{}, rv reflect.Value) error {
datav := reflect.ValueOf(data)
if datav.Kind() != reflect.Slice {
if !datav.IsValid() {
return nil
}
return badtype("slice", data)
}
n := datav.Len()
if rv.IsNil() || rv.Cap() < n {
rv.Set(reflect.MakeSlice(rv.Type(), n, n))
}
rv.SetLen(n)
return md.unifySliceArray(datav, rv)
}
func (md *MetaData) unifySliceArray(data, rv reflect.Value) error {
sliceLen := data.Len()
for i := 0; i < sliceLen; i++ {
v := data.Index(i).Interface()
sliceval := indirect(rv.Index(i))
if err := md.unify(v, sliceval); err != nil {
return err
}
}
return nil
}
func (md *MetaData) unifyDatetime(data interface{}, rv reflect.Value) error {
if _, ok := data.(time.Time); ok {
rv.Set(reflect.ValueOf(data))
return nil
}
return badtype("time.Time", data)
}
func (md *MetaData) unifyString(data interface{}, rv reflect.Value) error {
if s, ok := data.(string); ok {
rv.SetString(s)
return nil
}
return badtype("string", data)
}
func (md *MetaData) unifyFloat64(data interface{}, rv reflect.Value) error {
if num, ok := data.(float64); ok {
switch rv.Kind() {
case reflect.Float32:
fallthrough
case reflect.Float64:
rv.SetFloat(num)
default:
panic("bug")
}
return nil
}
return badtype("float", data)
}
func (md *MetaData) unifyInt(data interface{}, rv reflect.Value) error {
if num, ok := data.(int64); ok {
if rv.Kind() >= reflect.Int && rv.Kind() <= reflect.Int64 {
switch rv.Kind() {
case reflect.Int, reflect.Int64:
// No bounds checking necessary.
case reflect.Int8:
if num < math.MinInt8 || num > math.MaxInt8 {
return e("value %d is out of range for int8", num)
}
case reflect.Int16:
if num < math.MinInt16 || num > math.MaxInt16 {
return e("value %d is out of range for int16", num)
}
case reflect.Int32:
if num < math.MinInt32 || num > math.MaxInt32 {
return e("value %d is out of range for int32", num)
}
}
rv.SetInt(num)
} else if rv.Kind() >= reflect.Uint && rv.Kind() <= reflect.Uint64 {
unum := uint64(num)
switch rv.Kind() {
case reflect.Uint, reflect.Uint64:
// No bounds checking necessary.
case reflect.Uint8:
if num < 0 || unum > math.MaxUint8 {
return e("value %d is out of range for uint8", num)
}
case reflect.Uint16:
if num < 0 || unum > math.MaxUint16 {
return e("value %d is out of range for uint16", num)
}
case reflect.Uint32:
if num < 0 || unum > math.MaxUint32 {
return e("value %d is out of range for uint32", num)
}
}
rv.SetUint(unum)
} else {
panic("unreachable")
}
return nil
}
return badtype("integer", data)
}
func (md *MetaData) unifyBool(data interface{}, rv reflect.Value) error {
if b, ok := data.(bool); ok {
rv.SetBool(b)
return nil
}
return badtype("boolean", data)
}
func (md *MetaData) unifyAnything(data interface{}, rv reflect.Value) error {
rv.Set(reflect.ValueOf(data))
return nil
}
func (md *MetaData) unifyText(data interface{}, v TextUnmarshaler) error {
var s string
switch sdata := data.(type) {
case TextMarshaler:
text, err := sdata.MarshalText()
if err != nil {
return err
}
s = string(text)
case fmt.Stringer:
s = sdata.String()
case string:
s = sdata
case bool:
s = fmt.Sprintf("%v", sdata)
case int64:
s = fmt.Sprintf("%d", sdata)
case float64:
s = fmt.Sprintf("%f", sdata)
default:
return badtype("primitive (string-like)", data)
}
if err := v.UnmarshalText([]byte(s)); err != nil {
return err
}
return nil
}
// rvalue returns a reflect.Value of `v`. All pointers are resolved.
func rvalue(v interface{}) reflect.Value {
return indirect(reflect.ValueOf(v))
}
// indirect returns the value pointed to by a pointer.
// Pointers are followed until the value is not a pointer.
// New values are allocated for each nil pointer.
//
// An exception to this rule is if the value satisfies an interface of
// interest to us (like encoding.TextUnmarshaler).
func indirect(v reflect.Value) reflect.Value {
if v.Kind() != reflect.Ptr {
if v.CanSet() {
pv := v.Addr()
if _, ok := pv.Interface().(TextUnmarshaler); ok {
return pv
}
}
return v
}
if v.IsNil() {
v.Set(reflect.New(v.Type().Elem()))
}
return indirect(reflect.Indirect(v))
}
func isUnifiable(rv reflect.Value) bool {
if rv.CanSet() {
return true
}
if _, ok := rv.Interface().(TextUnmarshaler); ok {
return true
}
return false
}
func badtype(expected string, data interface{}) error {
return e("cannot load TOML value of type %T into a Go %s", data, expected)
}
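The Primitive special case at the top of unify above is what enables delayed decoding: the raw value and its key context are stashed until the caller picks a concrete type. A minimal sketch of how that is driven from calling code, assuming the package's exported toml.Decode and MetaData.PrimitiveDecode entry points (defined in parts of decode.go not shown in this hunk); the struct and key names are illustrative only:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type config struct {
	Name string
	// Decoding of this table is deferred: the raw value and its key
	// context are kept in a toml.Primitive until a concrete type is chosen.
	Extra toml.Primitive
}

func main() {
	doc := `
name = "demo"

[extra]
threshold = 42
`
	var cfg config
	md, err := toml.Decode(doc, &cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Unify the stashed value once the target type is known.
	var extra struct{ Threshold int }
	if err := md.PrimitiveDecode(cfg.Extra, &extra); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.Name, extra.Threshold) // demo 42
}
```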

@@ -0,0 +1,121 @@
package toml
import "strings"
// MetaData allows access to meta information about TOML data that may not
// be inferrable via reflection. In particular, whether a key has been defined
// and the TOML type of a key.
type MetaData struct {
mapping map[string]interface{}
types map[string]tomlType
keys []Key
decoded map[string]bool
context Key // Used only during decoding.
}
// IsDefined returns true if the key given exists in the TOML data. The key
// should be specified hierarchically, e.g.,
//
// // access the TOML key 'a.b.c'
// IsDefined("a", "b", "c")
//
// IsDefined will return false if an empty key is given. Keys are case sensitive.
func (md *MetaData) IsDefined(key ...string) bool {
if len(key) == 0 {
return false
}
var hash map[string]interface{}
var ok bool
var hashOrVal interface{} = md.mapping
for _, k := range key {
if hash, ok = hashOrVal.(map[string]interface{}); !ok {
return false
}
if hashOrVal, ok = hash[k]; !ok {
return false
}
}
return true
}
// Type returns a string representation of the type of the key specified.
//
// Type will return the empty string if given an empty key or a key that
// does not exist. Keys are case sensitive.
func (md *MetaData) Type(key ...string) string {
fullkey := strings.Join(key, ".")
if typ, ok := md.types[fullkey]; ok {
return typ.typeString()
}
return ""
}
// Key is the type of any TOML key, including key groups. Use (MetaData).Keys
// to get values of this type.
type Key []string
func (k Key) String() string {
return strings.Join(k, ".")
}
func (k Key) maybeQuotedAll() string {
var ss []string
for i := range k {
ss = append(ss, k.maybeQuoted(i))
}
return strings.Join(ss, ".")
}
func (k Key) maybeQuoted(i int) string {
quote := false
for _, c := range k[i] {
if !isBareKeyChar(c) {
quote = true
break
}
}
if quote {
return "\"" + strings.Replace(k[i], "\"", "\\\"", -1) + "\""
}
return k[i]
}
func (k Key) add(piece string) Key {
newKey := make(Key, len(k)+1)
copy(newKey, k)
newKey[len(k)] = piece
return newKey
}
// Keys returns a slice of every key in the TOML data, including key groups.
// Each key is itself a slice, where the first element is the top of the
// hierarchy and the last is the most specific.
//
// The list will have the same order as the keys appeared in the TOML data.
//
// All keys returned are non-empty.
func (md *MetaData) Keys() []Key {
return md.keys
}
// Undecoded returns all keys that have not been decoded in the order in which
// they appear in the original TOML document.
//
// This includes keys that haven't been decoded because of a Primitive value.
// Once the Primitive value is decoded, the keys will be considered decoded.
//
// Also note that decoding into an empty interface will result in no decoding,
// and so no keys will be considered decoded.
//
// In this sense, the Undecoded keys correspond to keys in the TOML document
// that do not have a concrete type in your representation.
func (md *MetaData) Undecoded() []Key {
undecoded := make([]Key, 0, len(md.keys))
for _, key := range md.keys {
if !md.decoded[key.String()] {
undecoded = append(undecoded, key)
}
}
return undecoded
}
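A short sketch of how the MetaData helpers above (IsDefined, Type, Undecoded) are typically used after a decode to spot optional and unknown keys; the config struct and document below are made up for illustration, and toml.Decode is assumed to be the package's usual entry point:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type serverConfig struct {
	Host string
	Port int
}

func main() {
	doc := `
host = "127.0.0.1"
port = 8080
debug = true   # no matching field in serverConfig
`
	var cfg serverConfig
	md, err := toml.Decode(doc, &cfg)
	if err != nil {
		log.Fatal(err)
	}

	// Was an optional key present, and what TOML type did it have?
	fmt.Println("port defined:", md.IsDefined("port")) // true
	fmt.Println("port type:", md.Type("port"))         // Integer

	// Keys from the document that were never decoded into cfg.
	for _, key := range md.Undecoded() {
		fmt.Println("unknown key:", key.String()) // debug
	}
}
```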

27
integration/vendor/github.com/BurntSushi/toml/doc.go generated vendored Normal file
@@ -0,0 +1,27 @@
/*
Package toml provides facilities for decoding and encoding TOML configuration
files via reflection. There is also support for delaying decoding with
the Primitive type, and querying the set of keys in a TOML document with the
MetaData type.
The specification implemented: https://github.com/mojombo/toml
The sub-command github.com/BurntSushi/toml/cmd/tomlv can be used to verify
whether a file is a valid TOML document. It can also be used to print the
type of each key in a TOML document.
Testing
There are two important types of tests used for this package. The first is
contained inside '*_test.go' files and uses the standard Go unit testing
framework. These tests are primarily devoted to holistically testing the
decoder and encoder.
The second type of testing is used to verify the implementation's adherence
to the TOML specification. These tests have been factored into their own
project: https://github.com/BurntSushi/toml-test
The reason the tests are in a separate project is so that they can be used by
any implementation of TOML, since the test suite is language agnostic.
*/
package toml
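As a companion to the package comment above, a minimal end-to-end decode into a tagged struct; the document, struct, and key names are illustrative, not taken from traefik's configuration:

```go
package main

import (
	"fmt"
	"log"

	"github.com/BurntSushi/toml"
)

type server struct {
	Host string `toml:"host"`
	Port int    `toml:"port"`
}

type config struct {
	Title   string            `toml:"title"`
	Servers map[string]server `toml:"servers"`
}

func main() {
	doc := `
title = "example"

[servers.alpha]
host = "10.0.0.1"
port = 8001
`
	var cfg config
	if _, err := toml.Decode(doc, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Printf("%s -> %+v\n", cfg.Title, cfg.Servers["alpha"])
}
```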

568
integration/vendor/github.com/BurntSushi/toml/encode.go generated vendored Normal file
@@ -0,0 +1,568 @@
package toml
import (
"bufio"
"errors"
"fmt"
"io"
"reflect"
"sort"
"strconv"
"strings"
"time"
)
type tomlEncodeError struct{ error }
var (
errArrayMixedElementTypes = errors.New(
"toml: cannot encode array with mixed element types")
errArrayNilElement = errors.New(
"toml: cannot encode array with nil element")
errNonString = errors.New(
"toml: cannot encode a map with non-string key type")
errAnonNonStruct = errors.New(
"toml: cannot encode an anonymous field that is not a struct")
errArrayNoTable = errors.New(
"toml: TOML array element cannot contain a table")
errNoKey = errors.New(
"toml: top-level values must be Go maps or structs")
errAnything = errors.New("") // used in testing
)
var quotedReplacer = strings.NewReplacer(
"\t", "\\t",
"\n", "\\n",
"\r", "\\r",
"\"", "\\\"",
"\\", "\\\\",
)
// Encoder controls the encoding of Go values to a TOML document to some
// io.Writer.
//
// The indentation level can be controlled with the Indent field.
type Encoder struct {
// A single indentation level. By default it is two spaces.
Indent string
// hasWritten is whether we have written any output to w yet.
hasWritten bool
w *bufio.Writer
}
// NewEncoder returns a TOML encoder that encodes Go values to the io.Writer
// given. By default, a single indentation level is 2 spaces.
func NewEncoder(w io.Writer) *Encoder {
return &Encoder{
w: bufio.NewWriter(w),
Indent: " ",
}
}
// Encode writes a TOML representation of the Go value to the underlying
// io.Writer. If the value given cannot be encoded to a valid TOML document,
// then an error is returned.
//
// The mapping between Go values and TOML values should be precisely the same
// as for the Decode* functions. Similarly, the TextMarshaler interface is
// supported by encoding the resulting bytes as strings. (If you want to write
// arbitrary binary data then you will need to use something like base64 since
// TOML does not have any binary types.)
//
// When encoding TOML hashes (i.e., Go maps or structs), keys without any
// sub-hashes are encoded first.
//
// If a Go map is encoded, then its keys are sorted alphabetically for
// deterministic output. More control over this behavior may be provided if
// there is demand for it.
//
// Encoding Go values without a corresponding TOML representation---like map
// types with non-string keys---will cause an error to be returned. Similarly
// for mixed arrays/slices, arrays/slices with nil elements, embedded
// non-struct types and nested slices containing maps or structs.
// (e.g., [][]map[string]string is not allowed but []map[string]string is OK
// and so is []map[string][]string.)
func (enc *Encoder) Encode(v interface{}) error {
rv := eindirect(reflect.ValueOf(v))
if err := enc.safeEncode(Key([]string{}), rv); err != nil {
return err
}
return enc.w.Flush()
}
func (enc *Encoder) safeEncode(key Key, rv reflect.Value) (err error) {
defer func() {
if r := recover(); r != nil {
if terr, ok := r.(tomlEncodeError); ok {
err = terr.error
return
}
panic(r)
}
}()
enc.encode(key, rv)
return nil
}
func (enc *Encoder) encode(key Key, rv reflect.Value) {
// Special case. Time needs to be in ISO8601 format.
// Special case. If we can marshal the type to text, then we use that.
// Basically, this prevents the encoder from handling these types as
// generic structs (or whatever the underlying type of a TextMarshaler is).
switch rv.Interface().(type) {
case time.Time, TextMarshaler:
enc.keyEqElement(key, rv)
return
}
k := rv.Kind()
switch k {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64,
reflect.Float32, reflect.Float64, reflect.String, reflect.Bool:
enc.keyEqElement(key, rv)
case reflect.Array, reflect.Slice:
if typeEqual(tomlArrayHash, tomlTypeOfGo(rv)) {
enc.eArrayOfTables(key, rv)
} else {
enc.keyEqElement(key, rv)
}
case reflect.Interface:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Map:
if rv.IsNil() {
return
}
enc.eTable(key, rv)
case reflect.Ptr:
if rv.IsNil() {
return
}
enc.encode(key, rv.Elem())
case reflect.Struct:
enc.eTable(key, rv)
default:
panic(e("unsupported type for key '%s': %s", key, k))
}
}
// eElement encodes any value that can be an array element (primitives and
// arrays).
func (enc *Encoder) eElement(rv reflect.Value) {
switch v := rv.Interface().(type) {
case time.Time:
// Special case time.Time as a primitive. Has to come before
// TextMarshaler below because time.Time implements
// encoding.TextMarshaler, but we need to always use UTC.
enc.wf(v.UTC().Format("2006-01-02T15:04:05Z"))
return
case TextMarshaler:
// Special case. Use text marshaler if it's available for this value.
if s, err := v.MarshalText(); err != nil {
encPanic(err)
} else {
enc.writeQuoted(string(s))
}
return
}
switch rv.Kind() {
case reflect.Bool:
enc.wf(strconv.FormatBool(rv.Bool()))
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64:
enc.wf(strconv.FormatInt(rv.Int(), 10))
case reflect.Uint, reflect.Uint8, reflect.Uint16,
reflect.Uint32, reflect.Uint64:
enc.wf(strconv.FormatUint(rv.Uint(), 10))
case reflect.Float32:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 32)))
case reflect.Float64:
enc.wf(floatAddDecimal(strconv.FormatFloat(rv.Float(), 'f', -1, 64)))
case reflect.Array, reflect.Slice:
enc.eArrayOrSliceElement(rv)
case reflect.Interface:
enc.eElement(rv.Elem())
case reflect.String:
enc.writeQuoted(rv.String())
default:
panic(e("unexpected primitive type: %s", rv.Kind()))
}
}
// By the TOML spec, all floats must have a decimal with at least one
// number on either side.
func floatAddDecimal(fstr string) string {
if !strings.Contains(fstr, ".") {
return fstr + ".0"
}
return fstr
}
func (enc *Encoder) writeQuoted(s string) {
enc.wf("\"%s\"", quotedReplacer.Replace(s))
}
func (enc *Encoder) eArrayOrSliceElement(rv reflect.Value) {
length := rv.Len()
enc.wf("[")
for i := 0; i < length; i++ {
elem := rv.Index(i)
enc.eElement(elem)
if i != length-1 {
enc.wf(", ")
}
}
enc.wf("]")
}
func (enc *Encoder) eArrayOfTables(key Key, rv reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
for i := 0; i < rv.Len(); i++ {
trv := rv.Index(i)
if isNil(trv) {
continue
}
panicIfInvalidKey(key)
enc.newline()
enc.wf("%s[[%s]]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
enc.eMapOrStruct(key, trv)
}
}
func (enc *Encoder) eTable(key Key, rv reflect.Value) {
panicIfInvalidKey(key)
if len(key) == 1 {
// Output an extra new line between top-level tables.
// (The newline isn't written if nothing else has been written though.)
enc.newline()
}
if len(key) > 0 {
enc.wf("%s[%s]", enc.indentStr(key), key.maybeQuotedAll())
enc.newline()
}
enc.eMapOrStruct(key, rv)
}
func (enc *Encoder) eMapOrStruct(key Key, rv reflect.Value) {
switch rv := eindirect(rv); rv.Kind() {
case reflect.Map:
enc.eMap(key, rv)
case reflect.Struct:
enc.eStruct(key, rv)
default:
panic("eTable: unhandled reflect.Value Kind: " + rv.Kind().String())
}
}
func (enc *Encoder) eMap(key Key, rv reflect.Value) {
rt := rv.Type()
if rt.Key().Kind() != reflect.String {
encPanic(errNonString)
}
// Sort keys so that we have deterministic output. And write keys directly
// underneath this key first, before writing sub-structs or sub-maps.
var mapKeysDirect, mapKeysSub []string
for _, mapKey := range rv.MapKeys() {
k := mapKey.String()
if typeIsHash(tomlTypeOfGo(rv.MapIndex(mapKey))) {
mapKeysSub = append(mapKeysSub, k)
} else {
mapKeysDirect = append(mapKeysDirect, k)
}
}
var writeMapKeys = func(mapKeys []string) {
sort.Strings(mapKeys)
for _, mapKey := range mapKeys {
mrv := rv.MapIndex(reflect.ValueOf(mapKey))
if isNil(mrv) {
// Don't write anything for nil fields.
continue
}
enc.encode(key.add(mapKey), mrv)
}
}
writeMapKeys(mapKeysDirect)
writeMapKeys(mapKeysSub)
}
func (enc *Encoder) eStruct(key Key, rv reflect.Value) {
// Write keys for fields directly under this key first, because if we write
// a field that creates a new table, then all keys under it will be in that
// table (not the one we're writing here).
rt := rv.Type()
var fieldsDirect, fieldsSub [][]int
var addFields func(rt reflect.Type, rv reflect.Value, start []int)
addFields = func(rt reflect.Type, rv reflect.Value, start []int) {
for i := 0; i < rt.NumField(); i++ {
f := rt.Field(i)
// skip unexported fields
if f.PkgPath != "" && !f.Anonymous {
continue
}
frv := rv.Field(i)
if f.Anonymous {
t := f.Type
switch t.Kind() {
case reflect.Struct:
// Treat anonymous struct fields with
// tag names as though they are not
// anonymous, like encoding/json does.
if getOptions(f.Tag).name == "" {
addFields(t, frv, f.Index)
continue
}
case reflect.Ptr:
if t.Elem().Kind() == reflect.Struct &&
getOptions(f.Tag).name == "" {
if !frv.IsNil() {
addFields(t.Elem(), frv.Elem(), f.Index)
}
continue
}
// Fall through to the normal field encoding logic below
// for non-struct anonymous fields.
}
}
if typeIsHash(tomlTypeOfGo(frv)) {
fieldsSub = append(fieldsSub, append(start, f.Index...))
} else {
fieldsDirect = append(fieldsDirect, append(start, f.Index...))
}
}
}
addFields(rt, rv, nil)
var writeFields = func(fields [][]int) {
for _, fieldIndex := range fields {
sft := rt.FieldByIndex(fieldIndex)
sf := rv.FieldByIndex(fieldIndex)
if isNil(sf) {
// Don't write anything for nil fields.
continue
}
opts := getOptions(sft.Tag)
if opts.skip {
continue
}
keyName := sft.Name
if opts.name != "" {
keyName = opts.name
}
if opts.omitempty && isEmpty(sf) {
continue
}
if opts.omitzero && isZero(sf) {
continue
}
enc.encode(key.add(keyName), sf)
}
}
writeFields(fieldsDirect)
writeFields(fieldsSub)
}
// tomlTypeOfGo returns the TOML type of a Go value. The type may be `nil`,
// which means no concrete TOML type could be found; a nil value is illegal
// as an array element. It is also used to determine whether the types of
// array elements are mixed (which is forbidden).
func tomlTypeOfGo(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() {
return nil
}
switch rv.Kind() {
case reflect.Bool:
return tomlBool
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32,
reflect.Int64,
reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32,
reflect.Uint64:
return tomlInteger
case reflect.Float32, reflect.Float64:
return tomlFloat
case reflect.Array, reflect.Slice:
if typeEqual(tomlHash, tomlArrayType(rv)) {
return tomlArrayHash
}
return tomlArray
case reflect.Ptr, reflect.Interface:
return tomlTypeOfGo(rv.Elem())
case reflect.String:
return tomlString
case reflect.Map:
return tomlHash
case reflect.Struct:
switch rv.Interface().(type) {
case time.Time:
return tomlDatetime
case TextMarshaler:
return tomlString
default:
return tomlHash
}
default:
panic("unexpected reflect.Kind: " + rv.Kind().String())
}
}
// tomlArrayType returns the element type of a TOML array. The type returned
// may be nil if it cannot be determined (e.g., a nil slice or a zero length
// slice). This function may also panic if it finds a type that cannot be
// expressed in TOML (such as nil elements, heterogeneous arrays or directly
// nested arrays of tables).
func tomlArrayType(rv reflect.Value) tomlType {
if isNil(rv) || !rv.IsValid() || rv.Len() == 0 {
return nil
}
firstType := tomlTypeOfGo(rv.Index(0))
if firstType == nil {
encPanic(errArrayNilElement)
}
rvlen := rv.Len()
for i := 1; i < rvlen; i++ {
elem := rv.Index(i)
switch elemType := tomlTypeOfGo(elem); {
case elemType == nil:
encPanic(errArrayNilElement)
case !typeEqual(firstType, elemType):
encPanic(errArrayMixedElementTypes)
}
}
// If we have a nested array, then we must make sure that the nested
// array contains ONLY primitives.
// This checks arbitrarily nested arrays.
if typeEqual(firstType, tomlArray) || typeEqual(firstType, tomlArrayHash) {
nest := tomlArrayType(eindirect(rv.Index(0)))
if typeEqual(nest, tomlHash) || typeEqual(nest, tomlArrayHash) {
encPanic(errArrayNoTable)
}
}
return firstType
}
type tagOptions struct {
skip bool // "-"
name string
omitempty bool
omitzero bool
}
func getOptions(tag reflect.StructTag) tagOptions {
t := tag.Get("toml")
if t == "-" {
return tagOptions{skip: true}
}
var opts tagOptions
parts := strings.Split(t, ",")
opts.name = parts[0]
for _, s := range parts[1:] {
switch s {
case "omitempty":
opts.omitempty = true
case "omitzero":
opts.omitzero = true
}
}
return opts
}
func isZero(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Int, reflect.Int8, reflect.Int16, reflect.Int32, reflect.Int64:
return rv.Int() == 0
case reflect.Uint, reflect.Uint8, reflect.Uint16, reflect.Uint32, reflect.Uint64:
return rv.Uint() == 0
case reflect.Float32, reflect.Float64:
return rv.Float() == 0.0
}
return false
}
func isEmpty(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Array, reflect.Slice, reflect.Map, reflect.String:
return rv.Len() == 0
case reflect.Bool:
return !rv.Bool()
}
return false
}
func (enc *Encoder) newline() {
if enc.hasWritten {
enc.wf("\n")
}
}
func (enc *Encoder) keyEqElement(key Key, val reflect.Value) {
if len(key) == 0 {
encPanic(errNoKey)
}
panicIfInvalidKey(key)
enc.wf("%s%s = ", enc.indentStr(key), key.maybeQuoted(len(key)-1))
enc.eElement(val)
enc.newline()
}
func (enc *Encoder) wf(format string, v ...interface{}) {
if _, err := fmt.Fprintf(enc.w, format, v...); err != nil {
encPanic(err)
}
enc.hasWritten = true
}
func (enc *Encoder) indentStr(key Key) string {
return strings.Repeat(enc.Indent, len(key)-1)
}
func encPanic(err error) {
panic(tomlEncodeError{err})
}
func eindirect(v reflect.Value) reflect.Value {
switch v.Kind() {
case reflect.Ptr, reflect.Interface:
return eindirect(v.Elem())
default:
return v
}
}
func isNil(rv reflect.Value) bool {
switch rv.Kind() {
case reflect.Interface, reflect.Map, reflect.Ptr, reflect.Slice:
return rv.IsNil()
default:
return false
}
}
func panicIfInvalidKey(key Key) {
for _, k := range key {
if len(k) == 0 {
encPanic(e("Key '%s' is not a valid table name. Key names "+
"cannot be empty.", key.maybeQuotedAll()))
}
}
}
func isValidKeyName(s string) bool {
return len(s) != 0
}
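A small sketch of driving the Encoder defined above, including the omitempty and omitzero tag options handled by getOptions; the settings struct is invented for illustration:

```go
package main

import (
	"log"
	"os"

	"github.com/BurntSushi/toml"
)

type settings struct {
	Name    string `toml:"name"`
	Retries int    `toml:"retries,omitzero"`  // dropped while zero
	Comment string `toml:"comment,omitempty"` // dropped while empty
	hidden  string // unexported: skipped by eStruct above
}

func main() {
	s := settings{Name: "demo"}
	enc := toml.NewEncoder(os.Stdout)
	enc.Indent = "  " // one indentation level, as documented on Encoder
	if err := enc.Encode(s); err != nil {
		log.Fatal(err)
	}
	// Prints only: name = "demo"
}
```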

@@ -0,0 +1,19 @@
// +build go1.2
package toml
// In order to support Go 1.1, we define our own TextMarshaler and
// TextUnmarshaler types. For Go 1.2+, we just alias them with the
// standard library interfaces.
import (
"encoding"
)
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler encoding.TextMarshaler
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler encoding.TextUnmarshaler
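Because the decoder checks for TextUnmarshaler (see the special case near the top of unify in decode.go above), a field type can take over parsing of a primitive TOML string. A minimal sketch; the Level type and its value mapping are invented for illustration:

```go
package main

import (
	"fmt"
	"log"
	"strings"

	"github.com/BurntSushi/toml"
)

// Level parses itself from a TOML string via UnmarshalText.
type Level int

func (l *Level) UnmarshalText(text []byte) error {
	switch strings.ToLower(string(text)) {
	case "debug":
		*l = 0
	case "info":
		*l = 1
	case "error":
		*l = 2
	default:
		return fmt.Errorf("unknown level %q", text)
	}
	return nil
}

func main() {
	var cfg struct {
		LogLevel Level `toml:"logLevel"`
	}
	if _, err := toml.Decode(`logLevel = "info"`, &cfg); err != nil {
		log.Fatal(err)
	}
	fmt.Println(cfg.LogLevel) // 1
}
```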

@@ -0,0 +1,18 @@
// +build !go1.2
package toml
// These interfaces were introduced in Go 1.2, so we add them manually when
// compiling for Go 1.1.
// TextMarshaler is a synonym for encoding.TextMarshaler. It is defined here
// so that Go 1.1 can be supported.
type TextMarshaler interface {
MarshalText() (text []byte, err error)
}
// TextUnmarshaler is a synonym for encoding.TextUnmarshaler. It is defined
// here so that Go 1.1 can be supported.
type TextUnmarshaler interface {
UnmarshalText(text []byte) error
}

858
integration/vendor/github.com/BurntSushi/toml/lex.go generated vendored Normal file
@@ -0,0 +1,858 @@
package toml
import (
"fmt"
"strings"
"unicode"
"unicode/utf8"
)
type itemType int
const (
itemError itemType = iota
itemNIL // used in the parser to indicate no type
itemEOF
itemText
itemString
itemRawString
itemMultilineString
itemRawMultilineString
itemBool
itemInteger
itemFloat
itemDatetime
itemArray // the start of an array
itemArrayEnd
itemTableStart
itemTableEnd
itemArrayTableStart
itemArrayTableEnd
itemKeyStart
itemCommentStart
)
const (
eof = 0
tableStart = '['
tableEnd = ']'
arrayTableStart = '['
arrayTableEnd = ']'
tableSep = '.'
keySep = '='
arrayStart = '['
arrayEnd = ']'
arrayValTerm = ','
commentStart = '#'
stringStart = '"'
stringEnd = '"'
rawStringStart = '\''
rawStringEnd = '\''
)
type stateFn func(lx *lexer) stateFn
type lexer struct {
input string
start int
pos int
width int
line int
state stateFn
items chan item
// A stack of state functions used to maintain context.
// The idea is to reuse parts of the state machine in various places.
// For example, values can appear at the top level or within arbitrarily
// nested arrays. The last state on the stack is used after a value has
// been lexed. Similarly for comments.
stack []stateFn
}
type item struct {
typ itemType
val string
line int
}
func (lx *lexer) nextItem() item {
for {
select {
case item := <-lx.items:
return item
default:
lx.state = lx.state(lx)
}
}
}
func lex(input string) *lexer {
lx := &lexer{
input: input + "\n",
state: lexTop,
line: 1,
items: make(chan item, 10),
stack: make([]stateFn, 0, 10),
}
return lx
}
func (lx *lexer) push(state stateFn) {
lx.stack = append(lx.stack, state)
}
func (lx *lexer) pop() stateFn {
if len(lx.stack) == 0 {
return lx.errorf("BUG in lexer: no states to pop.")
}
last := lx.stack[len(lx.stack)-1]
lx.stack = lx.stack[0 : len(lx.stack)-1]
return last
}
func (lx *lexer) current() string {
return lx.input[lx.start:lx.pos]
}
func (lx *lexer) emit(typ itemType) {
lx.items <- item{typ, lx.current(), lx.line}
lx.start = lx.pos
}
func (lx *lexer) emitTrim(typ itemType) {
lx.items <- item{typ, strings.TrimSpace(lx.current()), lx.line}
lx.start = lx.pos
}
func (lx *lexer) next() (r rune) {
if lx.pos >= len(lx.input) {
lx.width = 0
return eof
}
if lx.input[lx.pos] == '\n' {
lx.line++
}
r, lx.width = utf8.DecodeRuneInString(lx.input[lx.pos:])
lx.pos += lx.width
return r
}
// ignore skips over the pending input before this point.
func (lx *lexer) ignore() {
lx.start = lx.pos
}
// backup steps back one rune. Can be called only once per call of next.
func (lx *lexer) backup() {
lx.pos -= lx.width
if lx.pos < len(lx.input) && lx.input[lx.pos] == '\n' {
lx.line--
}
}
// accept consumes the next rune if it's equal to `valid`.
func (lx *lexer) accept(valid rune) bool {
if lx.next() == valid {
return true
}
lx.backup()
return false
}
// peek returns but does not consume the next rune in the input.
func (lx *lexer) peek() rune {
r := lx.next()
lx.backup()
return r
}
// skip ignores all input that matches the given predicate.
func (lx *lexer) skip(pred func(rune) bool) {
for {
r := lx.next()
if pred(r) {
continue
}
lx.backup()
lx.ignore()
return
}
}
// errorf stops all lexing by emitting an error and returning `nil`.
// Note that any value that is a character is escaped if it's a special
// character (new lines, tabs, etc.).
func (lx *lexer) errorf(format string, values ...interface{}) stateFn {
lx.items <- item{
itemError,
fmt.Sprintf(format, values...),
lx.line,
}
return nil
}
// lexTop consumes elements at the top level of TOML data.
func lexTop(lx *lexer) stateFn {
r := lx.next()
if isWhitespace(r) || isNL(r) {
return lexSkip(lx, lexTop)
}
switch r {
case commentStart:
lx.push(lexTop)
return lexCommentStart
case tableStart:
return lexTableStart
case eof:
if lx.pos > lx.start {
return lx.errorf("Unexpected EOF.")
}
lx.emit(itemEOF)
return nil
}
// At this point, the only valid item can be a key, so we back up
// and let the key lexer do the rest.
lx.backup()
lx.push(lexTopEnd)
return lexKeyStart
}
// lexTopEnd is entered whenever a top-level item has been consumed. (A value
// or a table.) It must see only whitespace, and will turn back to lexTop
// upon a new line. If it sees EOF, it will quit the lexer successfully.
func lexTopEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case r == commentStart:
// a comment will read to a new line for us.
lx.push(lexTop)
return lexCommentStart
case isWhitespace(r):
return lexTopEnd
case isNL(r):
lx.ignore()
return lexTop
case r == eof:
lx.ignore()
return lexTop
}
return lx.errorf("Expected a top-level item to end with a new line, "+
"comment or EOF, but got %q instead.", r)
}
// lexTable lexes the beginning of a table. Namely, it makes sure that
// it starts with a character other than '.' and ']'.
// It assumes that '[' has already been consumed.
// It also handles the case that this is an item in an array of tables.
// e.g., '[[name]]'.
func lexTableStart(lx *lexer) stateFn {
if lx.peek() == arrayTableStart {
lx.next()
lx.emit(itemArrayTableStart)
lx.push(lexArrayTableEnd)
} else {
lx.emit(itemTableStart)
lx.push(lexTableEnd)
}
return lexTableNameStart
}
func lexTableEnd(lx *lexer) stateFn {
lx.emit(itemTableEnd)
return lexTopEnd
}
func lexArrayTableEnd(lx *lexer) stateFn {
if r := lx.next(); r != arrayTableEnd {
return lx.errorf("Expected end of table array name delimiter %q, "+
"but got %q instead.", arrayTableEnd, r)
}
lx.emit(itemArrayTableEnd)
return lexTopEnd
}
func lexTableNameStart(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.peek(); {
case r == tableEnd || r == eof:
return lx.errorf("Unexpected end of table name. (Table names cannot " +
"be empty.)")
case r == tableSep:
return lx.errorf("Unexpected table separator. (Table names cannot " +
"be empty.)")
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.push(lexTableNameEnd)
return lexValue // reuse string lexing
default:
return lexBareTableName
}
}
// lexBareTableName lexes the name of a table. It assumes that at least one
// valid character for the table has already been read.
func lexBareTableName(lx *lexer) stateFn {
r := lx.next()
if isBareKeyChar(r) {
return lexBareTableName
}
lx.backup()
lx.emit(itemText)
return lexTableNameEnd
}
// lexTableNameEnd reads the end of a piece of a table name, optionally
// consuming whitespace.
func lexTableNameEnd(lx *lexer) stateFn {
lx.skip(isWhitespace)
switch r := lx.next(); {
case isWhitespace(r):
return lexTableNameEnd
case r == tableSep:
lx.ignore()
return lexTableNameStart
case r == tableEnd:
return lx.pop()
default:
return lx.errorf("Expected '.' or ']' to end table name, but got %q "+
"instead.", r)
}
}
// lexKeyStart consumes a key name up until the first non-whitespace character.
// lexKeyStart will ignore whitespace.
func lexKeyStart(lx *lexer) stateFn {
r := lx.peek()
switch {
case r == keySep:
return lx.errorf("Unexpected key separator %q.", keySep)
case isWhitespace(r) || isNL(r):
lx.next()
return lexSkip(lx, lexKeyStart)
case r == stringStart || r == rawStringStart:
lx.ignore()
lx.emit(itemKeyStart)
lx.push(lexKeyEnd)
return lexValue // reuse string lexing
default:
lx.ignore()
lx.emit(itemKeyStart)
return lexBareKey
}
}
// lexBareKey consumes the text of a bare key. Assumes that the first character
// (which is not whitespace) has not yet been consumed.
func lexBareKey(lx *lexer) stateFn {
switch r := lx.next(); {
case isBareKeyChar(r):
return lexBareKey
case isWhitespace(r):
lx.backup()
lx.emit(itemText)
return lexKeyEnd
case r == keySep:
lx.backup()
lx.emit(itemText)
return lexKeyEnd
default:
return lx.errorf("Bare keys cannot contain %q.", r)
}
}
// lexKeyEnd consumes the end of a key and trims whitespace (up to the key
// separator).
func lexKeyEnd(lx *lexer) stateFn {
switch r := lx.next(); {
case r == keySep:
return lexSkip(lx, lexValue)
case isWhitespace(r):
return lexSkip(lx, lexKeyEnd)
default:
return lx.errorf("Expected key separator %q, but got %q instead.",
keySep, r)
}
}
// lexValue starts the consumption of a value anywhere a value is expected.
// lexValue will ignore whitespace.
// After a value is lexed, the last state on the stack is popped and returned.
func lexValue(lx *lexer) stateFn {
// We allow whitespace to precede a value, but NOT new lines.
// In array syntax, the array states are responsible for ignoring new
// lines.
r := lx.next()
switch {
case isWhitespace(r):
return lexSkip(lx, lexValue)
case isDigit(r):
lx.backup() // avoid an extra state and use the same as above
return lexNumberOrDateStart
}
switch r {
case arrayStart:
lx.ignore()
lx.emit(itemArray)
return lexArrayValue
case stringStart:
if lx.accept(stringStart) {
if lx.accept(stringStart) {
lx.ignore() // Ignore """
return lexMultilineString
}
lx.backup()
}
lx.ignore() // ignore the '"'
return lexString
case rawStringStart:
if lx.accept(rawStringStart) {
if lx.accept(rawStringStart) {
lx.ignore() // Ignore '''
return lexMultilineRawString
}
lx.backup()
}
lx.ignore() // ignore the "'"
return lexRawString
case '+', '-':
return lexNumberStart
case '.': // special error case, be kind to users
return lx.errorf("Floats must start with a digit, not '.'.")
}
if unicode.IsLetter(r) {
// Be permissive here; lexBool will give a nice error if the
// user wrote something like
// x = foo
// (i.e. not 'true' or 'false' but is something else word-like.)
lx.backup()
return lexBool
}
return lx.errorf("Expected value but found %q instead.", r)
}
// lexArrayValue consumes one value in an array. It assumes that '[' or ','
// have already been consumed. All whitespace and new lines are ignored.
func lexArrayValue(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValue)
case r == commentStart:
lx.push(lexArrayValue)
return lexCommentStart
case r == arrayValTerm:
return lx.errorf("Unexpected array value terminator %q.",
arrayValTerm)
case r == arrayEnd:
return lexArrayEnd
}
lx.backup()
lx.push(lexArrayValueEnd)
return lexValue
}
// lexArrayValueEnd consumes the cruft between values of an array. Namely,
// it ignores whitespace and expects either a ',' or a ']'.
func lexArrayValueEnd(lx *lexer) stateFn {
r := lx.next()
switch {
case isWhitespace(r) || isNL(r):
return lexSkip(lx, lexArrayValueEnd)
case r == commentStart:
lx.push(lexArrayValueEnd)
return lexCommentStart
case r == arrayValTerm:
lx.ignore()
return lexArrayValue // move on to the next value
case r == arrayEnd:
return lexArrayEnd
}
return lx.errorf("Expected an array value terminator %q or an array "+
"terminator %q, but got %q instead.", arrayValTerm, arrayEnd, r)
}
// lexArrayEnd finishes the lexing of an array. It assumes that a ']' has
// just been consumed.
func lexArrayEnd(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemArrayEnd)
return lx.pop()
}
// lexString consumes the inner contents of a string. It assumes that the
// beginning '"' has already been consumed and ignored.
func lexString(lx *lexer) stateFn {
r := lx.next()
switch {
case isNL(r):
return lx.errorf("Strings cannot contain new lines.")
case r == '\\':
lx.push(lexString)
return lexStringEscape
case r == stringEnd:
lx.backup()
lx.emit(itemString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexString
}
// lexMultilineString consumes the inner contents of a string. It assumes that
// the beginning '"""' has already been consumed and ignored.
func lexMultilineString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == '\\':
return lexMultilineStringEscape
case r == stringEnd:
if lx.accept(stringEnd) {
if lx.accept(stringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineString
}
// lexRawString consumes a raw string. Nothing can be escaped in such a string.
// It assumes that the beginning "'" has already been consumed and ignored.
func lexRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case isNL(r):
return lx.errorf("Strings cannot contain new lines.")
case r == rawStringEnd:
lx.backup()
lx.emit(itemRawString)
lx.next()
lx.ignore()
return lx.pop()
}
return lexRawString
}
// lexMultilineRawString consumes a raw string. Nothing can be escaped in such
// a string. It assumes that the beginning "'''" has already been consumed and
// ignored.
func lexMultilineRawString(lx *lexer) stateFn {
r := lx.next()
switch {
case r == rawStringEnd:
if lx.accept(rawStringEnd) {
if lx.accept(rawStringEnd) {
lx.backup()
lx.backup()
lx.backup()
lx.emit(itemRawMultilineString)
lx.next()
lx.next()
lx.next()
lx.ignore()
return lx.pop()
}
lx.backup()
}
}
return lexMultilineRawString
}
// lexMultilineStringEscape consumes an escaped character. It assumes that the
// preceding '\\' has already been consumed.
func lexMultilineStringEscape(lx *lexer) stateFn {
// Handle the special case first:
if isNL(lx.next()) {
return lexMultilineString
}
lx.backup()
lx.push(lexMultilineString)
return lexStringEscape(lx)
}
func lexStringEscape(lx *lexer) stateFn {
r := lx.next()
switch r {
case 'b':
fallthrough
case 't':
fallthrough
case 'n':
fallthrough
case 'f':
fallthrough
case 'r':
fallthrough
case '"':
fallthrough
case '\\':
return lx.pop()
case 'u':
return lexShortUnicodeEscape
case 'U':
return lexLongUnicodeEscape
}
return lx.errorf("Invalid escape character %q. Only the following "+
"escape characters are allowed: "+
"\\b, \\t, \\n, \\f, \\r, \\\", \\/, \\\\, "+
"\\uXXXX and \\UXXXXXXXX.", r)
}
func lexShortUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 4; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf("Expected four hexadecimal digits after '\\u', "+
"but got '%s' instead.", lx.current())
}
}
return lx.pop()
}
func lexLongUnicodeEscape(lx *lexer) stateFn {
var r rune
for i := 0; i < 8; i++ {
r = lx.next()
if !isHexadecimal(r) {
return lx.errorf("Expected eight hexadecimal digits after '\\U', "+
"but got '%s' instead.", lx.current())
}
}
return lx.pop()
}
// lexNumberOrDateStart consumes either an integer, a float, or datetime.
func lexNumberOrDateStart(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '_':
return lexNumber
case 'e', 'E':
return lexFloat
case '.':
return lx.errorf("Floats must start with a digit, not '.'.")
}
return lx.errorf("Expected a digit but got %q.", r)
}
// lexNumberOrDate consumes either an integer, float or datetime.
func lexNumberOrDate(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumberOrDate
}
switch r {
case '-':
return lexDatetime
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexDatetime consumes a Datetime, to a first approximation.
// The parser validates that it matches one of the accepted formats.
func lexDatetime(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexDatetime
}
switch r {
case '-', 'T', ':', '.', 'Z':
return lexDatetime
}
lx.backup()
lx.emit(itemDatetime)
return lx.pop()
}
// lexNumberStart consumes either an integer or a float. It assumes that a sign
// has already been read, but that *no* digits have been consumed.
// lexNumberStart will move to the appropriate integer or float states.
func lexNumberStart(lx *lexer) stateFn {
// We MUST see a digit. Even floats have to start with a digit.
r := lx.next()
if !isDigit(r) {
if r == '.' {
return lx.errorf("Floats must start with a digit, not '.'.")
}
return lx.errorf("Expected a digit but got %q.", r)
}
return lexNumber
}
// lexNumber consumes an integer or a float after seeing the first digit.
func lexNumber(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexNumber
}
switch r {
case '_':
return lexNumber
case '.', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemInteger)
return lx.pop()
}
// lexFloat consumes the elements of a float. It allows any sequence of
// float-like characters, so floats emitted by the lexer are only a first
// approximation and must be validated by the parser.
func lexFloat(lx *lexer) stateFn {
r := lx.next()
if isDigit(r) {
return lexFloat
}
switch r {
case '_', '.', '-', '+', 'e', 'E':
return lexFloat
}
lx.backup()
lx.emit(itemFloat)
return lx.pop()
}
// lexBool consumes a bool string: 'true' or 'false'.
func lexBool(lx *lexer) stateFn {
var rs []rune
for {
r := lx.next()
if r == eof || isWhitespace(r) || isNL(r) {
lx.backup()
break
}
rs = append(rs, r)
}
s := string(rs)
switch s {
case "true", "false":
lx.emit(itemBool)
return lx.pop()
}
return lx.errorf("Expected value but found %q instead.", s)
}
// lexCommentStart begins the lexing of a comment. It will emit
// itemCommentStart and consume no characters, passing control to lexComment.
func lexCommentStart(lx *lexer) stateFn {
lx.ignore()
lx.emit(itemCommentStart)
return lexComment
}
// lexComment lexes an entire comment. It assumes that '#' has been consumed.
// It will consume *up to* the first new line character, and pass control
// back to the last state on the stack.
func lexComment(lx *lexer) stateFn {
r := lx.peek()
if isNL(r) || r == eof {
lx.emit(itemText)
return lx.pop()
}
lx.next()
return lexComment
}
// lexSkip ignores all slurped input and moves on to the next state.
func lexSkip(lx *lexer, nextState stateFn) stateFn {
return func(lx *lexer) stateFn {
lx.ignore()
return nextState
}
}
// isWhitespace returns true if `r` is a whitespace character according
// to the spec.
func isWhitespace(r rune) bool {
return r == '\t' || r == ' '
}
func isNL(r rune) bool {
return r == '\n' || r == '\r'
}
func isDigit(r rune) bool {
return r >= '0' && r <= '9'
}
func isHexadecimal(r rune) bool {
return (r >= '0' && r <= '9') ||
(r >= 'a' && r <= 'f') ||
(r >= 'A' && r <= 'F')
}
func isBareKeyChar(r rune) bool {
return (r >= 'A' && r <= 'Z') ||
(r >= 'a' && r <= 'z') ||
(r >= '0' && r <= '9') ||
r == '_' ||
r == '-'
}
func (itype itemType) String() string {
switch itype {
case itemError:
return "Error"
case itemNIL:
return "NIL"
case itemEOF:
return "EOF"
case itemText:
return "Text"
case itemString, itemRawString, itemMultilineString, itemRawMultilineString:
return "String"
case itemBool:
return "Bool"
case itemInteger:
return "Integer"
case itemFloat:
return "Float"
case itemDatetime:
return "DateTime"
case itemTableStart:
return "TableStart"
case itemTableEnd:
return "TableEnd"
case itemKeyStart:
return "KeyStart"
case itemArray:
return "Array"
case itemArrayEnd:
return "ArrayEnd"
case itemCommentStart:
return "CommentStart"
}
panic(fmt.Sprintf("BUG: Unknown type '%d'.", int(itype)))
}
func (item item) String() string {
return fmt.Sprintf("(%s, %s)", item.typ.String(), item.val)
}

557
integration/vendor/github.com/BurntSushi/toml/parse.go generated vendored Normal file
@@ -0,0 +1,557 @@
package toml
import (
"fmt"
"strconv"
"strings"
"time"
"unicode"
"unicode/utf8"
)
type parser struct {
mapping map[string]interface{}
types map[string]tomlType
lx *lexer
// A list of keys in the order that they appear in the TOML data.
ordered []Key
// the full key for the current hash in scope
context Key
// the base key name for everything except hashes
currentKey string
// rough approximation of line number
approxLine int
// A map of 'key.group.names' to whether they were created implicitly.
implicits map[string]bool
}
type parseError string
func (pe parseError) Error() string {
return string(pe)
}
func parse(data string) (p *parser, err error) {
defer func() {
if r := recover(); r != nil {
var ok bool
if err, ok = r.(parseError); ok {
return
}
panic(r)
}
}()
p = &parser{
mapping: make(map[string]interface{}),
types: make(map[string]tomlType),
lx: lex(data),
ordered: make([]Key, 0),
implicits: make(map[string]bool),
}
for {
item := p.next()
if item.typ == itemEOF {
break
}
p.topLevel(item)
}
return p, nil
}
func (p *parser) panicf(format string, v ...interface{}) {
msg := fmt.Sprintf("Near line %d (last key parsed '%s'): %s",
p.approxLine, p.current(), fmt.Sprintf(format, v...))
panic(parseError(msg))
}
func (p *parser) next() item {
it := p.lx.nextItem()
if it.typ == itemError {
p.panicf("%s", it.val)
}
return it
}
func (p *parser) bug(format string, v ...interface{}) {
panic(fmt.Sprintf("BUG: "+format+"\n\n", v...))
}
func (p *parser) expect(typ itemType) item {
it := p.next()
p.assertEqual(typ, it.typ)
return it
}
func (p *parser) assertEqual(expected, got itemType) {
if expected != got {
p.bug("Expected '%s' but got '%s'.", expected, got)
}
}
func (p *parser) topLevel(item item) {
switch item.typ {
case itemCommentStart:
p.approxLine = item.line
p.expect(itemText)
case itemTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemTableEnd, kg.typ)
p.establishContext(key, false)
p.setType("", tomlHash)
p.ordered = append(p.ordered, key)
case itemArrayTableStart:
kg := p.next()
p.approxLine = kg.line
var key Key
for ; kg.typ != itemArrayTableEnd && kg.typ != itemEOF; kg = p.next() {
key = append(key, p.keyString(kg))
}
p.assertEqual(itemArrayTableEnd, kg.typ)
p.establishContext(key, true)
p.setType("", tomlArrayHash)
p.ordered = append(p.ordered, key)
case itemKeyStart:
kname := p.next()
p.approxLine = kname.line
p.currentKey = p.keyString(kname)
val, typ := p.value(p.next())
p.setValue(p.currentKey, val)
p.setType(p.currentKey, typ)
p.ordered = append(p.ordered, p.context.add(p.currentKey))
p.currentKey = ""
default:
p.bug("Unexpected type at top level: %s", item.typ)
}
}
// Gets a string for a key (or part of a key in a table name).
func (p *parser) keyString(it item) string {
switch it.typ {
case itemText:
return it.val
case itemString, itemMultilineString,
itemRawString, itemRawMultilineString:
s, _ := p.value(it)
return s.(string)
default:
p.bug("Unexpected key type: %s", it.typ)
panic("unreachable")
}
}
// value translates an expected value from the lexer into a Go value wrapped
// as an empty interface.
func (p *parser) value(it item) (interface{}, tomlType) {
switch it.typ {
case itemString:
return p.replaceEscapes(it.val), p.typeOfPrimitive(it)
case itemMultilineString:
trimmed := stripFirstNewline(stripEscapedWhitespace(it.val))
return p.replaceEscapes(trimmed), p.typeOfPrimitive(it)
case itemRawString:
return it.val, p.typeOfPrimitive(it)
case itemRawMultilineString:
return stripFirstNewline(it.val), p.typeOfPrimitive(it)
case itemBool:
switch it.val {
case "true":
return true, p.typeOfPrimitive(it)
case "false":
return false, p.typeOfPrimitive(it)
}
p.bug("Expected boolean value, but got '%s'.", it.val)
case itemInteger:
if !numUnderscoresOK(it.val) {
p.panicf("Invalid integer %q: underscores must be surrounded by digits",
it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseInt(val, 10, 64)
if err != nil {
// Distinguish integer values. Normally, it'd be a bug if the lexer
// provides an invalid integer, but it's possible that the number is
// out of range of valid values (which the lexer cannot determine).
// So mark the former as a bug but the latter as a legitimate user
// error.
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Integer '%s' is out of the range of 64-bit "+
"signed integers.", it.val)
} else {
p.bug("Expected integer value, but got '%s'.", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemFloat:
parts := strings.FieldsFunc(it.val, func(r rune) bool {
switch r {
case '.', 'e', 'E':
return true
}
return false
})
for _, part := range parts {
if !numUnderscoresOK(part) {
p.panicf("Invalid float %q: underscores must be "+
"surrounded by digits", it.val)
}
}
if !numPeriodsOK(it.val) {
// As a special case, numbers like '123.' or '1.e2',
// which are valid as far as Go/strconv are concerned,
// must be rejected because TOML says that a fractional
// part consists of '.' followed by 1+ digits.
p.panicf("Invalid float %q: '.' must be followed "+
"by one or more digits", it.val)
}
val := strings.Replace(it.val, "_", "", -1)
num, err := strconv.ParseFloat(val, 64)
if err != nil {
if e, ok := err.(*strconv.NumError); ok &&
e.Err == strconv.ErrRange {
p.panicf("Float '%s' is out of the range of 64-bit "+
"IEEE-754 floating-point numbers.", it.val)
} else {
p.panicf("Invalid float value: %q", it.val)
}
}
return num, p.typeOfPrimitive(it)
case itemDatetime:
var t time.Time
var ok bool
var err error
for _, format := range []string{
"2006-01-02T15:04:05Z07:00",
"2006-01-02T15:04:05",
"2006-01-02",
} {
t, err = time.ParseInLocation(format, it.val, time.Local)
if err == nil {
ok = true
break
}
}
if !ok {
p.panicf("Invalid TOML Datetime: %q.", it.val)
}
return t, p.typeOfPrimitive(it)
case itemArray:
array := make([]interface{}, 0)
types := make([]tomlType, 0)
for it = p.next(); it.typ != itemArrayEnd; it = p.next() {
if it.typ == itemCommentStart {
p.expect(itemText)
continue
}
val, typ := p.value(it)
array = append(array, val)
types = append(types, typ)
}
return array, p.typeOfArray(types)
}
p.bug("Unexpected value type: %s", it.typ)
panic("unreachable")
}
// numUnderscoresOK checks whether each underscore in s is surrounded by
// characters that are not underscores.
func numUnderscoresOK(s string) bool {
accept := false
for _, r := range s {
if r == '_' {
if !accept {
return false
}
accept = false
continue
}
accept = true
}
return accept
}
// numPeriodsOK checks whether every period in s is followed by a digit.
func numPeriodsOK(s string) bool {
period := false
for _, r := range s {
if period && !isDigit(r) {
return false
}
period = r == '.'
}
return !period
}
// establishContext sets the current context of the parser,
// where the context is either a hash or an array of hashes. Which one is
// set depends on the value of the `array` parameter.
//
// Establishing the context also makes sure that the key isn't a duplicate, and
// will create implicit hashes automatically.
func (p *parser) establishContext(key Key, array bool) {
var ok bool
// Always start at the top level and drill down for our context.
hashContext := p.mapping
keyContext := make(Key, 0)
// We only need implicit hashes for key[0:-1]
for _, k := range key[0 : len(key)-1] {
_, ok = hashContext[k]
keyContext = append(keyContext, k)
// No key? Make an implicit hash and move on.
if !ok {
p.addImplicit(keyContext)
hashContext[k] = make(map[string]interface{})
}
// If the hash context is actually an array of tables, then set
// the hash context to the last element in that array.
//
// Otherwise, it better be a table, since this MUST be a key group (by
// virtue of it not being the last element in a key).
switch t := hashContext[k].(type) {
case []map[string]interface{}:
hashContext = t[len(t)-1]
case map[string]interface{}:
hashContext = t
default:
p.panicf("Key '%s' was already created as a hash.", keyContext)
}
}
p.context = keyContext
if array {
// If this is the first element for this array, then allocate a new
// list of tables for it.
k := key[len(key)-1]
if _, ok := hashContext[k]; !ok {
hashContext[k] = make([]map[string]interface{}, 0, 5)
}
// Add a new table. But make sure the key hasn't already been used
// for something else.
if hash, ok := hashContext[k].([]map[string]interface{}); ok {
hashContext[k] = append(hash, make(map[string]interface{}))
} else {
p.panicf("Key '%s' was already created and cannot be used as "+
"an array.", keyContext)
}
} else {
p.setValue(key[len(key)-1], make(map[string]interface{}))
}
p.context = append(p.context, key[len(key)-1])
}
// setValue sets the given key to the given value in the current context.
// It will make sure that the key hasn't already been defined, account for
// implicit key groups.
func (p *parser) setValue(key string, value interface{}) {
var tmpHash interface{}
var ok bool
hash := p.mapping
keyContext := make(Key, 0)
for _, k := range p.context {
keyContext = append(keyContext, k)
if tmpHash, ok = hash[k]; !ok {
p.bug("Context for key '%s' has not been established.", keyContext)
}
switch t := tmpHash.(type) {
case []map[string]interface{}:
// The context is a table of hashes. Pick the most recent table
// defined as the current hash.
hash = t[len(t)-1]
case map[string]interface{}:
hash = t
default:
p.bug("Expected hash to have type 'map[string]interface{}', but "+
"it has '%T' instead.", tmpHash)
}
}
keyContext = append(keyContext, key)
if _, ok := hash[key]; ok {
// Typically, if the given key has already been set, then we have
// to raise an error since duplicate keys are disallowed. However,
// it's possible that a key was previously defined implicitly. In this
// case, it is allowed to be redefined concretely. (See the
// `tests/valid/implicit-and-explicit-after.toml` test in `toml-test`.)
//
// But we have to make sure to stop marking it as an implicit. (So that
// another redefinition provokes an error.)
//
// Note that since it has already been defined (as a hash), we don't
// want to overwrite it. So our business is done.
if p.isImplicit(keyContext) {
p.removeImplicit(keyContext)
return
}
// Otherwise, we have a concrete key trying to override a previous
// key, which is *always* wrong.
p.panicf("Key '%s' has already been defined.", keyContext)
}
hash[key] = value
}
// setType sets the type of a particular value at a given key.
// It should be called immediately AFTER setValue.
//
// Note that if `key` is empty, then the type given will be applied to the
// current context (which is either a table or an array of tables).
func (p *parser) setType(key string, typ tomlType) {
keyContext := make(Key, 0, len(p.context)+1)
for _, k := range p.context {
keyContext = append(keyContext, k)
}
if len(key) > 0 { // allow type setting for hashes
keyContext = append(keyContext, key)
}
p.types[keyContext.String()] = typ
}
// addImplicit sets the given Key as having been created implicitly.
func (p *parser) addImplicit(key Key) {
p.implicits[key.String()] = true
}
// removeImplicit stops tagging the given key as having been implicitly
// created.
func (p *parser) removeImplicit(key Key) {
p.implicits[key.String()] = false
}
// isImplicit returns true if the key group pointed to by the key was created
// implicitly.
func (p *parser) isImplicit(key Key) bool {
return p.implicits[key.String()]
}
// current returns the full key name of the current context.
func (p *parser) current() string {
if len(p.currentKey) == 0 {
return p.context.String()
}
if len(p.context) == 0 {
return p.currentKey
}
return fmt.Sprintf("%s.%s", p.context, p.currentKey)
}
func stripFirstNewline(s string) string {
if len(s) == 0 || s[0] != '\n' {
return s
}
return s[1:]
}
func stripEscapedWhitespace(s string) string {
esc := strings.Split(s, "\\\n")
if len(esc) > 1 {
for i := 1; i < len(esc); i++ {
esc[i] = strings.TrimLeftFunc(esc[i], unicode.IsSpace)
}
}
return strings.Join(esc, "")
}
func (p *parser) replaceEscapes(str string) string {
var replaced []rune
s := []byte(str)
r := 0
for r < len(s) {
if s[r] != '\\' {
c, size := utf8.DecodeRune(s[r:])
r += size
replaced = append(replaced, c)
continue
}
r += 1
if r >= len(s) {
p.bug("Escape sequence at end of string.")
return ""
}
switch s[r] {
default:
p.bug("Expected valid escape code after \\, but got %q.", s[r])
return ""
case 'b':
replaced = append(replaced, rune(0x0008))
r += 1
case 't':
replaced = append(replaced, rune(0x0009))
r += 1
case 'n':
replaced = append(replaced, rune(0x000A))
r += 1
case 'f':
replaced = append(replaced, rune(0x000C))
r += 1
case 'r':
replaced = append(replaced, rune(0x000D))
r += 1
case '"':
replaced = append(replaced, rune(0x0022))
r += 1
case '\\':
replaced = append(replaced, rune(0x005C))
r += 1
case 'u':
// At this point, we know we have a Unicode escape of the form
// `uXXXX` at [r, r+5). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+5])
replaced = append(replaced, escaped)
r += 5
case 'U':
// At this point, we know we have a Unicode escape of the form
// `UXXXXXXXX` at [r, r+9). (Because the lexer guarantees this
// for us.)
escaped := p.asciiEscapeToUnicode(s[r+1 : r+9])
replaced = append(replaced, escaped)
r += 9
}
}
return string(replaced)
}
func (p *parser) asciiEscapeToUnicode(bs []byte) rune {
s := string(bs)
hex, err := strconv.ParseUint(strings.ToLower(s), 16, 32)
if err != nil {
p.bug("Could not parse '%s' as a hexadecimal number, but the "+
"lexer claims it's OK: %s", s, err)
}
if !utf8.ValidRune(rune(hex)) {
p.panicf("Escaped character '\\u%s' is not valid UTF-8.", s)
}
return rune(hex)
}
func isStringType(ty itemType) bool {
return ty == itemString || ty == itemMultilineString ||
ty == itemRawString || ty == itemRawMultilineString
}

View file

@ -0,0 +1,91 @@
package toml
// tomlType represents any Go type that corresponds to a TOML type.
// While the first draft of the TOML spec has a simplistic type system that
// probably doesn't need this level of sophistication, we seem to be moving
// toward adding real composite types.
type tomlType interface {
typeString() string
}
// typeEqual accepts any two types and returns true if they are equal.
func typeEqual(t1, t2 tomlType) bool {
if t1 == nil || t2 == nil {
return false
}
return t1.typeString() == t2.typeString()
}
func typeIsHash(t tomlType) bool {
return typeEqual(t, tomlHash) || typeEqual(t, tomlArrayHash)
}
type tomlBaseType string
func (btype tomlBaseType) typeString() string {
return string(btype)
}
func (btype tomlBaseType) String() string {
return btype.typeString()
}
var (
tomlInteger tomlBaseType = "Integer"
tomlFloat tomlBaseType = "Float"
tomlDatetime tomlBaseType = "Datetime"
tomlString tomlBaseType = "String"
tomlBool tomlBaseType = "Bool"
tomlArray tomlBaseType = "Array"
tomlHash tomlBaseType = "Hash"
tomlArrayHash tomlBaseType = "ArrayHash"
)
// typeOfPrimitive returns a tomlType of any primitive value in TOML.
// Primitive values are: Integer, Float, Datetime, String and Bool.
//
// Passing a lexer item other than the following will cause a BUG message
// to occur: itemString, itemBool, itemInteger, itemFloat, itemDatetime.
func (p *parser) typeOfPrimitive(lexItem item) tomlType {
switch lexItem.typ {
case itemInteger:
return tomlInteger
case itemFloat:
return tomlFloat
case itemDatetime:
return tomlDatetime
case itemString:
return tomlString
case itemMultilineString:
return tomlString
case itemRawString:
return tomlString
case itemRawMultilineString:
return tomlString
case itemBool:
return tomlBool
}
p.bug("Cannot infer primitive type of lex item '%s'.", lexItem)
panic("unreachable")
}
// typeOfArray returns a tomlType for an array given a list of types of its
// values.
//
// In the current spec, if an array is homogeneous, then its type is always
// "Array". If the array is not homogeneous, an error is generated.
func (p *parser) typeOfArray(types []tomlType) tomlType {
// Empty arrays are cool.
if len(types) == 0 {
return tomlArray
}
theType := types[0]
for _, t := range types[1:] {
if !typeEqual(theType, t) {
p.panicf("Array contains values of type '%s' and '%s', but "+
"arrays must be homogeneous.", theType, t)
}
}
return tomlArray
}
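The homogeneity rule enforced by typeOfArray surfaces to users of the package as an ordinary decode error. A minimal sketch of that behaviour, assuming the package is imported from its usual github.com/BurntSushi/toml path and using its public Decode function (neither appears in this hunk):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

func main() {
	var v struct {
		Xs []interface{} `toml:"xs"`
	}

	// A homogeneous array parses and decodes cleanly.
	if _, err := toml.Decode(`xs = [1, 2, 3]`, &v); err != nil {
		panic(err)
	}
	fmt.Println(v.Xs) // [1 2 3]

	// A mixed array trips the typeOfArray check and comes back as an error.
	if _, err := toml.Decode(`xs = [1, "two"]`, &v); err != nil {
		fmt.Println("decode error:", err)
	}
}
```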

View file

@ -0,0 +1,242 @@
package toml
// Struct field handling is adapted from code in encoding/json:
//
// Copyright 2010 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the Go distribution.
import (
"reflect"
"sort"
"sync"
)
// A field represents a single field found in a struct.
type field struct {
name string // the name of the field (`toml` tag included)
tag bool // whether field has a `toml` tag
index []int // represents the depth of an anonymous field
typ reflect.Type // the type of the field
}
// byName sorts field by name, breaking ties with depth,
// then breaking ties with "name came from toml tag", then
// breaking ties with index sequence.
type byName []field
func (x byName) Len() int { return len(x) }
func (x byName) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byName) Less(i, j int) bool {
if x[i].name != x[j].name {
return x[i].name < x[j].name
}
if len(x[i].index) != len(x[j].index) {
return len(x[i].index) < len(x[j].index)
}
if x[i].tag != x[j].tag {
return x[i].tag
}
return byIndex(x).Less(i, j)
}
// byIndex sorts field by index sequence.
type byIndex []field
func (x byIndex) Len() int { return len(x) }
func (x byIndex) Swap(i, j int) { x[i], x[j] = x[j], x[i] }
func (x byIndex) Less(i, j int) bool {
for k, xik := range x[i].index {
if k >= len(x[j].index) {
return false
}
if xik != x[j].index[k] {
return xik < x[j].index[k]
}
}
return len(x[i].index) < len(x[j].index)
}
// typeFields returns a list of fields that TOML should recognize for the given
// type. The algorithm is breadth-first search over the set of structs to
// include - the top struct and then any reachable anonymous structs.
func typeFields(t reflect.Type) []field {
// Anonymous fields to explore at the current level and the next.
current := []field{}
next := []field{{typ: t}}
// Count of queued names for current level and the next.
count := map[reflect.Type]int{}
nextCount := map[reflect.Type]int{}
// Types already visited at an earlier level.
visited := map[reflect.Type]bool{}
// Fields found.
var fields []field
for len(next) > 0 {
current, next = next, current[:0]
count, nextCount = nextCount, map[reflect.Type]int{}
for _, f := range current {
if visited[f.typ] {
continue
}
visited[f.typ] = true
// Scan f.typ for fields to include.
for i := 0; i < f.typ.NumField(); i++ {
sf := f.typ.Field(i)
if sf.PkgPath != "" && !sf.Anonymous { // unexported
continue
}
opts := getOptions(sf.Tag)
if opts.skip {
continue
}
index := make([]int, len(f.index)+1)
copy(index, f.index)
index[len(f.index)] = i
ft := sf.Type
if ft.Name() == "" && ft.Kind() == reflect.Ptr {
// Follow pointer.
ft = ft.Elem()
}
// Record found field and index sequence.
if opts.name != "" || !sf.Anonymous || ft.Kind() != reflect.Struct {
tagged := opts.name != ""
name := opts.name
if name == "" {
name = sf.Name
}
fields = append(fields, field{name, tagged, index, ft})
if count[f.typ] > 1 {
// If there were multiple instances, add a second,
// so that the annihilation code will see a duplicate.
// It only cares about the distinction between 1 or 2,
// so don't bother generating any more copies.
fields = append(fields, fields[len(fields)-1])
}
continue
}
// Record new anonymous struct to explore in next round.
nextCount[ft]++
if nextCount[ft] == 1 {
f := field{name: ft.Name(), index: index, typ: ft}
next = append(next, f)
}
}
}
}
sort.Sort(byName(fields))
// Delete all fields that are hidden by the Go rules for embedded fields,
// except that fields with TOML tags are promoted.
// The fields are sorted in primary order of name, secondary order
// of field index length. Loop over names; for each name, delete
// hidden fields by choosing the one dominant field that survives.
out := fields[:0]
for advance, i := 0, 0; i < len(fields); i += advance {
// One iteration per name.
// Find the sequence of fields with the name of this first field.
fi := fields[i]
name := fi.name
for advance = 1; i+advance < len(fields); advance++ {
fj := fields[i+advance]
if fj.name != name {
break
}
}
if advance == 1 { // Only one field with this name
out = append(out, fi)
continue
}
dominant, ok := dominantField(fields[i : i+advance])
if ok {
out = append(out, dominant)
}
}
fields = out
sort.Sort(byIndex(fields))
return fields
}
// dominantField looks through the fields, all of which are known to
// have the same name, to find the single field that dominates the
// others using Go's embedding rules, modified by the presence of
// TOML tags. If there are multiple top-level fields, the boolean
// will be false: This condition is an error in Go and we skip all
// the fields.
func dominantField(fields []field) (field, bool) {
// The fields are sorted in increasing index-length order. The winner
// must therefore be one with the shortest index length. Drop all
// longer entries, which is easy: just truncate the slice.
length := len(fields[0].index)
tagged := -1 // Index of first tagged field.
for i, f := range fields {
if len(f.index) > length {
fields = fields[:i]
break
}
if f.tag {
if tagged >= 0 {
// Multiple tagged fields at the same level: conflict.
// Return no field.
return field{}, false
}
tagged = i
}
}
if tagged >= 0 {
return fields[tagged], true
}
// All remaining fields have the same length. If there's more than one,
// we have a conflict (two fields named "X" at the same level) and we
// return no field.
if len(fields) > 1 {
return field{}, false
}
return fields[0], true
}
var fieldCache struct {
sync.RWMutex
m map[reflect.Type][]field
}
// cachedTypeFields is like typeFields but uses a cache to avoid repeated work.
func cachedTypeFields(t reflect.Type) []field {
fieldCache.RLock()
f := fieldCache.m[t]
fieldCache.RUnlock()
if f != nil {
return f
}
// Compute fields without lock.
// Might duplicate effort but won't hold other computations back.
f = typeFields(t)
if f == nil {
f = []field{}
}
fieldCache.Lock()
if fieldCache.m == nil {
fieldCache.m = map[reflect.Type][]field{}
}
fieldCache.m[t] = f
fieldCache.Unlock()
return f
}
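These field rules (tagged fields win, shallower embedded fields dominate deeper ones) are what the decoder consults when matching TOML keys to struct fields. A small sketch of the observable behaviour, again assuming the usual github.com/BurntSushi/toml import path and its public Decode function:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/toml"
)

type Base struct {
	Name string // shadowed: deeper and untagged
}

type Server struct {
	Base
	Name string `toml:"name"` // tagged and shallower, so it is the dominant field
	Port int    `toml:"port"`
}

func main() {
	var s Server
	if _, err := toml.Decode("name = \"web\"\nport = 8080", &s); err != nil {
		panic(err)
	}
	fmt.Println(s.Name, s.Port) // web 8080
}
```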

13
integration/vendor/github.com/BurntSushi/ty/COPYING generated vendored Normal file
View file

@ -0,0 +1,13 @@
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
Version 2, December 2004
Copyright (C) 2004 Sam Hocevar <sam@hocevar.net>
Everyone is permitted to copy and distribute verbatim or modified
copies of this license document, and changing it is allowed as long
as the name is changed.
DO WHAT THE FUCK YOU WANT TO PUBLIC LICENSE
TERMS AND CONDITIONS FOR COPYING, DISTRIBUTION AND MODIFICATION
0. You just DO WHAT THE FUCK YOU WANT TO.

22
integration/vendor/github.com/BurntSushi/ty/doc.go generated vendored Normal file
View file

@ -0,0 +1,22 @@
/*
Package ty provides utilities for writing type parametric functions with run
time type safety.
This package contains two sub-packages `fun` and `data` which define some
potentially useful functions and abstractions using the type checker in
this package.
Requirements
Go tip (or 1.1 when it's released) is required. This package will not work
with Go 1.0.x or earlier.
The very foundation of this package only recently became possible with the
addition of 3 new functions in the standard library `reflect` package:
SliceOf, MapOf and ChanOf. In particular, it provides the ability to
dynamically construct types at run time from component types.
Further extensions to this package can be made if similar functions are added
for structs and functions(?).
*/
package ty

View file

@ -0,0 +1,84 @@
package fun
import (
"reflect"
"github.com/BurntSushi/ty"
)
// AsyncChan has a parametric type:
//
// func AsyncChan(chan A) (send chan<- A, recv <-chan A)
//
// AsyncChan provides a channel abstraction without a fixed size buffer.
// The input should be a pointer to a channel that has a type without a
// direction, e.g., `new(chan int)`. Two new channels are returned: `send` and
// `recv`. The caller must send data on the `send` channel and receive data on
// the `recv` channel.
//
// Implementation is inspired by Kyle Lemons' work:
// https://github.com/kylelemons/iq/blob/master/iq_slice.go
func AsyncChan(baseChan interface{}) (send, recv interface{}) {
chk := ty.Check(
new(func(*chan ty.A) (chan ty.A, chan ty.A)),
baseChan)
// We don't care about the baseChan---it is only used to construct
// the return types.
tsend, trecv := chk.Returns[0], chk.Returns[1]
buf := make([]reflect.Value, 0, 10)
rsend := reflect.MakeChan(tsend, 0)
rrecv := reflect.MakeChan(trecv, 0)
go func() {
defer rrecv.Close()
BUFLOOP:
for {
if len(buf) == 0 {
rv, ok := rsend.Recv()
if !ok {
break BUFLOOP
}
buf = append(buf, rv)
}
cases := []reflect.SelectCase{
// case v, ok := <-send
{
Dir: reflect.SelectRecv,
Chan: rsend,
},
// case recv <- buf[0]
{
Dir: reflect.SelectSend,
Chan: rrecv,
Send: buf[0],
},
}
choice, rval, rok := reflect.Select(cases)
switch choice {
case 0:
// case v, ok := <-send
if !rok {
break BUFLOOP
}
buf = append(buf, rval)
case 1:
// case recv <- buf[0]
buf = buf[1:]
default:
panic("bug")
}
}
for _, rv := range buf {
rrecv.Send(rv)
}
}()
// Create the directional channel types.
tsDir := reflect.ChanOf(reflect.SendDir, tsend.Elem())
trDir := reflect.ChanOf(reflect.RecvDir, trecv.Elem())
return rsend.Convert(tsDir).Interface(), rrecv.Convert(trDir).Interface()
}
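A minimal sketch of driving AsyncChan, with the returned interface{} values asserted to the directional channel types described above (import path as vendored, github.com/BurntSushi/ty/fun):

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/ty/fun"
)

func main() {
	s, r := fun.AsyncChan(new(chan int))
	send, recv := s.(chan<- int), r.(<-chan int)

	// The internal buffer has no fixed size, so the producer is never
	// blocked by a full channel.
	for i := 0; i < 5; i++ {
		send <- i
	}
	close(send)

	for v := range recv {
		fmt.Println(v)
	}
}
```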

118
integration/vendor/github.com/BurntSushi/ty/fun/doc.go generated vendored Normal file
View file

@ -0,0 +1,118 @@
/*
Package fun provides type parametric utility functions for lists, sets,
channels and maps.
The central contribution of this package is a set of functions that operate
on values without depending on their types while maintaining type safety at
run time using the `reflect` package.
There are two primary concerns when deciding whether to use this package
or not: the loss of compile time type safety and performance. In particular,
with regard to performance, most functions here are much slower than their
built-in counterparts. However, there are a couple where the overhead of
reflection is relatively insignificant: AsyncChan and ParMap.
In terms of code structure and organization, the price is mostly paid inside
of the package due to the annoyances of operating with `reflect`. The caller
usually only has one obligation other than to provide values consistent with
the type of the function: type assert the result to the desired type.
When the caller provides values that are inconsistent with the parametric type
of the function, the function will panic with a `TypeError`. (Either because
the types cannot be unified or because they cannot be constructed due to
limitations of the `reflect` package. See the `github.com/BurntSushi/ty`
package for more details.)
Requirements
Go tip (or 1.1 when it's released) is required. This package will not work
with Go 1.0.x or earlier.
The very foundation of this package only recently became possible with the
addition of 3 new functions in the standard library `reflect` package:
SliceOf, MapOf and ChanOf. In particular, it provides the ability to
dynamically construct types at run time from component types.
Further extensions to this package can be made if similar functions are added
for structs and functions(?).
Examples
Squaring each integer in a slice:
square := func(x int) int { return x * x }
nums := []int{1, 2, 3, 4, 5}
squares := Map(square, nums).([]int)
Reversing any slice:
slice := []string{"a", "b", "c"}
reversed := Reverse(slice).([]string)
Sorting any slice:
// Sort a slice of structs with first class functions.
type Album struct {
Title string
Year int
}
albums := []Album{
{"Born to Run", 1975},
{"WIESS", 1973},
{"Darkness", 1978},
{"Greetings", 1973},
}
less := func(a, b Album) bool { return a.Year < b.Year }
sorted := QuickSort(less, albums).([]Album)
Parallel map:
// Compute the prime factorization concurrently
// for every integer in [1000, 10000].
primeFactors := func(n int) []int { /* compute prime factors */ }
factors := ParMap(primeFactors, Range(1000, 10001)).([][]int)
Asynchronous channel without a fixed size buffer:
s, r := AsyncChan(new(chan int))
send, recv := s.(chan<- int), r.(<-chan int)
// Send as much as you want.
for i := 0; i < 100; i++ {
send <- i
}
close(send)
for i := range recv {
// do something with `i`
}
Shuffle any slice in place:
jumbleMe := []string{"The", "quick", "brown", "fox"}
Shuffle(jumbleMe)
Function memoization:
// Memoizing a recursive function like `fibonacci`.
// Write it like normal:
var fib func(n int64) int64
fib = func(n int64) int64 {
switch n {
case 0:
return 0
case 1:
return 1
}
return fib(n - 1) + fib(n - 2)
}
// And wrap it with `Memo`.
fib = Memo(fib).(func(int64) int64)
// Will keep your CPU busy for a long time
// without memoization.
fmt.Println(fib(80))
*/
package fun

View file

@ -0,0 +1,35 @@
package fun
import (
"reflect"
"github.com/BurntSushi/ty"
)
// Memo has a parametric type:
//
// func Memo(f func(A) B) func(A) B
//
// Memo memoizes any function of a single argument that returns a single value.
// The type `A` must be a Go type for which the comparison operators `==` and
// `!=` are fully defined (this rules out functions, maps and slices).
func Memo(f interface{}) interface{} {
chk := ty.Check(
new(func(func(ty.A) ty.B)),
f)
vf := chk.Args[0]
saved := make(map[interface{}]reflect.Value)
memo := func(in []reflect.Value) []reflect.Value {
val := in[0].Interface()
ret, ok := saved[val]
if ok {
return []reflect.Value{ret}
}
ret = call1(vf, in[0])
saved[val] = ret
return []reflect.Value{ret}
}
return reflect.MakeFunc(vf.Type(), memo).Interface()
}
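A short usage sketch for Memo; the wrapped function must be asserted back to its concrete type, and its argument type must be comparable:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/ty/fun"
)

// slowSquare stands in for an expensive pure function of one argument.
func slowSquare(x int) int { return x * x }

func main() {
	square := fun.Memo(slowSquare).(func(int) int)

	fmt.Println(square(12)) // computed
	fmt.Println(square(12)) // answered from the internal cache
}
```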

303
integration/vendor/github.com/BurntSushi/ty/fun/list.go generated vendored Normal file
View file

@ -0,0 +1,303 @@
package fun
import (
"reflect"
"runtime"
"sync"
"github.com/BurntSushi/ty"
)
// All has a parametric type:
//
// func All(p func(A) bool, xs []A) bool
//
// All returns `true` if and only if every element in `xs` satisfies `p`.
func All(f, xs interface{}) bool {
chk := ty.Check(
new(func(func(ty.A) bool, []ty.A) bool),
f, xs)
vf, vxs := chk.Args[0], chk.Args[1]
xsLen := vxs.Len()
for i := 0; i < xsLen; i++ {
if !call1(vf, vxs.Index(i)).Interface().(bool) {
return false
}
}
return true
}
// Exists has a parametric type:
//
// func Exists(p func(A) bool, xs []A) bool
//
// Exists returns `true` if and only if an element in `xs` satisfies `p`.
func Exists(f, xs interface{}) bool {
chk := ty.Check(
new(func(func(ty.A) bool, []ty.A) bool),
f, xs)
vf, vxs := chk.Args[0], chk.Args[1]
xsLen := vxs.Len()
for i := 0; i < xsLen; i++ {
if call1(vf, vxs.Index(i)).Interface().(bool) {
return true
}
}
return false
}
// In has a parametric type:
//
// func In(needle A, haystack []A) bool
//
// In returns `true` if and only if `v` can be found in `xs`. The equality test
// used is Go's standard `==` equality and NOT deep equality.
//
// Note that this requires that `A` be a type that can be meaningfully compared.
func In(needle, haystack interface{}) bool {
chk := ty.Check(
new(func(ty.A, []ty.A) bool),
needle, haystack)
vhaystack := chk.Args[1]
length := vhaystack.Len()
for i := 0; i < length; i++ {
if vhaystack.Index(i).Interface() == needle {
return true
}
}
return false
}
// Map has a parametric type:
//
// func Map(f func(A) B, xs []A) []B
//
// Map returns the list corresponding to the return value of applying
// `f` to each element in `xs`.
func Map(f, xs interface{}) interface{} {
chk := ty.Check(
new(func(func(ty.A) ty.B, []ty.A) []ty.B),
f, xs)
vf, vxs, tys := chk.Args[0], chk.Args[1], chk.Returns[0]
xsLen := vxs.Len()
vys := reflect.MakeSlice(tys, xsLen, xsLen)
for i := 0; i < xsLen; i++ {
vy := call1(vf, vxs.Index(i))
vys.Index(i).Set(vy)
}
return vys.Interface()
}
// Filter has a parametric type:
//
// func Filter(p func(A) bool, xs []A) []A
//
// Filter returns a new list only containing the elements of `xs` that satisfy
// the predicate `p`.
func Filter(p, xs interface{}) interface{} {
chk := ty.Check(
new(func(func(ty.A) bool, []ty.A) []ty.A),
p, xs)
vp, vxs, tys := chk.Args[0], chk.Args[1], chk.Returns[0]
xsLen := vxs.Len()
vys := reflect.MakeSlice(tys, 0, xsLen)
for i := 0; i < xsLen; i++ {
vx := vxs.Index(i)
if call1(vp, vx).Bool() {
vys = reflect.Append(vys, vx)
}
}
return vys.Interface()
}
// Foldl has a parametric type:
//
// func Foldl(f func(A, B) B, init B, xs []A) B
//
// Foldl reduces a list of A to a single element B using a left fold with
// an initial value `init`.
func Foldl(f, init, xs interface{}) interface{} {
chk := ty.Check(
new(func(func(ty.A, ty.B) ty.B, ty.B, []ty.A) ty.B),
f, init, xs)
vf, vinit, vxs, tb := chk.Args[0], chk.Args[1], chk.Args[2], chk.Returns[0]
xsLen := vxs.Len()
vb := zeroValue(tb)
vb.Set(vinit)
if xsLen == 0 {
return vb.Interface()
}
vb.Set(call1(vf, vxs.Index(0), vb))
for i := 1; i < xsLen; i++ {
vb.Set(call1(vf, vxs.Index(i), vb))
}
return vb.Interface()
}
// Foldr has a parametric type:
//
// func Foldr(f func(A, B) B, init B, xs []A) B
//
// Foldr reduces a list of A to a single element B using a right fold with
// an initial value `init`.
func Foldr(f, init, xs interface{}) interface{} {
chk := ty.Check(
new(func(func(ty.A, ty.B) ty.B, ty.B, []ty.A) ty.B),
f, init, xs)
vf, vinit, vxs, tb := chk.Args[0], chk.Args[1], chk.Args[2], chk.Returns[0]
xsLen := vxs.Len()
vb := zeroValue(tb)
vb.Set(vinit)
if xsLen == 0 {
return vb.Interface()
}
vb.Set(call1(vf, vxs.Index(xsLen-1), vb))
for i := xsLen - 2; i >= 0; i-- {
vb.Set(call1(vf, vxs.Index(i), vb))
}
return vb.Interface()
}
// Concat has a parametric type:
//
// func Concat(xs [][]A) []A
//
// Concat returns a new flattened list by appending all elements of `xs`.
func Concat(xs interface{}) interface{} {
chk := ty.Check(
new(func([][]ty.A) []ty.A),
xs)
vxs, tflat := chk.Args[0], chk.Returns[0]
xsLen := vxs.Len()
vflat := reflect.MakeSlice(tflat, 0, xsLen*3)
for i := 0; i < xsLen; i++ {
vflat = reflect.AppendSlice(vflat, vxs.Index(i))
}
return vflat.Interface()
}
// Reverse has a parametric type:
//
// func Reverse(xs []A) []A
//
// Reverse returns a new slice that is the reverse of `xs`.
func Reverse(xs interface{}) interface{} {
chk := ty.Check(
new(func([]ty.A) []ty.A),
xs)
vxs, tys := chk.Args[0], chk.Returns[0]
xsLen := vxs.Len()
vys := reflect.MakeSlice(tys, xsLen, xsLen)
for i := 0; i < xsLen; i++ {
vys.Index(i).Set(vxs.Index(xsLen - 1 - i))
}
return vys.Interface()
}
// Copy has a parametric type:
//
// func Copy(xs []A) []A
//
// Copy returns a copy of `xs` using Go's `copy` operation.
func Copy(xs interface{}) interface{} {
chk := ty.Check(
new(func([]ty.A) []ty.A),
xs)
vxs, tys := chk.Args[0], chk.Returns[0]
xsLen := vxs.Len()
vys := reflect.MakeSlice(tys, xsLen, xsLen)
reflect.Copy(vys, vxs)
return vys.Interface()
}
// ParMap has a parametric type:
//
// func ParMap(f func(A) B, xs []A) []B
//
// ParMap is just like Map, except it applies `f` to each element in `xs`
// concurrently using N worker goroutines (where N is the number of CPUs
// available reported by the Go runtime). If you want to control the number
// of goroutines spawned, use `ParMapN`.
//
// It is important that `f` not be a trivial operation, otherwise the overhead
// of executing it concurrently will result in worse performance than using
// a `Map`.
func ParMap(f, xs interface{}) interface{} {
n := runtime.NumCPU()
if n < 1 {
n = 1
}
return ParMapN(f, xs, n)
}
// ParMapN has a parametric type:
//
// func ParMapN(f func(A) B, xs []A, n int) []B
//
// ParMapN is just like Map, except it applies `f` to each element in `xs`
// concurrently using `n` worker goroutines.
//
// It is important that `f` not be a trivial operation, otherwise the overhead
// of executing it concurrently will result in worse performance than using
// a `Map`.
func ParMapN(f, xs interface{}, n int) interface{} {
chk := ty.Check(
new(func(func(ty.A) ty.B, []ty.A) []ty.B),
f, xs)
vf, vxs, tys := chk.Args[0], chk.Args[1], chk.Returns[0]
xsLen := vxs.Len()
ys := reflect.MakeSlice(tys, xsLen, xsLen)
if n < 1 {
n = 1
}
work := make(chan int, n)
wg := new(sync.WaitGroup)
for i := 0; i < n; i++ {
wg.Add(1)
go func() {
for j := range work {
// Good golly miss molly. Is `reflect.Value.Index`
// safe to access/set from multiple goroutines?
// XXX: If not, we'll need an extra wave of allocation to
// use real slices of `reflect.Value`.
ys.Index(j).Set(call1(vf, vxs.Index(j)))
}
wg.Done()
}()
}
for i := 0; i < xsLen; i++ {
work <- i
}
close(work)
wg.Wait()
return ys.Interface()
}
// Range generates a list of integers corresponding to every integer in
// the half-open interval [x, y).
//
// Range will panic if `end < start`.
func Range(start, end int) []int {
if end < start {
panic("range must have end greater than or equal to start")
}
r := make([]int, end-start)
for i := start; i < end; i++ {
r[i-start] = i
}
return r
}
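A small sketch combining Range, Filter, Map and Foldl from this file; every result comes back as interface{} and is type-asserted by the caller:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/ty/fun"
)

func main() {
	xs := fun.Range(1, 6) // [1 2 3 4 5]

	evens := fun.Filter(func(x int) bool { return x%2 == 0 }, xs).([]int)
	doubled := fun.Map(func(x int) int { return 2 * x }, evens).([]int)
	// Foldl passes (element, accumulator) to the folding function.
	sum := fun.Foldl(func(x, acc int) int { return acc + x }, 0, doubled).(int)

	fmt.Println(evens, doubled, sum) // [2 4] [4 8] 12
}
```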

46
integration/vendor/github.com/BurntSushi/ty/fun/map.go generated vendored Normal file
View file

@ -0,0 +1,46 @@
package fun
import (
"reflect"
"github.com/BurntSushi/ty"
)
// Keys has a parametric type:
//
// func Keys(m map[A]B) []A
//
// Keys returns a list of the keys of `m` in an unspecified order.
func Keys(m interface{}) interface{} {
chk := ty.Check(
new(func(map[ty.A]ty.B) []ty.A),
m)
vm, tkeys := chk.Args[0], chk.Returns[0]
vkeys := reflect.MakeSlice(tkeys, vm.Len(), vm.Len())
for i, vkey := range vm.MapKeys() {
vkeys.Index(i).Set(vkey)
}
return vkeys.Interface()
}
// Values has a parametric type:
//
// func Values(m map[A]B) []B
//
// Values returns a list of the values of `m` in an unspecified order.
func Values(m interface{}) interface{} {
chk := ty.Check(
new(func(map[ty.A]ty.B) []ty.B),
m)
vm, tvals := chk.Args[0], chk.Returns[0]
vvals := reflect.MakeSlice(tvals, vm.Len(), vm.Len())
for i, vkey := range vm.MapKeys() {
vvals.Index(i).Set(vm.MapIndex(vkey))
}
return vvals.Interface()
}
// func MapMerge(m1, m2 interface{}) interface{} {
// }
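A brief sketch for Keys and Values; since map iteration order is unspecified, the results are sorted before printing:

```go
package main

import (
	"fmt"
	"sort"

	"github.com/BurntSushi/ty/fun"
)

func main() {
	ages := map[string]int{"ana": 30, "bob": 25}

	keys := fun.Keys(ages).([]string)
	vals := fun.Values(ages).([]int)

	sort.Strings(keys)
	sort.Ints(vals)
	fmt.Println(keys, vals) // [ana bob] [25 30]
}
```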

View file

@ -0,0 +1,94 @@
package fun
import (
"math/rand"
"reflect"
"time"
"github.com/BurntSushi/ty"
)
var randNumGen *rand.Rand
func init() {
randNumGen = rand.New(rand.NewSource(time.Now().UnixNano()))
}
// ShuffleGen has a parametric type:
//
// func ShuffleGen(xs []A, rng *rand.Rand)
//
// ShuffleGen shuffles `xs` in place using the given random number
// generator `rng`.
func ShuffleGen(xs interface{}, rng *rand.Rand) {
chk := ty.Check(
new(func([]ty.A, *rand.Rand)),
xs, rng)
vxs := chk.Args[0]
// Implements the Fisher-Yates shuffle: http://goo.gl/Hb9vg
xsLen := vxs.Len()
swapper := swapperOf(vxs.Type().Elem())
for i := xsLen - 1; i >= 1; i-- {
j := rng.Intn(i + 1)
swapper.swap(vxs.Index(i), vxs.Index(j))
}
}
// Shuffle has a parametric type:
//
// func Shuffle(xs []A)
//
// Shuffle shuffles `xs` in place using a default random number
// generator seeded once at program initialization.
func Shuffle(xs interface{}) {
ShuffleGen(xs, randNumGen)
}
// Sample has a parametric type:
//
// func Sample(population []A, n int) []A
//
// Sample returns a random sample of size `n` from a list
// `population` using a default random number generator seeded once at
// program initialization.
// All elements in `population` have an equal chance of being selected.
// If `n` is greater than the size of `population`, then `n` is set to
// the size of the population.
func Sample(population interface{}, n int) interface{} {
return SampleGen(population, n, randNumGen)
}
// SampleGen has a parametric type:
//
// func SampleGen(population []A, n int, rng *rand.Rand) []A
//
// SampleGen returns a random sample of size `n` from a list
// `population` using a given random number generator `rng`.
// All elements in `population` have an equal chance of being selected.
// If `n` is greater than the size of `population`, then `n` is set to
// the size of the population.
func SampleGen(population interface{}, n int, rng *rand.Rand) interface{} {
chk := ty.Check(
new(func([]ty.A, int, *rand.Rand) []ty.A),
population, n, rng)
rpop, tsamp := chk.Args[0], chk.Returns[0]
popLen := rpop.Len()
if n == 0 {
return reflect.MakeSlice(tsamp, 0, 0).Interface()
}
if n > popLen {
n = popLen
}
// TODO(burntsushi): Implement an algorithm that doesn't depend on
// the size of the population.
rsamp := reflect.MakeSlice(tsamp, n, n)
choices := rng.Perm(popLen)
for i := 0; i < n; i++ {
rsamp.Index(i).Set(rpop.Index(choices[i]))
}
return rsamp.Interface()
}
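A short sketch of Shuffle (in place) and Sample (new slice), both driven by the package-level generator seeded at init:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/ty/fun"
)

func main() {
	words := []string{"the", "quick", "brown", "fox"}

	fun.Shuffle(words)                      // rearranges `words` in place
	pair := fun.Sample(words, 2).([]string) // two randomly chosen elements

	fmt.Println(words, pair)
}
```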

99
integration/vendor/github.com/BurntSushi/ty/fun/set.go generated vendored Normal file
View file

@ -0,0 +1,99 @@
package fun
import (
"reflect"
"github.com/BurntSushi/ty"
)
// Set has a parametric type:
//
// func Set(xs []A) map[A]bool
//
// Set creates a set from a list.
func Set(xs interface{}) interface{} {
chk := ty.Check(
new(func([]ty.A) map[ty.A]bool),
xs)
vxs, tset := chk.Args[0], chk.Returns[0]
vtrue := reflect.ValueOf(true)
vset := reflect.MakeMap(tset)
xsLen := vxs.Len()
for i := 0; i < xsLen; i++ {
vset.SetMapIndex(vxs.Index(i), vtrue)
}
return vset.Interface()
}
// Union has a parametric type:
//
// func Union(a map[A]bool, b map[A]bool) map[A]bool
//
// Union returns the union of two sets, where a set is represented as a
// `map[A]bool`. The sets `a` and `b` are not modified.
func Union(a, b interface{}) interface{} {
chk := ty.Check(
new(func(map[ty.A]bool, map[ty.A]bool) map[ty.A]bool),
a, b)
va, vb, tc := chk.Args[0], chk.Args[1], chk.Returns[0]
vtrue := reflect.ValueOf(true)
vc := reflect.MakeMap(tc)
for _, vkey := range va.MapKeys() {
vc.SetMapIndex(vkey, vtrue)
}
for _, vkey := range vb.MapKeys() {
vc.SetMapIndex(vkey, vtrue)
}
return vc.Interface()
}
// Intersection has a parametric type:
//
// func Intersection(a map[A]bool, b map[A]bool) map[A]bool
//
// Intersection returns the intersection of two sets, where a set is
// represented as a `map[A]bool`. The sets `a` and `b` are not modified.
func Intersection(a, b interface{}) interface{} {
chk := ty.Check(
new(func(map[ty.A]bool, map[ty.A]bool) map[ty.A]bool),
a, b)
va, vb, tc := chk.Args[0], chk.Args[1], chk.Returns[0]
vtrue := reflect.ValueOf(true)
vc := reflect.MakeMap(tc)
for _, vkey := range va.MapKeys() {
if vb.MapIndex(vkey).IsValid() {
vc.SetMapIndex(vkey, vtrue)
}
}
for _, vkey := range vb.MapKeys() {
if va.MapIndex(vkey).IsValid() {
vc.SetMapIndex(vkey, vtrue)
}
}
return vc.Interface()
}
// Difference has a parametric type:
//
// func Difference(a map[A]bool, b map[A]bool) map[A]bool
//
// Difference returns a set with all elements in `a` that are not in `b`.
// The sets `a` and `b` are not modified.
func Difference(a, b interface{}) interface{} {
chk := ty.Check(
new(func(map[ty.A]bool, map[ty.A]bool) map[ty.A]bool),
a, b)
va, vb, tc := chk.Args[0], chk.Args[1], chk.Returns[0]
vtrue := reflect.ValueOf(true)
vc := reflect.MakeMap(tc)
for _, vkey := range va.MapKeys() {
if !vb.MapIndex(vkey).IsValid() {
vc.SetMapIndex(vkey, vtrue)
}
}
return vc.Interface()
}
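A compact sketch of the set helpers; sets are plain map[A]bool values, so the results can be inspected directly:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/ty/fun"
)

func main() {
	a := fun.Set([]int{1, 2, 3}).(map[int]bool)
	b := fun.Set([]int{3, 4}).(map[int]bool)

	union := fun.Union(a, b).(map[int]bool)        // {1 2 3 4}
	inter := fun.Intersection(a, b).(map[int]bool) // {3}
	diff := fun.Difference(a, b).(map[int]bool)    // {1 2}

	fmt.Println(len(union), len(inter), len(diff)) // 4 1 2
}
```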

View file

@ -0,0 +1,98 @@
package fun
import (
"reflect"
"sort"
"github.com/BurntSushi/ty"
)
// QuickSort has a parametric type:
//
// func QuickSort(less func(x1 A, x2 A) bool, []A) []A
//
// QuickSort applies the "quicksort" algorithm to return a new sorted list
// of `xs`, where `xs` is not modified.
//
// `less` should be a function that returns true if and only if `x1` is less
// than `x2`.
func QuickSort(less, xs interface{}) interface{} {
chk := ty.Check(
new(func(func(ty.A, ty.A) bool, []ty.A) []ty.A),
less, xs)
vless, vxs, tys := chk.Args[0], chk.Args[1], chk.Returns[0]
var qsort func(left, right int)
var partition func(left, right, pivot int) int
xsind := Range(0, vxs.Len())
qsort = func(left, right int) {
if left >= right {
return
}
pivot := (left + right) / 2
pivot = partition(left, right, pivot)
qsort(left, pivot-1)
qsort(pivot+1, right)
}
partition = func(left, right, pivot int) int {
vpivot := xsind[pivot]
xsind[pivot], xsind[right] = xsind[right], xsind[pivot]
ind := left
for i := left; i < right; i++ {
if call1(vless, vxs.Index(xsind[i]), vxs.Index(vpivot)).Bool() {
xsind[i], xsind[ind] = xsind[ind], xsind[i]
ind++
}
}
xsind[ind], xsind[right] = xsind[right], xsind[ind]
return ind
}
// Sort `xsind` in place.
qsort(0, len(xsind)-1)
vys := reflect.MakeSlice(tys, len(xsind), len(xsind))
for i, xsIndex := range xsind {
vys.Index(i).Set(vxs.Index(xsIndex))
}
return vys.Interface()
}
// Sort has a parametric type:
//
// func Sort(less func(x1 A, x2 A) bool, []A)
//
// Sort uses the standard library `sort` package to sort `xs` in place.
//
// `less` should be a function that returns true if and only if `x1` is less
// than `x2`.
func Sort(less, xs interface{}) {
chk := ty.Check(
new(func(func(ty.A, ty.A) bool, []ty.A)),
less, xs)
vless, vxs := chk.Args[0], chk.Args[1]
sort.Sort(&sortable{vless, vxs, swapperOf(vxs.Type().Elem())})
}
type sortable struct {
less reflect.Value
xs reflect.Value
swapper swapper
}
func (s *sortable) Less(i, j int) bool {
ith, jth := s.xs.Index(i), s.xs.Index(j)
return call1(s.less, ith, jth).Bool()
}
func (s *sortable) Swap(i, j int) {
s.swapper.swap(s.xs.Index(i), s.xs.Index(j))
}
func (s *sortable) Len() int {
return s.xs.Len()
}
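A minimal sketch contrasting QuickSort, which returns a sorted copy, with Sort, which reorders the slice in place through the standard sort package:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/ty/fun"
)

func main() {
	nums := []int{3, 1, 2}
	less := func(a, b int) bool { return a < b }

	sorted := fun.QuickSort(less, nums).([]int) // nums is left untouched here
	fun.Sort(less, nums)                        // nums is now sorted in place

	fmt.Println(sorted, nums) // [1 2 3] [1 2 3]
}
```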

View file

@ -0,0 +1,37 @@
package fun
import (
"reflect"
)
func zeroValue(typ reflect.Type) reflect.Value {
return reflect.New(typ).Elem()
}
type swapper reflect.Value
func swapperOf(typ reflect.Type) swapper {
return swapper(zeroValue(typ))
}
func (s swapper) swap(a, b reflect.Value) {
vs := reflect.Value(s)
vs.Set(a)
a.Set(b)
b.Set(vs)
}
func call(f reflect.Value, args ...reflect.Value) {
f.Call(args)
}
func call1(f reflect.Value, args ...reflect.Value) reflect.Value {
return f.Call(args)[0]
}
func call2(f reflect.Value, args ...reflect.Value) (
reflect.Value, reflect.Value) {
ret := f.Call(args)
return ret[0], ret[1]
}

View file

@ -0,0 +1,338 @@
package ty
import (
"fmt"
"reflect"
"strings"
)
// TypeError corresponds to any error reported by the `Check` function.
// Since `Check` panics, if you want to run `Check` safely, it is
// appropriate to recover and use a type switch to discover a `TypeError`
// value.
type TypeError string
func (te TypeError) Error() string {
return string(te)
}
func pe(format string, v ...interface{}) TypeError {
return TypeError(fmt.Sprintf(format, v...))
}
func ppe(format string, v ...interface{}) {
panic(pe(format, v...))
}
// Typed corresponds to the information returned by `Check`.
type Typed struct {
// In correspondence with the `as` parameter to `Check`.
Args []reflect.Value
// In correspondence with the return types of `f` in `Check`.
Returns []reflect.Type
// The type environment generated via unification in `Check`.
// (Its usefulness in the public API is questionable.)
TypeEnv map[string]reflect.Type
}
// Check accepts a function `f`, which may have a parametric type, along with a
// number of arguments in correspondence with the arguments to `f`,
// and returns inferred Go type information. This type information includes
// a list of `reflect.Value` in correspondence with `as`, a list of
// `reflect.Type` in correspondence with the return types of `f` and a type
// environment mapping type variables to `reflect.Type`.
//
// The power of `Check` comes from the following invariant: if `Check` returns,
// then the types of the arguments corresponding to `as` are consistent
// with the parametric type of `f`, *and* the parametric return types of `f`
// were made into valid Go types that are not parametric. Otherwise, there is
// a bug in `Check`.
//
// More concretely, consider a simple parametric function `Map`, which
// transforms a list of elements by applying a function to each element in
// order to generate a new list. Such a function constructed only for integers
// might have a type like
//
// func Map(func(int) int, []int) []int
//
// But the parametric type of `Map` could be given with
//
// func Map(func(A) B, []A) []B
//
// which in English reads, "Given a function from any type `A` to any type `B`
// and a slice of `A`, `Map` returns a slice of `B`."
//
// To write a parametric function like `Map`, one can pass a pointer
// to a nil function of the desired parametric type to get the reflection
// information:
//
// func Map(f, xs interface{}) interface{} {
// // Given the parametric type and the arguments, Check will
// // return all the reflection information you need to write `Map`.
// uni := ty.Check(
// new(func(func(ty.A) ty.B, []ty.A) []ty.B),
// f, xs)
//
// // `vf` and `vxs` are `reflect.Value`s of `f` and `xs`.
// vf, vxs := uni.Args[0], uni.Args[1]
//
// // `tys` is a `reflect.Type` of `[]ty.B` where `ty.B` is replaced
// // with the return type of the given function `f`.
// tys := uni.Returns[0]
//
// // Given the promise of `Check`, we now know that `vf` has
// // type `func(ty.A) ty.B` and `vxs` has type `[]ty.A`.
// xsLen := vxs.Len()
//
// // Constructs a new slice which will have type `[]ty.B`.
// vys := reflect.MakeSlice(tys, xsLen, xsLen)
//
// // Actually perform the `Map` operation, but in the world of
// // reflection.
// for i := 0; i < xsLen; i++ {
// vy := vf.Call([]reflect.Value{vxs.Index(i)})[0]
// vys.Index(i).Set(vy)
// }
//
// // The `reflect.Value.Interface` method is how we exit the world of
// // reflection. The onus is now on the caller to type assert it to
// // the appropriate type.
// return vys.Interface()
// }
//
// Working in the reflection world is certainly more inconvenient than writing
// regular Go code, but the information and invariants held by `Check` provide
// a more convenient experience than how one normally works with reflection.
// (Notice that there is no error-prone type switching or boilerplate to
// construct new types, since `Check` guarantees the types are consistent
// with the inputs for us.)
//
// And while writing such functions is still not so convenient,
// invoking them is simple:
//
// square := func(x int) int { return x * x }
// squared := Map(square, []int{1, 2, 3, 4, 5}).([]int)
//
// Restrictions
//
// There are a few restrictions imposed on the parametric return types of
// `f`: type variables may only be found in types that can be composed by the
// `reflect` package. This *only* includes channels, maps, pointers and slices.
// If a type variable is found in an array, function or struct, `Check` will
// panic.
//
// Also, type variables inside of structs are ignored in the types of the
// arguments `as`. This restriction may be lifted in the future.
//
// To be clear: type variables *may* appear in arrays or functions in the types
// of the arguments `as`.
func Check(f interface{}, as ...interface{}) *Typed {
rf := reflect.ValueOf(f)
tf := rf.Type()
if tf.Kind() == reflect.Ptr {
rf = reflect.Indirect(rf)
tf = rf.Type()
}
if tf.Kind() != reflect.Func {
ppe("The type of `f` must be a function, but it is a '%s'.", tf.Kind())
}
if tf.NumIn() != len(as) {
ppe("`f` expects %d arguments, but only %d were given.",
tf.NumIn(), len(as))
}
// Populate the argument value list.
args := make([]reflect.Value, len(as))
for i := 0; i < len(as); i++ {
args[i] = reflect.ValueOf(as[i])
}
// Populate our type variable environment through unification.
tyenv := make(tyenv)
for i := 0; i < len(args); i++ {
tp := typePair{tyenv, tf.In(i), args[i].Type()}
// Mutates the type variable environment.
if err := tp.unify(tp.param, tp.input); err != nil {
argTypes := make([]string, len(args))
for i := range args {
argTypes[i] = args[i].Type().String()
}
ppe("\nError type checking\n\t%s\nwith argument types\n\t(%s)\n%s",
tf, strings.Join(argTypes, ", "), err)
}
}
// Now substitute those types into the return types of `f`.
retTypes := make([]reflect.Type, tf.NumOut())
for i := 0; i < tf.NumOut(); i++ {
retTypes[i] = (&returnType{tyenv, tf.Out(i)}).tysubst(tf.Out(i))
}
return &Typed{args, retTypes, map[string]reflect.Type(tyenv)}
}
// tyenv maps type variable names to their inferred Go type.
type tyenv map[string]reflect.Type
// typePair represents a pair of types to be unified. They act as a way to
// report sensible error messages from within the unification algorithm.
//
// It also includes a type environment, which is mutated during unification.
type typePair struct {
tyenv tyenv
param reflect.Type
input reflect.Type
}
func (tp typePair) error(format string, v ...interface{}) error {
return pe("Type error when unifying type '%s' and '%s': %s",
tp.param, tp.input, fmt.Sprintf(format, v...))
}
// unify attempts to satisfy a pair of types, where the `param` type is the
// expected type of a function argument and the `input` type is the known
// type of a function argument. The `param` type may be parametric (that is,
// it may contain a type that is convertible to TypeVariable) but the
// `input` type may *not* be parametric.
//
// Any failure to unify the two types results in a panic.
//
// The end result of unification is a type environment: a set of substitutions
// from type variable to a Go type.
func (tp typePair) unify(param, input reflect.Type) error {
if tyname := tyvarName(input); len(tyname) > 0 {
return tp.error("Type variables are not allowed in the types of " +
"arguments.")
}
if tyname := tyvarName(param); len(tyname) > 0 {
if cur, ok := tp.tyenv[tyname]; ok && cur != input {
return tp.error("Type variable %s expected type '%s' but got '%s'.",
tyname, cur, input)
} else if !ok {
tp.tyenv[tyname] = input
}
return nil
}
if param.Kind() != input.Kind() {
return tp.error("Cannot unify different kinds of types '%s' and '%s'.",
param, input)
}
switch param.Kind() {
case reflect.Array:
return tp.unify(param.Elem(), input.Elem())
case reflect.Chan:
if param.ChanDir() != input.ChanDir() {
return tp.error("Cannot unify '%s' with '%s' "+
"(channel directions are different: '%s' != '%s').",
param, input, param.ChanDir(), input.ChanDir())
}
return tp.unify(param.Elem(), input.Elem())
case reflect.Func:
if param.NumIn() != input.NumIn() || param.NumOut() != input.NumOut() {
return tp.error("Cannot unify '%s' with '%s'.", param, input)
}
for i := 0; i < param.NumIn(); i++ {
if err := tp.unify(param.In(i), input.In(i)); err != nil {
return err
}
}
for i := 0; i < param.NumOut(); i++ {
if err := tp.unify(param.Out(i), input.Out(i)); err != nil {
return err
}
}
case reflect.Map:
if err := tp.unify(param.Key(), input.Key()); err != nil {
return err
}
return tp.unify(param.Elem(), input.Elem())
case reflect.Ptr:
return tp.unify(param.Elem(), input.Elem())
case reflect.Slice:
return tp.unify(param.Elem(), input.Elem())
}
// The only other container types are Interface and Struct.
// I am unsure about what to do with interfaces. Mind is fuzzy.
// Structs? I don't think it really makes much sense to use type
// variables inside of them.
return nil
}
// returnType corresponds to the type of a single return value of a function,
// in which the type may be parametric. It also contains a type environment
// constructed from unification.
type returnType struct {
tyenv tyenv
typ reflect.Type
}
func (rt returnType) panic(format string, v ...interface{}) {
ppe("Error substituting in return type '%s': %s",
rt.typ, fmt.Sprintf(format, v...))
}
// tysubst attempts to substitute all type variables within a single return
// type with their corresponding Go type from the type environment.
//
// tysubst will panic if a type variable is unbound, or if it encounters a
// type that cannot be dynamically created. Such types include arrays,
// functions and structs. (A limitation of the `reflect` package.)
func (rt returnType) tysubst(typ reflect.Type) reflect.Type {
if tyname := tyvarName(typ); len(tyname) > 0 {
if thetype, ok := rt.tyenv[tyname]; !ok {
rt.panic("Unbound type variable %s.", tyname)
} else {
return thetype
}
}
switch typ.Kind() {
case reflect.Array:
rt.panic("Cannot dynamically create Array types.")
case reflect.Chan:
return reflect.ChanOf(typ.ChanDir(), rt.tysubst(typ.Elem()))
case reflect.Func:
rt.panic("Cannot dynamically create Function types.")
case reflect.Interface:
// rt.panic("TODO")
// Not sure if this is right.
return typ
case reflect.Map:
return reflect.MapOf(rt.tysubst(typ.Key()), rt.tysubst(typ.Elem()))
case reflect.Ptr:
return reflect.PtrTo(rt.tysubst(typ.Elem()))
case reflect.Slice:
return reflect.SliceOf(rt.tysubst(typ.Elem()))
case reflect.Struct:
rt.panic("Cannot dynamically create Struct types.")
case reflect.UnsafePointer:
rt.panic("Cannot dynamically create unsafe.Pointer types.")
}
// We've covered all the composite types, so we're only left with
// base types.
return typ
}
func tyvarName(t reflect.Type) string {
if !t.ConvertibleTo(tyvarUnderlyingType) {
return ""
}
return t.Name()
}
// AssertType panics with a `TypeError` if `v` does not have type `t`.
// Otherwise, it returns the `reflect.Value` of `v`.
func AssertType(v interface{}, t reflect.Type) reflect.Value {
rv := reflect.ValueOf(v)
tv := rv.Type()
if tv != t {
ppe("Value '%v' has type '%s' but expected '%s'.", v, tv, t)
}
return rv
}

28
integration/vendor/github.com/BurntSushi/ty/tyvars.go generated vendored Normal file
View file

@ -0,0 +1,28 @@
package ty
import (
"reflect"
)
// TypeVariable is the underlying type of every type variable used in
// parametric types. It should not be used directly. Instead, use
//
// type myOwnTypeVariable TypeVariable
//
// to create your own type variable. For your convenience, this package
// defines some type variables for you. (e.g., `A`, `B`, `C`, ...)
type TypeVariable struct {
noImitation struct{}
}
// tyvarUnderlyingType is used to discover types that are type variables.
// Namely, any type variable must be convertible to `TypeVariable`.
var tyvarUnderlyingType = reflect.TypeOf(TypeVariable{})
type A TypeVariable
type B TypeVariable
type C TypeVariable
type D TypeVariable
type E TypeVariable
type F TypeVariable
type G TypeVariable
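A sketch of declaring a private type variable, as the comment above suggests, and using it with ty.Check to write a tiny parametric function:

```go
package main

import (
	"fmt"

	"github.com/BurntSushi/ty"
)

// elem is a custom type variable: a named type whose underlying type is
// ty.TypeVariable, exactly as described above.
type elem ty.TypeVariable

// Head has the parametric type `func Head(xs []elem) elem`; it returns the
// first element of a non-empty slice.
func Head(xs interface{}) interface{} {
	chk := ty.Check(
		new(func([]elem) elem),
		xs)
	return chk.Args[0].Index(0).Interface()
}

func main() {
	fmt.Println(Head([]string{"a", "b"}).(string)) // a
}
```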

View file

@ -0,0 +1,22 @@
The MIT License (MIT)
Copyright (c) 2015 Microsoft
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.

View file

@ -0,0 +1,266 @@
package winio
import (
"encoding/binary"
"errors"
"fmt"
"io"
"io/ioutil"
"os"
"runtime"
"syscall"
"unicode/utf16"
)
//sys backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupRead
//sys backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) = BackupWrite
const (
BackupData = uint32(iota + 1)
BackupEaData
BackupSecurity
BackupAlternateData
BackupLink
BackupPropertyData
BackupObjectId
BackupReparseData
BackupSparseBlock
BackupTxfsData
)
const (
StreamSparseAttributes = uint32(8)
)
const (
WRITE_DAC = 0x40000
WRITE_OWNER = 0x80000
ACCESS_SYSTEM_SECURITY = 0x1000000
)
// BackupHeader represents a backup stream of a file.
type BackupHeader struct {
Id uint32 // The backup stream ID
Attributes uint32 // Stream attributes
Size int64 // The size of the stream in bytes
Name string // The name of the stream (for BackupAlternateData only).
Offset int64 // The offset of the stream in the file (for BackupSparseBlock only).
}
type win32StreamId struct {
StreamId uint32
Attributes uint32
Size uint64
NameSize uint32
}
// BackupStreamReader reads from a stream produced by the BackupRead Win32 API and produces a series
// of BackupHeader values.
type BackupStreamReader struct {
r io.Reader
bytesLeft int64
}
// NewBackupStreamReader produces a BackupStreamReader from any io.Reader.
func NewBackupStreamReader(r io.Reader) *BackupStreamReader {
return &BackupStreamReader{r, 0}
}
// Next returns the next backup stream and prepares for calls to Read(). It skips the remainder of the current stream if
// it was not completely read.
func (r *BackupStreamReader) Next() (*BackupHeader, error) {
if r.bytesLeft > 0 {
if _, err := io.Copy(ioutil.Discard, r); err != nil {
return nil, err
}
}
var wsi win32StreamId
if err := binary.Read(r.r, binary.LittleEndian, &wsi); err != nil {
return nil, err
}
hdr := &BackupHeader{
Id: wsi.StreamId,
Attributes: wsi.Attributes,
Size: int64(wsi.Size),
}
if wsi.NameSize != 0 {
name := make([]uint16, int(wsi.NameSize/2))
if err := binary.Read(r.r, binary.LittleEndian, name); err != nil {
return nil, err
}
hdr.Name = syscall.UTF16ToString(name)
}
if wsi.StreamId == BackupSparseBlock {
if err := binary.Read(r.r, binary.LittleEndian, &hdr.Offset); err != nil {
return nil, err
}
hdr.Size -= 8
}
r.bytesLeft = hdr.Size
return hdr, nil
}
// Read reads from the current backup stream.
func (r *BackupStreamReader) Read(b []byte) (int, error) {
if r.bytesLeft == 0 {
return 0, io.EOF
}
if int64(len(b)) > r.bytesLeft {
b = b[:r.bytesLeft]
}
n, err := r.r.Read(b)
r.bytesLeft -= int64(n)
if err == io.EOF {
err = io.ErrUnexpectedEOF
} else if r.bytesLeft == 0 && err == nil {
err = io.EOF
}
return n, err
}
// BackupStreamWriter writes a stream compatible with the BackupWrite Win32 API.
type BackupStreamWriter struct {
w io.Writer
bytesLeft int64
}
// NewBackupStreamWriter produces a BackupStreamWriter on top of an io.Writer.
func NewBackupStreamWriter(w io.Writer) *BackupStreamWriter {
return &BackupStreamWriter{w, 0}
}
// WriteHeader writes the next backup stream header and prepares for calls to Write().
func (w *BackupStreamWriter) WriteHeader(hdr *BackupHeader) error {
if w.bytesLeft != 0 {
return fmt.Errorf("missing %d bytes", w.bytesLeft)
}
name := utf16.Encode([]rune(hdr.Name))
wsi := win32StreamId{
StreamId: hdr.Id,
Attributes: hdr.Attributes,
Size: uint64(hdr.Size),
NameSize: uint32(len(name) * 2),
}
if hdr.Id == BackupSparseBlock {
// Include space for the int64 block offset
wsi.Size += 8
}
if err := binary.Write(w.w, binary.LittleEndian, &wsi); err != nil {
return err
}
if len(name) != 0 {
if err := binary.Write(w.w, binary.LittleEndian, name); err != nil {
return err
}
}
if hdr.Id == BackupSparseBlock {
if err := binary.Write(w.w, binary.LittleEndian, hdr.Offset); err != nil {
return err
}
}
w.bytesLeft = hdr.Size
return nil
}
// Write writes to the current backup stream.
func (w *BackupStreamWriter) Write(b []byte) (int, error) {
if w.bytesLeft < int64(len(b)) {
return 0, fmt.Errorf("too many bytes by %d", int64(len(b))-w.bytesLeft)
}
n, err := w.w.Write(b)
w.bytesLeft -= int64(n)
return n, err
}
// BackupFileReader provides an io.ReadCloser interface on top of the BackupRead Win32 API.
type BackupFileReader struct {
f *os.File
includeSecurity bool
ctx uintptr
}
// NewBackupFileReader returns a new BackupFileReader from a file handle. If includeSecurity is true,
// Read will attempt to read the security descriptor of the file.
func NewBackupFileReader(f *os.File, includeSecurity bool) *BackupFileReader {
r := &BackupFileReader{f, includeSecurity, 0}
runtime.SetFinalizer(r, func(r *BackupFileReader) { r.Close() })
return r
}
// Read reads a backup stream from the file by calling the Win32 API BackupRead().
func (r *BackupFileReader) Read(b []byte) (int, error) {
var bytesRead uint32
err := backupRead(syscall.Handle(r.f.Fd()), b, &bytesRead, false, r.includeSecurity, &r.ctx)
if err != nil {
return 0, &os.PathError{"BackupRead", r.f.Name(), err}
}
if bytesRead == 0 {
return 0, io.EOF
}
return int(bytesRead), nil
}
// Close frees Win32 resources associated with the BackupFileReader. It does not close
// the underlying file.
func (r *BackupFileReader) Close() error {
if r.ctx != 0 {
backupRead(syscall.Handle(r.f.Fd()), nil, nil, true, false, &r.ctx)
r.ctx = 0
}
return nil
}
// BackupFileWriter provides an io.WriteCloser interface on top of the BackupWrite Win32 API.
type BackupFileWriter struct {
f *os.File
includeSecurity bool
ctx uintptr
}
// NewBackupFileWriter returns a new BackupFileWriter from a file handle. If includeSecurity is true,
// Write() will attempt to restore the security descriptor from the stream.
func NewBackupFileWriter(f *os.File, includeSecurity bool) *BackupFileWriter {
w := &BackupFileWriter{f, includeSecurity, 0}
runtime.SetFinalizer(w, func(w *BackupFileWriter) { w.Close() })
return w
}
// Write restores a portion of the file using the provided backup stream.
func (w *BackupFileWriter) Write(b []byte) (int, error) {
var bytesWritten uint32
err := backupWrite(syscall.Handle(w.f.Fd()), b, &bytesWritten, false, w.includeSecurity, &w.ctx)
if err != nil {
return 0, &os.PathError{"BackupWrite", w.f.Name(), err}
}
if int(bytesWritten) != len(b) {
return int(bytesWritten), errors.New("not all bytes could be written")
}
return len(b), nil
}
// Close frees Win32 resources associated with the BackupFileWriter. It does not
// close the underlying file.
func (w *BackupFileWriter) Close() error {
if w.ctx != 0 {
backupWrite(syscall.Handle(w.f.Fd()), nil, nil, true, false, &w.ctx)
w.ctx = 0
}
return nil
}
// OpenForBackup opens a file or directory, potentially skipping access checks if the backup
// or restore privileges have been acquired.
//
// If the file opened was a directory, it cannot be used with Readdir().
func OpenForBackup(path string, access uint32, share uint32, createmode uint32) (*os.File, error) {
winPath, err := syscall.UTF16FromString(path)
if err != nil {
return nil, err
}
h, err := syscall.CreateFile(&winPath[0], access, share, nil, createmode, syscall.FILE_FLAG_BACKUP_SEMANTICS|syscall.FILE_FLAG_OPEN_REPARSE_POINT, 0)
if err != nil {
err = &os.PathError{Op: "open", Path: path, Err: err}
return nil, err
}
return os.NewFile(uintptr(h), path), nil
}
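A hedged, Windows-only sketch of walking the backup streams of a file by stacking BackupStreamReader on top of BackupFileReader; the import path (github.com/Microsoft/go-winio) and the file path are assumptions for illustration:

```go
//go:build windows

package main

import (
	"fmt"
	"io"
	"os"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	f, err := os.Open(`C:\path\to\file.txt`) // placeholder path
	if err != nil {
		panic(err)
	}
	defer f.Close()

	// Wrap the handle with BackupRead; false = no security descriptor stream.
	br := winio.NewBackupFileReader(f, false)
	defer br.Close()

	// Split the raw backup stream into per-stream headers and bodies.
	sr := winio.NewBackupStreamReader(br)
	for {
		hdr, err := sr.Next()
		if err == io.EOF {
			break
		}
		if err != nil {
			panic(err)
		}
		fmt.Printf("stream id=%d size=%d name=%q\n", hdr.Id, hdr.Size, hdr.Name)
	}
}
```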

View file

@ -0,0 +1,219 @@
package winio
import (
"errors"
"io"
"runtime"
"sync"
"syscall"
"time"
)
//sys cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) = CancelIoEx
//sys createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) = CreateIoCompletionPort
//sys getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) = GetQueuedCompletionStatus
//sys setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) = SetFileCompletionNotificationModes
//sys timeBeginPeriod(period uint32) (n int32) = winmm.timeBeginPeriod
const (
cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS = 1
cFILE_SKIP_SET_EVENT_ON_HANDLE = 2
)
var (
ErrFileClosed = errors.New("file has already been closed")
ErrTimeout = &timeoutError{}
)
type timeoutError struct{}
func (e *timeoutError) Error() string { return "i/o timeout" }
func (e *timeoutError) Timeout() bool { return true }
func (e *timeoutError) Temporary() bool { return true }
var ioInitOnce sync.Once
var ioCompletionPort syscall.Handle
// ioResult contains the result of an asynchronous IO operation
type ioResult struct {
bytes uint32
err error
}
// ioOperation represents an outstanding asynchronous Win32 IO
type ioOperation struct {
o syscall.Overlapped
ch chan ioResult
}
func initIo() {
h, err := createIoCompletionPort(syscall.InvalidHandle, 0, 0, 0xffffffff)
if err != nil {
panic(err)
}
ioCompletionPort = h
go ioCompletionProcessor(h)
}
// win32File implements Reader, Writer, and Closer on a Win32 handle without blocking in a syscall.
// It takes ownership of this handle and will close it if it is garbage collected.
type win32File struct {
handle syscall.Handle
wg sync.WaitGroup
closing bool
readDeadline time.Time
writeDeadline time.Time
}
// makeWin32File makes a new win32File from an existing file handle
func makeWin32File(h syscall.Handle) (*win32File, error) {
f := &win32File{handle: h}
ioInitOnce.Do(initIo)
_, err := createIoCompletionPort(h, ioCompletionPort, 0, 0xffffffff)
if err != nil {
return nil, err
}
err = setFileCompletionNotificationModes(h, cFILE_SKIP_COMPLETION_PORT_ON_SUCCESS|cFILE_SKIP_SET_EVENT_ON_HANDLE)
if err != nil {
return nil, err
}
runtime.SetFinalizer(f, (*win32File).closeHandle)
return f, nil
}
func MakeOpenFile(h syscall.Handle) (io.ReadWriteCloser, error) {
return makeWin32File(h)
}
// closeHandle closes the resources associated with a Win32 handle
func (f *win32File) closeHandle() {
if !f.closing {
// cancel all IO and wait for it to complete
f.closing = true
cancelIoEx(f.handle, nil)
f.wg.Wait()
// at this point, no new IO can start
syscall.Close(f.handle)
f.handle = 0
}
}
// Close closes a win32File.
func (f *win32File) Close() error {
f.closeHandle()
runtime.SetFinalizer(f, nil)
return nil
}
// prepareIo prepares for a new IO operation
func (f *win32File) prepareIo() (*ioOperation, error) {
f.wg.Add(1)
if f.closing {
return nil, ErrFileClosed
}
c := &ioOperation{}
c.ch = make(chan ioResult)
return c, nil
}
// ioCompletionProcessor processes completed async IOs forever
func ioCompletionProcessor(h syscall.Handle) {
// Set the timer resolution to 1. This fixes a performance regression in golang 1.6.
timeBeginPeriod(1)
for {
var bytes uint32
var key uintptr
var op *ioOperation
err := getQueuedCompletionStatus(h, &bytes, &key, &op, syscall.INFINITE)
if op == nil {
panic(err)
}
op.ch <- ioResult{bytes, err}
}
}
// asyncIo processes the return value from ReadFile or WriteFile, blocking until
// the operation has actually completed.
func (f *win32File) asyncIo(c *ioOperation, deadline time.Time, bytes uint32, err error) (int, error) {
if err != syscall.ERROR_IO_PENDING {
f.wg.Done()
return int(bytes), err
} else {
var r ioResult
wait := true
timedout := false
if f.closing {
cancelIoEx(f.handle, &c.o)
} else if !deadline.IsZero() {
now := time.Now()
if !deadline.After(now) {
timedout = true
} else {
timeout := time.After(deadline.Sub(now))
select {
case r = <-c.ch:
wait = false
case <-timeout:
timedout = true
}
}
}
if timedout {
cancelIoEx(f.handle, &c.o)
}
if wait {
r = <-c.ch
}
err = r.err
if err == syscall.ERROR_OPERATION_ABORTED {
if f.closing {
err = ErrFileClosed
} else if timedout {
err = ErrTimeout
}
}
f.wg.Done()
return int(r.bytes), err
}
}
// Read reads from a file handle.
func (f *win32File) Read(b []byte) (int, error) {
c, err := f.prepareIo()
if err != nil {
return 0, err
}
var bytes uint32
err = syscall.ReadFile(f.handle, b, &bytes, &c.o)
n, err := f.asyncIo(c, f.readDeadline, bytes, err)
// Handle EOF conditions.
if err == nil && n == 0 && len(b) != 0 {
return 0, io.EOF
} else if err == syscall.ERROR_BROKEN_PIPE {
return 0, io.EOF
} else {
return n, err
}
}
// Write writes to a file handle.
func (f *win32File) Write(b []byte) (int, error) {
c, err := f.prepareIo()
if err != nil {
return 0, err
}
var bytes uint32
err = syscall.WriteFile(f.handle, b, &bytes, &c.o)
return f.asyncIo(c, f.writeDeadline, bytes, err)
}
func (f *win32File) SetReadDeadline(t time.Time) error {
f.readDeadline = t
return nil
}
func (f *win32File) SetWriteDeadline(t time.Time) error {
f.writeDeadline = t
return nil
}


@ -0,0 +1,54 @@
package winio
import (
"os"
"syscall"
"unsafe"
)
//sys getFileInformationByHandleEx(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) = GetFileInformationByHandleEx
//sys setFileInformationByHandle(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) = SetFileInformationByHandle
const (
fileBasicInfo = 0
fileIDInfo = 0x12
)
// FileBasicInfo contains file access time and file attributes information.
type FileBasicInfo struct {
CreationTime, LastAccessTime, LastWriteTime, ChangeTime syscall.Filetime
FileAttributes uintptr // includes padding
}
// GetFileBasicInfo retrieves times and attributes for a file.
func GetFileBasicInfo(f *os.File) (*FileBasicInfo, error) {
bi := &FileBasicInfo{}
if err := getFileInformationByHandleEx(syscall.Handle(f.Fd()), fileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
return bi, nil
}
// SetFileBasicInfo sets times and attributes for a file.
func SetFileBasicInfo(f *os.File, bi *FileBasicInfo) error {
if err := setFileInformationByHandle(syscall.Handle(f.Fd()), fileBasicInfo, (*byte)(unsafe.Pointer(bi)), uint32(unsafe.Sizeof(*bi))); err != nil {
return &os.PathError{Op: "SetFileInformationByHandle", Path: f.Name(), Err: err}
}
return nil
}
// FileIDInfo contains the volume serial number and file ID for a file. This pair should be
// unique on a system.
type FileIDInfo struct {
VolumeSerialNumber uint64
FileID [16]byte
}
// GetFileID retrieves the unique (volume, file ID) pair for a file.
func GetFileID(f *os.File) (*FileIDInfo, error) {
fileID := &FileIDInfo{}
if err := getFileInformationByHandleEx(syscall.Handle(f.Fd()), fileIDInfo, (*byte)(unsafe.Pointer(fileID)), uint32(unsafe.Sizeof(*fileID))); err != nil {
return nil, &os.PathError{Op: "GetFileInformationByHandleEx", Path: f.Name(), Err: err}
}
return fileID, nil
}
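
GetFileBasicInfo, SetFileBasicInfo, and GetFileID above are thin wrappers over GetFileInformationByHandleEx/SetFileInformationByHandle. A minimal, Windows-only usage sketch; the file path is illustrative and not part of this repository:

```go
package main

import (
	"fmt"
	"log"
	"os"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	f, err := os.Open(`C:\Windows\notepad.exe`) // illustrative path
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	// Read timestamps and attributes via GetFileInformationByHandleEx.
	bi, err := winio.GetFileBasicInfo(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("attributes: %#x\n", bi.FileAttributes)

	// The (volume serial number, file ID) pair uniquely identifies the file.
	id, err := winio.GetFileID(f)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Printf("volume %d, file ID %x\n", id.VolumeSerialNumber, id.FileID)
}
```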


@ -0,0 +1,398 @@
package winio
import (
"errors"
"io"
"net"
"os"
"syscall"
"time"
"unsafe"
)
//sys connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) = ConnectNamedPipe
//sys createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *securityAttributes) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateNamedPipeW
//sys createFile(name string, access uint32, mode uint32, sa *securityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) [failretval==syscall.InvalidHandle] = CreateFileW
//sys waitNamedPipe(name string, timeout uint32) (err error) = WaitNamedPipeW
//sys getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) = GetNamedPipeInfo
//sys getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) = GetNamedPipeHandleStateW
type securityAttributes struct {
Length uint32
SecurityDescriptor *byte
InheritHandle uint32
}
const (
cERROR_PIPE_BUSY = syscall.Errno(231)
cERROR_PIPE_CONNECTED = syscall.Errno(535)
cERROR_SEM_TIMEOUT = syscall.Errno(121)
cPIPE_ACCESS_DUPLEX = 0x3
cFILE_FLAG_FIRST_PIPE_INSTANCE = 0x80000
cSECURITY_SQOS_PRESENT = 0x100000
cSECURITY_ANONYMOUS = 0
cPIPE_REJECT_REMOTE_CLIENTS = 0x8
cPIPE_UNLIMITED_INSTANCES = 255
cNMPWAIT_USE_DEFAULT_WAIT = 0
cNMPWAIT_NOWAIT = 1
cPIPE_TYPE_MESSAGE = 4
cPIPE_READMODE_MESSAGE = 2
)
var (
// ErrPipeListenerClosed is returned for pipe operations on listeners that have been closed.
// This error should match net.errClosing since docker takes a dependency on its text.
ErrPipeListenerClosed = errors.New("use of closed network connection")
errPipeWriteClosed = errors.New("pipe has been closed for write")
)
type win32Pipe struct {
*win32File
path string
}
type win32MessageBytePipe struct {
win32Pipe
writeClosed bool
readEOF bool
}
type pipeAddress string
func (f *win32Pipe) LocalAddr() net.Addr {
return pipeAddress(f.path)
}
func (f *win32Pipe) RemoteAddr() net.Addr {
return pipeAddress(f.path)
}
func (f *win32Pipe) SetDeadline(t time.Time) error {
f.SetReadDeadline(t)
f.SetWriteDeadline(t)
return nil
}
// CloseWrite closes the write side of a message pipe in byte mode.
func (f *win32MessageBytePipe) CloseWrite() error {
if f.writeClosed {
return errPipeWriteClosed
}
_, err := f.win32File.Write(nil)
if err != nil {
return err
}
f.writeClosed = true
return nil
}
// Write writes bytes to a message pipe in byte mode. Zero-byte writes are ignored, since
// they are used to implement CloseWrite().
func (f *win32MessageBytePipe) Write(b []byte) (int, error) {
if f.writeClosed {
return 0, errPipeWriteClosed
}
if len(b) == 0 {
return 0, nil
}
return f.win32File.Write(b)
}
// Read reads bytes from a message pipe in byte mode. A read of a zero-byte message on a message
// mode pipe will return io.EOF, as will all subsequent reads.
func (f *win32MessageBytePipe) Read(b []byte) (int, error) {
if f.readEOF {
return 0, io.EOF
}
n, err := f.win32File.Read(b)
if err == io.EOF {
// If this was the result of a zero-byte read, then
// it is possible that the read was due to a zero-size
// message. Since we are simulating CloseWrite with a
// zero-byte message, ensure that all future Read() calls
// also return EOF.
f.readEOF = true
}
return n, err
}
func (s pipeAddress) Network() string {
return "pipe"
}
func (s pipeAddress) String() string {
return string(s)
}
// DialPipe connects to a named pipe by path, timing out if the connection
// takes longer than the specified duration. If timeout is nil, then the timeout
// is the default timeout established by the pipe server.
func DialPipe(path string, timeout *time.Duration) (net.Conn, error) {
var absTimeout time.Time
if timeout != nil {
absTimeout = time.Now().Add(*timeout)
}
var err error
var h syscall.Handle
for {
h, err = createFile(path, syscall.GENERIC_READ|syscall.GENERIC_WRITE, 0, nil, syscall.OPEN_EXISTING, syscall.FILE_FLAG_OVERLAPPED|cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err != cERROR_PIPE_BUSY {
break
}
now := time.Now()
var ms uint32
if absTimeout.IsZero() {
ms = cNMPWAIT_USE_DEFAULT_WAIT
} else if now.After(absTimeout) {
ms = cNMPWAIT_NOWAIT
} else {
ms = uint32(absTimeout.Sub(now).Nanoseconds() / 1000 / 1000)
}
err = waitNamedPipe(path, ms)
if err != nil {
if err == cERROR_SEM_TIMEOUT {
return nil, ErrTimeout
}
break
}
}
if err != nil {
return nil, &os.PathError{Op: "open", Path: path, Err: err}
}
var flags uint32
err = getNamedPipeInfo(h, &flags, nil, nil, nil)
if err != nil {
return nil, err
}
var state uint32
err = getNamedPipeHandleState(h, &state, nil, nil, nil, nil, 0)
if err != nil {
return nil, err
}
if state&cPIPE_READMODE_MESSAGE != 0 {
return nil, &os.PathError{Op: "open", Path: path, Err: errors.New("message readmode pipes not supported")}
}
f, err := makeWin32File(h)
if err != nil {
syscall.Close(h)
return nil, err
}
// If the pipe is in message mode, return a message byte pipe, which
// supports CloseWrite().
if flags&cPIPE_TYPE_MESSAGE != 0 {
return &win32MessageBytePipe{
win32Pipe: win32Pipe{win32File: f, path: path},
}, nil
}
return &win32Pipe{win32File: f, path: path}, nil
}
type acceptResponse struct {
f *win32File
err error
}
type win32PipeListener struct {
firstHandle syscall.Handle
path string
securityDescriptor []byte
config PipeConfig
acceptCh chan (chan acceptResponse)
closeCh chan int
doneCh chan int
}
func makeServerPipeHandle(path string, securityDescriptor []byte, c *PipeConfig, first bool) (syscall.Handle, error) {
var flags uint32 = cPIPE_ACCESS_DUPLEX | syscall.FILE_FLAG_OVERLAPPED
if first {
flags |= cFILE_FLAG_FIRST_PIPE_INSTANCE
}
var mode uint32 = cPIPE_REJECT_REMOTE_CLIENTS
if c.MessageMode {
mode |= cPIPE_TYPE_MESSAGE
}
var sa securityAttributes
sa.Length = uint32(unsafe.Sizeof(sa))
if securityDescriptor != nil {
sa.SecurityDescriptor = &securityDescriptor[0]
}
h, err := createNamedPipe(path, flags, mode, cPIPE_UNLIMITED_INSTANCES, uint32(c.OutputBufferSize), uint32(c.InputBufferSize), 0, &sa)
if err != nil {
return 0, &os.PathError{Op: "open", Path: path, Err: err}
}
return h, nil
}
func (l *win32PipeListener) makeServerPipe() (*win32File, error) {
h, err := makeServerPipeHandle(l.path, l.securityDescriptor, &l.config, false)
if err != nil {
return nil, err
}
f, err := makeWin32File(h)
if err != nil {
syscall.Close(h)
return nil, err
}
return f, nil
}
func (l *win32PipeListener) listenerRoutine() {
closed := false
for !closed {
select {
case <-l.closeCh:
closed = true
case responseCh := <-l.acceptCh:
p, err := l.makeServerPipe()
if err == nil {
// Wait for the client to connect.
ch := make(chan error)
go func() {
ch <- connectPipe(p)
}()
select {
case err = <-ch:
if err != nil {
p.Close()
p = nil
}
case <-l.closeCh:
// Abort the connect request by closing the handle.
p.Close()
p = nil
err = <-ch
if err == nil || err == ErrFileClosed {
err = ErrPipeListenerClosed
}
closed = true
}
}
responseCh <- acceptResponse{p, err}
}
}
syscall.Close(l.firstHandle)
l.firstHandle = 0
// Notify Close() and Accept() callers that the handle has been closed.
close(l.doneCh)
}
// PipeConfig contains configuration for the pipe listener.
type PipeConfig struct {
// SecurityDescriptor contains a Windows security descriptor in SDDL format.
SecurityDescriptor string
// MessageMode determines whether the pipe is in byte or message mode. In either
// case the pipe is read in byte mode by default. The only practical difference in
// this implementation is that CloseWrite() is only supported for message mode pipes;
// CloseWrite() is implemented as a zero-byte write, but zero-byte writes are only
// transferred to the reader (and returned as io.EOF in this implementation)
// when the pipe is in message mode.
MessageMode bool
// InputBufferSize specifies the size of the input buffer, in bytes.
InputBufferSize int32
// OutputBufferSize specifies the size of the output buffer, in bytes.
OutputBufferSize int32
}
// ListenPipe creates a listener on a Windows named pipe path, e.g. \\.\pipe\mypipe.
// The pipe must not already exist.
func ListenPipe(path string, c *PipeConfig) (net.Listener, error) {
var (
sd []byte
err error
)
if c == nil {
c = &PipeConfig{}
}
if c.SecurityDescriptor != "" {
sd, err = SddlToSecurityDescriptor(c.SecurityDescriptor)
if err != nil {
return nil, err
}
}
h, err := makeServerPipeHandle(path, sd, c, true)
if err != nil {
return nil, err
}
// Immediately open and then close a client handle so that the named pipe is
// created but not currently accepting connections.
h2, err := createFile(path, 0, 0, nil, syscall.OPEN_EXISTING, cSECURITY_SQOS_PRESENT|cSECURITY_ANONYMOUS, 0)
if err != nil {
syscall.Close(h)
return nil, err
}
syscall.Close(h2)
l := &win32PipeListener{
firstHandle: h,
path: path,
securityDescriptor: sd,
config: *c,
acceptCh: make(chan (chan acceptResponse)),
closeCh: make(chan int),
doneCh: make(chan int),
}
go l.listenerRoutine()
return l, nil
}
func connectPipe(p *win32File) error {
c, err := p.prepareIo()
if err != nil {
return err
}
err = connectNamedPipe(p.handle, &c.o)
_, err = p.asyncIo(c, time.Time{}, 0, err)
if err != nil && err != cERROR_PIPE_CONNECTED {
return err
}
return nil
}
func (l *win32PipeListener) Accept() (net.Conn, error) {
ch := make(chan acceptResponse)
select {
case l.acceptCh <- ch:
response := <-ch
err := response.err
if err != nil {
return nil, err
}
if l.config.MessageMode {
return &win32MessageBytePipe{
win32Pipe: win32Pipe{win32File: response.f, path: l.path},
}, nil
}
return &win32Pipe{win32File: response.f, path: l.path}, nil
case <-l.doneCh:
return nil, ErrPipeListenerClosed
}
}
func (l *win32PipeListener) Close() error {
select {
case l.closeCh <- 1:
<-l.doneCh
case <-l.doneCh:
}
return nil
}
func (l *win32PipeListener) Addr() net.Addr {
return pipeAddress(l.path)
}
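
ListenPipe and DialPipe above are the server and client entry points for named pipes. A minimal, Windows-only sketch of an echo exchange over a pipe; the pipe name and payload are illustrative only:

```go
package main

import (
	"fmt"
	"time"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	path := `\\.\pipe\example-pipe` // hypothetical pipe name

	l, err := winio.ListenPipe(path, nil)
	if err != nil {
		panic(err)
	}
	defer l.Close()

	// Server: accept one connection and echo back what it reads.
	go func() {
		conn, err := l.Accept()
		if err != nil {
			return
		}
		defer conn.Close()
		buf := make([]byte, 64)
		n, _ := conn.Read(buf)
		conn.Write(buf[:n])
	}()

	// Client: dial with a timeout, send a message, read the echo.
	timeout := 2 * time.Second
	conn, err := winio.DialPipe(path, &timeout)
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	conn.Write([]byte("hello"))
	reply := make([]byte, 64)
	n, _ := conn.Read(reply)
	fmt.Println(string(reply[:n]))
}
```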


@ -0,0 +1,191 @@
package winio
import (
"bytes"
"encoding/binary"
"fmt"
"runtime"
"sync"
"syscall"
"unicode/utf16"
"golang.org/x/sys/windows"
)
//sys adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) [true] = advapi32.AdjustTokenPrivileges
//sys impersonateSelf(level uint32) (err error) = advapi32.ImpersonateSelf
//sys revertToSelf() (err error) = advapi32.RevertToSelf
//sys openThreadToken(thread syscall.Handle, accessMask uint32, openAsSelf bool, token *windows.Token) (err error) = advapi32.OpenThreadToken
//sys getCurrentThread() (h syscall.Handle) = GetCurrentThread
//sys lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) = advapi32.LookupPrivilegeValueW
//sys lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) = advapi32.LookupPrivilegeNameW
//sys lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) = advapi32.LookupPrivilegeDisplayNameW
const (
SE_PRIVILEGE_ENABLED = 2
ERROR_NOT_ALL_ASSIGNED syscall.Errno = 1300
SeBackupPrivilege = "SeBackupPrivilege"
SeRestorePrivilege = "SeRestorePrivilege"
)
const (
securityAnonymous = iota
securityIdentification
securityImpersonation
securityDelegation
)
var (
privNames = make(map[string]uint64)
privNameMutex sync.Mutex
)
// PrivilegeError represents an error enabling privileges.
type PrivilegeError struct {
privileges []uint64
}
func (e *PrivilegeError) Error() string {
s := ""
if len(e.privileges) > 1 {
s = "Could not enable privileges "
} else {
s = "Could not enable privilege "
}
for i, p := range e.privileges {
if i != 0 {
s += ", "
}
s += `"`
s += getPrivilegeName(p)
s += `"`
}
return s
}
// RunWithPrivilege enables a single privilege for a function call.
func RunWithPrivilege(name string, fn func() error) error {
return RunWithPrivileges([]string{name}, fn)
}
// RunWithPrivileges enables privileges for a function call.
func RunWithPrivileges(names []string, fn func() error) error {
privileges, err := mapPrivileges(names)
if err != nil {
return err
}
runtime.LockOSThread()
defer runtime.UnlockOSThread()
token, err := newThreadToken()
if err != nil {
return err
}
defer releaseThreadToken(token)
err = adjustPrivileges(token, privileges)
if err != nil {
return err
}
return fn()
}
func mapPrivileges(names []string) ([]uint64, error) {
var privileges []uint64
privNameMutex.Lock()
defer privNameMutex.Unlock()
for _, name := range names {
p, ok := privNames[name]
if !ok {
err := lookupPrivilegeValue("", name, &p)
if err != nil {
return nil, err
}
privNames[name] = p
}
privileges = append(privileges, p)
}
return privileges, nil
}
// EnableProcessPrivileges enables privileges globally for the process.
func EnableProcessPrivileges(names []string) error {
privileges, err := mapPrivileges(names)
if err != nil {
return err
}
p, _ := windows.GetCurrentProcess()
var token windows.Token
err = windows.OpenProcessToken(p, windows.TOKEN_ADJUST_PRIVILEGES|windows.TOKEN_QUERY, &token)
if err != nil {
return err
}
defer token.Close()
return adjustPrivileges(token, privileges)
}
func adjustPrivileges(token windows.Token, privileges []uint64) error {
var b bytes.Buffer
binary.Write(&b, binary.LittleEndian, uint32(len(privileges)))
for _, p := range privileges {
binary.Write(&b, binary.LittleEndian, p)
binary.Write(&b, binary.LittleEndian, uint32(SE_PRIVILEGE_ENABLED))
}
prevState := make([]byte, b.Len())
reqSize := uint32(0)
success, err := adjustTokenPrivileges(token, false, &b.Bytes()[0], uint32(len(prevState)), &prevState[0], &reqSize)
if !success {
return err
}
if err == ERROR_NOT_ALL_ASSIGNED {
return &PrivilegeError{privileges}
}
return nil
}
func getPrivilegeName(luid uint64) string {
var nameBuffer [256]uint16
bufSize := uint32(len(nameBuffer))
err := lookupPrivilegeName("", &luid, &nameBuffer[0], &bufSize)
if err != nil {
return fmt.Sprintf("<unknown privilege %d>", luid)
}
var displayNameBuffer [256]uint16
displayBufSize := uint32(len(displayNameBuffer))
var langID uint32
err = lookupPrivilegeDisplayName("", &nameBuffer[0], &displayNameBuffer[0], &displayBufSize, &langID)
if err != nil {
return fmt.Sprintf("<unknown privilege %s>", string(utf16.Decode(nameBuffer[:bufSize])))
}
return string(utf16.Decode(displayNameBuffer[:displayBufSize]))
}
func newThreadToken() (windows.Token, error) {
err := impersonateSelf(securityImpersonation)
if err != nil {
return 0, err
}
var token windows.Token
err = openThreadToken(getCurrentThread(), syscall.TOKEN_ADJUST_PRIVILEGES|syscall.TOKEN_QUERY, false, &token)
if err != nil {
rerr := revertToSelf()
if rerr != nil {
panic(rerr)
}
return 0, err
}
return token, nil
}
func releaseThreadToken(h windows.Token) {
err := revertToSelf()
if err != nil {
panic(err)
}
h.Close()
}
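
RunWithPrivilege and RunWithPrivileges above enable privileges on the calling thread only for the duration of the callback. A hedged, Windows-only sketch, assuming the process account is allowed to hold SeBackupPrivilege; the callback body is illustrative:

```go
package main

import (
	"log"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	// Enable SeBackupPrivilege only while the callback runs.
	err := winio.RunWithPrivilege(winio.SeBackupPrivilege, func() error {
		// privileged work would go here (illustrative)
		return nil
	})
	if err != nil {
		// Includes *PrivilegeError when the privilege could not be enabled.
		log.Fatal(err)
	}
}
```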


@ -0,0 +1,128 @@
package winio
import (
"bytes"
"encoding/binary"
"fmt"
"strings"
"unicode/utf16"
"unsafe"
)
const (
reparseTagMountPoint = 0xA0000003
reparseTagSymlink = 0xA000000C
)
type reparseDataBuffer struct {
ReparseTag uint32
ReparseDataLength uint16
Reserved uint16
SubstituteNameOffset uint16
SubstituteNameLength uint16
PrintNameOffset uint16
PrintNameLength uint16
}
// ReparsePoint describes a Win32 symlink or mount point.
type ReparsePoint struct {
Target string
IsMountPoint bool
}
// UnsupportedReparsePointError is returned when trying to decode a reparse point
// that is neither a symlink nor a mount point.
type UnsupportedReparsePointError struct {
Tag uint32
}
func (e *UnsupportedReparsePointError) Error() string {
return fmt.Sprintf("unsupported reparse point %x", e.Tag)
}
// DecodeReparsePoint decodes a Win32 REPARSE_DATA_BUFFER structure containing either a symlink
// or a mount point.
func DecodeReparsePoint(b []byte) (*ReparsePoint, error) {
tag := binary.LittleEndian.Uint32(b[0:4])
return DecodeReparsePointData(tag, b[8:])
}
func DecodeReparsePointData(tag uint32, b []byte) (*ReparsePoint, error) {
isMountPoint := false
switch tag {
case reparseTagMountPoint:
isMountPoint = true
case reparseTagSymlink:
default:
return nil, &UnsupportedReparsePointError{tag}
}
nameOffset := 8 + binary.LittleEndian.Uint16(b[4:6])
if !isMountPoint {
nameOffset += 4
}
nameLength := binary.LittleEndian.Uint16(b[6:8])
name := make([]uint16, nameLength/2)
err := binary.Read(bytes.NewReader(b[nameOffset:nameOffset+nameLength]), binary.LittleEndian, &name)
if err != nil {
return nil, err
}
return &ReparsePoint{string(utf16.Decode(name)), isMountPoint}, nil
}
func isDriveLetter(c byte) bool {
return (c >= 'a' && c <= 'z') || (c >= 'A' && c <= 'Z')
}
// EncodeReparsePoint encodes a Win32 REPARSE_DATA_BUFFER structure describing a symlink or
// mount point.
func EncodeReparsePoint(rp *ReparsePoint) []byte {
// Generate an NT path and determine if this is a relative path.
var ntTarget string
relative := false
if strings.HasPrefix(rp.Target, `\\?\`) {
ntTarget = `\??\` + rp.Target[4:]
} else if strings.HasPrefix(rp.Target, `\\`) {
ntTarget = `\??\UNC\` + rp.Target[2:]
} else if len(rp.Target) >= 2 && isDriveLetter(rp.Target[0]) && rp.Target[1] == ':' {
ntTarget = `\??\` + rp.Target
} else {
ntTarget = rp.Target
relative = true
}
// The paths must be NUL-terminated even though they are counted strings.
target16 := utf16.Encode([]rune(rp.Target + "\x00"))
ntTarget16 := utf16.Encode([]rune(ntTarget + "\x00"))
size := int(unsafe.Sizeof(reparseDataBuffer{})) - 8
size += len(ntTarget16)*2 + len(target16)*2
tag := uint32(reparseTagMountPoint)
if !rp.IsMountPoint {
tag = reparseTagSymlink
size += 4 // Add room for symlink flags
}
data := reparseDataBuffer{
ReparseTag: tag,
ReparseDataLength: uint16(size),
SubstituteNameOffset: 0,
SubstituteNameLength: uint16((len(ntTarget16) - 1) * 2),
PrintNameOffset: uint16(len(ntTarget16) * 2),
PrintNameLength: uint16((len(target16) - 1) * 2),
}
var b bytes.Buffer
binary.Write(&b, binary.LittleEndian, &data)
if !rp.IsMountPoint {
flags := uint32(0)
if relative {
flags |= 1
}
binary.Write(&b, binary.LittleEndian, flags)
}
binary.Write(&b, binary.LittleEndian, ntTarget16)
binary.Write(&b, binary.LittleEndian, target16)
return b.Bytes()
}
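
EncodeReparsePoint and DecodeReparsePoint above convert between ReparsePoint values and the on-disk REPARSE_DATA_BUFFER layout. A small round-trip sketch; the target path is illustrative:

```go
package main

import (
	"fmt"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	// Encode a symlink reparse buffer and decode it again.
	rp := &winio.ReparsePoint{Target: `C:\Target`, IsMountPoint: false}
	b := winio.EncodeReparsePoint(rp)

	decoded, err := winio.DecodeReparsePoint(b)
	if err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	fmt.Println(decoded.Target, decoded.IsMountPoint)
}
```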

96
integration/vendor/github.com/Microsoft/go-winio/sd.go generated vendored Normal file

@ -0,0 +1,96 @@
package winio
import (
"syscall"
"unsafe"
)
//sys lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) = advapi32.LookupAccountNameW
//sys convertSidToStringSid(sid *byte, str **uint16) (err error) = advapi32.ConvertSidToStringSidW
//sys convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) = advapi32.ConvertStringSecurityDescriptorToSecurityDescriptorW
//sys convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) = advapi32.ConvertSecurityDescriptorToStringSecurityDescriptorW
//sys localFree(mem uintptr) = LocalFree
//sys getSecurityDescriptorLength(sd uintptr) (len uint32) = advapi32.GetSecurityDescriptorLength
const (
cERROR_NONE_MAPPED = syscall.Errno(1332)
)
type AccountLookupError struct {
Name string
Err error
}
func (e *AccountLookupError) Error() string {
if e.Name == "" {
return "lookup account: empty account name specified"
}
var s string
switch e.Err {
case cERROR_NONE_MAPPED:
s = "not found"
default:
s = e.Err.Error()
}
return "lookup account " + e.Name + ": " + s
}
type SddlConversionError struct {
Sddl string
Err error
}
func (e *SddlConversionError) Error() string {
return "convert " + e.Sddl + ": " + e.Err.Error()
}
// LookupSidByName looks up the SID of an account by name
func LookupSidByName(name string) (sid string, err error) {
if name == "" {
return "", &AccountLookupError{name, cERROR_NONE_MAPPED}
}
var sidSize, sidNameUse, refDomainSize uint32
err = lookupAccountName(nil, name, nil, &sidSize, nil, &refDomainSize, &sidNameUse)
if err != nil && err != syscall.ERROR_INSUFFICIENT_BUFFER {
return "", &AccountLookupError{name, err}
}
sidBuffer := make([]byte, sidSize)
refDomainBuffer := make([]uint16, refDomainSize)
err = lookupAccountName(nil, name, &sidBuffer[0], &sidSize, &refDomainBuffer[0], &refDomainSize, &sidNameUse)
if err != nil {
return "", &AccountLookupError{name, err}
}
var strBuffer *uint16
err = convertSidToStringSid(&sidBuffer[0], &strBuffer)
if err != nil {
return "", &AccountLookupError{name, err}
}
sid = syscall.UTF16ToString((*[0xffff]uint16)(unsafe.Pointer(strBuffer))[:])
localFree(uintptr(unsafe.Pointer(strBuffer)))
return sid, nil
}
func SddlToSecurityDescriptor(sddl string) ([]byte, error) {
var sdBuffer uintptr
err := convertStringSecurityDescriptorToSecurityDescriptor(sddl, 1, &sdBuffer, nil)
if err != nil {
return nil, &SddlConversionError{sddl, err}
}
defer localFree(sdBuffer)
sd := make([]byte, getSecurityDescriptorLength(sdBuffer))
copy(sd, (*[0xffff]byte)(unsafe.Pointer(sdBuffer))[:len(sd)])
return sd, nil
}
func SecurityDescriptorToSddl(sd []byte) (string, error) {
var sddl *uint16
// The returned string length seems to include an arbitrary number of terminating NULs.
// Don't use it.
err := convertSecurityDescriptorToStringSecurityDescriptor(&sd[0], 1, 0xff, &sddl, nil)
if err != nil {
return "", err
}
defer localFree(uintptr(unsafe.Pointer(sddl)))
return syscall.UTF16ToString((*[0xffff]uint16)(unsafe.Pointer(sddl))[:]), nil
}
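
The helpers above convert between SDDL strings and binary security descriptors and resolve account names to SIDs. A hedged, Windows-only sketch; the SDDL string and account name are illustrative and may not exist on a given machine:

```go
package main

import (
	"fmt"
	"log"

	winio "github.com/Microsoft/go-winio"
)

func main() {
	// "D:P(A;;GA;;;WD)" grants Everyone generic access; illustrative SDDL only.
	sd, err := winio.SddlToSecurityDescriptor("D:P(A;;GA;;;WD)")
	if err != nil {
		log.Fatal(err)
	}

	// Convert the binary descriptor back to SDDL text.
	sddl, err := winio.SecurityDescriptorToSddl(sd)
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(sddl)

	// Resolve an account name to its SID string (hypothetical account).
	sid, err := winio.LookupSidByName("Administrator")
	if err != nil {
		log.Fatal(err)
	}
	fmt.Println(sid)
}
```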


@ -0,0 +1,3 @@
package winio
//go:generate go run $GOROOT/src/syscall/mksyscall_windows.go -output zsyscall.go file.go pipe.go sd.go fileinfo.go privilege.go backup.go


@ -0,0 +1,496 @@
// MACHINE GENERATED BY 'go generate' COMMAND; DO NOT EDIT
package winio
import (
"syscall"
"unsafe"
"golang.org/x/sys/windows"
)
var _ unsafe.Pointer
var (
modkernel32 = syscall.NewLazyDLL("kernel32.dll")
modwinmm = syscall.NewLazyDLL("winmm.dll")
modadvapi32 = syscall.NewLazyDLL("advapi32.dll")
procCancelIoEx = modkernel32.NewProc("CancelIoEx")
procCreateIoCompletionPort = modkernel32.NewProc("CreateIoCompletionPort")
procGetQueuedCompletionStatus = modkernel32.NewProc("GetQueuedCompletionStatus")
procSetFileCompletionNotificationModes = modkernel32.NewProc("SetFileCompletionNotificationModes")
proctimeBeginPeriod = modwinmm.NewProc("timeBeginPeriod")
procConnectNamedPipe = modkernel32.NewProc("ConnectNamedPipe")
procCreateNamedPipeW = modkernel32.NewProc("CreateNamedPipeW")
procCreateFileW = modkernel32.NewProc("CreateFileW")
procWaitNamedPipeW = modkernel32.NewProc("WaitNamedPipeW")
procGetNamedPipeInfo = modkernel32.NewProc("GetNamedPipeInfo")
procGetNamedPipeHandleStateW = modkernel32.NewProc("GetNamedPipeHandleStateW")
procLookupAccountNameW = modadvapi32.NewProc("LookupAccountNameW")
procConvertSidToStringSidW = modadvapi32.NewProc("ConvertSidToStringSidW")
procConvertStringSecurityDescriptorToSecurityDescriptorW = modadvapi32.NewProc("ConvertStringSecurityDescriptorToSecurityDescriptorW")
procConvertSecurityDescriptorToStringSecurityDescriptorW = modadvapi32.NewProc("ConvertSecurityDescriptorToStringSecurityDescriptorW")
procLocalFree = modkernel32.NewProc("LocalFree")
procGetSecurityDescriptorLength = modadvapi32.NewProc("GetSecurityDescriptorLength")
procGetFileInformationByHandleEx = modkernel32.NewProc("GetFileInformationByHandleEx")
procSetFileInformationByHandle = modkernel32.NewProc("SetFileInformationByHandle")
procAdjustTokenPrivileges = modadvapi32.NewProc("AdjustTokenPrivileges")
procImpersonateSelf = modadvapi32.NewProc("ImpersonateSelf")
procRevertToSelf = modadvapi32.NewProc("RevertToSelf")
procOpenThreadToken = modadvapi32.NewProc("OpenThreadToken")
procGetCurrentThread = modkernel32.NewProc("GetCurrentThread")
procLookupPrivilegeValueW = modadvapi32.NewProc("LookupPrivilegeValueW")
procLookupPrivilegeNameW = modadvapi32.NewProc("LookupPrivilegeNameW")
procLookupPrivilegeDisplayNameW = modadvapi32.NewProc("LookupPrivilegeDisplayNameW")
procBackupRead = modkernel32.NewProc("BackupRead")
procBackupWrite = modkernel32.NewProc("BackupWrite")
)
func cancelIoEx(file syscall.Handle, o *syscall.Overlapped) (err error) {
r1, _, e1 := syscall.Syscall(procCancelIoEx.Addr(), 2, uintptr(file), uintptr(unsafe.Pointer(o)), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func createIoCompletionPort(file syscall.Handle, port syscall.Handle, key uintptr, threadCount uint32) (newport syscall.Handle, err error) {
r0, _, e1 := syscall.Syscall6(procCreateIoCompletionPort.Addr(), 4, uintptr(file), uintptr(port), uintptr(key), uintptr(threadCount), 0, 0)
newport = syscall.Handle(r0)
if newport == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getQueuedCompletionStatus(port syscall.Handle, bytes *uint32, key *uintptr, o **ioOperation, timeout uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procGetQueuedCompletionStatus.Addr(), 5, uintptr(port), uintptr(unsafe.Pointer(bytes)), uintptr(unsafe.Pointer(key)), uintptr(unsafe.Pointer(o)), uintptr(timeout), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func setFileCompletionNotificationModes(h syscall.Handle, flags uint8) (err error) {
r1, _, e1 := syscall.Syscall(procSetFileCompletionNotificationModes.Addr(), 2, uintptr(h), uintptr(flags), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func timeBeginPeriod(period uint32) (n int32) {
r0, _, _ := syscall.Syscall(proctimeBeginPeriod.Addr(), 1, uintptr(period), 0, 0)
n = int32(r0)
return
}
func connectNamedPipe(pipe syscall.Handle, o *syscall.Overlapped) (err error) {
r1, _, e1 := syscall.Syscall(procConnectNamedPipe.Addr(), 2, uintptr(pipe), uintptr(unsafe.Pointer(o)), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func createNamedPipe(name string, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *securityAttributes) (handle syscall.Handle, err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _createNamedPipe(_p0, flags, pipeMode, maxInstances, outSize, inSize, defaultTimeout, sa)
}
func _createNamedPipe(name *uint16, flags uint32, pipeMode uint32, maxInstances uint32, outSize uint32, inSize uint32, defaultTimeout uint32, sa *securityAttributes) (handle syscall.Handle, err error) {
r0, _, e1 := syscall.Syscall9(procCreateNamedPipeW.Addr(), 8, uintptr(unsafe.Pointer(name)), uintptr(flags), uintptr(pipeMode), uintptr(maxInstances), uintptr(outSize), uintptr(inSize), uintptr(defaultTimeout), uintptr(unsafe.Pointer(sa)), 0)
handle = syscall.Handle(r0)
if handle == syscall.InvalidHandle {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func createFile(name string, access uint32, mode uint32, sa *securityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _createFile(_p0, access, mode, sa, createmode, attrs, templatefile)
}
func _createFile(name *uint16, access uint32, mode uint32, sa *securityAttributes, createmode uint32, attrs uint32, templatefile syscall.Handle) (handle syscall.Handle, err error) {
r0, _, e1 := syscall.Syscall9(procCreateFileW.Addr(), 7, uintptr(unsafe.Pointer(name)), uintptr(access), uintptr(mode), uintptr(unsafe.Pointer(sa)), uintptr(createmode), uintptr(attrs), uintptr(templatefile), 0, 0)
handle = syscall.Handle(r0)
if handle == syscall.InvalidHandle {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func waitNamedPipe(name string, timeout uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _waitNamedPipe(_p0, timeout)
}
func _waitNamedPipe(name *uint16, timeout uint32) (err error) {
r1, _, e1 := syscall.Syscall(procWaitNamedPipeW.Addr(), 2, uintptr(unsafe.Pointer(name)), uintptr(timeout), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getNamedPipeInfo(pipe syscall.Handle, flags *uint32, outSize *uint32, inSize *uint32, maxInstances *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procGetNamedPipeInfo.Addr(), 5, uintptr(pipe), uintptr(unsafe.Pointer(flags)), uintptr(unsafe.Pointer(outSize)), uintptr(unsafe.Pointer(inSize)), uintptr(unsafe.Pointer(maxInstances)), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getNamedPipeHandleState(pipe syscall.Handle, state *uint32, curInstances *uint32, maxCollectionCount *uint32, collectDataTimeout *uint32, userName *uint16, maxUserNameSize uint32) (err error) {
r1, _, e1 := syscall.Syscall9(procGetNamedPipeHandleStateW.Addr(), 7, uintptr(pipe), uintptr(unsafe.Pointer(state)), uintptr(unsafe.Pointer(curInstances)), uintptr(unsafe.Pointer(maxCollectionCount)), uintptr(unsafe.Pointer(collectDataTimeout)), uintptr(unsafe.Pointer(userName)), uintptr(maxUserNameSize), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func lookupAccountName(systemName *uint16, accountName string, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(accountName)
if err != nil {
return
}
return _lookupAccountName(systemName, _p0, sid, sidSize, refDomain, refDomainSize, sidNameUse)
}
func _lookupAccountName(systemName *uint16, accountName *uint16, sid *byte, sidSize *uint32, refDomain *uint16, refDomainSize *uint32, sidNameUse *uint32) (err error) {
r1, _, e1 := syscall.Syscall9(procLookupAccountNameW.Addr(), 7, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(accountName)), uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(sidSize)), uintptr(unsafe.Pointer(refDomain)), uintptr(unsafe.Pointer(refDomainSize)), uintptr(unsafe.Pointer(sidNameUse)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func convertSidToStringSid(sid *byte, str **uint16) (err error) {
r1, _, e1 := syscall.Syscall(procConvertSidToStringSidW.Addr(), 2, uintptr(unsafe.Pointer(sid)), uintptr(unsafe.Pointer(str)), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func convertStringSecurityDescriptorToSecurityDescriptor(str string, revision uint32, sd *uintptr, size *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(str)
if err != nil {
return
}
return _convertStringSecurityDescriptorToSecurityDescriptor(_p0, revision, sd, size)
}
func _convertStringSecurityDescriptorToSecurityDescriptor(str *uint16, revision uint32, sd *uintptr, size *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procConvertStringSecurityDescriptorToSecurityDescriptorW.Addr(), 4, uintptr(unsafe.Pointer(str)), uintptr(revision), uintptr(unsafe.Pointer(sd)), uintptr(unsafe.Pointer(size)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func convertSecurityDescriptorToStringSecurityDescriptor(sd *byte, revision uint32, secInfo uint32, sddl **uint16, sddlSize *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procConvertSecurityDescriptorToStringSecurityDescriptorW.Addr(), 5, uintptr(unsafe.Pointer(sd)), uintptr(revision), uintptr(secInfo), uintptr(unsafe.Pointer(sddl)), uintptr(unsafe.Pointer(sddlSize)), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func localFree(mem uintptr) {
syscall.Syscall(procLocalFree.Addr(), 1, uintptr(mem), 0, 0)
return
}
func getSecurityDescriptorLength(sd uintptr) (len uint32) {
r0, _, _ := syscall.Syscall(procGetSecurityDescriptorLength.Addr(), 1, uintptr(sd), 0, 0)
len = uint32(r0)
return
}
func getFileInformationByHandleEx(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procGetFileInformationByHandleEx.Addr(), 4, uintptr(h), uintptr(class), uintptr(unsafe.Pointer(buffer)), uintptr(size), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func setFileInformationByHandle(h syscall.Handle, class uint32, buffer *byte, size uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procSetFileInformationByHandle.Addr(), 4, uintptr(h), uintptr(class), uintptr(unsafe.Pointer(buffer)), uintptr(size), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func adjustTokenPrivileges(token windows.Token, releaseAll bool, input *byte, outputSize uint32, output *byte, requiredSize *uint32) (success bool, err error) {
var _p0 uint32
if releaseAll {
_p0 = 1
} else {
_p0 = 0
}
r0, _, e1 := syscall.Syscall6(procAdjustTokenPrivileges.Addr(), 6, uintptr(token), uintptr(_p0), uintptr(unsafe.Pointer(input)), uintptr(outputSize), uintptr(unsafe.Pointer(output)), uintptr(unsafe.Pointer(requiredSize)))
success = r0 != 0
if true {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func impersonateSelf(level uint32) (err error) {
r1, _, e1 := syscall.Syscall(procImpersonateSelf.Addr(), 1, uintptr(level), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func revertToSelf() (err error) {
r1, _, e1 := syscall.Syscall(procRevertToSelf.Addr(), 0, 0, 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func openThreadToken(thread syscall.Handle, accessMask uint32, openAsSelf bool, token *windows.Token) (err error) {
var _p0 uint32
if openAsSelf {
_p0 = 1
} else {
_p0 = 0
}
r1, _, e1 := syscall.Syscall6(procOpenThreadToken.Addr(), 4, uintptr(thread), uintptr(accessMask), uintptr(_p0), uintptr(unsafe.Pointer(token)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func getCurrentThread() (h syscall.Handle) {
r0, _, _ := syscall.Syscall(procGetCurrentThread.Addr(), 0, 0, 0, 0)
h = syscall.Handle(r0)
return
}
func lookupPrivilegeValue(systemName string, name string, luid *uint64) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
var _p1 *uint16
_p1, err = syscall.UTF16PtrFromString(name)
if err != nil {
return
}
return _lookupPrivilegeValue(_p0, _p1, luid)
}
func _lookupPrivilegeValue(systemName *uint16, name *uint16, luid *uint64) (err error) {
r1, _, e1 := syscall.Syscall(procLookupPrivilegeValueW.Addr(), 3, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(luid)))
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func lookupPrivilegeName(systemName string, luid *uint64, buffer *uint16, size *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
return _lookupPrivilegeName(_p0, luid, buffer, size)
}
func _lookupPrivilegeName(systemName *uint16, luid *uint64, buffer *uint16, size *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procLookupPrivilegeNameW.Addr(), 4, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(luid)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func lookupPrivilegeDisplayName(systemName string, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
var _p0 *uint16
_p0, err = syscall.UTF16PtrFromString(systemName)
if err != nil {
return
}
return _lookupPrivilegeDisplayName(_p0, name, buffer, size, languageId)
}
func _lookupPrivilegeDisplayName(systemName *uint16, name *uint16, buffer *uint16, size *uint32, languageId *uint32) (err error) {
r1, _, e1 := syscall.Syscall6(procLookupPrivilegeDisplayNameW.Addr(), 5, uintptr(unsafe.Pointer(systemName)), uintptr(unsafe.Pointer(name)), uintptr(unsafe.Pointer(buffer)), uintptr(unsafe.Pointer(size)), uintptr(unsafe.Pointer(languageId)), 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func backupRead(h syscall.Handle, b []byte, bytesRead *uint32, abort bool, processSecurity bool, context *uintptr) (err error) {
var _p0 *byte
if len(b) > 0 {
_p0 = &b[0]
}
var _p1 uint32
if abort {
_p1 = 1
} else {
_p1 = 0
}
var _p2 uint32
if processSecurity {
_p2 = 1
} else {
_p2 = 0
}
r1, _, e1 := syscall.Syscall9(procBackupRead.Addr(), 7, uintptr(h), uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(unsafe.Pointer(bytesRead)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(context)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}
func backupWrite(h syscall.Handle, b []byte, bytesWritten *uint32, abort bool, processSecurity bool, context *uintptr) (err error) {
var _p0 *byte
if len(b) > 0 {
_p0 = &b[0]
}
var _p1 uint32
if abort {
_p1 = 1
} else {
_p1 = 0
}
var _p2 uint32
if processSecurity {
_p2 = 1
} else {
_p2 = 0
}
r1, _, e1 := syscall.Syscall9(procBackupWrite.Addr(), 7, uintptr(h), uintptr(unsafe.Pointer(_p0)), uintptr(len(b)), uintptr(unsafe.Pointer(bytesWritten)), uintptr(_p1), uintptr(_p2), uintptr(unsafe.Pointer(context)), 0, 0)
if r1 == 0 {
if e1 != 0 {
err = error(e1)
} else {
err = syscall.EINVAL
}
}
return
}

21
integration/vendor/github.com/Sirupsen/logrus/LICENSE generated vendored Normal file

@ -0,0 +1,21 @@
The MIT License (MIT)
Copyright (c) 2014 Simon Eskildsen
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:
The above copyright notice and this permission notice shall be included in
all copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN
THE SOFTWARE.


@ -0,0 +1,64 @@
package logrus
// The following code was sourced and modified from the
// https://bitbucket.org/tebeka/atexit package governed by the following license:
//
// Copyright (c) 2012 Miki Tebeka <miki.tebeka@gmail.com>.
//
// Permission is hereby granted, free of charge, to any person obtaining a copy of
// this software and associated documentation files (the "Software"), to deal in
// the Software without restriction, including without limitation the rights to
// use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
// the Software, and to permit persons to whom the Software is furnished to do so,
// subject to the following conditions:
//
// The above copyright notice and this permission notice shall be included in all
// copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
// IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
// FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
// COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
// IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
// CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.
import (
"fmt"
"os"
)
var handlers = []func(){}
func runHandler(handler func()) {
defer func() {
if err := recover(); err != nil {
fmt.Fprintln(os.Stderr, "Error: Logrus exit handler error:", err)
}
}()
handler()
}
func runHandlers() {
for _, handler := range handlers {
runHandler(handler)
}
}
// Exit runs all the Logrus atexit handlers and then terminates the program using os.Exit(code)
func Exit(code int) {
runHandlers()
os.Exit(code)
}
// RegisterExitHandler adds a Logrus Exit handler; call logrus.Exit to invoke
// all handlers. The handlers will also be invoked when any Fatal log entry is
// made.
//
// This method is useful when a caller wishes to use logrus to log a fatal
// message but also needs to shut down gracefully. An example use case could be
// closing database connections, or sending an alert that the application is
// closing.
func RegisterExitHandler(handler func()) {
handlers = append(handlers, handler)
}
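
RegisterExitHandler above queues functions that Exit (and therefore Fatal logging) runs before terminating the process. A minimal sketch; the handler body is illustrative:

```go
package main

import (
	log "github.com/Sirupsen/logrus"
)

func main() {
	// Register cleanup work to run before logrus terminates the process.
	log.RegisterExitHandler(func() {
		// flush buffers, close connections, etc. (illustrative)
	})

	// Fatal logs the message, runs the registered handlers, then calls os.Exit(1).
	log.Fatal("shutting down")
}
```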

26
integration/vendor/github.com/Sirupsen/logrus/doc.go generated vendored Normal file

@ -0,0 +1,26 @@
/*
Package logrus is a structured logger for Go, completely API compatible with the standard library logger.
The simplest way to use Logrus is through the package-level exported logger:
package main
import (
log "github.com/Sirupsen/logrus"
)
func main() {
log.WithFields(log.Fields{
"animal": "walrus",
"number": 1,
"size": 10,
}).Info("A walrus appears")
}
Output:
time="2015-09-07T08:48:33Z" level=info msg="A walrus appears" animal=walrus number=1 size=10
For a full guide visit https://github.com/Sirupsen/logrus
*/
package logrus

275
integration/vendor/github.com/Sirupsen/logrus/entry.go generated vendored Normal file

@ -0,0 +1,275 @@
package logrus
import (
"bytes"
"fmt"
"os"
"sync"
"time"
)
var bufferPool *sync.Pool
func init() {
bufferPool = &sync.Pool{
New: func() interface{} {
return new(bytes.Buffer)
},
}
}
// Defines the key when adding errors using WithError.
var ErrorKey = "error"
// An entry is the final or intermediate Logrus logging entry. It contains all
// the fields passed with WithField{,s}. It's finally logged when Debug, Info,
// Warn, Error, Fatal or Panic is called on it. These objects can be reused and
// passed around as much as you wish to avoid field duplication.
type Entry struct {
Logger *Logger
// Contains all the fields set by the user.
Data Fields
// Time at which the log entry was created
Time time.Time
// Level the log entry was logged at: Debug, Info, Warn, Error, Fatal or Panic
Level Level
// Message passed to Debug, Info, Warn, Error, Fatal or Panic
Message string
// When formatter is called in entry.log(), a Buffer may be set on the entry
Buffer *bytes.Buffer
}
func NewEntry(logger *Logger) *Entry {
return &Entry{
Logger: logger,
// Default is three fields, give a little extra room
Data: make(Fields, 5),
}
}
// Returns the string representation from the reader and ultimately the
// formatter.
func (entry *Entry) String() (string, error) {
serialized, err := entry.Logger.Formatter.Format(entry)
if err != nil {
return "", err
}
str := string(serialized)
return str, nil
}
// Add an error as single field (using the key defined in ErrorKey) to the Entry.
func (entry *Entry) WithError(err error) *Entry {
return entry.WithField(ErrorKey, err)
}
// Add a single field to the Entry.
func (entry *Entry) WithField(key string, value interface{}) *Entry {
return entry.WithFields(Fields{key: value})
}
// Add a map of fields to the Entry.
func (entry *Entry) WithFields(fields Fields) *Entry {
data := make(Fields, len(entry.Data)+len(fields))
for k, v := range entry.Data {
data[k] = v
}
for k, v := range fields {
data[k] = v
}
return &Entry{Logger: entry.Logger, Data: data}
}
// This function is not declared with a pointer value because otherwise
// race conditions will occur when using multiple goroutines
func (entry Entry) log(level Level, msg string) {
var buffer *bytes.Buffer
entry.Time = time.Now()
entry.Level = level
entry.Message = msg
if err := entry.Logger.Hooks.Fire(level, &entry); err != nil {
entry.Logger.mu.Lock()
fmt.Fprintf(os.Stderr, "Failed to fire hook: %v\n", err)
entry.Logger.mu.Unlock()
}
buffer = bufferPool.Get().(*bytes.Buffer)
buffer.Reset()
defer bufferPool.Put(buffer)
entry.Buffer = buffer
serialized, err := entry.Logger.Formatter.Format(&entry)
entry.Buffer = nil
if err != nil {
entry.Logger.mu.Lock()
fmt.Fprintf(os.Stderr, "Failed to obtain reader, %v\n", err)
entry.Logger.mu.Unlock()
} else {
entry.Logger.mu.Lock()
_, err = entry.Logger.Out.Write(serialized)
if err != nil {
fmt.Fprintf(os.Stderr, "Failed to write to log, %v\n", err)
}
entry.Logger.mu.Unlock()
}
// To avoid Entry#log() returning a value that only would make sense for
// panic() to use in Entry#Panic(), we avoid the allocation by checking
// directly here.
if level <= PanicLevel {
panic(&entry)
}
}
func (entry *Entry) Debug(args ...interface{}) {
if entry.Logger.Level >= DebugLevel {
entry.log(DebugLevel, fmt.Sprint(args...))
}
}
func (entry *Entry) Print(args ...interface{}) {
entry.Info(args...)
}
func (entry *Entry) Info(args ...interface{}) {
if entry.Logger.Level >= InfoLevel {
entry.log(InfoLevel, fmt.Sprint(args...))
}
}
func (entry *Entry) Warn(args ...interface{}) {
if entry.Logger.Level >= WarnLevel {
entry.log(WarnLevel, fmt.Sprint(args...))
}
}
func (entry *Entry) Warning(args ...interface{}) {
entry.Warn(args...)
}
func (entry *Entry) Error(args ...interface{}) {
if entry.Logger.Level >= ErrorLevel {
entry.log(ErrorLevel, fmt.Sprint(args...))
}
}
func (entry *Entry) Fatal(args ...interface{}) {
if entry.Logger.Level >= FatalLevel {
entry.log(FatalLevel, fmt.Sprint(args...))
}
Exit(1)
}
func (entry *Entry) Panic(args ...interface{}) {
if entry.Logger.Level >= PanicLevel {
entry.log(PanicLevel, fmt.Sprint(args...))
}
panic(fmt.Sprint(args...))
}
// Entry Printf family functions
func (entry *Entry) Debugf(format string, args ...interface{}) {
if entry.Logger.Level >= DebugLevel {
entry.Debug(fmt.Sprintf(format, args...))
}
}
func (entry *Entry) Infof(format string, args ...interface{}) {
if entry.Logger.Level >= InfoLevel {
entry.Info(fmt.Sprintf(format, args...))
}
}
func (entry *Entry) Printf(format string, args ...interface{}) {
entry.Infof(format, args...)
}
func (entry *Entry) Warnf(format string, args ...interface{}) {
if entry.Logger.Level >= WarnLevel {
entry.Warn(fmt.Sprintf(format, args...))
}
}
func (entry *Entry) Warningf(format string, args ...interface{}) {
entry.Warnf(format, args...)
}
func (entry *Entry) Errorf(format string, args ...interface{}) {
if entry.Logger.Level >= ErrorLevel {
entry.Error(fmt.Sprintf(format, args...))
}
}
func (entry *Entry) Fatalf(format string, args ...interface{}) {
if entry.Logger.Level >= FatalLevel {
entry.Fatal(fmt.Sprintf(format, args...))
}
Exit(1)
}
func (entry *Entry) Panicf(format string, args ...interface{}) {
if entry.Logger.Level >= PanicLevel {
entry.Panic(fmt.Sprintf(format, args...))
}
}
// Entry Println family functions
func (entry *Entry) Debugln(args ...interface{}) {
if entry.Logger.Level >= DebugLevel {
entry.Debug(entry.sprintlnn(args...))
}
}
func (entry *Entry) Infoln(args ...interface{}) {
if entry.Logger.Level >= InfoLevel {
entry.Info(entry.sprintlnn(args...))
}
}
func (entry *Entry) Println(args ...interface{}) {
entry.Infoln(args...)
}
func (entry *Entry) Warnln(args ...interface{}) {
if entry.Logger.Level >= WarnLevel {
entry.Warn(entry.sprintlnn(args...))
}
}
func (entry *Entry) Warningln(args ...interface{}) {
entry.Warnln(args...)
}
func (entry *Entry) Errorln(args ...interface{}) {
if entry.Logger.Level >= ErrorLevel {
entry.Error(entry.sprintlnn(args...))
}
}
func (entry *Entry) Fatalln(args ...interface{}) {
if entry.Logger.Level >= FatalLevel {
entry.Fatal(entry.sprintlnn(args...))
}
Exit(1)
}
func (entry *Entry) Panicln(args ...interface{}) {
if entry.Logger.Level >= PanicLevel {
entry.Panic(entry.sprintlnn(args...))
}
}
// sprintlnn => Sprint with no newline. This is to get the behavior of
// fmt.Sprintln, where spaces are always added between operands, regardless of
// their type. Instead of vendoring the Sprintln implementation to spare a
// string allocation, we do the simplest thing.
func (entry *Entry) sprintlnn(args ...interface{}) string {
msg := fmt.Sprintln(args...)
return msg[:len(msg)-1]
}
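
Entries accumulate fields and can be reused across log calls, as the WithField/WithFields/WithError methods above suggest. A small sketch; the field names and error are illustrative:

```go
package main

import (
	"errors"

	log "github.com/Sirupsen/logrus"
)

func main() {
	// Build an entry once and reuse it; fields carry over to later calls.
	requestLog := log.WithFields(log.Fields{"request_id": "abc123"}) // illustrative field
	requestLog.Info("request started")

	// WithError attaches the error under the ErrorKey ("error") field.
	requestLog.WithError(errors.New("timeout")).Error("request failed")
}
```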


@ -0,0 +1,193 @@
package logrus
import (
"io"
)
var (
// std is the name of the standard logger in stdlib `log`
std = New()
)
func StandardLogger() *Logger {
return std
}
// SetOutput sets the standard logger output.
func SetOutput(out io.Writer) {
std.mu.Lock()
defer std.mu.Unlock()
std.Out = out
}
// SetFormatter sets the standard logger formatter.
func SetFormatter(formatter Formatter) {
std.mu.Lock()
defer std.mu.Unlock()
std.Formatter = formatter
}
// SetLevel sets the standard logger level.
func SetLevel(level Level) {
std.mu.Lock()
defer std.mu.Unlock()
std.Level = level
}
// GetLevel returns the standard logger level.
func GetLevel() Level {
std.mu.Lock()
defer std.mu.Unlock()
return std.Level
}
// AddHook adds a hook to the standard logger hooks.
func AddHook(hook Hook) {
std.mu.Lock()
defer std.mu.Unlock()
std.Hooks.Add(hook)
}
// WithError creates an entry from the standard logger and adds an error to it, using the value defined in ErrorKey as key.
func WithError(err error) *Entry {
return std.WithField(ErrorKey, err)
}
// WithField creates an entry from the standard logger and adds a field to
// it. If you want multiple fields, use `WithFields`.
//
// Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal
// or Panic on the Entry it returns.
func WithField(key string, value interface{}) *Entry {
return std.WithField(key, value)
}
// WithFields creates an entry from the standard logger and adds multiple
// fields to it. This is simply a helper for `WithField`, invoking it
// once for each field.
//
// Note that it doesn't log until you call Debug, Print, Info, Warn, Fatal
// or Panic on the Entry it returns.
func WithFields(fields Fields) *Entry {
return std.WithFields(fields)
}
// Debug logs a message at level Debug on the standard logger.
func Debug(args ...interface{}) {
std.Debug(args...)
}
// Print logs a message at level Info on the standard logger.
func Print(args ...interface{}) {
std.Print(args...)
}
// Info logs a message at level Info on the standard logger.
func Info(args ...interface{}) {
std.Info(args...)
}
// Warn logs a message at level Warn on the standard logger.
func Warn(args ...interface{}) {
std.Warn(args...)
}
// Warning logs a message at level Warn on the standard logger.
func Warning(args ...interface{}) {
std.Warning(args...)
}
// Error logs a message at level Error on the standard logger.
func Error(args ...interface{}) {
std.Error(args...)
}
// Panic logs a message at level Panic on the standard logger.
func Panic(args ...interface{}) {
std.Panic(args...)
}
// Fatal logs a message at level Fatal on the standard logger.
func Fatal(args ...interface{}) {
std.Fatal(args...)
}
// Debugf logs a message at level Debug on the standard logger.
func Debugf(format string, args ...interface{}) {
std.Debugf(format, args...)
}
// Printf logs a message at level Info on the standard logger.
func Printf(format string, args ...interface{}) {
std.Printf(format, args...)
}
// Infof logs a message at level Info on the standard logger.
func Infof(format string, args ...interface{}) {
std.Infof(format, args...)
}
// Warnf logs a message at level Warn on the standard logger.
func Warnf(format string, args ...interface{}) {
std.Warnf(format, args...)
}
// Warningf logs a message at level Warn on the standard logger.
func Warningf(format string, args ...interface{}) {
std.Warningf(format, args...)
}
// Errorf logs a message at level Error on the standard logger.
func Errorf(format string, args ...interface{}) {
std.Errorf(format, args...)
}
// Panicf logs a message at level Panic on the standard logger.
func Panicf(format string, args ...interface{}) {
std.Panicf(format, args...)
}
// Fatalf logs a message at level Fatal on the standard logger.
func Fatalf(format string, args ...interface{}) {
std.Fatalf(format, args...)
}
// Debugln logs a message at level Debug on the standard logger.
func Debugln(args ...interface{}) {
std.Debugln(args...)
}
// Println logs a message at level Info on the standard logger.
func Println(args ...interface{}) {
std.Println(args...)
}
// Infoln logs a message at level Info on the standard logger.
func Infoln(args ...interface{}) {
std.Infoln(args...)
}
// Warnln logs a message at level Warn on the standard logger.
func Warnln(args ...interface{}) {
std.Warnln(args...)
}
// Warningln logs a message at level Warn on the standard logger.
func Warningln(args ...interface{}) {
std.Warningln(args...)
}
// Errorln logs a message at level Error on the standard logger.
func Errorln(args ...interface{}) {
std.Errorln(args...)
}
// Panicln logs a message at level Panic on the standard logger.
func Panicln(args ...interface{}) {
std.Panicln(args...)
}
// Fatalln logs a message at level Fatal on the standard logger.
func Fatalln(args ...interface{}) {
std.Fatalln(args...)
}


@ -0,0 +1,45 @@
package logrus
import "time"
const DefaultTimestampFormat = time.RFC3339
// The Formatter interface is used to implement a custom Formatter. It takes an
// `Entry`. It exposes all the fields, including the default ones:
//
// * `entry.Data["msg"]`. The message passed from Info, Warn, Error ..
// * `entry.Data["time"]`. The timestamp.
// * `entry.Data["level"]. The level the entry was logged at.
//
// Any additional fields added with `WithField` or `WithFields` are also in
// `entry.Data`. Format is expected to return an array of bytes which are then
// logged to `logger.Out`.
type Formatter interface {
Format(*Entry) ([]byte, error)
}
// This is to not silently overwrite `time`, `msg` and `level` fields when
// dumping it. If this code wasn't there doing:
//
// logrus.WithField("level", 1).Info("hello")
//
// Would just silently drop the user provided level. Instead with this code
// it'll be logged as:
//
// {"level": "info", "fields.level": 1, "msg": "hello", "time": "..."}
//
// It's not exported because it's still using Data in an opinionated way. It's to
// avoid code duplication between the two default formatters.
func prefixFieldClashes(data Fields) {
if t, ok := data["time"]; ok {
data["fields.time"] = t
}
if m, ok := data["msg"]; ok {
data["fields.msg"] = m
}
if l, ok := data["level"]; ok {
data["fields.level"] = l
}
}
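Because `Formatter` is a single-method interface, swapping in a custom format only means implementing `Format`. A rough sketch of a plain key=value formatter built against the interface above (the `plainFormatter` name and the messages are invented for illustration):

package main

import (
	"bytes"
	"fmt"

	log "github.com/Sirupsen/logrus"
)

// plainFormatter renders entries as "LEVEL message key=value ...".
type plainFormatter struct{}

func (f *plainFormatter) Format(entry *log.Entry) ([]byte, error) {
	var b bytes.Buffer
	fmt.Fprintf(&b, "%s %s", entry.Level.String(), entry.Message)
	for k, v := range entry.Data {
		fmt.Fprintf(&b, " %s=%v", k, v)
	}
	b.WriteByte('\n')
	return b.Bytes(), nil
}

func main() {
	logger := log.New()
	logger.Formatter = &plainFormatter{}
	logger.WithField("user", "alice").Info("login accepted")
}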

integration/vendor/github.com/Sirupsen/logrus/hooks.go
@@ -0,0 +1,34 @@
package logrus
// A hook to be fired when logging on the logging levels returned from
// `Levels()` on your implementation of the interface. Note that this is not
// fired in a goroutine or a channel with workers; you should handle such
// functionality yourself if your call is non-blocking and you don't wish for
// the logging calls for levels returned from `Levels()` to block.
type Hook interface {
Levels() []Level
Fire(*Entry) error
}
// Internal type for storing the hooks on a logger instance.
type LevelHooks map[Level][]Hook
// Add a hook to an instance of logger. This is called with
// `log.Hooks.Add(new(MyHook))` where `MyHook` implements the `Hook` interface.
func (hooks LevelHooks) Add(hook Hook) {
for _, level := range hook.Levels() {
hooks[level] = append(hooks[level], hook)
}
}
// Fire all the hooks for the passed level. Used by `entry.log` to fire
// appropriate hooks for a log entry.
func (hooks LevelHooks) Fire(level Level, entry *Entry) error {
for _, hook := range hooks[level] {
if err := hook.Fire(entry); err != nil {
return err
}
}
return nil
}
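A concrete hook only has to provide `Levels` and `Fire`. A sketch of a hook that counts error-and-above entries, registered through `Hooks.Add` as described above (the `errorCounter` type is made up for the example and is not goroutine-safe):

package main

import log "github.com/Sirupsen/logrus"

// errorCounter counts entries logged at ErrorLevel or above.
type errorCounter struct {
	count int
}

// Levels restricts the hook to error, fatal and panic entries.
func (h *errorCounter) Levels() []log.Level {
	return []log.Level{log.ErrorLevel, log.FatalLevel, log.PanicLevel}
}

// Fire is called once per matching entry.
func (h *errorCounter) Fire(entry *log.Entry) error {
	h.count++
	return nil
}

func main() {
	logger := log.New()
	hook := &errorCounter{}
	logger.Hooks.Add(hook)
	logger.Error("something went wrong")
	logger.Infof("errors so far: %d", hook.count)
}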

@@ -0,0 +1,41 @@
package logrus
import (
"encoding/json"
"fmt"
)
type JSONFormatter struct {
// TimestampFormat sets the format used for marshaling timestamps.
TimestampFormat string
}
func (f *JSONFormatter) Format(entry *Entry) ([]byte, error) {
data := make(Fields, len(entry.Data)+3)
for k, v := range entry.Data {
switch v := v.(type) {
case error:
// Otherwise errors are ignored by `encoding/json`
// https://github.com/Sirupsen/logrus/issues/137
data[k] = v.Error()
default:
data[k] = v
}
}
prefixFieldClashes(data)
timestampFormat := f.TimestampFormat
if timestampFormat == "" {
timestampFormat = DefaultTimestampFormat
}
data["time"] = entry.Time.Format(timestampFormat)
data["msg"] = entry.Message
data["level"] = entry.Level.String()
serialized, err := json.Marshal(data)
if err != nil {
return nil, fmt.Errorf("Failed to marshal fields to JSON, %v", err)
}
return append(serialized, '\n'), nil
}
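Switching to the JSON formatter is a single field assignment; a short sketch, assuming a custom timestamp layout is wanted (an empty `TimestampFormat` would fall back to `DefaultTimestampFormat`):

package main

import (
	"os"
	"time"

	log "github.com/Sirupsen/logrus"
)

func main() {
	logger := log.New()
	logger.Out = os.Stdout
	logger.Formatter = &log.JSONFormatter{TimestampFormat: time.RFC1123}
	// Produces one JSON object per line, e.g.
	// {"level":"info","msg":"handled request","request_id":42,"time":"..."}
	logger.WithField("request_id", 42).Info("handled request")
}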

integration/vendor/github.com/Sirupsen/logrus/logger.go
@@ -0,0 +1,308 @@
package logrus
import (
"io"
"os"
"sync"
)
type Logger struct {
// The logs are `io.Copy`'d to this in a mutex. It's common to set this to a
// file, or leave it as the default, which is `os.Stderr`. You can also set this to
// something more adventurous, such as logging to Kafka.
Out io.Writer
// Hooks for the logger instance. These allow firing events based on logging
// levels and log entries. For example, to send errors to an error tracking
// service, log to StatsD or dump the core on fatal errors.
Hooks LevelHooks
// All log entries pass through the formatter before being logged to Out. The
// included formatters are `TextFormatter` and `JSONFormatter` for which
// TextFormatter is the default. In development (when a TTY is attached) it
// logs with colors, but to a file it wouldn't. You can easily implement your
// own that implements the `Formatter` interface, see the `README` or included
// formatters for examples.
Formatter Formatter
// The logging level the logger should log at. This is typically (and defaults
// to) `logrus.Info`, which allows Info(), Warn(), Error() and Fatal() to be
// logged. `logrus.Debug` is useful in development and when debugging.
Level Level
// Used to sync writing to the log. Locking is enabled by default.
mu MutexWrap
// Reusable empty entry
entryPool sync.Pool
}
type MutexWrap struct {
lock sync.Mutex
disabled bool
}
func (mw *MutexWrap) Lock() {
if !mw.disabled {
mw.lock.Lock()
}
}
func (mw *MutexWrap) Unlock() {
if !mw.disabled {
mw.lock.Unlock()
}
}
func (mw *MutexWrap) Disable() {
mw.disabled = true
}
// Creates a new logger. Configuration should be set by changing `Formatter`,
// `Out` and `Hooks` directly on the default logger instance. You can also just
// instantiate your own:
//
// var log = &Logger{
// Out: os.Stderr,
// Formatter: new(JSONFormatter),
// Hooks: make(LevelHooks),
// Level: logrus.DebugLevel,
// }
//
// It's recommended to make this a global instance called `log`.
func New() *Logger {
return &Logger{
Out: os.Stderr,
Formatter: new(TextFormatter),
Hooks: make(LevelHooks),
Level: InfoLevel,
}
}
func (logger *Logger) newEntry() *Entry {
entry, ok := logger.entryPool.Get().(*Entry)
if ok {
return entry
}
return NewEntry(logger)
}
func (logger *Logger) releaseEntry(entry *Entry) {
logger.entryPool.Put(entry)
}
// Adds a field to the log entry; note that it doesn't log until you call
// Debug, Print, Info, Warn, Fatal or Panic. It only creates a log entry.
// If you want multiple fields, use `WithFields`.
func (logger *Logger) WithField(key string, value interface{}) *Entry {
entry := logger.newEntry()
defer logger.releaseEntry(entry)
return entry.WithField(key, value)
}
// Adds a struct of fields to the log entry. All it does is call `WithField` for
// each `Field`.
func (logger *Logger) WithFields(fields Fields) *Entry {
entry := logger.newEntry()
defer logger.releaseEntry(entry)
return entry.WithFields(fields)
}
// Add an error as a single field to the log entry. All it does is call
// `WithError` for the given `error`.
func (logger *Logger) WithError(err error) *Entry {
entry := logger.newEntry()
defer logger.releaseEntry(entry)
return entry.WithError(err)
}
func (logger *Logger) Debugf(format string, args ...interface{}) {
if logger.Level >= DebugLevel {
entry := logger.newEntry()
entry.Debugf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Infof(format string, args ...interface{}) {
if logger.Level >= InfoLevel {
entry := logger.newEntry()
entry.Infof(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Printf(format string, args ...interface{}) {
entry := logger.newEntry()
entry.Printf(format, args...)
logger.releaseEntry(entry)
}
func (logger *Logger) Warnf(format string, args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warnf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Warningf(format string, args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warnf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Errorf(format string, args ...interface{}) {
if logger.Level >= ErrorLevel {
entry := logger.newEntry()
entry.Errorf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Fatalf(format string, args ...interface{}) {
if logger.Level >= FatalLevel {
entry := logger.newEntry()
entry.Fatalf(format, args...)
logger.releaseEntry(entry)
}
Exit(1)
}
func (logger *Logger) Panicf(format string, args ...interface{}) {
if logger.Level >= PanicLevel {
entry := logger.newEntry()
entry.Panicf(format, args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Debug(args ...interface{}) {
if logger.Level >= DebugLevel {
entry := logger.newEntry()
entry.Debug(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Info(args ...interface{}) {
if logger.Level >= InfoLevel {
entry := logger.newEntry()
entry.Info(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Print(args ...interface{}) {
entry := logger.newEntry()
entry.Info(args...)
logger.releaseEntry(entry)
}
func (logger *Logger) Warn(args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warn(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Warning(args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warn(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Error(args ...interface{}) {
if logger.Level >= ErrorLevel {
entry := logger.newEntry()
entry.Error(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Fatal(args ...interface{}) {
if logger.Level >= FatalLevel {
entry := logger.newEntry()
entry.Fatal(args...)
logger.releaseEntry(entry)
}
Exit(1)
}
func (logger *Logger) Panic(args ...interface{}) {
if logger.Level >= PanicLevel {
entry := logger.newEntry()
entry.Panic(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Debugln(args ...interface{}) {
if logger.Level >= DebugLevel {
entry := logger.newEntry()
entry.Debugln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Infoln(args ...interface{}) {
if logger.Level >= InfoLevel {
entry := logger.newEntry()
entry.Infoln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Println(args ...interface{}) {
entry := logger.newEntry()
entry.Println(args...)
logger.releaseEntry(entry)
}
func (logger *Logger) Warnln(args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warnln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Warningln(args ...interface{}) {
if logger.Level >= WarnLevel {
entry := logger.newEntry()
entry.Warnln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Errorln(args ...interface{}) {
if logger.Level >= ErrorLevel {
entry := logger.newEntry()
entry.Errorln(args...)
logger.releaseEntry(entry)
}
}
func (logger *Logger) Fatalln(args ...interface{}) {
if logger.Level >= FatalLevel {
entry := logger.newEntry()
entry.Fatalln(args...)
logger.releaseEntry(entry)
}
Exit(1)
}
func (logger *Logger) Panicln(args ...interface{}) {
if logger.Level >= PanicLevel {
entry := logger.newEntry()
entry.Panicln(args...)
logger.releaseEntry(entry)
}
}
// When a file is opened in append mode, it's safe to write to it concurrently
// (for messages up to 4k on Linux). In these cases the user can choose to
// disable the lock.
func (logger *Logger) SetNoLock() {
logger.mu.Disable()
}
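Tying the `Logger` fields together: a sketch that builds a dedicated instance in the style recommended by the `New` comment above, then logs through it (the field values and messages are illustrative):

package main

import (
	"os"

	log "github.com/Sirupsen/logrus"
)

var logger = &log.Logger{
	Out:       os.Stderr,
	Formatter: new(log.JSONFormatter),
	Hooks:     make(log.LevelHooks),
	Level:     log.DebugLevel,
}

func main() {
	// Entries below the configured Level are skipped by the checks shown above.
	logger.Debugf("connecting to %s", "localhost:8080")
	logger.WithFields(log.Fields{
		"component": "scheduler",
		"retries":   3,
	}).Warn("job took longer than expected")

	// SetNoLock drops the mutex; only safe when Out tolerates concurrent writes,
	// e.g. a file opened in append mode as noted in the SetNoLock comment.
	logger.SetNoLock()
}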

integration/vendor/github.com/Sirupsen/logrus/logrus.go
@@ -0,0 +1,143 @@
package logrus
import (
"fmt"
"log"
"strings"
)
// Fields type, used to pass to `WithFields`.
type Fields map[string]interface{}
// Level type
type Level uint8
// Convert the Level to a string. E.g. PanicLevel becomes "panic".
func (level Level) String() string {
switch level {
case DebugLevel:
return "debug"
case InfoLevel:
return "info"
case WarnLevel:
return "warning"
case ErrorLevel:
return "error"
case FatalLevel:
return "fatal"
case PanicLevel:
return "panic"
}
return "unknown"
}
// ParseLevel takes a string level and returns the Logrus log level constant.
func ParseLevel(lvl string) (Level, error) {
switch strings.ToLower(lvl) {
case "panic":
return PanicLevel, nil
case "fatal":
return FatalLevel, nil
case "error":
return ErrorLevel, nil
case "warn", "warning":
return WarnLevel, nil
case "info":
return InfoLevel, nil
case "debug":
return DebugLevel, nil
}
var l Level
return l, fmt.Errorf("not a valid logrus Level: %q", lvl)
}
// A constant exposing all logging levels
var AllLevels = []Level{
PanicLevel,
FatalLevel,
ErrorLevel,
WarnLevel,
InfoLevel,
DebugLevel,
}
// These are the different logging levels. You can set the logging level to log
// on your instance of logger, obtained with `logrus.New()`.
const (
// PanicLevel level, highest level of severity. Logs and then calls panic with the
// message passed to Debug, Info, ...
PanicLevel Level = iota
// FatalLevel level. Logs and then calls `os.Exit(1)`. It will exit even if the
// logging level is set to Panic.
FatalLevel
// ErrorLevel level. Logs. Used for errors that should definitely be noted.
// Commonly used for hooks to send errors to an error tracking service.
ErrorLevel
// WarnLevel level. Non-critical entries that deserve eyes.
WarnLevel
// InfoLevel level. General operational entries about what's going on inside the
// application.
InfoLevel
// DebugLevel level. Usually only enabled when debugging. Very verbose logging.
DebugLevel
)
// Won't compile if StdLogger can't be realized by a log.Logger
var (
_ StdLogger = &log.Logger{}
_ StdLogger = &Entry{}
_ StdLogger = &Logger{}
)
// StdLogger is what your logrus-enabled library should take; that way
// it'll accept a stdlib logger and a logrus logger. There's no standard
// interface, this is the closest we get, unfortunately.
type StdLogger interface {
Print(...interface{})
Printf(string, ...interface{})
Println(...interface{})
Fatal(...interface{})
Fatalf(string, ...interface{})
Fatalln(...interface{})
Panic(...interface{})
Panicf(string, ...interface{})
Panicln(...interface{})
}
// The FieldLogger interface generalizes the Entry and Logger types
type FieldLogger interface {
WithField(key string, value interface{}) *Entry
WithFields(fields Fields) *Entry
WithError(err error) *Entry
Debugf(format string, args ...interface{})
Infof(format string, args ...interface{})
Printf(format string, args ...interface{})
Warnf(format string, args ...interface{})
Warningf(format string, args ...interface{})
Errorf(format string, args ...interface{})
Fatalf(format string, args ...interface{})
Panicf(format string, args ...interface{})
Debug(args ...interface{})
Info(args ...interface{})
Print(args ...interface{})
Warn(args ...interface{})
Warning(args ...interface{})
Error(args ...interface{})
Fatal(args ...interface{})
Panic(args ...interface{})
Debugln(args ...interface{})
Infoln(args ...interface{})
Println(args ...interface{})
Warnln(args ...interface{})
Warningln(args ...interface{})
Errorln(args ...interface{})
Fatalln(args ...interface{})
Panicln(args ...interface{})
}
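Because `ParseLevel` maps strings onto `Level` values, verbosity can be driven by a flag or environment variable; a rough sketch (the `LOG_LEVEL` variable name is an assumption for the example):

package main

import (
	"os"

	log "github.com/Sirupsen/logrus"
)

func main() {
	logger := log.New()
	// Keep the default InfoLevel when the variable is unset or invalid.
	if lvl, err := log.ParseLevel(os.Getenv("LOG_LEVEL")); err == nil {
		logger.Level = lvl
	}
	logger.Debug("only visible when LOG_LEVEL=debug")
	logger.Info("visible at the default level")
}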

@@ -0,0 +1,8 @@
// +build appengine
package logrus
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
return true
}

@@ -0,0 +1,10 @@
// +build darwin freebsd openbsd netbsd dragonfly
// +build !appengine
package logrus
import "syscall"
const ioctlReadTermios = syscall.TIOCGETA
type Termios syscall.Termios

@@ -0,0 +1,14 @@
// Based on ssh/terminal:
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build !appengine
package logrus
import "syscall"
const ioctlReadTermios = syscall.TCGETS
type Termios syscall.Termios

@@ -0,0 +1,22 @@
// Based on ssh/terminal:
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build linux darwin freebsd openbsd netbsd dragonfly
// +build !appengine
package logrus
import (
"syscall"
"unsafe"
)
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
fd := syscall.Stderr
var termios Termios
_, _, err := syscall.Syscall6(syscall.SYS_IOCTL, uintptr(fd), ioctlReadTermios, uintptr(unsafe.Pointer(&termios)), 0, 0, 0)
return err == 0
}

@@ -0,0 +1,15 @@
// +build solaris,!appengine
package logrus
import (
"os"
"golang.org/x/sys/unix"
)
// IsTerminal returns true if the given file descriptor is a terminal.
func IsTerminal() bool {
_, err := unix.IoctlGetTermios(int(os.Stdout.Fd()), unix.TCGETA)
return err == nil
}

@@ -0,0 +1,27 @@
// Based on ssh/terminal:
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.
// +build windows,!appengine
package logrus
import (
"syscall"
"unsafe"
)
var kernel32 = syscall.NewLazyDLL("kernel32.dll")
var (
procGetConsoleMode = kernel32.NewProc("GetConsoleMode")
)
// IsTerminal returns true if stderr's file descriptor is a terminal.
func IsTerminal() bool {
fd := syscall.Stderr
var st uint32
r, _, e := syscall.Syscall(procGetConsoleMode.Addr(), 2, uintptr(fd), uintptr(unsafe.Pointer(&st)), 0)
return r != 0 && e == 0
}

@@ -0,0 +1,165 @@
package logrus
import (
"bytes"
"fmt"
"runtime"
"sort"
"strings"
"time"
)
const (
nocolor = 0
red = 31
green = 32
yellow = 33
blue = 34
gray = 37
)
var (
baseTimestamp time.Time
isTerminal bool
)
func init() {
baseTimestamp = time.Now()
isTerminal = IsTerminal()
}
func miniTS() int {
return int(time.Since(baseTimestamp) / time.Second)
}
type TextFormatter struct {
// Set to true to bypass checking for a TTY before outputting colors.
ForceColors bool
// Force disabling colors.
DisableColors bool
// Disable timestamp logging. Useful when output is redirected to a logging
// system that already adds timestamps.
DisableTimestamp bool
// Enable logging the full timestamp when a TTY is attached instead of just
// the time passed since beginning of execution.
FullTimestamp bool
// TimestampFormat to use for display when a full timestamp is printed
TimestampFormat string
// The fields are sorted by default for a consistent output. For applications
// that log extremely frequently and don't use the JSON formatter this may not
// be desired.
DisableSorting bool
}
func (f *TextFormatter) Format(entry *Entry) ([]byte, error) {
var b *bytes.Buffer
var keys []string = make([]string, 0, len(entry.Data))
for k := range entry.Data {
keys = append(keys, k)
}
if !f.DisableSorting {
sort.Strings(keys)
}
if entry.Buffer != nil {
b = entry.Buffer
} else {
b = &bytes.Buffer{}
}
prefixFieldClashes(entry.Data)
isColorTerminal := isTerminal && (runtime.GOOS != "windows")
isColored := (f.ForceColors || isColorTerminal) && !f.DisableColors
timestampFormat := f.TimestampFormat
if timestampFormat == "" {
timestampFormat = DefaultTimestampFormat
}
if isColored {
f.printColored(b, entry, keys, timestampFormat)
} else {
if !f.DisableTimestamp {
f.appendKeyValue(b, "time", entry.Time.Format(timestampFormat))
}
f.appendKeyValue(b, "level", entry.Level.String())
if entry.Message != "" {
f.appendKeyValue(b, "msg", entry.Message)
}
for _, key := range keys {
f.appendKeyValue(b, key, entry.Data[key])
}
}
b.WriteByte('\n')
return b.Bytes(), nil
}
func (f *TextFormatter) printColored(b *bytes.Buffer, entry *Entry, keys []string, timestampFormat string) {
var levelColor int
switch entry.Level {
case DebugLevel:
levelColor = gray
case WarnLevel:
levelColor = yellow
case ErrorLevel, FatalLevel, PanicLevel:
levelColor = red
default:
levelColor = blue
}
levelText := strings.ToUpper(entry.Level.String())[0:4]
if !f.FullTimestamp {
fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%04d] %-44s ", levelColor, levelText, miniTS(), entry.Message)
} else {
fmt.Fprintf(b, "\x1b[%dm%s\x1b[0m[%s] %-44s ", levelColor, levelText, entry.Time.Format(timestampFormat), entry.Message)
}
for _, k := range keys {
v := entry.Data[k]
fmt.Fprintf(b, " \x1b[%dm%s\x1b[0m=%+v", levelColor, k, v)
}
}
func needsQuoting(text string) bool {
for _, ch := range text {
if !((ch >= 'a' && ch <= 'z') ||
(ch >= 'A' && ch <= 'Z') ||
(ch >= '0' && ch <= '9') ||
ch == '-' || ch == '.') {
return true
}
}
return false
}
func (f *TextFormatter) appendKeyValue(b *bytes.Buffer, key string, value interface{}) {
b.WriteString(key)
b.WriteByte('=')
switch value := value.(type) {
case string:
if !needsQuoting(value) {
b.WriteString(value)
} else {
fmt.Fprintf(b, "%q", value)
}
case error:
errmsg := value.Error()
if !needsQuoting(errmsg) {
b.WriteString(errmsg)
} else {
fmt.Fprintf(b, "%q", value)
}
default:
fmt.Fprint(b, value)
}
b.WriteByte(' ')
}
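The `TextFormatter` options above can be combined freely, for instance forcing colors and full timestamps regardless of TTY detection. A sketch, not the library's defaults:

package main

import (
	"time"

	log "github.com/Sirupsen/logrus"
)

func main() {
	logger := log.New()
	logger.Formatter = &log.TextFormatter{
		ForceColors:     true,            // skip the IsTerminal check
		FullTimestamp:   true,            // print the full time instead of miniTS()
		TimestampFormat: time.StampMilli, // layout used when FullTimestamp is set
	}
	logger.WithField("animal", "walrus").Info("a walrus appears")
}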

@@ -0,0 +1,53 @@
package logrus
import (
"bufio"
"io"
"runtime"
)
func (logger *Logger) Writer() *io.PipeWriter {
return logger.WriterLevel(InfoLevel)
}
func (logger *Logger) WriterLevel(level Level) *io.PipeWriter {
reader, writer := io.Pipe()
var printFunc func(args ...interface{})
switch level {
case DebugLevel:
printFunc = logger.Debug
case InfoLevel:
printFunc = logger.Info
case WarnLevel:
printFunc = logger.Warn
case ErrorLevel:
printFunc = logger.Error
case FatalLevel:
printFunc = logger.Fatal
case PanicLevel:
printFunc = logger.Panic
default:
printFunc = logger.Print
}
go logger.writerScanner(reader, printFunc)
runtime.SetFinalizer(writer, writerFinalizer)
return writer
}
func (logger *Logger) writerScanner(reader *io.PipeReader, printFunc func(args ...interface{})) {
scanner := bufio.NewScanner(reader)
for scanner.Scan() {
printFunc(scanner.Text())
}
if err := scanner.Err(); err != nil {
logger.Errorf("Error while reading from Writer: %s", err)
}
reader.Close()
}
func writerFinalizer(writer *io.PipeWriter) {
writer.Close()
}
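`Writer` and `WriterLevel` let code that only knows about an `io.Writer` (the standard library's `log` package, for example) feed into logrus. A sketch; the short sleep is only there so the scanning goroutine gets to forward the line before the program exits:

package main

import (
	stdlog "log"
	"time"

	log "github.com/Sirupsen/logrus"
)

func main() {
	logger := log.New()

	// Each line written to w is scanned and re-logged at WarnLevel.
	w := logger.WriterLevel(log.WarnLevel)
	defer w.Close()

	stdlog.SetOutput(w)
	stdlog.Println("legacy message routed through logrus")

	time.Sleep(100 * time.Millisecond)
}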

integration/vendor/github.com/boltdb/bolt/LICENSE
@@ -0,0 +1,20 @@
The MIT License (MIT)
Copyright (c) 2013 Ben Johnson
Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of
the Software, and to permit persons to whom the Software is furnished to do so,
subject to the following conditions:
The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.
THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS
FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR
COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER
IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN
CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE.

integration/vendor/github.com/boltdb/bolt/bolt_386.go
@@ -0,0 +1,10 @@
package bolt
// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0x7FFFFFFF // 2GB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0xFFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false

@@ -0,0 +1,10 @@
package bolt
// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0xFFFFFFFFFFFF // 256TB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false

integration/vendor/github.com/boltdb/bolt/bolt_arm.go
@@ -0,0 +1,28 @@
package bolt
import "unsafe"
// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0x7FFFFFFF // 2GB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0xFFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned bool
func init() {
// Simple check to see whether this arch handles unaligned load/stores
// correctly.
// ARM9 and older devices require load/stores to be from/to aligned
// addresses. If not, the lower 2 bits are cleared and that address is
// read in a jumbled up order.
// See http://infocenter.arm.com/help/index.jsp?topic=/com.arm.doc.faqs/ka15414.html
raw := [6]byte{0xfe, 0xef, 0x11, 0x22, 0x22, 0x11}
val := *(*uint32)(unsafe.Pointer(uintptr(unsafe.Pointer(&raw)) + 2))
brokenUnaligned = val != 0x11222211
}

@@ -0,0 +1,12 @@
// +build arm64
package bolt
// maxMapSize represents the largest mmap size supported by Bolt.
const maxMapSize = 0xFFFFFFFFFFFF // 256TB
// maxAllocSize is the size used when creating array pointers.
const maxAllocSize = 0x7FFFFFFF
// Are unaligned load/stores broken on this arch?
var brokenUnaligned = false

@@ -0,0 +1,10 @@
package bolt
import (
"syscall"
)
// fdatasync flushes written data to a file descriptor.
func fdatasync(db *DB) error {
return syscall.Fdatasync(int(db.file.Fd()))
}

Some files were not shown because too many files have changed in this diff.