mirror of https://github.com/matrix-org/dendrite.git
synced 2025-12-14 18:33:09 -06:00

Remove gometalinter vendor folder

This commit is contained in:
parent e7988f71f2
commit 3ef301fca5
@ -1,56 +0,0 @@
### Please only report errors with gometalinter itself

gometalinter relies on underlying linters to detect issues in source code.
If your issue seems to be related to an underlying linter, please report an
issue against that linter rather than gometalinter. For a full list of linters
and their repositories please see the [README](README.md).

### Do you want to upgrade a vendored linter?

Please send a PR. We use [GVT](https://github.com/FiloSottile/gvt). It should be as simple as:

```
go get github.com/FiloSottile/gvt
cd _linters
gvt update <linter>
git add <paths>
```

### Before you report an issue

Sometimes gometalinter will not report issues that you think it should. There
are three things to try in that case:

#### 1. Update to the latest build of gometalinter and all linters

    go get -u github.com/alecthomas/gometalinter
    gometalinter --install

If you're lucky, this will fix the problem.

#### 2. Analyse the debug output

If that doesn't help, the problem may be elsewhere (in no particular order):

1. The upstream linter has changed its output or semantics.
2. gometalinter is not invoking the tool correctly.
3. gometalinter's regular expression matches are not correct for a linter.
4. The linter is exceeding the deadline.

To find out what's going on, run in debug mode:

    gometalinter --debug

This will show all output from the linters and should indicate why it is
failing.

#### 3. Run linters manually

The output of `gometalinter --debug` should show the exact commands gometalinter
is running. Run these commands from the command line to determine if the linter
or gometalinter is at fault.

#### 4. Report an issue

Failing all else, if the problem looks like a bug please file an issue and
include the output of `gometalinter --debug`.
@ -1,19 +0,0 @@
Copyright (C) 2012 Alec Thomas

Permission is hereby granted, free of charge, to any person obtaining a copy of
this software and associated documentation files (the "Software"), to deal in
the Software without restriction, including without limitation the rights to
use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies
of the Software, and to permit persons to whom the Software is furnished to do
so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@ -1,365 +0,0 @@
# Go Meta Linter

[](https://travis-ci.org/alecthomas/gometalinter) [](https://gitter.im/alecthomas/Lobby)

<!-- MarkdownTOC -->

- [Editor integration](#editor-integration)
- [Supported linters](#supported-linters)
- [Configuration file](#configuration-file)
- [Installing](#installing)
- [Comment directives](#comment-directives)
- [Quickstart](#quickstart)
- [FAQ](#faq)
  - [Exit status](#exit-status)
  - [What's the best way to use `gometalinter` in CI?](#whats-the-best-way-to-use-gometalinter-in-ci)
  - [How do I make `gometalinter` work with Go 1.5 vendoring?](#how-do-i-make-gometalinter-work-with-go-15-vendoring)
  - [Why does `gometalinter --install` install a fork of gocyclo?](#why-does-gometalinter---install-install-a-fork-of-gocyclo)
  - [Gometalinter is not working](#gometalinter-is-not-working)
    - [1. Update to the latest build of gometalinter and all linters](#1-update-to-the-latest-build-of-gometalinter-and-all-linters)
    - [2. Analyse the debug output](#2-analyse-the-debug-output)
    - [3. Report an issue.](#3-report-an-issue)
  - [How do I filter issues between two git refs?](#how-do-i-filter-issues-between-two-git-refs)
- [Checkstyle XML format](#checkstyle-xml-format)

<!-- /MarkdownTOC -->

The number of tools for statically checking Go source for errors and warnings
is impressive.

This is a tool that concurrently runs a whole bunch of those linters and
normalises their output to a standard format:

    <file>:<line>:[<column>]: <message> (<linter>)

eg.

    stutter.go:9::warning: unused global variable unusedGlobal (varcheck)
    stutter.go:12:6:warning: exported type MyStruct should have comment or be unexported (golint)

It is intended for use with editor/IDE integration.

## Editor integration

- [SublimeLinter plugin](https://github.com/alecthomas/SublimeLinter-contrib-gometalinter).
- [Atom go-plus package](https://atom.io/packages/go-plus).
- [Emacs Flycheck checker](https://github.com/favadi/flycheck-gometalinter).
- [Go for Visual Studio Code](https://marketplace.visualstudio.com/items?itemName=lukehoban.Go).
- Vim/Neovim
  - [Neomake](https://github.com/neomake/neomake).
  - [Syntastic](https://github.com/scrooloose/syntastic/wiki/Go:---gometalinter) `let g:syntastic_go_checkers = ['gometalinter']`.
  - [ale](https://github.com/w0rp/ale) `let g:ale_linters = {'go': ['gometalinter']}`
  - [vim-go](https://github.com/fatih/vim-go) with the `:GoMetaLinter` command.

## Supported linters

- [go vet](https://golang.org/cmd/vet/) - Reports potential errors that otherwise compile.
- [go tool vet --shadow](https://golang.org/cmd/vet/#hdr-Shadowed_variables) - Reports variables that may have been unintentionally shadowed.
- [gotype](https://golang.org/x/tools/cmd/gotype) - Syntactic and semantic analysis similar to the Go compiler.
- [gotype -x](https://golang.org/x/tools/cmd/gotype) - Syntactic and semantic analysis in external test packages (similar to the Go compiler).
- [deadcode](https://github.com/tsenart/deadcode) - Finds unused code.
- [gocyclo](https://github.com/alecthomas/gocyclo) - Computes the cyclomatic complexity of functions.
- [golint](https://github.com/golang/lint) - Google's (mostly stylistic) linter.
- [varcheck](https://github.com/opennota/check) - Finds unused global variables and constants.
- [structcheck](https://github.com/opennota/check) - Finds unused struct fields.
- [maligned](https://github.com/mdempsky/maligned) - Detects structs that would take less memory if their fields were sorted.
- [errcheck](https://github.com/kisielk/errcheck) - Checks that error return values are used.
- [megacheck](https://github.com/dominikh/go-tools/tree/master/cmd/megacheck) - Runs staticcheck, gosimple and unused, sharing work.
- [dupl](https://github.com/mibk/dupl) - Reports potentially duplicated code.
- [ineffassign](https://github.com/gordonklaus/ineffassign/blob/master/list) - Detects when assignments to *existing* variables are not used.
- [interfacer](https://github.com/mvdan/interfacer) - Suggests narrower interfaces that can be used.
- [unconvert](https://github.com/mdempsky/unconvert) - Detects redundant type conversions.
- [goconst](https://github.com/jgautheron/goconst) - Finds repeated strings that could be replaced by a constant.
- [gas](https://github.com/GoASTScanner/gas) - Inspects source code for security problems by scanning the Go AST.

Disabled by default (enable with `--enable=<linter>`):

- [testify](https://github.com/stretchr/testify) - Shows the location of failed testify assertions.
- [test](http://golang.org/pkg/testing/) - Shows the location of test failures from the stdlib testing module.
- [gofmt -s](https://golang.org/cmd/gofmt/) - Checks if the code is properly formatted and could not be further simplified.
- [goimports](https://godoc.org/golang.org/x/tools/cmd/goimports) - Checks missing or unreferenced package imports.
- [gosimple](https://github.com/dominikh/go-tools/tree/master/cmd/gosimple) - Reports simplifications in code.
- [lll](https://github.com/walle/lll) - Reports long lines (see `--line-length=N`).
- [misspell](https://github.com/client9/misspell) - Finds commonly misspelled English words.
- [nakedret](https://github.com/alexkohler/nakedret) - Finds naked returns.
- [unparam](https://github.com/mvdan/unparam) - Finds unused function parameters.
- [unused](https://github.com/dominikh/go-tools/tree/master/cmd/unused) - Finds unused variables.
- [safesql](https://github.com/stripe/safesql) - Finds potential SQL injection vulnerabilities.
- [staticcheck](https://github.com/dominikh/go-tools/tree/master/cmd/staticcheck) - Statically detects bugs, both obvious and subtle ones.

Additional linters can be added through the command line with `--linter=NAME:COMMAND:PATTERN` (see [below](#details)).

## Configuration file

gometalinter now supports a JSON configuration file which can be loaded via
`--config=<file>`. The format of this file is determined by the `Config` struct
in [config.go](https://github.com/alecthomas/gometalinter/blob/master/config.go).

The configuration file mostly corresponds to command-line flags, with the following exceptions:

- Linters defined in the configuration file will overlay existing definitions, not replace them.
- "Enable" defines the exact set of linters that will be enabled (default
  linters are disabled). `--help` displays the list of default linters with the exact names
  you must use.

Here is an example configuration file:

```json
{
  "Enable": ["deadcode", "unconvert"]
}
```

#### `Format` key

The default `Format` key places the different fields of an `Issue` into a template. This
corresponds to the `--format` command-line flag.

Default `Format`:

```
Format: "{{.Path}}:{{.Line}}:{{if .Col}}{{.Col}}{{end}}:{{.Severity}}: {{.Message}} ({{.Linter}})"
```

#### Format Methods

* `{{.Path.Relative}}` - equivalent to `{{.Path}}`, which outputs a relative path to the file
* `{{.Path.Abs}}` - outputs an absolute path to the file
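Since `Format` is a configuration key as well as a flag, a custom template can also live in the configuration file. A minimal sketch, assuming the `Config` struct accepts the same template syntax as the default above (the template itself is illustrative):

```json
{
  "Format": "{{.Path.Abs}}:{{.Line}}:{{.Severity}}: {{.Message}} ({{.Linter}})"
}
```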
### Adding Custom linters

Linters can be added and customized from the config file using the `Linters` field.
`Linters` supports the following fields:

* `Command` - the path to the linter binary and any default arguments
* `Pattern` - a regular expression used to parse the linter output
* `IsFast` - if the linter should be run when the `--fast` flag is used
* `PartitionStrategy` - how path arguments should be passed to the linter command:
  * `directories` - call the linter once with a list of all the directories
  * `files` - call the linter once with a list of all the files
  * `packages` - call the linter once with a list of all the package paths
  * `files-by-package` - call the linter once per package with a list of the
    files in the package
  * `single-directory` - call the linter once per directory

The config for default linters can be overridden by using the name of the
linter.

Additional linters can be configured via the command line using the format
`NAME:COMMAND:PATTERN`.

Example:

```
$ gometalinter --linter='vet:go tool vet -printfuncs=Infof,Debugf,Warningf,Errorf:PATH:LINE:MESSAGE' .
```
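A config-file sketch of a custom linter, using the `Linters` fields listed above. The `mylinter` name, command, and pattern are illustrative assumptions, not part of gometalinter; check the `Config` struct in config.go for the exact shape:

```json
{
  "Linters": {
    "mylinter": {
      "Command": "mylinter -some-flag",
      "Pattern": "PATH:LINE:MESSAGE",
      "IsFast": true,
      "PartitionStrategy": "packages"
    }
  }
}
```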
## Installing

There are two options for installing gometalinter.

1. Install a stable version, eg. `go get -u gopkg.in/alecthomas/gometalinter.v1`.
   I will generally only tag a new stable version when it has passed the Travis
   regression tests. The downside is that the binary will be called `gometalinter.v1`.
2. Install from HEAD with: `go get -u github.com/alecthomas/gometalinter`.
   This has the downside that changes to gometalinter may break.

## Comment directives

gometalinter supports suppression of linter messages via comment directives. The
form of the directive is:

```
// nolint[: <linter>[, <linter>, ...]]
```

Suppression works in the following way:

1. Line-level suppression

   A comment directive suppresses any linter messages on that line.

   eg. In this example any messages for `a := 10` will be suppressed, and errcheck
   messages for `defer r.Close()` will also be suppressed.

   ```go
   a := 10 // nolint
   a = 2
   defer r.Close() // nolint: errcheck
   ```

2. Statement-level suppression

   A comment directive at the same indentation level as a statement it
   immediately precedes will also suppress any linter messages in that entire
   statement.

   eg. In this example all messages for `SomeFunc()` will be suppressed.

   ```go
   // nolint
   func SomeFunc() {
   }
   ```

Implementation details: gometalinter now performs parsing of Go source code
to extract linter directives and associate them with line ranges. To avoid
unnecessary processing, parsing is on-demand: the first time a linter emits a
message for a file, that file is parsed for directives.

## Quickstart

Install gometalinter (see above).

Install all known linters:

```
$ gometalinter --install
Installing:
  structcheck
  maligned
  nakedret
  deadcode
  gocyclo
  ineffassign
  dupl
  golint
  gotype
  goimports
  errcheck
  varcheck
  interfacer
  goconst
  gosimple
  staticcheck
  unparam
  unused
  misspell
  lll
  gas
  safesql
```

Run it:

```
$ cd example
$ gometalinter ./...
stutter.go:13::warning: unused struct field MyStruct.Unused (structcheck)
stutter.go:9::warning: unused global variable unusedGlobal (varcheck)
stutter.go:12:6:warning: exported type MyStruct should have comment or be unexported (golint)
stutter.go:16:6:warning: exported type PublicUndocumented should have comment or be unexported (golint)
stutter.go:8:1:warning: unusedGlobal is unused (deadcode)
stutter.go:12:1:warning: MyStruct is unused (deadcode)
stutter.go:16:1:warning: PublicUndocumented is unused (deadcode)
stutter.go:20:1:warning: duplicateDefer is unused (deadcode)
stutter.go:21:15:warning: error return value not checked (defer a.Close()) (errcheck)
stutter.go:22:15:warning: error return value not checked (defer a.Close()) (errcheck)
stutter.go:27:6:warning: error return value not checked (doit() // test for errcheck) (errcheck)
stutter.go:29::error: unreachable code (vet)
stutter.go:26::error: missing argument for Printf("%d"): format reads arg 1, have only 0 args (vet)
```

Gometalinter also supports the commonly seen `<path>/...` recursive path
format. Note that this can be *very* slow, and you may need to increase the
linter `--deadline` to allow linters to complete.

## FAQ

### Exit status

gometalinter sets two bits of the exit status to indicate different issues:

| Bit | Meaning |
|-----|---------|
| 0   | A linter generated an issue. |
| 1   | An underlying error occurred; eg. a linter failed to execute. In this situation a warning will also be displayed. |

eg. linter only = 1, underlying only = 2, linter + underlying = 3
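In a CI script, the two bits can be tested independently with shell arithmetic. A minimal sketch, where the hard-coded `status=3` stands in for `$?` after a real gometalinter run:

```shell
#!/bin/sh
# Bit 0 (value 1): a linter generated an issue.
# Bit 1 (value 2): an underlying error occurred.
status=3  # illustrative; in practice: gometalinter ./... ; status=$?

if [ $((status & 1)) -ne 0 ]; then
  echo "lint issues found"
fi
if [ $((status & 2)) -ne 0 ]; then
  echo "underlying error (check the warning output)"
fi
```

With `status=3` both branches fire, matching "linter + underlying = 3" above.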
### What's the best way to use `gometalinter` in CI?

There are two main problems running in a CI:

1. <s>Linters break, causing `gometalinter --install --update` to error</s> (this is no longer an issue as all linters are vendored).
2. `gometalinter` adds a new linter.

I have solved 1 by vendoring the linters.

For 2, the best option is to disable all linters, then explicitly enable the
ones you want:

    gometalinter --disable-all --enable=errcheck --enable=vet --enable=vetshadow ...

### How do I make `gometalinter` work with Go 1.5 vendoring?

`gometalinter` has a `--vendor` flag that just sets `GO15VENDOREXPERIMENT=1`; however, the
underlying tools must support it. Ensure that all of the linters are up to date and built with Go 1.5
(`gometalinter --install --force`), then run `gometalinter --vendor .`. That should be it.

### Why does `gometalinter --install` install a fork of gocyclo?

I forked `gocyclo` because the upstream behaviour is to recursively check all
subdirectories even when just a single directory is specified. This made it
unusably slow when vendoring. The recursive behaviour can be achieved with
gometalinter by explicitly specifying `<path>/...`. There is a
[pull request](https://github.com/fzipp/gocyclo/pull/1) open.

### Gometalinter is not working

That's more of a statement than a question, but okay.

Sometimes gometalinter will not report issues that you think it should. There
are three things to try in that case:

#### 1. Update to the latest build of gometalinter and all linters

    go get -u github.com/alecthomas/gometalinter
    gometalinter --install

If you're lucky, this will fix the problem.

#### 2. Analyse the debug output

If that doesn't help, the problem may be elsewhere (in no particular order):

1. The upstream linter has changed its output or semantics.
2. gometalinter is not invoking the tool correctly.
3. gometalinter's regular expression matches are not correct for a linter.
4. The linter is exceeding the deadline.

To find out what's going on, run in debug mode:

    gometalinter --debug

This will show all output from the linters and should indicate why it is
failing.

#### 3. Report an issue.

Failing all else, if the problem looks like a bug please file an issue and
include the output of `gometalinter --debug`.

### How do I filter issues between two git refs?

[revgrep](https://github.com/bradleyfalzon/revgrep) can be used to filter the output of `gometalinter`
to show issues on lines that have changed between two git refs, such as unstaged changes, changes in
`HEAD` vs `master`, and between `master` and `origin/master`. See the project's documentation and `-help`
usage for more information.

```
go get -u github.com/bradleyfalzon/revgrep/...
gometalinter |& revgrep               # If unstaged changes or untracked files, those issues are shown.
gometalinter |& revgrep               # Else show issues in the last commit.
gometalinter |& revgrep master        # Show issues between master and HEAD (or any other reference).
gometalinter |& revgrep origin/master # Show issues that haven't been pushed.
```

## Checkstyle XML format

`gometalinter` supports [checkstyle](http://checkstyle.sourceforge.net/)
compatible XML output format. It is triggered with the `--checkstyle` flag:

    gometalinter --checkstyle

Checkstyle format can be used to integrate gometalinter with Jenkins CI with the
help of the [Checkstyle Plugin](https://wiki.jenkins-ci.org/display/JENKINS/Checkstyle+Plugin).
@ -1,5 +0,0 @@
This directory looks a bit like a normal vendor directory, but also like an
entry in GOPATH. That is not an accident. It looks like the former so that gvt
can be used to manage the vendored linters, and it looks like a GOPATH entry so
that we can install the vendored binaries (as Go does not support installing
binaries from vendored paths).
@ -1,154 +0,0 @@
Apache License

Version 2.0, January 2004

http://www.apache.org/licenses/

TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

1. Definitions.

"License" shall mean the terms and conditions for use, reproduction, and
distribution as defined by Sections 1 through 9 of this document.

"Licensor" shall mean the copyright owner or entity authorized by the copyright
owner that is granting the License.

"Legal Entity" shall mean the union of the acting entity and all other entities
that control, are controlled by, or are under common control with that entity.
For the purposes of this definition, "control" means (i) the power, direct or
indirect, to cause the direction or management of such entity, whether by
contract or otherwise, or (ii) ownership of fifty percent (50%) or more of the
outstanding shares, or (iii) beneficial ownership of such entity.

"You" (or "Your") shall mean an individual or Legal Entity exercising
permissions granted by this License.

"Source" form shall mean the preferred form for making modifications, including
but not limited to software source code, documentation source, and configuration
files.

"Object" form shall mean any form resulting from mechanical transformation or
translation of a Source form, including but not limited to compiled object code,
generated documentation, and conversions to other media types.

"Work" shall mean the work of authorship, whether in Source or Object form, made
available under the License, as indicated by a copyright notice that is included
in or attached to the work (an example is provided in the Appendix below).

"Derivative Works" shall mean any work, whether in Source or Object form, that
is based on (or derived from) the Work and for which the editorial revisions,
annotations, elaborations, or other modifications represent, as a whole, an
original work of authorship. For the purposes of this License, Derivative Works
shall not include works that remain separable from, or merely link (or bind by
name) to the interfaces of, the Work and Derivative Works thereof.

"Contribution" shall mean any work of authorship, including the original version
of the Work and any modifications or additions to that Work or Derivative Works
thereof, that is intentionally submitted to Licensor for inclusion in the Work
by the copyright owner or by an individual or Legal Entity authorized to submit
on behalf of the copyright owner. For the purposes of this definition,
"submitted" means any form of electronic, verbal, or written communication sent
to the Licensor or its representatives, including but not limited to
communication on electronic mailing lists, source code control systems, and
issue tracking systems that are managed by, or on behalf of, the Licensor for
the purpose of discussing and improving the Work, but excluding communication
that is conspicuously marked or otherwise designated in writing by the copyright
owner as "Not a Contribution."

"Contributor" shall mean Licensor and any individual or Legal Entity on behalf
of whom a Contribution has been received by Licensor and subsequently
incorporated within the Work.

2. Grant of Copyright License. Subject to the terms and conditions of this
License, each Contributor hereby grants to You a perpetual, worldwide,
non-exclusive, no-charge, royalty-free, irrevocable copyright license to
reproduce, prepare Derivative Works of, publicly display, publicly perform,
sublicense, and distribute the Work and such Derivative Works in Source or
Object form.

3. Grant of Patent License. Subject to the terms and conditions of this License,
each Contributor hereby grants to You a perpetual, worldwide, non-exclusive,
no-charge, royalty-free, irrevocable (except as stated in this section) patent
license to make, have made, use, offer to sell, sell, import, and otherwise
transfer the Work, where such license applies only to those patent claims
licensable by such Contributor that are necessarily infringed by their
Contribution(s) alone or by combination of their Contribution(s) with the Work
to which such Contribution(s) was submitted. If You institute patent litigation
against any entity (including a cross-claim or counterclaim in a lawsuit)
alleging that the Work or a Contribution incorporated within the Work
constitutes direct or contributory patent infringement, then any patent licenses
granted to You under this License for that Work shall terminate as of the date
such litigation is filed.

4. Redistribution. You may reproduce and distribute copies of the Work or
Derivative Works thereof in any medium, with or without modifications, and in
Source or Object form, provided that You meet the following conditions:

You must give any other recipients of the Work or Derivative Works a copy of
this License; and You must cause any modified files to carry prominent notices
stating that You changed the files; and You must retain, in the Source form of
any Derivative Works that You distribute, all copyright, patent, trademark, and
attribution notices from the Source form of the Work, excluding those notices
that do not pertain to any part of the Derivative Works; and If the Work
includes a "NOTICE" text file as part of its distribution, then any Derivative
Works that You distribute must include a readable copy of the attribution
notices contained within such NOTICE file, excluding those notices that do not
pertain to any part of the Derivative Works, in at least one of the following
places: within a NOTICE text file distributed as part of the Derivative Works;
within the Source form or documentation, if provided along with the Derivative
Works; or, within a display generated by the Derivative Works, if and wherever
such third-party notices normally appear. The contents of the NOTICE file are
for informational purposes only and do not modify the License. You may add Your
own attribution notices within Derivative Works that You distribute, alongside
or as an addendum to the NOTICE text from the Work, provided that such
additional attribution notices cannot be construed as modifying the License.

You may add Your own copyright statement to Your modifications and may provide
additional or different license terms and conditions for use, reproduction, or
distribution of Your modifications, or for any such Derivative Works as a whole,
provided Your use, reproduction, and distribution of the Work otherwise complies
with the conditions stated in this License.

5. Submission of Contributions. Unless You explicitly state otherwise, any
Contribution intentionally submitted for inclusion in the Work by You to the
Licensor shall be under the terms and conditions of this License, without any
additional terms or conditions. Notwithstanding the above, nothing herein shall
supersede or modify the terms of any separate license agreement you may have
executed with Licensor regarding such Contributions.

6. Trademarks. This License does not grant permission to use the trade names,
trademarks, service marks, or product names of the Licensor, except as required
for reasonable and customary use in describing the origin of the Work and
reproducing the content of the NOTICE file.

7. Disclaimer of Warranty. Unless required by applicable law or agreed to in
writing, Licensor provides the Work (and each Contributor provides its
Contributions) on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY
KIND, either express or implied, including, without limitation, any warranties
or conditions of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
PARTICULAR PURPOSE. You are solely responsible for determining the
appropriateness of using or redistributing the Work and assume any risks
associated with Your exercise of permissions under this License.

8. Limitation of Liability. In no event and under no legal theory, whether in
tort (including negligence), contract, or otherwise, unless required by
applicable law (such as deliberate and grossly negligent acts) or agreed to in
writing, shall any Contributor be liable to You for damages, including any
direct, indirect, special, incidental, or consequential damages of any character
arising as a result of this License or out of the use or inability to use the
Work (including but not limited to damages for loss of goodwill, work stoppage,
computer failure or malfunction, or any and all other commercial damages or
losses), even if such Contributor has been advised of the possibility of such
damages.

9. Accepting Warranty or Additional Liability. While redistributing the Work or
Derivative Works thereof, You may choose to offer, and charge a fee for,
acceptance of support, warranty, indemnity, or other liability obligations
and/or rights consistent with this License. However, in accepting such
obligations, You may act only on Your own behalf and on Your sole
responsibility, not on behalf of any other Contributor, and only if You agree to
indemnify, defend, and hold each Contributor harmless for any liability incurred
by, or claims asserted against, such Contributor by reason of your accepting any
such warranty or additional liability.

END OF TERMS AND CONDITIONS

@ -1,235 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

// Package core holds the central scanning logic used by GAS
package core

import (
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
	"log"
	"os"
	"path"
	"reflect"
	"strings"
)

// ImportInfo is used to track aliased and initialization only imports.
type ImportInfo struct {
	Imported map[string]string
	Aliased  map[string]string
	InitOnly map[string]bool
}

// NewImportInfo creates an empty ImportInfo.
func NewImportInfo() *ImportInfo {
	return &ImportInfo{
		make(map[string]string),
		make(map[string]string),
		make(map[string]bool),
	}
}

// The Context is populated with data parsed from the source code as it is scanned.
// It is passed through to all rule functions as they are called. Rules may use
// this data in conjunction with the encountered AST node.
type Context struct {
	FileSet  *token.FileSet
	Comments ast.CommentMap
	Info     *types.Info
	Pkg      *types.Package
	Root     *ast.File
	Config   map[string]interface{}
	Imports  *ImportInfo
}

// The Rule interface used by all rules supported by GAS.
type Rule interface {
	Match(ast.Node, *Context) (*Issue, error)
}

// A RuleSet maps lists of rules to the type of AST node they should be run on.
// The analyzer will only invoke rules contained in the list associated with the
// type of AST node it is currently visiting.
type RuleSet map[reflect.Type][]Rule

// Metrics used when reporting information about a scanning run.
type Metrics struct {
	NumFiles int `json:"files"`
	NumLines int `json:"lines"`
	NumNosec int `json:"nosec"`
	NumFound int `json:"found"`
}

// The Analyzer object is the main object of GAS. It has methods to traverse an
// AST and invoke the correct checking rules on each node as required.
type Analyzer struct {
	ignoreNosec bool
	ruleset     RuleSet
	context     *Context
	logger      *log.Logger
	Issues      []*Issue `json:"issues"`
	Stats       *Metrics `json:"metrics"`
}

// NewAnalyzer builds a new analyzer.
func NewAnalyzer(conf map[string]interface{}, logger *log.Logger) Analyzer {
	if logger == nil {
		logger = log.New(os.Stdout, "[gas]", 0)
	}
	a := Analyzer{
		ignoreNosec: conf["ignoreNosec"].(bool),
		ruleset:     make(RuleSet),
		context:     &Context{nil, nil, nil, nil, nil, nil, nil},
		logger:      logger,
		Issues:      make([]*Issue, 0, 16),
		Stats:       &Metrics{0, 0, 0, 0},
	}

	// TODO(tkelsey): use the inc/exc lists

	return a
}

func (gas *Analyzer) process(filename string, source interface{}) error {
	mode := parser.ParseComments
	gas.context.FileSet = token.NewFileSet()
	root, err := parser.ParseFile(gas.context.FileSet, filename, source, mode)
	if err == nil {
		gas.context.Comments = ast.NewCommentMap(gas.context.FileSet, root, root.Comments)
		gas.context.Root = root

		// here we get type info
		gas.context.Info = &types.Info{
			Types:      make(map[ast.Expr]types.TypeAndValue),
			Defs:       make(map[*ast.Ident]types.Object),
			Uses:       make(map[*ast.Ident]types.Object),
			Selections: make(map[*ast.SelectorExpr]*types.Selection),
			Scopes:     make(map[ast.Node]*types.Scope),
			Implicits:  make(map[ast.Node]types.Object),
		}

		conf := types.Config{Importer: importer.Default()}
		gas.context.Pkg, err = conf.Check("pkg", gas.context.FileSet, []*ast.File{root}, gas.context.Info)
		if err != nil {
			// TODO(gm) Type checker not currently considering all files within a package
			// see: issue #113
			gas.logger.Printf(`Error during type checking: "%s"`, err)
			err = nil
		}

		gas.context.Imports = NewImportInfo()
		for _, pkg := range gas.context.Pkg.Imports() {
			gas.context.Imports.Imported[pkg.Path()] = pkg.Name()
		}
		ast.Walk(gas, root)
		gas.Stats.NumFiles++
	}
	return err
}

// AddRule adds a rule into a rule set list mapped to the given AST node's type.
// The node is only needed for its type and is not otherwise used.
func (gas *Analyzer) AddRule(r Rule, nodes []ast.Node) {
	for _, n := range nodes {
		t := reflect.TypeOf(n)
		if val, ok := gas.ruleset[t]; ok {
			gas.ruleset[t] = append(val, r)
		} else {
			gas.ruleset[t] = []Rule{r}
		}
	}
}

// Process reads in a source file, converts it to an AST and traverses it.
// Rule methods added with AddRule will be invoked as necessary.
func (gas *Analyzer) Process(filename string) error {
	err := gas.process(filename, nil)
	fun := func(f *token.File) bool {
		gas.Stats.NumLines += f.LineCount()
		return true
	}
	gas.context.FileSet.Iterate(fun)
	return err
}

// ProcessSource will convert a source code string into an AST and traverse it.
// Rule methods added with AddRule will be invoked as necessary. The string is
// identified by the filename given but no file IO will be done.
func (gas *Analyzer) ProcessSource(filename string, source string) error {
	err := gas.process(filename, source)
	fun := func(f *token.File) bool {
		gas.Stats.NumLines += f.LineCount()
		return true
	}
	gas.context.FileSet.Iterate(fun)
	return err
}

// ignore a node (and sub-tree) if it is tagged with a "#nosec" comment
func (gas *Analyzer) ignore(n ast.Node) bool {
	if groups, ok := gas.context.Comments[n]; ok && !gas.ignoreNosec {
		for _, group := range groups {
			if strings.Contains(group.Text(), "#nosec") {
				gas.Stats.NumNosec++
				return true
			}
		}
	}
	return false
}

// Visit runs the GAS visitor logic over an AST created by parsing go code.
// Rule methods added with AddRule will be invoked as necessary.
func (gas *Analyzer) Visit(n ast.Node) ast.Visitor {
	if !gas.ignore(n) {

		// Track aliased and initialization imports
		if imported, ok := n.(*ast.ImportSpec); ok {
			path := strings.Trim(imported.Path.Value, `"`)
			if imported.Name != nil {
				if imported.Name.Name == "_" {
					// Initialization import
					gas.context.Imports.InitOnly[path] = true
				} else {
					// Aliased import
					gas.context.Imports.Aliased[path] = imported.Name.Name
				}
			}
			// unsafe is not included in Package.Imports()
			if path == "unsafe" {
				gas.context.Imports.Imported[path] = path
			}
		}

		if val, ok := gas.ruleset[reflect.TypeOf(n)]; ok {
			for _, rule := range val {
				ret, err := rule.Match(n, gas.context)
				if err != nil {
					file, line := GetLocation(n, gas.context)
					file = path.Base(file)
					gas.logger.Printf("Rule error: %v => %s (%s:%d)\n", reflect.TypeOf(rule), err, file, line)
				}
				if ret != nil {
					gas.Issues = append(gas.Issues, ret)
					gas.Stats.NumFound++
				}
			}
		}
		return gas
	}
	return nil
}

@@ -1,73 +0,0 @@
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package core

import (
	"go/ast"
)

type set map[string]bool

// CallList is used to check for usage of specific packages
// and functions.
type CallList map[string]set

// NewCallList creates a new empty CallList
func NewCallList() CallList {
	return make(CallList)
}

// AddAll will add several calls to the call list at once
func (c CallList) AddAll(selector string, idents ...string) {
	for _, ident := range idents {
		c.Add(selector, ident)
	}
}

// Add a selector and call to the call list
func (c CallList) Add(selector, ident string) {
	if _, ok := c[selector]; !ok {
		c[selector] = make(set)
	}
	c[selector][ident] = true
}

// Contains returns true if the package and function are
// members of this call list.
func (c CallList) Contains(selector, ident string) bool {
	if idents, ok := c[selector]; ok {
		_, found := idents[ident]
		return found
	}
	return false
}

// ContainsCallExpr resolves the call expression name and type
// or package and determines if it exists within the CallList
func (c CallList) ContainsCallExpr(n ast.Node, ctx *Context) bool {
	selector, ident, err := GetCallInfo(n, ctx)
	if err != nil {
		return false
	}
	// Try direct resolution
	if c.Contains(selector, ident) {
		return true
	}

	// Also support explicit path
	if path, ok := GetImportPath(selector, ctx); ok {
		return c.Contains(path, ident)
	}
	return false
}

@@ -1,220 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package core

import (
	"fmt"
	"go/ast"
	"go/token"
	"go/types"
	"reflect"
	"regexp"
	"strconv"
	"strings"
)

// helpful "canned" matching routines ----------------------------------------

func selectName(n ast.Node, s reflect.Type) (string, bool) {
	t := reflect.TypeOf(&ast.SelectorExpr{})
	if node, ok := SimpleSelect(n, s, t).(*ast.SelectorExpr); ok {
		t = reflect.TypeOf(&ast.Ident{})
		if ident, ok := SimpleSelect(node.X, t).(*ast.Ident); ok {
			return strings.Join([]string{ident.Name, node.Sel.Name}, "."), ok
		}
	}
	return "", false
}

// MatchCall will match an ast.CallExpr if its method name obeys the given regex.
func MatchCall(n ast.Node, r *regexp.Regexp) *ast.CallExpr {
	t := reflect.TypeOf(&ast.CallExpr{})
	if name, ok := selectName(n, t); ok && r.MatchString(name) {
		return n.(*ast.CallExpr)
	}
	return nil
}

// MatchCallByPackage ensures that the specified package is imported,
// adjusts the name for any aliases and ignores cases that are
// initialization only imports.
//
// Usage:
//	node, matched := MatchCallByPackage(n, ctx, "math/rand", "Read")
//
func MatchCallByPackage(n ast.Node, c *Context, pkg string, names ...string) (*ast.CallExpr, bool) {

	importedName, found := GetImportedName(pkg, c)
	if !found {
		return nil, false
	}

	if callExpr, ok := n.(*ast.CallExpr); ok {
		packageName, callName, err := GetCallInfo(callExpr, c)
		if err != nil {
			return nil, false
		}
		if packageName == importedName {
			for _, name := range names {
				if callName == name {
					return callExpr, true
				}
			}
		}
	}
	return nil, false
}

// MatchCallByType ensures that the node is a call expression to a
// specific object type.
//
// Usage:
//	node, matched := MatchCallByType(n, ctx, "bytes.Buffer", "WriteTo", "Write")
//
func MatchCallByType(n ast.Node, ctx *Context, requiredType string, calls ...string) (*ast.CallExpr, bool) {
	if callExpr, ok := n.(*ast.CallExpr); ok {
		typeName, callName, err := GetCallInfo(callExpr, ctx)
		if err != nil {
			return nil, false
		}
		if typeName == requiredType {
			for _, call := range calls {
				if call == callName {
					return callExpr, true
				}
			}
		}
	}
	return nil, false
}

// MatchCompLit will match an ast.CompositeLit if its string value obeys the given regex.
func MatchCompLit(n ast.Node, r *regexp.Regexp) *ast.CompositeLit {
	t := reflect.TypeOf(&ast.CompositeLit{})
	if name, ok := selectName(n, t); ok && r.MatchString(name) {
		return n.(*ast.CompositeLit)
	}
	return nil
}

// GetInt will read and return an integer value from an ast.BasicLit
func GetInt(n ast.Node) (int64, error) {
	if node, ok := n.(*ast.BasicLit); ok && node.Kind == token.INT {
		return strconv.ParseInt(node.Value, 0, 64)
	}
	return 0, fmt.Errorf("Unexpected AST node type: %T", n)
}

// GetFloat will read and return a float value from an ast.BasicLit
func GetFloat(n ast.Node) (float64, error) {
	if node, ok := n.(*ast.BasicLit); ok && node.Kind == token.FLOAT {
		return strconv.ParseFloat(node.Value, 64)
	}
	return 0.0, fmt.Errorf("Unexpected AST node type: %T", n)
}

// GetChar will read and return a char value from an ast.BasicLit
func GetChar(n ast.Node) (byte, error) {
	if node, ok := n.(*ast.BasicLit); ok && node.Kind == token.CHAR {
		return node.Value[0], nil
	}
	return 0, fmt.Errorf("Unexpected AST node type: %T", n)
}

// GetString will read and return a string value from an ast.BasicLit
func GetString(n ast.Node) (string, error) {
	if node, ok := n.(*ast.BasicLit); ok && node.Kind == token.STRING {
		return strconv.Unquote(node.Value)
	}
	return "", fmt.Errorf("Unexpected AST node type: %T", n)
}

// GetCallObject returns the object and call expression and associated
// object for a given AST node. nil, nil will be returned if the
// object cannot be resolved.
func GetCallObject(n ast.Node, ctx *Context) (*ast.CallExpr, types.Object) {
	switch node := n.(type) {
	case *ast.CallExpr:
		switch fn := node.Fun.(type) {
		case *ast.Ident:
			return node, ctx.Info.Uses[fn]
		case *ast.SelectorExpr:
			return node, ctx.Info.Uses[fn.Sel]
		}
	}
	return nil, nil
}

// GetCallInfo returns the package or type and name associated with a
// call expression.
func GetCallInfo(n ast.Node, ctx *Context) (string, string, error) {
	switch node := n.(type) {
	case *ast.CallExpr:
		switch fn := node.Fun.(type) {
		case *ast.SelectorExpr:
			switch expr := fn.X.(type) {
			case *ast.Ident:
				if expr.Obj != nil && expr.Obj.Kind == ast.Var {
					t := ctx.Info.TypeOf(expr)
					if t != nil {
						return t.String(), fn.Sel.Name, nil
					}
					return "undefined", fn.Sel.Name, fmt.Errorf("missing type info")
				}
				return expr.Name, fn.Sel.Name, nil
			}
		case *ast.Ident:
			return ctx.Pkg.Name(), fn.Name, nil
		}
	}
	return "", "", fmt.Errorf("unable to determine call info")
}

// GetImportedName returns the name used for the package within the
// code. It will resolve aliases and ignores initialization only imports.
func GetImportedName(path string, ctx *Context) (string, bool) {
	importName, imported := ctx.Imports.Imported[path]
	if !imported {
		return "", false
	}

	if _, initonly := ctx.Imports.InitOnly[path]; initonly {
		return "", false
	}

	if alias, ok := ctx.Imports.Aliased[path]; ok {
		importName = alias
	}
	return importName, true
}

// GetImportPath resolves the full import path of an identifier based on
// the imports in the current context.
func GetImportPath(name string, ctx *Context) (string, bool) {
	for path := range ctx.Imports.Imported {
		if imported, ok := GetImportedName(path, ctx); ok && imported == name {
			return path, true
		}
	}
	return "", false
}

// GetLocation returns the filename and line number of an ast.Node
func GetLocation(n ast.Node, ctx *Context) (string, int) {
	fobj := ctx.FileSet.File(n.Pos())
	return fobj.Name(), fobj.Line(n.Pos())
}

@@ -1,108 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package core

import (
	"encoding/json"
	"fmt"
	"go/ast"
	"os"
)

// Score type used by severity and confidence values
type Score int

const (
	Low    Score = iota // Low value
	Medium              // Medium value
	High                // High value
)

// An Issue is returned by a GAS rule if it discovers an issue with the scanned code.
type Issue struct {
	Severity   Score  `json:"severity"`   // issue severity (how problematic it is)
	Confidence Score  `json:"confidence"` // issue confidence (how sure we are we found it)
	What       string `json:"details"`    // Human readable explanation
	File       string `json:"file"`       // File name we found it in
	Code       string `json:"code"`       // Impacted code line
	Line       int    `json:"line"`       // Line number in file
}

// MetaData is embedded in all GAS rules. The Severity, Confidence and What message
// will be passed through to reported issues.
type MetaData struct {
	Severity   Score
	Confidence Score
	What       string
}

// MarshalJSON is used to convert a Score object into a JSON representation
func (c Score) MarshalJSON() ([]byte, error) {
	return json.Marshal(c.String())
}

// String converts a Score into a string
func (c Score) String() string {
	switch c {
	case High:
		return "HIGH"
	case Medium:
		return "MEDIUM"
	case Low:
		return "LOW"
	}
	return "UNDEFINED"
}

func codeSnippet(file *os.File, start int64, end int64, n ast.Node) (string, error) {
	if n == nil {
		return "", fmt.Errorf("Invalid AST node provided")
	}

	size := (int)(end - start) // Go bug, os.File.Read should return int64 ...
	file.Seek(start, 0)

	buf := make([]byte, size)
	if nread, err := file.Read(buf); err != nil || nread != size {
		return "", fmt.Errorf("Unable to read code")
	}
	return string(buf), nil
}

// NewIssue creates a new Issue
func NewIssue(ctx *Context, node ast.Node, desc string, severity Score, confidence Score) *Issue {
	var code string
	fobj := ctx.FileSet.File(node.Pos())
	name := fobj.Name()
	line := fobj.Line(node.Pos())

	if file, err := os.Open(fobj.Name()); err == nil {
		defer file.Close()
		s := (int64)(fobj.Position(node.Pos()).Offset) // Go bug, should be int64
		e := (int64)(fobj.Position(node.End()).Offset) // Go bug, should be int64
		code, err = codeSnippet(file, s, e, node)
		if err != nil {
			code = err.Error()
		}
	}

	return &Issue{
		File:       name,
		Line:       line,
		What:       desc,
		Confidence: confidence,
		Severity:   severity,
		Code:       code,
	}
}

@@ -1,81 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package core

import "go/ast"

func resolveIdent(n *ast.Ident, c *Context) bool {
	if n.Obj == nil || n.Obj.Kind != ast.Var {
		return true
	}
	if node, ok := n.Obj.Decl.(ast.Node); ok {
		return TryResolve(node, c)
	}
	return false
}

func resolveAssign(n *ast.AssignStmt, c *Context) bool {
	for _, arg := range n.Rhs {
		if !TryResolve(arg, c) {
			return false
		}
	}
	return true
}

func resolveCompLit(n *ast.CompositeLit, c *Context) bool {
	for _, arg := range n.Elts {
		if !TryResolve(arg, c) {
			return false
		}
	}
	return true
}

func resolveBinExpr(n *ast.BinaryExpr, c *Context) bool {
	return (TryResolve(n.X, c) && TryResolve(n.Y, c))
}

func resolveCallExpr(n *ast.CallExpr, c *Context) bool {
	// TODO(tkelsey): next step, full function resolution
	return false
}

// TryResolve will attempt, given a subtree starting at some AST node, to resolve
// all values contained within to a known constant. It is used to check for any
// unknown values in compound expressions.
func TryResolve(n ast.Node, c *Context) bool {
	switch node := n.(type) {
	case *ast.BasicLit:
		return true

	case *ast.CompositeLit:
		return resolveCompLit(node, c)

	case *ast.Ident:
		return resolveIdent(node, c)

	case *ast.AssignStmt:
		return resolveAssign(node, c)

	case *ast.CallExpr:
		return resolveCallExpr(node, c)

	case *ast.BinaryExpr:
		return resolveBinExpr(node, c)
	}

	return false
}

@@ -1,404 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package core

import (
	"fmt"
	"go/ast"
	"reflect"
)

// SelectFunc is like an AST visitor, but has a richer interface. It
// is called with the current ast.Node being visited and that node's depth in
// the tree. The function can return true to continue traversing the tree, or
// false to end traversal here.
type SelectFunc func(ast.Node, int) bool

func walkIdentList(list []*ast.Ident, depth int, fun SelectFunc) {
	for _, x := range list {
		depthWalk(x, depth, fun)
	}
}

func walkExprList(list []ast.Expr, depth int, fun SelectFunc) {
	for _, x := range list {
		depthWalk(x, depth, fun)
	}
}

func walkStmtList(list []ast.Stmt, depth int, fun SelectFunc) {
	for _, x := range list {
		depthWalk(x, depth, fun)
	}
}

func walkDeclList(list []ast.Decl, depth int, fun SelectFunc) {
	for _, x := range list {
		depthWalk(x, depth, fun)
	}
}

func depthWalk(node ast.Node, depth int, fun SelectFunc) {
	if !fun(node, depth) {
		return
	}

	switch n := node.(type) {
	// Comments and fields
	case *ast.Comment:

	case *ast.CommentGroup:
		for _, c := range n.List {
			depthWalk(c, depth+1, fun)
		}

	case *ast.Field:
		if n.Doc != nil {
			depthWalk(n.Doc, depth+1, fun)
		}
		walkIdentList(n.Names, depth+1, fun)
		depthWalk(n.Type, depth+1, fun)
		if n.Tag != nil {
			depthWalk(n.Tag, depth+1, fun)
		}
		if n.Comment != nil {
			depthWalk(n.Comment, depth+1, fun)
		}

	case *ast.FieldList:
		for _, f := range n.List {
			depthWalk(f, depth+1, fun)
		}

	// Expressions
	case *ast.BadExpr, *ast.Ident, *ast.BasicLit:

	case *ast.Ellipsis:
		if n.Elt != nil {
			depthWalk(n.Elt, depth+1, fun)
		}

	case *ast.FuncLit:
		depthWalk(n.Type, depth+1, fun)
		depthWalk(n.Body, depth+1, fun)

	case *ast.CompositeLit:
		if n.Type != nil {
			depthWalk(n.Type, depth+1, fun)
		}
		walkExprList(n.Elts, depth+1, fun)

	case *ast.ParenExpr:
		depthWalk(n.X, depth+1, fun)

	case *ast.SelectorExpr:
		depthWalk(n.X, depth+1, fun)
		depthWalk(n.Sel, depth+1, fun)

	case *ast.IndexExpr:
		depthWalk(n.X, depth+1, fun)
		depthWalk(n.Index, depth+1, fun)

	case *ast.SliceExpr:
		depthWalk(n.X, depth+1, fun)
		if n.Low != nil {
			depthWalk(n.Low, depth+1, fun)
		}
		if n.High != nil {
			depthWalk(n.High, depth+1, fun)
		}
		if n.Max != nil {
			depthWalk(n.Max, depth+1, fun)
		}

	case *ast.TypeAssertExpr:
		depthWalk(n.X, depth+1, fun)
		if n.Type != nil {
			depthWalk(n.Type, depth+1, fun)
		}

	case *ast.CallExpr:
		depthWalk(n.Fun, depth+1, fun)
		walkExprList(n.Args, depth+1, fun)

	case *ast.StarExpr:
|
|
||||||
depthWalk(n.X, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.UnaryExpr:
|
|
||||||
depthWalk(n.X, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.BinaryExpr:
|
|
||||||
depthWalk(n.X, depth+1, fun)
|
|
||||||
depthWalk(n.Y, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.KeyValueExpr:
|
|
||||||
depthWalk(n.Key, depth+1, fun)
|
|
||||||
depthWalk(n.Value, depth+1, fun)
|
|
||||||
|
|
||||||
// Types
|
|
||||||
case *ast.ArrayType:
|
|
||||||
if n.Len != nil {
|
|
||||||
depthWalk(n.Len, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Elt, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.StructType:
|
|
||||||
depthWalk(n.Fields, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.FuncType:
|
|
||||||
if n.Params != nil {
|
|
||||||
depthWalk(n.Params, depth+1, fun)
|
|
||||||
}
|
|
||||||
if n.Results != nil {
|
|
||||||
depthWalk(n.Results, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
case *ast.InterfaceType:
|
|
||||||
depthWalk(n.Methods, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.MapType:
|
|
||||||
depthWalk(n.Key, depth+1, fun)
|
|
||||||
depthWalk(n.Value, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.ChanType:
|
|
||||||
depthWalk(n.Value, depth+1, fun)
|
|
||||||
|
|
||||||
// Statements
|
|
||||||
case *ast.BadStmt:
|
|
||||||
|
|
||||||
case *ast.DeclStmt:
|
|
||||||
depthWalk(n.Decl, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.EmptyStmt:
|
|
||||||
|
|
||||||
case *ast.LabeledStmt:
|
|
||||||
depthWalk(n.Label, depth+1, fun)
|
|
||||||
depthWalk(n.Stmt, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.ExprStmt:
|
|
||||||
depthWalk(n.X, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.SendStmt:
|
|
||||||
depthWalk(n.Chan, depth+1, fun)
|
|
||||||
depthWalk(n.Value, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.IncDecStmt:
|
|
||||||
depthWalk(n.X, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.AssignStmt:
|
|
||||||
walkExprList(n.Lhs, depth+1, fun)
|
|
||||||
walkExprList(n.Rhs, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.GoStmt:
|
|
||||||
depthWalk(n.Call, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.DeferStmt:
|
|
||||||
depthWalk(n.Call, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.ReturnStmt:
|
|
||||||
walkExprList(n.Results, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.BranchStmt:
|
|
||||||
if n.Label != nil {
|
|
||||||
depthWalk(n.Label, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
case *ast.BlockStmt:
|
|
||||||
walkStmtList(n.List, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.IfStmt:
|
|
||||||
if n.Init != nil {
|
|
||||||
depthWalk(n.Init, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Cond, depth+1, fun)
|
|
||||||
depthWalk(n.Body, depth+1, fun)
|
|
||||||
if n.Else != nil {
|
|
||||||
depthWalk(n.Else, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
case *ast.CaseClause:
|
|
||||||
walkExprList(n.List, depth+1, fun)
|
|
||||||
walkStmtList(n.Body, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.SwitchStmt:
|
|
||||||
if n.Init != nil {
|
|
||||||
depthWalk(n.Init, depth+1, fun)
|
|
||||||
}
|
|
||||||
if n.Tag != nil {
|
|
||||||
depthWalk(n.Tag, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Body, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.TypeSwitchStmt:
|
|
||||||
if n.Init != nil {
|
|
||||||
depthWalk(n.Init, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Assign, depth+1, fun)
|
|
||||||
depthWalk(n.Body, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.CommClause:
|
|
||||||
if n.Comm != nil {
|
|
||||||
depthWalk(n.Comm, depth+1, fun)
|
|
||||||
}
|
|
||||||
walkStmtList(n.Body, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.SelectStmt:
|
|
||||||
depthWalk(n.Body, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.ForStmt:
|
|
||||||
if n.Init != nil {
|
|
||||||
depthWalk(n.Init, depth+1, fun)
|
|
||||||
}
|
|
||||||
if n.Cond != nil {
|
|
||||||
depthWalk(n.Cond, depth+1, fun)
|
|
||||||
}
|
|
||||||
if n.Post != nil {
|
|
||||||
depthWalk(n.Post, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Body, depth+1, fun)
|
|
||||||
|
|
||||||
case *ast.RangeStmt:
|
|
||||||
if n.Key != nil {
|
|
||||||
depthWalk(n.Key, depth+1, fun)
|
|
||||||
}
|
|
||||||
if n.Value != nil {
|
|
||||||
depthWalk(n.Value, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.X, depth+1, fun)
|
|
||||||
depthWalk(n.Body, depth+1, fun)
|
|
||||||
|
|
||||||
// Declarations
|
|
||||||
case *ast.ImportSpec:
|
|
||||||
if n.Doc != nil {
|
|
||||||
depthWalk(n.Doc, depth+1, fun)
|
|
||||||
}
|
|
||||||
if n.Name != nil {
|
|
||||||
depthWalk(n.Name, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Path, depth+1, fun)
|
|
||||||
if n.Comment != nil {
|
|
||||||
depthWalk(n.Comment, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
case *ast.ValueSpec:
|
|
||||||
if n.Doc != nil {
|
|
||||||
depthWalk(n.Doc, depth+1, fun)
|
|
||||||
}
|
|
||||||
walkIdentList(n.Names, depth+1, fun)
|
|
||||||
if n.Type != nil {
|
|
||||||
depthWalk(n.Type, depth+1, fun)
|
|
||||||
}
|
|
||||||
walkExprList(n.Values, depth+1, fun)
|
|
||||||
if n.Comment != nil {
|
|
||||||
depthWalk(n.Comment, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
case *ast.TypeSpec:
|
|
||||||
if n.Doc != nil {
|
|
||||||
depthWalk(n.Doc, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Name, depth+1, fun)
|
|
||||||
depthWalk(n.Type, depth+1, fun)
|
|
||||||
if n.Comment != nil {
|
|
||||||
depthWalk(n.Comment, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
case *ast.BadDecl:
|
|
||||||
|
|
||||||
case *ast.GenDecl:
|
|
||||||
if n.Doc != nil {
|
|
||||||
depthWalk(n.Doc, depth+1, fun)
|
|
||||||
}
|
|
||||||
for _, s := range n.Specs {
|
|
||||||
depthWalk(s, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
case *ast.FuncDecl:
|
|
||||||
if n.Doc != nil {
|
|
||||||
depthWalk(n.Doc, depth+1, fun)
|
|
||||||
}
|
|
||||||
if n.Recv != nil {
|
|
||||||
depthWalk(n.Recv, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Name, depth+1, fun)
|
|
||||||
depthWalk(n.Type, depth+1, fun)
|
|
||||||
if n.Body != nil {
|
|
||||||
depthWalk(n.Body, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Files and packages
|
|
||||||
case *ast.File:
|
|
||||||
if n.Doc != nil {
|
|
||||||
depthWalk(n.Doc, depth+1, fun)
|
|
||||||
}
|
|
||||||
depthWalk(n.Name, depth+1, fun)
|
|
||||||
walkDeclList(n.Decls, depth+1, fun)
|
|
||||||
// don't walk n.Comments - they have been
|
|
||||||
// visited already through the individual
|
|
||||||
// nodes
|
|
||||||
|
|
||||||
case *ast.Package:
|
|
||||||
for _, f := range n.Files {
|
|
||||||
depthWalk(f, depth+1, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
default:
|
|
||||||
panic(fmt.Sprintf("gas.depthWalk: unexpected node type %T", n))
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
type Selector interface {
|
|
||||||
Final(ast.Node)
|
|
||||||
Partial(ast.Node) bool
|
|
||||||
}
|
|
||||||
|
|
||||||
func Select(s Selector, n ast.Node, bits ...reflect.Type) {
|
|
||||||
fun := func(n ast.Node, d int) bool {
|
|
||||||
if d < len(bits) && reflect.TypeOf(n) == bits[d] {
|
|
||||||
if d == len(bits)-1 {
|
|
||||||
s.Final(n)
|
|
||||||
return false
|
|
||||||
} else if s.Partial(n) {
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
}
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
depthWalk(n, 0, fun)
|
|
||||||
}
|
|
||||||
|
|
||||||
// SimpleSelect will try to match a path through a sub-tree starting at a given AST node.
|
|
||||||
// The type of each node in the path at a given depth must match its entry in list of
|
|
||||||
// node types given.
|
|
||||||
func SimpleSelect(n ast.Node, bits ...reflect.Type) ast.Node {
|
|
||||||
var found ast.Node
|
|
||||||
fun := func(n ast.Node, d int) bool {
|
|
||||||
if found != nil {
|
|
||||||
return false // short cut logic if we have found a match
|
|
||||||
}
|
|
||||||
|
|
||||||
if d < len(bits) && reflect.TypeOf(n) == bits[d] {
|
|
||||||
if d == len(bits)-1 {
|
|
||||||
found = n
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
return true
|
|
||||||
}
|
|
||||||
return false
|
|
||||||
}
|
|
||||||
|
|
||||||
depthWalk(n, 0, fun)
|
|
||||||
return found
|
|
||||||
}
|
|
||||||
|
|
@ -1,87 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"sort"
	"strings"

	"github.com/ryanuber/go-glob"
)

// fileList uses a map for patterns to ensure each pattern only
// appears once
type fileList struct {
	patterns map[string]struct{}
}

func newFileList(paths ...string) *fileList {
	f := &fileList{
		patterns: make(map[string]struct{}),
	}
	for _, p := range paths {
		f.patterns[p] = struct{}{}
	}
	return f
}

func (f *fileList) String() string {
	ps := make([]string, 0, len(f.patterns))
	for p := range f.patterns {
		ps = append(ps, p)
	}
	sort.Strings(ps)
	return strings.Join(ps, ", ")
}

func (f *fileList) Set(path string) error {
	if path == "" {
		// don't bother adding the empty path
		return nil
	}
	f.patterns[path] = struct{}{}
	return nil
}

func (f fileList) Contains(path string) bool {
	for p := range f.patterns {
		if strings.Contains(p, glob.GLOB) {
			if glob.Glob(p, path) {
				if logger != nil {
					logger.Printf("skipping: %s\n", path)
				}
				return true
			}
		} else {
			// check if only a sub-folder of the path is excluded
			if strings.Contains(path, p) {
				if logger != nil {
					logger.Printf("skipping: %s\n", path)
				}
				return true
			}
		}
	}
	return false
}

/*
func (f fileList) Dump() {
	for k := range f.paths {
		println(k)
	}
}
*/
@ -1,293 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"encoding/json"
	"flag"
	"fmt"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"sort"
	"strings"

	gas "github.com/GoASTScanner/gas/core"
	"github.com/GoASTScanner/gas/output"
)

type recursion bool

const (
	recurse   recursion = true
	noRecurse recursion = false
)

var (
	// #nosec flag
	flagIgnoreNoSec = flag.Bool("nosec", false, "Ignores #nosec comments when set")

	// format output
	flagFormat = flag.String("fmt", "text", "Set output format. Valid options are: json, csv, html, or text")

	// output file
	flagOutput = flag.String("out", "", "Set output file for results")

	// config file
	flagConfig = flag.String("conf", "", "Path to optional config file")

	// quiet
	flagQuiet = flag.Bool("quiet", false, "Only show output when errors are found")

	usageText = `
GAS - Go AST Scanner

Gas analyzes Go source code to look for common programming mistakes that
can lead to security problems.

USAGE:

	# Check a single Go file
	$ gas example.go

	# Check all files under the current directory and save results in
	# json format.
	$ gas -fmt=json -out=results.json ./...

	# Run a specific set of rules (by default all rules will be run):
	$ gas -include=G101,G203,G401 ./...

	# Run all rules except the provided
	$ gas -exclude=G101 ./...

`

	logger *log.Logger
)

func extendConfList(conf map[string]interface{}, name string, inputStr string) {
	if inputStr == "" {
		conf[name] = []string{}
	} else {
		input := strings.Split(inputStr, ",")
		if val, ok := conf[name]; ok {
			if data, ok := val.(*[]string); ok {
				conf[name] = append(*data, input...)
			} else {
				logger.Fatal("Config item must be a string list: ", name)
			}
		} else {
			conf[name] = input
		}
	}
}

func buildConfig(incRules string, excRules string) map[string]interface{} {
	config := make(map[string]interface{})
	if flagConfig != nil && *flagConfig != "" { // parse config if we have one
		if data, err := ioutil.ReadFile(*flagConfig); err == nil {
			if err := json.Unmarshal(data, &config); err != nil {
				logger.Fatal("Could not parse JSON config: ", *flagConfig, ": ", err)
			}
		} else {
			logger.Fatal("Could not read config file: ", *flagConfig)
		}
	}

	// add in CLI include and exclude data
	extendConfList(config, "include", incRules)
	extendConfList(config, "exclude", excRules)

	// override ignoreNosec if given on CLI
	if flagIgnoreNoSec != nil {
		config["ignoreNosec"] = *flagIgnoreNoSec
	} else {
		val, ok := config["ignoreNosec"]
		if !ok {
			config["ignoreNosec"] = false
		} else if _, ok := val.(bool); !ok {
			logger.Fatal("Config value must be a bool: 'ignoreNosec'")
		}
	}

	return config
}

// #nosec
func usage() {
	fmt.Fprintln(os.Stderr, usageText)
	fmt.Fprint(os.Stderr, "OPTIONS:\n\n")
	flag.PrintDefaults()
	fmt.Fprint(os.Stderr, "\n\nRULES:\n\n")

	// sorted rule list for ease of reading
	rl := GetFullRuleList()
	keys := make([]string, 0, len(rl))
	for key := range rl {
		keys = append(keys, key)
	}
	sort.Strings(keys)
	for _, k := range keys {
		v := rl[k]
		fmt.Fprintf(os.Stderr, "\t%s: %s\n", k, v.description)
	}
	fmt.Fprint(os.Stderr, "\n")
}

func main() {
	// Setup usage description
	flag.Usage = usage

	// Exclude files
	excluded := newFileList("*_test.go")
	flag.Var(excluded, "skip", "File pattern to exclude from scan. Uses simple * globs and requires full or partial match")

	incRules := ""
	flag.StringVar(&incRules, "include", "", "Comma separated list of rules IDs to include. (see rule list)")

	excRules := ""
	flag.StringVar(&excRules, "exclude", "", "Comma separated list of rules IDs to exclude. (see rule list)")

	// Custom commands / utilities to run instead of default analyzer
	tools := newUtils()
	flag.Var(tools, "tool", "GAS utilities to assist with rule development")

	// Setup logging
	logger = log.New(os.Stderr, "[gas] ", log.LstdFlags)

	// Parse command line arguments
	flag.Parse()

	// Ensure at least one file was specified
	if flag.NArg() == 0 {
		fmt.Fprintf(os.Stderr, "\nError: FILE [FILE...] or './...' expected\n")
		flag.Usage()
		os.Exit(1)
	}

	// Run utils instead of analysis
	if len(tools.call) > 0 {
		tools.run(flag.Args()...)
		os.Exit(0)
	}

	// Setup analyzer
	config := buildConfig(incRules, excRules)
	analyzer := gas.NewAnalyzer(config, logger)
	AddRules(&analyzer, config)

	toAnalyze := getFilesToAnalyze(flag.Args(), excluded)

	for _, file := range toAnalyze {
		logger.Printf(`Processing "%s"...`, file)
		if err := analyzer.Process(file); err != nil {
			logger.Printf(`Failed to process: "%s"`, file)
			logger.Println(err)
			logger.Fatalf(`Halting execution.`)
		}
	}

	issuesFound := len(analyzer.Issues) > 0
	// Exit quietly if nothing was found
	if !issuesFound && *flagQuiet {
		os.Exit(0)
	}

	// Create output report
	if *flagOutput != "" {
		outfile, err := os.Create(*flagOutput)
		if err != nil {
			logger.Fatalf("Couldn't open: %s for writing. Reason - %s", *flagOutput, err)
		}
		defer outfile.Close()
		output.CreateReport(outfile, *flagFormat, &analyzer)
	} else {
		output.CreateReport(os.Stdout, *flagFormat, &analyzer)
	}

	// Do we have an issue? If so exit 1
	if issuesFound {
		os.Exit(1)
	}
}

// getFilesToAnalyze expands the given paths into the list of files to scan,
// honoring the excluded list and Go-style "./..." recursion.
func getFilesToAnalyze(paths []string, excluded *fileList) []string {
	//log.Println("getFilesToAnalyze: start")
	var toAnalyze []string
	for _, relativePath := range paths {
		//log.Printf("getFilesToAnalyze: processing \"%s\"\n", path)
		// get the absolute path before doing anything else
		path, err := filepath.Abs(relativePath)
		if err != nil {
			log.Fatal(err)
		}
		if filepath.Base(relativePath) == "..." {
			toAnalyze = append(
				toAnalyze,
				listFiles(filepath.Dir(path), recurse, excluded)...,
			)
		} else {
			var (
				finfo os.FileInfo
				err   error
			)
			if finfo, err = os.Stat(path); err != nil {
				logger.Fatal(err)
			}
			if !finfo.IsDir() {
				if shouldInclude(path, excluded) {
					toAnalyze = append(toAnalyze, path)
				}
			} else {
				// append rather than overwrite, so earlier paths survive
				toAnalyze = append(toAnalyze, listFiles(path, noRecurse, excluded)...)
			}
		}
	}
	//log.Println("getFilesToAnalyze: end")
	return toAnalyze
}

// listFiles returns a list of all files found that pass the shouldInclude check.
// If doRecursiveWalk is true, it will walk the tree rooted at absPath, otherwise it
// will only include files directly within the dir referenced by absPath.
func listFiles(absPath string, doRecursiveWalk recursion, excluded *fileList) []string {
	var files []string

	walk := func(path string, info os.FileInfo, err error) error {
		// skip subdirectories in non-recursive mode, but not the root itself
		if info.IsDir() && doRecursiveWalk == noRecurse && path != absPath {
			return filepath.SkipDir
		}
		if shouldInclude(path, excluded) {
			files = append(files, path)
		}
		return nil
	}

	if err := filepath.Walk(absPath, walk); err != nil {
		log.Fatal(err)
	}
	return files
}

// shouldInclude checks if a specific path which is expected to reference
// a regular file should be included
func shouldInclude(path string, excluded *fileList) bool {
	return filepath.Ext(path) == ".go" && !excluded.Contains(path)
}
@ -1,116 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package output

import (
	"encoding/csv"
	"encoding/json"
	htmlTemplate "html/template"
	"io"
	"strconv"
	plainTemplate "text/template"

	gas "github.com/GoASTScanner/gas/core"
)

// ReportFormat is the output format for reported issues
type ReportFormat int

const (
	ReportText ReportFormat = iota // Plain text format
	ReportJSON                     // JSON format
	ReportCSV                      // CSV format
)

var text = `Results:
{{ range $index, $issue := .Issues }}
[{{ $issue.File }}:{{ $issue.Line }}] - {{ $issue.What }} (Confidence: {{ $issue.Confidence}}, Severity: {{ $issue.Severity }})
  > {{ $issue.Code }}

{{ end }}
Summary:
   Files: {{.Stats.NumFiles}}
   Lines: {{.Stats.NumLines}}
   Nosec: {{.Stats.NumNosec}}
  Issues: {{.Stats.NumFound}}

`

func CreateReport(w io.Writer, format string, data *gas.Analyzer) error {
	var err error
	switch format {
	case "json":
		err = reportJSON(w, data)
	case "csv":
		err = reportCSV(w, data)
	case "html":
		err = reportFromHTMLTemplate(w, html, data)
	case "text":
		err = reportFromPlaintextTemplate(w, text, data)
	default:
		err = reportFromPlaintextTemplate(w, text, data)
	}
	return err
}

func reportJSON(w io.Writer, data *gas.Analyzer) error {
	raw, err := json.MarshalIndent(data, "", "\t")
	if err != nil {
		panic(err)
	}

	_, err = w.Write(raw)
	if err != nil {
		panic(err)
	}
	return err
}

func reportCSV(w io.Writer, data *gas.Analyzer) error {
	out := csv.NewWriter(w)
	defer out.Flush()
	for _, issue := range data.Issues {
		err := out.Write([]string{
			issue.File,
			strconv.Itoa(issue.Line),
			issue.What,
			issue.Severity.String(),
			issue.Confidence.String(),
			issue.Code,
		})
		if err != nil {
			return err
		}
	}
	return nil
}

func reportFromPlaintextTemplate(w io.Writer, reportTemplate string, data *gas.Analyzer) error {
	t, e := plainTemplate.New("gas").Parse(reportTemplate)
	if e != nil {
		return e
	}

	return t.Execute(w, data)
}

func reportFromHTMLTemplate(w io.Writer, reportTemplate string, data *gas.Analyzer) error {
	t, e := htmlTemplate.New("gas").Parse(reportTemplate)
	if e != nil {
		return e
	}

	return t.Execute(w, data)
}
@ -1,401 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
//     http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package output

const html = `
<!doctype html>
<html lang="en">
<head>
	<meta charset="utf-8">
	<title>Go AST Scanner</title>
	<link rel="stylesheet" href="https://cdnjs.cloudflare.com/ajax/libs/bulma/0.2.1/css/bulma.min.css" integrity="sha256-DRcOKg8NK1KkSkcymcGmxOtS/lAn0lHWJXRa15gMHHk=" crossorigin="anonymous"/>
	<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/react/15.3.2/react.min.js" integrity="sha256-cLWs9L+cjZg8CjGHMpJqUgKKouPlmoMP/0wIdPtaPGs=" crossorigin="anonymous"></script>
	<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/react/15.3.2/react-dom.min.js" integrity="sha256-JIW8lNqN2EtqC6ggNZYnAdKMJXRQfkPMvdRt+b0/Jxc=" crossorigin="anonymous"></script>
	<script type="text/javascript" src="https://cdnjs.cloudflare.com/ajax/libs/babel-standalone/6.17.0/babel.min.js" integrity="sha256-1IWWLlCKFGFj/cjryvC7GDF5wRYnf9tSvNVVEj8Bm+o=" crossorigin="anonymous"></script>
	<style>
	div.issue div.tag, div.panel-block input[type="checkbox"] {
		margin-right: 0.5em;
	}

	label.disabled {
		text-decoration: line-through;
	}

	nav.panel select {
		width: 100%;
	}

	.break-word {
		word-wrap: break-word;
	}
	</style>
</head>
<body>
	<section class="section">
		<div class="container">
			<div id="content"></div>
		</div>
	</section>
	<script>
		var data = {{ . }};
	</script>
	<script type="text/babel">
		var IssueTag = React.createClass({
			render: function() {
				var level = ""
				if (this.props.level === "HIGH") {
					level = "is-danger";
				}
				if (this.props.level === "MEDIUM") {
					level = "is-warning";
				}
				return (
					<div className={ "tag " + level }>
						{ this.props.label }: { this.props.level }
					</div>
				);
			}
		});

		var Issue = React.createClass({
			render: function() {
				return (
					<div className="issue box">
						<div className="is-pulled-right">
							<IssueTag label="Severity" level={ this.props.data.severity }/>
							<IssueTag label="Confidence" level={ this.props.data.confidence }/>
						</div>
						<p>
							<strong className="break-word">
								{ this.props.data.file } (line { this.props.data.line })
							</strong>
							<br/>
							{ this.props.data.details }
						</p>
						<figure className="highlight">
							<pre>
								<code className="golang hljs">
									{ this.props.data.code }
								</code>
							</pre>
						</figure>
					</div>
				);
			}
		});

		var Stats = React.createClass({
			render: function() {
				return (
					<p className="help">
						Scanned { this.props.data.metrics.files.toLocaleString() } files
						with { this.props.data.metrics.lines.toLocaleString() } lines of code.
					</p>
				);
			}
		});

		var Issues = React.createClass({
			render: function() {
				if (this.props.data.metrics.files === 0) {
					return (
						<div className="notification">
							No source files found. Do you even Go?
						</div>
					);
				}

				if (this.props.data.issues.length === 0) {
					return (
						<div>
							<div className="notification">
								Awesome! No issues found!
							</div>
							<Stats data={ this.props.data } />
						</div>
					);
				}

				var issues = this.props.data.issues
					.filter(function(issue) {
						return this.props.severity.includes(issue.severity);
					}.bind(this))
					.filter(function(issue) {
						return this.props.confidence.includes(issue.confidence);
					}.bind(this))
					.filter(function(issue) {
						if (this.props.issueType) {
							return issue.details.toLowerCase().startsWith(this.props.issueType.toLowerCase());
						} else {
							return true
						}
					}.bind(this))
					.map(function(issue) {
						return (<Issue data={issue} />);
					}.bind(this));

				if (issues.length === 0) {
					return (
						<div>
							<div className="notification">
								No issues matched given filters
								(of total { this.props.data.issues.length } issues).
							</div>
							<Stats data={ this.props.data } />
						</div>
					);
				}

				return (
					<div className="issues">
						{ issues }
						<Stats data={ this.props.data } />
					</div>
				);
			}
		});

		var LevelSelector = React.createClass({
			handleChange: function(level) {
				return function(e) {
					var updated = this.props.selected
						.filter(function(item) { return item != level; });
					if (e.target.checked) {
						updated.push(level);
					}
					this.props.onChange(updated);
				}.bind(this);
			},
			render: function() {
				var highDisabled = !this.props.available.includes("HIGH");
				var mediumDisabled = !this.props.available.includes("MEDIUM");
				var lowDisabled = !this.props.available.includes("LOW");

				return (
					<span>
						<label className={"label checkbox " + (highDisabled ? "disabled" : "") }>
							<input
								type="checkbox"
								checked={ this.props.selected.includes("HIGH") }
|
|
||||||
disabled={ highDisabled }
|
|
||||||
onChange={ this.handleChange("HIGH") }/>
|
|
||||||
High
|
|
||||||
</label>
|
|
||||||
<label className={"label checkbox " + (mediumDisabled ? "disabled" : "") }>
|
|
||||||
<input
|
|
||||||
type="checkbox"
|
|
||||||
checked={ this.props.selected.includes("MEDIUM") }
|
|
||||||
disabled={ mediumDisabled }
|
|
||||||
onChange={ this.handleChange("MEDIUM") }/>
|
|
||||||
Medium
|
|
||||||
</label>
|
|
||||||
<label className={"label checkbox " + (lowDisabled ? "disabled" : "") }>
|
|
||||||
<input
|
|
||||||
type="checkbox"
|
|
||||||
checked={ this.props.selected.includes("LOW") }
|
|
||||||
disabled={ lowDisabled }
|
|
||||||
onChange={ this.handleChange("LOW") }/>
|
|
||||||
Low
|
|
||||||
</label>
|
|
||||||
</span>
|
|
||||||
);
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
var Navigation = React.createClass({
|
|
||||||
updateSeverity: function(vals) {
|
|
||||||
this.props.onSeverity(vals);
|
|
||||||
},
|
|
||||||
updateConfidence: function(vals) {
|
|
||||||
this.props.onConfidence(vals);
|
|
||||||
},
|
|
||||||
updateIssueType: function(e) {
|
|
||||||
if (e.target.value == "all") {
|
|
||||||
this.props.onIssueType(null);
|
|
||||||
} else {
|
|
||||||
this.props.onIssueType(e.target.value);
|
|
||||||
}
|
|
||||||
},
|
|
||||||
render: function() {
|
|
||||||
var issueTypes = this.props.allIssueTypes
|
|
||||||
.map(function(it) {
|
|
||||||
return (
|
|
||||||
<option value={ it } selected={ this.props.issueType == it }>
|
|
||||||
{ it }
|
|
||||||
</option>
|
|
||||||
);
|
|
||||||
}.bind(this));
|
|
||||||
|
|
||||||
return (
|
|
||||||
<nav className="panel">
|
|
||||||
<div className="panel-heading">
|
|
||||||
Filters
|
|
||||||
</div>
|
|
||||||
<div className="panel-block">
|
|
||||||
<strong>
|
|
||||||
Severity
|
|
||||||
</strong>
|
|
||||||
</div>
|
|
||||||
<div className="panel-block">
|
|
||||||
<LevelSelector
|
|
||||||
selected={ this.props.severity }
|
|
||||||
available={ this.props.allSeverities }
|
|
||||||
onChange={ this.updateSeverity } />
|
|
||||||
</div>
|
|
||||||
<div className="panel-block">
|
|
||||||
<strong>
|
|
||||||
Confidence
|
|
||||||
</strong>
|
|
||||||
</div>
|
|
||||||
<div className="panel-block">
|
|
||||||
<LevelSelector
|
|
||||||
selected={ this.props.confidence }
|
|
||||||
available={ this.props.allConfidences }
|
|
||||||
onChange={ this.updateConfidence } />
|
|
||||||
</div>
|
|
||||||
<div className="panel-block">
|
|
||||||
<strong>
|
|
||||||
Issue Type
|
|
||||||
</strong>
|
|
||||||
</div>
|
|
||||||
<div className="panel-block">
|
|
||||||
<select onChange={ this.updateIssueType }>
|
|
||||||
<option value="all" selected={ !this.props.issueType }>
|
|
||||||
(all)
|
|
||||||
</option>
|
|
||||||
{ issueTypes }
|
|
||||||
</select>
|
|
||||||
</div>
|
|
||||||
</nav>
|
|
||||||
);
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
var IssueBrowser = React.createClass({
|
|
||||||
getInitialState: function() {
|
|
||||||
return {};
|
|
||||||
},
|
|
||||||
componentWillMount: function() {
|
|
||||||
this.updateIssues(this.props.data);
|
|
||||||
},
|
|
||||||
handleSeverity: function(val) {
|
|
||||||
this.updateIssueTypes(this.props.data.issues, val, this.state.confidence);
|
|
||||||
this.setState({severity: val});
|
|
||||||
},
|
|
||||||
handleConfidence: function(val) {
|
|
||||||
this.updateIssueTypes(this.props.data.issues, this.state.severity, val);
|
|
||||||
this.setState({confidence: val});
|
|
||||||
},
|
|
||||||
handleIssueType: function(val) {
|
|
||||||
this.setState({issueType: val});
|
|
||||||
},
|
|
||||||
updateIssues: function(data) {
|
|
||||||
if (!data) {
|
|
||||||
this.setState({data: data});
|
|
||||||
return;
|
|
||||||
}
|
|
||||||
|
|
||||||
var allSeverities = data.issues
|
|
||||||
.map(function(issue) {
|
|
||||||
return issue.severity
|
|
||||||
})
|
|
||||||
.sort()
|
|
||||||
.filter(function(item, pos, ary) {
|
|
||||||
return !pos || item != ary[pos - 1];
|
|
||||||
});
|
|
||||||
|
|
||||||
var allConfidences = data.issues
|
|
||||||
.map(function(issue) {
|
|
||||||
return issue.confidence
|
|
||||||
})
|
|
||||||
.sort()
|
|
||||||
.filter(function(item, pos, ary) {
|
|
||||||
return !pos || item != ary[pos - 1];
|
|
||||||
});
|
|
||||||
|
|
||||||
var selectedSeverities = allSeverities;
|
|
||||||
var selectedConfidences = allConfidences;
|
|
||||||
|
|
||||||
this.updateIssueTypes(data.issues, selectedSeverities, selectedConfidences);
|
|
||||||
|
|
||||||
this.setState({
|
|
||||||
data: data,
|
|
||||||
severity: selectedSeverities,
|
|
||||||
allSeverities: allSeverities,
|
|
||||||
confidence: selectedConfidences,
|
|
||||||
allConfidences: allConfidences,
|
|
||||||
issueType: null
|
|
||||||
});
|
|
||||||
},
|
|
||||||
updateIssueTypes: function(issues, severities, confidences) {
|
|
||||||
var allTypes = issues
|
|
||||||
.filter(function(issue) {
|
|
||||||
return severities.includes(issue.severity);
|
|
||||||
})
|
|
||||||
.filter(function(issue) {
|
|
||||||
return confidences.includes(issue.confidence);
|
|
||||||
})
|
|
||||||
.map(function(issue) {
|
|
||||||
return issue.details;
|
|
||||||
})
|
|
||||||
.sort()
|
|
||||||
.filter(function(item, pos, ary) {
|
|
||||||
return !pos || item != ary[pos - 1];
|
|
||||||
});
|
|
||||||
|
|
||||||
if (this.state.issueType && !allTypes.includes(this.state.issueType)) {
|
|
||||||
this.setState({issueType: null});
|
|
||||||
}
|
|
||||||
|
|
||||||
this.setState({allIssueTypes: allTypes});
|
|
||||||
},
|
|
||||||
render: function() {
|
|
||||||
return (
|
|
||||||
<div className="content">
|
|
||||||
<div className="columns">
|
|
||||||
<div className="column is-one-quarter">
|
|
||||||
<Navigation
|
|
||||||
severity={ this.state.severity }
|
|
||||||
confidence={ this.state.confidence }
|
|
||||||
issueType={ this.state.issueType }
|
|
||||||
allSeverities={ this.state.allSeverities }
|
|
||||||
allConfidences={ this.state.allConfidences }
|
|
||||||
allIssueTypes={ this.state.allIssueTypes }
|
|
||||||
onSeverity={ this.handleSeverity }
|
|
||||||
onConfidence={ this.handleConfidence }
|
|
||||||
onIssueType={ this.handleIssueType }
|
|
||||||
/>
|
|
||||||
</div>
|
|
||||||
<div className="column is-three-quarters">
|
|
||||||
<Issues
|
|
||||||
data={ this.props.data }
|
|
||||||
severity={ this.state.severity }
|
|
||||||
confidence={ this.state.confidence }
|
|
||||||
issueType={ this.state.issueType }
|
|
||||||
/>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
</div>
|
|
||||||
);
|
|
||||||
}
|
|
||||||
});
|
|
||||||
|
|
||||||
ReactDOM.render(
|
|
||||||
<IssueBrowser data={ data } />,
|
|
||||||
document.getElementById("content")
|
|
||||||
);
|
|
||||||
</script>
|
|
||||||
</body>
|
|
||||||
</html>`
|
|
||||||
|
|
@@ -1,91 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"go/ast"

	gas "github.com/GoASTScanner/gas/core"
	"github.com/GoASTScanner/gas/rules"
)

type RuleInfo struct {
	description string
	build       func(map[string]interface{}) (gas.Rule, []ast.Node)
}

// GetFullRuleList get the full list of all rules available to GAS
func GetFullRuleList() map[string]RuleInfo {
	return map[string]RuleInfo{
		// misc
		"G101": RuleInfo{"Look for hardcoded credentials", rules.NewHardcodedCredentials},
		"G102": RuleInfo{"Bind to all interfaces", rules.NewBindsToAllNetworkInterfaces},
		"G103": RuleInfo{"Audit the use of unsafe block", rules.NewUsingUnsafe},
		"G104": RuleInfo{"Audit errors not checked", rules.NewNoErrorCheck},
		"G105": RuleInfo{"Audit the use of big.Exp function", rules.NewUsingBigExp},

		// injection
		"G201": RuleInfo{"SQL query construction using format string", rules.NewSqlStrFormat},
		"G202": RuleInfo{"SQL query construction using string concatenation", rules.NewSqlStrConcat},
		"G203": RuleInfo{"Use of unescaped data in HTML templates", rules.NewTemplateCheck},
		"G204": RuleInfo{"Audit use of command execution", rules.NewSubproc},

		// filesystem
		"G301": RuleInfo{"Poor file permissions used when creating a directory", rules.NewMkdirPerms},
		"G302": RuleInfo{"Poor file permisions used when creation file or using chmod", rules.NewFilePerms},
		"G303": RuleInfo{"Creating tempfile using a predictable path", rules.NewBadTempFile},

		// crypto
		"G401": RuleInfo{"Detect the usage of DES, RC4, or MD5", rules.NewUsesWeakCryptography},
		"G402": RuleInfo{"Look for bad TLS connection settings", rules.NewIntermediateTlsCheck},
		"G403": RuleInfo{"Ensure minimum RSA key length of 2048 bits", rules.NewWeakKeyStrength},
		"G404": RuleInfo{"Insecure random number source (rand)", rules.NewWeakRandCheck},

		// blacklist
		"G501": RuleInfo{"Import blacklist: crypto/md5", rules.NewBlacklist_crypto_md5},
		"G502": RuleInfo{"Import blacklist: crypto/des", rules.NewBlacklist_crypto_des},
		"G503": RuleInfo{"Import blacklist: crypto/rc4", rules.NewBlacklist_crypto_rc4},
		"G504": RuleInfo{"Import blacklist: net/http/cgi", rules.NewBlacklist_net_http_cgi},
	}
}

func AddRules(analyzer *gas.Analyzer, conf map[string]interface{}) {
	var all map[string]RuleInfo

	inc := conf["include"].([]string)
	exc := conf["exclude"].([]string)

	// add included rules
	if len(inc) == 0 {
		all = GetFullRuleList()
	} else {
		all = map[string]RuleInfo{}
		tmp := GetFullRuleList()
		for _, v := range inc {
			if val, ok := tmp[v]; ok {
				all[v] = val
			}
		}
	}

	// remove excluded rules
	for _, v := range exc {
		delete(all, v)
	}

	for _, v := range all {
		analyzer.AddRule(v.build(conf))
	}
}
@@ -1,44 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	gas "github.com/GoASTScanner/gas/core"
	"go/ast"
)

type UsingBigExp struct {
	gas.MetaData
	pkg   string
	calls []string
}

func (r *UsingBigExp) Match(n ast.Node, c *gas.Context) (gi *gas.Issue, err error) {
	if _, matched := gas.MatchCallByType(n, c, r.pkg, r.calls...); matched {
		return gas.NewIssue(c, n, r.What, r.Severity, r.Confidence), nil
	}
	return nil, nil
}
func NewUsingBigExp(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &UsingBigExp{
		pkg:   "*math/big.Int",
		calls: []string{"Exp"},
		MetaData: gas.MetaData{
			What:       "Use of math/big.Int.Exp function should be audited for modulus == 0",
			Severity:   gas.Low,
			Confidence: gas.High,
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@@ -1,52 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"go/ast"
	"regexp"

	gas "github.com/GoASTScanner/gas/core"
)

// Looks for net.Listen("0.0.0.0") or net.Listen(":8080")
type BindsToAllNetworkInterfaces struct {
	gas.MetaData
	call    *regexp.Regexp
	pattern *regexp.Regexp
}

func (r *BindsToAllNetworkInterfaces) Match(n ast.Node, c *gas.Context) (gi *gas.Issue, err error) {
	if node := gas.MatchCall(n, r.call); node != nil {
		if arg, err := gas.GetString(node.Args[1]); err == nil {
			if r.pattern.MatchString(arg) {
				return gas.NewIssue(c, n, r.What, r.Severity, r.Confidence), nil
			}
		}
	}
	return
}

func NewBindsToAllNetworkInterfaces(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &BindsToAllNetworkInterfaces{
		call:    regexp.MustCompile(`^(net|tls)\.Listen$`),
		pattern: regexp.MustCompile(`^(0.0.0.0|:).*$`),
		MetaData: gas.MetaData{
			Severity:   gas.Medium,
			Confidence: gas.High,
			What:       "Binds to all network interfaces",
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@@ -1,79 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"go/ast"

	gas "github.com/GoASTScanner/gas/core"
)

type BlacklistImport struct {
	gas.MetaData
	Path string
}

func (r *BlacklistImport) Match(n ast.Node, c *gas.Context) (gi *gas.Issue, err error) {
	if node, ok := n.(*ast.ImportSpec); ok {
		if r.Path == node.Path.Value && node.Name.String() != "_" {
			return gas.NewIssue(c, n, r.What, r.Severity, r.Confidence), nil
		}
	}
	return nil, nil
}

func NewBlacklist_crypto_md5(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &BlacklistImport{
		MetaData: gas.MetaData{
			Severity:   gas.High,
			Confidence: gas.High,
			What:       "Use of weak cryptographic primitive",
		},
		Path: `"crypto/md5"`,
	}, []ast.Node{(*ast.ImportSpec)(nil)}
}

func NewBlacklist_crypto_des(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &BlacklistImport{
		MetaData: gas.MetaData{
			Severity:   gas.High,
			Confidence: gas.High,
			What:       "Use of weak cryptographic primitive",
		},
		Path: `"crypto/des"`,
	}, []ast.Node{(*ast.ImportSpec)(nil)}
}

func NewBlacklist_crypto_rc4(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &BlacklistImport{
		MetaData: gas.MetaData{
			Severity:   gas.High,
			Confidence: gas.High,
			What:       "Use of weak cryptographic primitive",
		},
		Path: `"crypto/rc4"`,
	}, []ast.Node{(*ast.ImportSpec)(nil)}
}

func NewBlacklist_net_http_cgi(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &BlacklistImport{
		MetaData: gas.MetaData{
			Severity:   gas.High,
			Confidence: gas.High,
			What:       "Go versions < 1.6.3 are vulnerable to Httpoxy attack: (CVE-2016-5386)",
		},
		Path: `"net/http/cgi"`,
	}, []ast.Node{(*ast.ImportSpec)(nil)}
}
@@ -1,96 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	gas "github.com/GoASTScanner/gas/core"
	"go/ast"
	"go/types"
)

type NoErrorCheck struct {
	gas.MetaData
	whitelist gas.CallList
}

func returnsError(callExpr *ast.CallExpr, ctx *gas.Context) int {
	if tv := ctx.Info.TypeOf(callExpr); tv != nil {
		switch t := tv.(type) {
		case *types.Tuple:
			for pos := 0; pos < t.Len(); pos += 1 {
				variable := t.At(pos)
				if variable != nil && variable.Type().String() == "error" {
					return pos
				}
			}
		case *types.Named:
			if t.String() == "error" {
				return 0
			}
		}
	}
	return -1
}

func (r *NoErrorCheck) Match(n ast.Node, ctx *gas.Context) (*gas.Issue, error) {
	switch stmt := n.(type) {
	case *ast.AssignStmt:
		for _, expr := range stmt.Rhs {
			if callExpr, ok := expr.(*ast.CallExpr); ok && !r.whitelist.ContainsCallExpr(callExpr, ctx) {
				pos := returnsError(callExpr, ctx)
				if pos < 0 || pos >= len(stmt.Lhs) {
					return nil, nil
				}
				if id, ok := stmt.Lhs[pos].(*ast.Ident); ok && id.Name == "_" {
					return gas.NewIssue(ctx, n, r.What, r.Severity, r.Confidence), nil
				}
			}
		}
	case *ast.ExprStmt:
		if callExpr, ok := stmt.X.(*ast.CallExpr); ok && !r.whitelist.ContainsCallExpr(callExpr, ctx) {
			pos := returnsError(callExpr, ctx)
			if pos >= 0 {
				return gas.NewIssue(ctx, n, r.What, r.Severity, r.Confidence), nil
			}
		}
	}
	return nil, nil
}

func NewNoErrorCheck(conf map[string]interface{}) (gas.Rule, []ast.Node) {

	// TODO(gm) Come up with sensible defaults here. Or flip it to use a
	// black list instead.
	whitelist := gas.NewCallList()
	whitelist.AddAll("bytes.Buffer", "Write", "WriteByte", "WriteRune", "WriteString")
	whitelist.AddAll("fmt", "Print", "Printf", "Println")
	whitelist.Add("io.PipeWriter", "CloseWithError")

	if configured, ok := conf["G104"]; ok {
		if whitelisted, ok := configured.(map[string][]string); ok {
			for key, val := range whitelisted {
				whitelist.AddAll(key, val...)
			}
		}
	}
	return &NoErrorCheck{
		MetaData: gas.MetaData{
			Severity:   gas.Low,
			Confidence: gas.High,
			What:       "Errors unhandled.",
		},
		whitelist: whitelist,
	}, []ast.Node{(*ast.AssignStmt)(nil), (*ast.ExprStmt)(nil)}
}
@@ -1,85 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"fmt"
	"go/ast"
	"strconv"

	gas "github.com/GoASTScanner/gas/core"
)

type FilePermissions struct {
	gas.MetaData
	mode  int64
	pkg   string
	calls []string
}

func getConfiguredMode(conf map[string]interface{}, configKey string, defaultMode int64) int64 {
	var mode int64 = defaultMode
	if value, ok := conf[configKey]; ok {
		switch value.(type) {
		case int64:
			mode = value.(int64)
		case string:
			if m, e := strconv.ParseInt(value.(string), 0, 64); e != nil {
				mode = defaultMode
			} else {
				mode = m
			}
		}
	}
	return mode
}

func (r *FilePermissions) Match(n ast.Node, c *gas.Context) (*gas.Issue, error) {
	if callexpr, matched := gas.MatchCallByPackage(n, c, r.pkg, r.calls...); matched {
		modeArg := callexpr.Args[len(callexpr.Args)-1]
		if mode, err := gas.GetInt(modeArg); err == nil && mode > r.mode {
			return gas.NewIssue(c, n, r.What, r.Severity, r.Confidence), nil
		}
	}
	return nil, nil
}

func NewFilePerms(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	mode := getConfiguredMode(conf, "G302", 0600)
	return &FilePermissions{
		mode:  mode,
		pkg:   "os",
		calls: []string{"OpenFile", "Chmod"},
		MetaData: gas.MetaData{
			Severity:   gas.Medium,
			Confidence: gas.High,
			What:       fmt.Sprintf("Expect file permissions to be %#o or less", mode),
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}

func NewMkdirPerms(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	mode := getConfiguredMode(conf, "G301", 0700)
	return &FilePermissions{
		mode:  mode,
		pkg:   "os",
		calls: []string{"Mkdir", "MkdirAll"},
		MetaData: gas.MetaData{
			Severity:   gas.Medium,
			Confidence: gas.High,
			What:       fmt.Sprintf("Expect directory permissions to be %#o or less", mode),
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@@ -1,148 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	gas "github.com/GoASTScanner/gas/core"
	"go/ast"
	"go/token"
	"regexp"

	"github.com/nbutton23/zxcvbn-go"
	"strconv"
)

type Credentials struct {
	gas.MetaData
	pattern          *regexp.Regexp
	entropyThreshold float64
	perCharThreshold float64
	truncate         int
	ignoreEntropy    bool
}

func truncate(s string, n int) string {
	if n > len(s) {
		return s
	}
	return s[:n]
}

func (r *Credentials) isHighEntropyString(str string) bool {
	s := truncate(str, r.truncate)
	info := zxcvbn.PasswordStrength(s, []string{})
	entropyPerChar := info.Entropy / float64(len(s))
	return (info.Entropy >= r.entropyThreshold ||
		(info.Entropy >= (r.entropyThreshold/2) &&
			entropyPerChar >= r.perCharThreshold))
}

func (r *Credentials) Match(n ast.Node, ctx *gas.Context) (*gas.Issue, error) {
	switch node := n.(type) {
	case *ast.AssignStmt:
		return r.matchAssign(node, ctx)
	case *ast.GenDecl:
		return r.matchGenDecl(node, ctx)
	}
	return nil, nil
}

func (r *Credentials) matchAssign(assign *ast.AssignStmt, ctx *gas.Context) (*gas.Issue, error) {
	for _, i := range assign.Lhs {
		if ident, ok := i.(*ast.Ident); ok {
			if r.pattern.MatchString(ident.Name) {
				for _, e := range assign.Rhs {
					if val, err := gas.GetString(e); err == nil {
						if r.ignoreEntropy || (!r.ignoreEntropy && r.isHighEntropyString(val)) {
							return gas.NewIssue(ctx, assign, r.What, r.Severity, r.Confidence), nil
						}
					}
				}
			}
		}
	}
	return nil, nil
}

func (r *Credentials) matchGenDecl(decl *ast.GenDecl, ctx *gas.Context) (*gas.Issue, error) {
	if decl.Tok != token.CONST && decl.Tok != token.VAR {
		return nil, nil
	}
	for _, spec := range decl.Specs {
		if valueSpec, ok := spec.(*ast.ValueSpec); ok {
			for index, ident := range valueSpec.Names {
				if r.pattern.MatchString(ident.Name) && valueSpec.Values != nil {
					// const foo, bar = "same value"
					if len(valueSpec.Values) <= index {
						index = len(valueSpec.Values) - 1
					}
					if val, err := gas.GetString(valueSpec.Values[index]); err == nil {
						if r.ignoreEntropy || (!r.ignoreEntropy && r.isHighEntropyString(val)) {
							return gas.NewIssue(ctx, valueSpec, r.What, r.Severity, r.Confidence), nil
						}
					}
				}
			}
		}
	}
	return nil, nil
}

func NewHardcodedCredentials(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	pattern := `(?i)passwd|pass|password|pwd|secret|token`
	entropyThreshold := 80.0
	perCharThreshold := 3.0
	ignoreEntropy := false
	var truncateString int = 16
	if val, ok := conf["G101"]; ok {
		conf := val.(map[string]string)
		if configPattern, ok := conf["pattern"]; ok {
			pattern = configPattern
		}
		if configIgnoreEntropy, ok := conf["ignore_entropy"]; ok {
			if parsedBool, err := strconv.ParseBool(configIgnoreEntropy); err == nil {
				ignoreEntropy = parsedBool
			}
		}
		if configEntropyThreshold, ok := conf["entropy_threshold"]; ok {
			if parsedNum, err := strconv.ParseFloat(configEntropyThreshold, 64); err == nil {
				entropyThreshold = parsedNum
			}
		}
		if configCharThreshold, ok := conf["per_char_threshold"]; ok {
			if parsedNum, err := strconv.ParseFloat(configCharThreshold, 64); err == nil {
				perCharThreshold = parsedNum
			}
		}
		if configTruncate, ok := conf["truncate"]; ok {
			if parsedInt, err := strconv.Atoi(configTruncate); err == nil {
				truncateString = parsedInt
			}
		}
	}

	return &Credentials{
		pattern:          regexp.MustCompile(pattern),
		entropyThreshold: entropyThreshold,
		perCharThreshold: perCharThreshold,
		ignoreEntropy:    ignoreEntropy,
		truncate:         truncateString,
		MetaData: gas.MetaData{
			What:       "Potential hardcoded credentials",
			Confidence: gas.Low,
			Severity:   gas.High,
		},
	}, []ast.Node{(*ast.AssignStmt)(nil), (*ast.GenDecl)(nil)}
}
@ -1,49 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"go/ast"

	gas "github.com/GoASTScanner/gas/core"
)

type WeakRand struct {
	gas.MetaData
	funcNames   []string
	packagePath string
}

func (w *WeakRand) Match(n ast.Node, c *gas.Context) (*gas.Issue, error) {
	for _, funcName := range w.funcNames {
		if _, matched := gas.MatchCallByPackage(n, c, w.packagePath, funcName); matched {
			return gas.NewIssue(c, n, w.What, w.Severity, w.Confidence), nil
		}
	}

	return nil, nil
}

func NewWeakRandCheck(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &WeakRand{
		funcNames:   []string{"Read", "Int"},
		packagePath: "math/rand",
		MetaData: gas.MetaData{
			Severity:   gas.High,
			Confidence: gas.Medium,
			What:       "Use of weak random number generator (math/rand instead of crypto/rand)",
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@ -1,51 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"fmt"
	"go/ast"
	"regexp"

	gas "github.com/GoASTScanner/gas/core"
)

type WeakKeyStrength struct {
	gas.MetaData
	pattern *regexp.Regexp
	bits    int
}

func (w *WeakKeyStrength) Match(n ast.Node, c *gas.Context) (*gas.Issue, error) {
	if node := gas.MatchCall(n, w.pattern); node != nil {
		if bits, err := gas.GetInt(node.Args[1]); err == nil && bits < (int64)(w.bits) {
			return gas.NewIssue(c, n, w.What, w.Severity, w.Confidence), nil
		}
	}
	return nil, nil
}

func NewWeakKeyStrength(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	bits := 2048
	return &WeakKeyStrength{
		pattern: regexp.MustCompile(`^rsa\.GenerateKey$`),
		bits:    bits,
		MetaData: gas.MetaData{
			Severity:   gas.Medium,
			Confidence: gas.High,
			What:       fmt.Sprintf("RSA keys should be at least %d bits", bits),
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@ -1,99 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"go/ast"
	"regexp"

	gas "github.com/GoASTScanner/gas/core"
)

type SqlStatement struct {
	gas.MetaData
	pattern *regexp.Regexp
}

type SqlStrConcat struct {
	SqlStatement
}

// see if we can figure out what it is
func (s *SqlStrConcat) checkObject(n *ast.Ident) bool {
	if n.Obj != nil {
		return n.Obj.Kind != ast.Var && n.Obj.Kind != ast.Fun
	}
	return false
}

// Look for "SELECT * FROM table WHERE " + " ' OR 1=1"
func (s *SqlStrConcat) Match(n ast.Node, c *gas.Context) (*gas.Issue, error) {
	if node, ok := n.(*ast.BinaryExpr); ok {
		if start, ok := node.X.(*ast.BasicLit); ok {
			if str, e := gas.GetString(start); s.pattern.MatchString(str) && e == nil {
				if _, ok := node.Y.(*ast.BasicLit); ok {
					return nil, nil // string cat OK
				}
				if second, ok := node.Y.(*ast.Ident); ok && s.checkObject(second) {
					return nil, nil
				}
				return gas.NewIssue(c, n, s.What, s.Severity, s.Confidence), nil
			}
		}
	}
	return nil, nil
}

func NewSqlStrConcat(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &SqlStrConcat{
		SqlStatement: SqlStatement{
			pattern: regexp.MustCompile(`(?)(SELECT|DELETE|INSERT|UPDATE|INTO|FROM|WHERE) `),
			MetaData: gas.MetaData{
				Severity:   gas.Medium,
				Confidence: gas.High,
				What:       "SQL string concatenation",
			},
		},
	}, []ast.Node{(*ast.BinaryExpr)(nil)}
}

type SqlStrFormat struct {
	SqlStatement
	call *regexp.Regexp
}

// Looks for "fmt.Sprintf("SELECT * FROM foo where '%s', userInput)"
func (s *SqlStrFormat) Match(n ast.Node, c *gas.Context) (gi *gas.Issue, err error) {
	if node := gas.MatchCall(n, s.call); node != nil {
		if arg, e := gas.GetString(node.Args[0]); s.pattern.MatchString(arg) && e == nil {
			return gas.NewIssue(c, n, s.What, s.Severity, s.Confidence), nil
		}
	}
	return nil, nil
}

func NewSqlStrFormat(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &SqlStrFormat{
		call: regexp.MustCompile(`^fmt\.Sprintf$`),
		SqlStatement: SqlStatement{
			pattern: regexp.MustCompile("(?)(SELECT|DELETE|INSERT|UPDATE|INTO|FROM|WHERE) "),
			MetaData: gas.MetaData{
				Severity:   gas.Medium,
				Confidence: gas.High,
				What:       "SQL string formatting",
			},
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@ -1,56 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"go/ast"
	"regexp"
	"strings"

	gas "github.com/GoASTScanner/gas/core"
)

type Subprocess struct {
	pattern *regexp.Regexp
}

func (r *Subprocess) Match(n ast.Node, c *gas.Context) (*gas.Issue, error) {
	if node := gas.MatchCall(n, r.pattern); node != nil {
		for _, arg := range node.Args {
			if !gas.TryResolve(arg, c) {
				what := "Subprocess launching with variable."
				return gas.NewIssue(c, n, what, gas.High, gas.High), nil
			}
		}

		// call with partially qualified command
		if str, err := gas.GetString(node.Args[0]); err == nil {
			if !strings.HasPrefix(str, "/") {
				what := "Subprocess launching with partial path."
				return gas.NewIssue(c, n, what, gas.Medium, gas.High), nil
			}
		}

		what := "Subprocess launching should be audited."
		return gas.NewIssue(c, n, what, gas.Low, gas.High), nil
	}
	return nil, nil
}

func NewSubproc(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &Subprocess{
		pattern: regexp.MustCompile(`^exec\.Command|syscall\.Exec$`),
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@ -1,49 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"go/ast"
	"regexp"

	gas "github.com/GoASTScanner/gas/core"
)

type BadTempFile struct {
	gas.MetaData
	args *regexp.Regexp
	call *regexp.Regexp
}

func (t *BadTempFile) Match(n ast.Node, c *gas.Context) (gi *gas.Issue, err error) {
	if node := gas.MatchCall(n, t.call); node != nil {
		if arg, e := gas.GetString(node.Args[0]); t.args.MatchString(arg) && e == nil {
			return gas.NewIssue(c, n, t.What, t.Severity, t.Confidence), nil
		}
	}
	return nil, nil
}

func NewBadTempFile(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &BadTempFile{
		call: regexp.MustCompile(`ioutil\.WriteFile|os\.Create`),
		args: regexp.MustCompile(`^/tmp/.*$|^/var/tmp/.*$`),
		MetaData: gas.MetaData{
			Severity:   gas.Medium,
			Confidence: gas.High,
			What:       "File creation in shared tmp directory without using ioutil.Tempfile",
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@ -1,49 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"go/ast"
	"regexp"

	gas "github.com/GoASTScanner/gas/core"
)

type TemplateCheck struct {
	gas.MetaData
	call *regexp.Regexp
}

func (t *TemplateCheck) Match(n ast.Node, c *gas.Context) (gi *gas.Issue, err error) {
	if node := gas.MatchCall(n, t.call); node != nil {
		for _, arg := range node.Args {
			if _, ok := arg.(*ast.BasicLit); !ok { // basic lits are safe
				return gas.NewIssue(c, n, t.What, t.Severity, t.Confidence), nil
			}
		}
	}
	return nil, nil
}

func NewTemplateCheck(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &TemplateCheck{
		call: regexp.MustCompile(`^template\.(HTML|JS|URL)$`),
		MetaData: gas.MetaData{
			Severity:   gas.Medium,
			Confidence: gas.Low,
			What:       "this method will not auto-escape HTML. Verify data is well formed.",
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@ -1,191 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"fmt"
	"go/ast"
	"reflect"
	"regexp"

	gas "github.com/GoASTScanner/gas/core"
)

type InsecureConfigTLS struct {
	MinVersion  int16
	MaxVersion  int16
	pattern     *regexp.Regexp
	goodCiphers []string
}

func stringInSlice(a string, list []string) bool {
	for _, b := range list {
		if b == a {
			return true
		}
	}
	return false
}

func (t *InsecureConfigTLS) processTlsCipherSuites(n ast.Node, c *gas.Context) *gas.Issue {
	a := reflect.TypeOf(&ast.KeyValueExpr{})
	b := reflect.TypeOf(&ast.CompositeLit{})
	if node, ok := gas.SimpleSelect(n, a, b).(*ast.CompositeLit); ok {
		for _, elt := range node.Elts {
			if ident, ok := elt.(*ast.SelectorExpr); ok {
				if !stringInSlice(ident.Sel.Name, t.goodCiphers) {
					str := fmt.Sprintf("TLS Bad Cipher Suite: %s", ident.Sel.Name)
					return gas.NewIssue(c, n, str, gas.High, gas.High)
				}
			}
		}
	}
	return nil
}

func (t *InsecureConfigTLS) processTlsConfVal(n *ast.KeyValueExpr, c *gas.Context) *gas.Issue {
	if ident, ok := n.Key.(*ast.Ident); ok {
		switch ident.Name {
		case "InsecureSkipVerify":
			if node, ok := n.Value.(*ast.Ident); ok {
				if node.Name != "false" {
					return gas.NewIssue(c, n, "TLS InsecureSkipVerify set true.", gas.High, gas.High)
				}
			} else {
				// TODO(tk): symbol tab look up to get the actual value
				return gas.NewIssue(c, n, "TLS InsecureSkipVerify may be true.", gas.High, gas.Low)
			}

		case "PreferServerCipherSuites":
			if node, ok := n.Value.(*ast.Ident); ok {
				if node.Name == "false" {
					return gas.NewIssue(c, n, "TLS PreferServerCipherSuites set false.", gas.Medium, gas.High)
				}
			} else {
				// TODO(tk): symbol tab look up to get the actual value
				return gas.NewIssue(c, n, "TLS PreferServerCipherSuites may be false.", gas.Medium, gas.Low)
			}

		case "MinVersion":
			if ival, ierr := gas.GetInt(n.Value); ierr == nil {
				if (int16)(ival) < t.MinVersion {
					return gas.NewIssue(c, n, "TLS MinVersion too low.", gas.High, gas.High)
				}
				// TODO(tk): symbol tab look up to get the actual value
				return gas.NewIssue(c, n, "TLS MinVersion may be too low.", gas.High, gas.Low)
			}

		case "MaxVersion":
			if ival, ierr := gas.GetInt(n.Value); ierr == nil {
				if (int16)(ival) < t.MaxVersion {
					return gas.NewIssue(c, n, "TLS MaxVersion too low.", gas.High, gas.High)
				}
				// TODO(tk): symbol tab look up to get the actual value
				return gas.NewIssue(c, n, "TLS MaxVersion may be too low.", gas.High, gas.Low)
			}

		case "CipherSuites":
			if ret := t.processTlsCipherSuites(n, c); ret != nil {
				return ret
			}

		}

	}
	return nil
}

func (t *InsecureConfigTLS) Match(n ast.Node, c *gas.Context) (gi *gas.Issue, err error) {
	if node := gas.MatchCompLit(n, t.pattern); node != nil {
		for _, elt := range node.Elts {
			if kve, ok := elt.(*ast.KeyValueExpr); ok {
				gi = t.processTlsConfVal(kve, c)
				if gi != nil {
					break
				}
			}
		}
	}
	return
}

func NewModernTlsCheck(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	// https://wiki.mozilla.org/Security/Server_Side_TLS#Modern_compatibility
	return &InsecureConfigTLS{
		pattern:    regexp.MustCompile(`^tls\.Config$`),
		MinVersion: 0x0303, // TLS 1.2 only
		MaxVersion: 0x0303,
		goodCiphers: []string{
			"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
			"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
			"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
			"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
		},
	}, []ast.Node{(*ast.CompositeLit)(nil)}
}

func NewIntermediateTlsCheck(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	// https://wiki.mozilla.org/Security/Server_Side_TLS#Intermediate_compatibility_.28default.29
	return &InsecureConfigTLS{
		pattern:    regexp.MustCompile(`^tls\.Config$`),
		MinVersion: 0x0301, // TLS 1.2, 1.1, 1.0
		MaxVersion: 0x0303,
		goodCiphers: []string{
			"TLS_RSA_WITH_AES_128_CBC_SHA",
			"TLS_RSA_WITH_AES_256_CBC_SHA",
			"TLS_RSA_WITH_AES_128_GCM_SHA256",
			"TLS_RSA_WITH_AES_256_GCM_SHA384",
			"TLS_ECDHE_ECDSA_WITH_RC4_128_SHA",
			"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
			"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
			"TLS_ECDHE_RSA_WITH_RC4_128_SHA",
			"TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA",
			"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
			"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
			"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
			"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
			"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
			"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
		},
	}, []ast.Node{(*ast.CompositeLit)(nil)}
}

func NewCompatTlsCheck(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	// https://wiki.mozilla.org/Security/Server_Side_TLS#Old_compatibility_.28default.29
	return &InsecureConfigTLS{
		pattern:    regexp.MustCompile(`^tls\.Config$`),
		MinVersion: 0x0301, // TLS 1.2, 1.1, 1.0
		MaxVersion: 0x0303,
		goodCiphers: []string{
			"TLS_RSA_WITH_RC4_128_SHA",
			"TLS_RSA_WITH_3DES_EDE_CBC_SHA",
			"TLS_RSA_WITH_AES_128_CBC_SHA",
			"TLS_RSA_WITH_AES_256_CBC_SHA",
			"TLS_RSA_WITH_AES_128_GCM_SHA256",
			"TLS_RSA_WITH_AES_256_GCM_SHA384",
			"TLS_ECDHE_ECDSA_WITH_RC4_128_SHA",
			"TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA",
			"TLS_ECDHE_ECDSA_WITH_AES_256_CBC_SHA",
			"TLS_ECDHE_RSA_WITH_RC4_128_SHA",
			"TLS_ECDHE_RSA_WITH_3DES_EDE_CBC_SHA",
			"TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA",
			"TLS_ECDHE_RSA_WITH_AES_256_CBC_SHA",
			"TLS_ECDHE_RSA_WITH_AES_128_GCM_SHA256",
			"TLS_ECDHE_ECDSA_WITH_AES_128_GCM_SHA256",
			"TLS_ECDHE_RSA_WITH_AES_256_GCM_SHA384",
			"TLS_ECDHE_ECDSA_WITH_AES_256_GCM_SHA384",
		},
	}, []ast.Node{(*ast.CompositeLit)(nil)}
}
@ -1,45 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	gas "github.com/GoASTScanner/gas/core"
	"go/ast"
)

type UsingUnsafe struct {
	gas.MetaData
	pkg   string
	calls []string
}

func (r *UsingUnsafe) Match(n ast.Node, c *gas.Context) (gi *gas.Issue, err error) {
	if _, matches := gas.MatchCallByPackage(n, c, r.pkg, r.calls...); matches {
		return gas.NewIssue(c, n, r.What, r.Severity, r.Confidence), nil
	}
	return nil, nil
}

func NewUsingUnsafe(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	return &UsingUnsafe{
		pkg:   "unsafe",
		calls: []string{"Alignof", "Offsetof", "Sizeof", "Pointer"},
		MetaData: gas.MetaData{
			What:       "Use of unsafe calls should be audited",
			Severity:   gas.Low,
			Confidence: gas.High,
		},
	}, []ast.Node{(*ast.CallExpr)(nil)}
}
@ -1,53 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package rules

import (
	"go/ast"

	gas "github.com/GoASTScanner/gas/core"
)

type UsesWeakCryptography struct {
	gas.MetaData
	blacklist map[string][]string
}

func (r *UsesWeakCryptography) Match(n ast.Node, c *gas.Context) (*gas.Issue, error) {

	for pkg, funcs := range r.blacklist {
		if _, matched := gas.MatchCallByPackage(n, c, pkg, funcs...); matched {
			return gas.NewIssue(c, n, r.What, r.Severity, r.Confidence), nil
		}
	}
	return nil, nil
}

// Uses des.* md5.* or rc4.*
func NewUsesWeakCryptography(conf map[string]interface{}) (gas.Rule, []ast.Node) {
	calls := make(map[string][]string)
	calls["crypto/des"] = []string{"NewCipher", "NewTripleDESCipher"}
	calls["crypto/md5"] = []string{"New", "Sum"}
	calls["crypto/rc4"] = []string{"NewCipher"}
	rule := &UsesWeakCryptography{
		blacklist: calls,
		MetaData: gas.MetaData{
			Severity:   gas.Medium,
			Confidence: gas.High,
			What:       "Use of weak cryptographic primitive",
		},
	}
	return rule, []ast.Node{(*ast.CallExpr)(nil)}
}
@ -1,276 +0,0 @@
// (c) Copyright 2016 Hewlett Packard Enterprise Development LP
//
// Licensed under the Apache License, Version 2.0 (the "License");
// you may not use this file except in compliance with the License.
// You may obtain a copy of the License at
//
// http://www.apache.org/licenses/LICENSE-2.0
//
// Unless required by applicable law or agreed to in writing, software
// distributed under the License is distributed on an "AS IS" BASIS,
// WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
// See the License for the specific language governing permissions and
// limitations under the License.

package main

import (
	"fmt"
	"go/ast"
	"go/importer"
	"go/parser"
	"go/token"
	"go/types"
	"os"
	"strings"
)

type command func(args ...string)
type utilities struct {
	commands map[string]command
	call     []string
}

// Custom commands / utilities to run instead of default analyzer
func newUtils() *utilities {
	utils := make(map[string]command)
	utils["ast"] = dumpAst
	utils["callobj"] = dumpCallObj
	utils["uses"] = dumpUses
	utils["types"] = dumpTypes
	utils["defs"] = dumpDefs
	utils["comments"] = dumpComments
	utils["imports"] = dumpImports
	return &utilities{utils, make([]string, 0)}
}

func (u *utilities) String() string {
	i := 0
	keys := make([]string, len(u.commands))
	for k := range u.commands {
		keys[i] = k
		i++
	}
	return strings.Join(keys, ", ")
}

func (u *utilities) Set(opt string) error {
	if _, ok := u.commands[opt]; !ok {
		return fmt.Errorf("valid tools are: %s", u.String())

	}
	u.call = append(u.call, opt)
	return nil
}

func (u *utilities) run(args ...string) {
	for _, util := range u.call {
		if cmd, ok := u.commands[util]; ok {
			cmd(args...)
		}
	}
}

func shouldSkip(path string) bool {
	st, e := os.Stat(path)
	if e != nil {
		// #nosec
		fmt.Fprintf(os.Stderr, "Skipping: %s - %s\n", path, e)
		return true
	}
	if st.IsDir() {
		// #nosec
		fmt.Fprintf(os.Stderr, "Skipping: %s - directory\n", path)
		return true
	}
	return false
}

func dumpAst(files ...string) {
	for _, arg := range files {
		// Ensure file exists and not a directory
		if shouldSkip(arg) {
			continue
		}

		// Create the AST by parsing src.
		fset := token.NewFileSet() // positions are relative to fset
		f, err := parser.ParseFile(fset, arg, nil, 0)
		if err != nil {
			// #nosec
			fmt.Fprintf(os.Stderr, "Unable to parse file %s\n", err)
			continue
		}

		// Print the AST. #nosec
		ast.Print(fset, f)
	}
}

type context struct {
	fileset  *token.FileSet
	comments ast.CommentMap
	info     *types.Info
	pkg      *types.Package
	config   *types.Config
	root     *ast.File
}

func createContext(filename string) *context {
	fileset := token.NewFileSet()
	root, e := parser.ParseFile(fileset, filename, nil, parser.ParseComments)
	if e != nil {
		// #nosec
		fmt.Fprintf(os.Stderr, "Unable to parse file: %s. Reason: %s\n", filename, e)
		return nil
	}
	comments := ast.NewCommentMap(fileset, root, root.Comments)
	info := &types.Info{
		Types:      make(map[ast.Expr]types.TypeAndValue),
		Defs:       make(map[*ast.Ident]types.Object),
		Uses:       make(map[*ast.Ident]types.Object),
		Selections: make(map[*ast.SelectorExpr]*types.Selection),
		Scopes:     make(map[ast.Node]*types.Scope),
		Implicits:  make(map[ast.Node]types.Object),
	}
	config := types.Config{Importer: importer.Default()}
	pkg, e := config.Check("main.go", fileset, []*ast.File{root}, info)
	if e != nil {
		// #nosec
		fmt.Fprintf(os.Stderr, "Type check failed for file: %s. Reason: %s\n", filename, e)
		return nil
	}
	return &context{fileset, comments, info, pkg, &config, root}
}

func printObject(obj types.Object) {
	fmt.Println("OBJECT")
	if obj == nil {
		fmt.Println("object is nil")
		return
	}
	fmt.Printf(" Package = %v\n", obj.Pkg())
	if obj.Pkg() != nil {
		fmt.Println(" Path = ", obj.Pkg().Path())
		fmt.Println(" Name = ", obj.Pkg().Name())
		fmt.Println(" String = ", obj.Pkg().String())
	}
	fmt.Printf(" Name = %v\n", obj.Name())
	fmt.Printf(" Type = %v\n", obj.Type())
	fmt.Printf(" Id = %v\n", obj.Id())
}

func checkContext(ctx *context, file string) bool {
	// #nosec
	if ctx == nil {
		fmt.Fprintln(os.Stderr, "Failed to create context for file: ", file)
		return false
	}
	return true
}

func dumpCallObj(files ...string) {

	for _, file := range files {
		if shouldSkip(file) {
			continue
		}
		context := createContext(file)
		if !checkContext(context, file) {
			return
		}
		ast.Inspect(context.root, func(n ast.Node) bool {
			var obj types.Object
			switch node := n.(type) {
			case *ast.Ident:
				obj = context.info.ObjectOf(node) //context.info.Uses[node]
			case *ast.SelectorExpr:
				obj = context.info.ObjectOf(node.Sel) //context.info.Uses[node.Sel]
			default:
				obj = nil
			}
			if obj != nil {
				printObject(obj)
			}
			return true
		})
	}
}

func dumpUses(files ...string) {
	for _, file := range files {
		if shouldSkip(file) {
			continue
		}
		context := createContext(file)
		if !checkContext(context, file) {
			return
		}
		for ident, obj := range context.info.Uses {
			fmt.Printf("IDENT: %v, OBJECT: %v\n", ident, obj)
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func dumpTypes(files ...string) {
|
|
||||||
for _, file := range files {
|
|
||||||
if shouldSkip(file) {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
context := createContext(file)
|
|
||||||
if !checkContext(context, file) {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
for expr, tv := range context.info.Types {
|
|
||||||
fmt.Printf("EXPR: %v, TYPE: %v\n", expr, tv)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func dumpDefs(files ...string) {
|
|
||||||
for _, file := range files {
|
|
||||||
if shouldSkip(file) {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
context := createContext(file)
|
|
||||||
if !checkContext(context, file) {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
for ident, obj := range context.info.Defs {
|
|
||||||
fmt.Printf("IDENT: %v, OBJ: %v\n", ident, obj)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func dumpComments(files ...string) {
|
|
||||||
for _, file := range files {
|
|
||||||
if shouldSkip(file) {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
context := createContext(file)
|
|
||||||
if !checkContext(context, file) {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
for _, group := range context.comments.Comments() {
|
|
||||||
fmt.Println(group.Text())
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
func dumpImports(files ...string) {
|
|
||||||
for _, file := range files {
|
|
||||||
if shouldSkip(file) {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
context := createContext(file)
|
|
||||||
if !checkContext(context, file) {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
for _, pkg := range context.pkg.Imports() {
|
|
||||||
fmt.Println(pkg.Path(), pkg.Name())
|
|
||||||
for _, name := range pkg.Scope().Names() {
|
|
||||||
fmt.Println(" => ", name)
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
@ -1,96 +0,0 @@
package adjacency

import (
	"encoding/json"
	"log"
	// "fmt"
	"github.com/nbutton23/zxcvbn-go/data"
)

type AdjacencyGraph struct {
	Graph         map[string][]string
	averageDegree float64
	Name          string
}

var AdjacencyGph = make(map[string]AdjacencyGraph)

func init() {
	AdjacencyGph["qwerty"] = BuildQwerty()
	AdjacencyGph["dvorak"] = BuildDvorak()
	AdjacencyGph["keypad"] = BuildKeypad()
	AdjacencyGph["macKeypad"] = BuildMacKeypad()
	AdjacencyGph["l33t"] = BuildLeet()
}

func BuildQwerty() AdjacencyGraph {
	data, err := zxcvbn_data.Asset("data/Qwerty.json")
	if err != nil {
		panic("Can't find asset")
	}
	return GetAdjancencyGraphFromFile(data, "qwerty")
}

func BuildDvorak() AdjacencyGraph {
	data, err := zxcvbn_data.Asset("data/Dvorak.json")
	if err != nil {
		panic("Can't find asset")
	}
	return GetAdjancencyGraphFromFile(data, "dvorak")
}

func BuildKeypad() AdjacencyGraph {
	data, err := zxcvbn_data.Asset("data/Keypad.json")
	if err != nil {
		panic("Can't find asset")
	}
	return GetAdjancencyGraphFromFile(data, "keypad")
}

func BuildMacKeypad() AdjacencyGraph {
	data, err := zxcvbn_data.Asset("data/MacKeypad.json")
	if err != nil {
		panic("Can't find asset")
	}
	return GetAdjancencyGraphFromFile(data, "mac_keypad")
}

func BuildLeet() AdjacencyGraph {
	data, err := zxcvbn_data.Asset("data/L33t.json")
	if err != nil {
		panic("Can't find asset")
	}
	return GetAdjancencyGraphFromFile(data, "l33t")
}

func GetAdjancencyGraphFromFile(data []byte, name string) AdjacencyGraph {

	var graph AdjacencyGraph
	err := json.Unmarshal(data, &graph)
	if err != nil {
		log.Fatal(err)
	}
	graph.Name = name
	return graph
}

//on qwerty, 'g' has degree 6, being adjacent to 'ftyhbv'. '\' has degree 1.
//this calculates the average over all keys.
//TODO double check that I ported this correctly scoring.coffee ln 5
func (adjGrp AdjacencyGraph) CalculateAvgDegree() float64 {
	if adjGrp.averageDegree != float64(0) {
		return adjGrp.averageDegree
	}
	var avg float64
	var count float64
	for _, value := range adjGrp.Graph {

		for _, char := range value {
			if char != "" && char != " " {
				avg += float64(len(char))
				count++
			}
		}

	}

	adjGrp.averageDegree = avg / count

	return adjGrp.averageDegree
}
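The comment above describes an average-degree calculation over a keyboard adjacency graph. A minimal, self-contained sketch of that idea (the `averageDegree` helper name and the toy graph are illustrative, not part of the library) counts each key's non-empty neighbours and averages over keys; note the `&&` in the guard, since an `||` there would always be true:

```go
package main

import "fmt"

// averageDegree returns the mean number of non-empty neighbours per key.
// It sketches what AdjacencyGraph.CalculateAvgDegree computes.
func averageDegree(graph map[string][]string) float64 {
	var total, keys float64
	for _, neighbours := range graph {
		degree := 0
		for _, n := range neighbours {
			if n != "" && n != " " { // skip empty slots in the graph
				degree++
			}
		}
		total += float64(degree)
		keys++
	}
	if keys == 0 {
		return 0
	}
	return total / keys
}

func main() {
	// Toy graph: "g" has degree 6 (adjacent to ftyhbv), `\` has degree 1.
	graph := map[string][]string{
		"g":  {"f", "t", "y", "h", "b", "v"},
		`\`: {"]"},
	}
	fmt.Println(averageDegree(graph)) // (6 + 1) / 2
}
```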
File diff suppressed because one or more lines are too long
@ -1,215 +0,0 @@
package entropy

import (
	"github.com/nbutton23/zxcvbn-go/adjacency"
	"github.com/nbutton23/zxcvbn-go/match"
	"github.com/nbutton23/zxcvbn-go/utils/math"
	"math"
	"regexp"
	"unicode"
)

const (
	START_UPPER string = `^[A-Z][^A-Z]+$`
	END_UPPER   string = `^[^A-Z]+[A-Z]$`
	ALL_UPPER   string = `^[A-Z]+$`
	NUM_YEARS          = float64(119) // years match against 1900 - 2019
	NUM_MONTHS         = float64(12)
	NUM_DAYS           = float64(31)
)

var (
	KEYPAD_STARTING_POSITIONS = len(adjacency.AdjacencyGph["keypad"].Graph)
	KEYPAD_AVG_DEGREE         = adjacency.AdjacencyGph["keypad"].CalculateAvgDegree()
)

func DictionaryEntropy(match match.Match, rank float64) float64 {
	baseEntropy := math.Log2(rank)
	upperCaseEntropy := extraUpperCaseEntropy(match)
	//TODO: L33t
	return baseEntropy + upperCaseEntropy
}

func extraUpperCaseEntropy(match match.Match) float64 {
	word := match.Token

	allLower := true

	for _, char := range word {
		if unicode.IsUpper(char) {
			allLower = false
			break
		}
	}
	if allLower {
		return float64(0)
	}

	//a capitalized word is the most common capitalization scheme,
	//so it only doubles the search space (uncapitalized + capitalized): 1 extra bit of entropy.
	//allcaps and end-capitalized are common enough too, underestimate as 1 extra bit to be safe.

	for _, regex := range []string{START_UPPER, END_UPPER, ALL_UPPER} {
		matcher := regexp.MustCompile(regex)

		if matcher.MatchString(word) {
			return float64(1)
		}
	}
	//Otherwise calculate the number of ways to capitalize U+L uppercase+lowercase letters with U uppercase letters or
	//less. Or, if there's more uppercase than lower (e.g. PASSwORD), the number of ways to lowercase U+L letters
	//with L lowercase letters or less.

	countUpper, countLower := float64(0), float64(0)
	for _, char := range word {
		if unicode.IsUpper(char) {
			countUpper++
		} else if unicode.IsLower(char) {
			countLower++
		}
	}
	totalLength := countLower + countUpper
	var possibilities float64

	for i := float64(0); i <= math.Min(countUpper, countLower); i++ {
		possibilities += float64(zxcvbn_math.NChoseK(totalLength, i))
	}

	if possibilities < 1 {
		return float64(1)
	}

	return float64(math.Log2(possibilities))
}

func SpatialEntropy(match match.Match, turns int, shiftCount int) float64 {
	var s, d float64
	if match.DictionaryName == "qwerty" || match.DictionaryName == "dvorak" {
		//todo: verify qwerty and dvorak have the same length and degree
		s = float64(len(adjacency.BuildQwerty().Graph))
		d = adjacency.BuildQwerty().CalculateAvgDegree()
	} else {
		s = float64(KEYPAD_STARTING_POSITIONS)
		d = KEYPAD_AVG_DEGREE
	}

	possibilities := float64(0)

	length := float64(len(match.Token))

	//TODO: Should this be <= or just < ?
	//Estimate the number of possible patterns w/ length L or less with t turns or less
	for i := float64(2); i <= length+1; i++ {
		possibleTurns := math.Min(float64(turns), i-1)
		for j := float64(1); j <= possibleTurns+1; j++ {
			x := zxcvbn_math.NChoseK(i-1, j-1) * s * math.Pow(d, j)
			possibilities += x
		}
	}

	entropy := math.Log2(possibilities)
	//add extra entropy for shifted keys (% instead of 5, A instead of a).
	//Math is similar to extra entropy for uppercase letters in dictionary matches.

	if S := float64(shiftCount); S > float64(0) {
		possibilities = float64(0)
		U := length - S

		for i := float64(0); i < math.Min(S, U)+1; i++ {
			possibilities += zxcvbn_math.NChoseK(S+U, i)
		}

		entropy += math.Log2(possibilities)
	}

	return entropy
}

func RepeatEntropy(match match.Match) float64 {
	cardinality := CalcBruteForceCardinality(match.Token)
	entropy := math.Log2(cardinality * float64(len(match.Token)))

	return entropy
}

//TODO: Validate against python
func CalcBruteForceCardinality(password string) float64 {
	lower, upper, digits, symbols := float64(0), float64(0), float64(0), float64(0)

	for _, char := range password {
		if unicode.IsLower(char) {
			lower = float64(26)
		} else if unicode.IsDigit(char) {
			digits = float64(10)
		} else if unicode.IsUpper(char) {
			upper = float64(26)
		} else {
			symbols = float64(33)
		}
	}

	cardinality := lower + upper + digits + symbols
	return cardinality
}

func SequenceEntropy(match match.Match, dictionaryLength int, ascending bool) float64 {
	firstChar := match.Token[0]
	baseEntropy := float64(0)
	if string(firstChar) == "a" || string(firstChar) == "1" {
		baseEntropy = float64(0)
	} else {
		baseEntropy = math.Log2(float64(dictionaryLength))
		//TODO: should this be just the first or any char?
		if unicode.IsUpper(rune(firstChar)) {
			baseEntropy++
		}
	}

	if !ascending {
		baseEntropy++
	}
	return baseEntropy + math.Log2(float64(len(match.Token)))
}

func ExtraLeetEntropy(match match.Match, password string) float64 {
	var substitutions float64
	var unsub float64
	subPassword := password[match.I:match.J]
	for index, char := range subPassword {
		if string(char) != string(match.Token[index]) {
			substitutions++
		} else {
			//TODO: Make this only true for 1337 chars that are not subs?
			unsub++
		}
	}

	var possibilities float64

	for i := float64(0); i <= math.Min(substitutions, unsub)+1; i++ {
		possibilities += zxcvbn_math.NChoseK(substitutions+unsub, i)
	}

	if possibilities <= 1 {
		return float64(1)
	}
	return math.Log2(possibilities)
}

func YearEntropy(dateMatch match.DateMatch) float64 {
	return math.Log2(NUM_YEARS)
}

func DateEntropy(dateMatch match.DateMatch) float64 {
	var entropy float64
	if dateMatch.Year < 100 {
		entropy = math.Log2(NUM_DAYS * NUM_MONTHS * 100)
	} else {
		entropy = math.Log2(NUM_DAYS * NUM_MONTHS * NUM_YEARS)
	}

	if dateMatch.Separator != "" {
		entropy += 2 //add two bits for separator selection [/,-,.,etc]
	}
	return entropy
}
@ -1,47 +0,0 @@
package frequency

import (
	"encoding/json"
	"github.com/nbutton23/zxcvbn-go/data"
	"log"
)

type FrequencyList struct {
	Name string
	List []string
}

var FrequencyLists = make(map[string]FrequencyList)

func init() {
	maleFilePath := getAsset("data/MaleNames.json")
	femaleFilePath := getAsset("data/FemaleNames.json")
	surnameFilePath := getAsset("data/Surnames.json")
	englishFilePath := getAsset("data/English.json")
	passwordsFilePath := getAsset("data/Passwords.json")

	FrequencyLists["MaleNames"] = GetStringListFromAsset(maleFilePath, "MaleNames")
	FrequencyLists["FemaleNames"] = GetStringListFromAsset(femaleFilePath, "FemaleNames")
	FrequencyLists["Surname"] = GetStringListFromAsset(surnameFilePath, "Surname")
	FrequencyLists["English"] = GetStringListFromAsset(englishFilePath, "English")
	FrequencyLists["Passwords"] = GetStringListFromAsset(passwordsFilePath, "Passwords")

}

func getAsset(name string) []byte {
	data, err := zxcvbn_data.Asset(name)
	if err != nil {
		panic("Error getting asset " + name)
	}

	return data
}

func GetStringListFromAsset(data []byte, name string) FrequencyList {

	var tempList FrequencyList
	err := json.Unmarshal(data, &tempList)
	if err != nil {
		log.Fatal(err)
	}
	tempList.Name = name
	return tempList
}
@ -1,35 +0,0 @@
package match

type Matches []Match

func (s Matches) Len() int {
	return len(s)
}

func (s Matches) Swap(i, j int) {
	s[i], s[j] = s[j], s[i]
}

func (s Matches) Less(i, j int) bool {
	if s[i].I < s[j].I {
		return true
	} else if s[i].I == s[j].I {
		return s[i].J < s[j].J
	} else {
		return false
	}
}

type Match struct {
	Pattern        string
	I, J           int
	Token          string
	DictionaryName string
	Entropy        float64
}

type DateMatch struct {
	Pattern          string
	I, J             int
	Token            string
	Separator        string
	Day, Month, Year int64
}
@ -1,189 +0,0 @@
package matching

import (
	"github.com/nbutton23/zxcvbn-go/entropy"
	"github.com/nbutton23/zxcvbn-go/match"
	"regexp"
	"strconv"
	"strings"
)

func checkDate(day, month, year int64) (bool, int64, int64, int64) {
	if (12 <= month && month <= 31) && day <= 12 {
		day, month = month, day
	}

	if day > 31 || month > 12 {
		return false, 0, 0, 0
	}

	if !((1900 <= year && year <= 2019) || (0 <= year && year <= 99)) {
		return false, 0, 0, 0
	}

	return true, day, month, year
}

func dateSepMatcher(password string) []match.Match {
	dateMatches := dateSepMatchHelper(password)

	var matches []match.Match
	for _, dateMatch := range dateMatches {
		match := match.Match{
			I:              dateMatch.I,
			J:              dateMatch.J,
			Entropy:        entropy.DateEntropy(dateMatch),
			DictionaryName: "date_match",
			Token:          dateMatch.Token,
		}

		matches = append(matches, match)
	}

	return matches
}

func dateSepMatchHelper(password string) []match.DateMatch {

	var matches []match.DateMatch

	matcher := regexp.MustCompile(DATE_RX_YEAR_SUFFIX)
	for _, v := range matcher.FindAllString(password, len(password)) {
		splitV := matcher.FindAllStringSubmatch(v, len(v))
		i := strings.Index(password, v)
		j := i + len(v)
		day, _ := strconv.ParseInt(splitV[0][4], 10, 16)
		month, _ := strconv.ParseInt(splitV[0][2], 10, 16)
		year, _ := strconv.ParseInt(splitV[0][6], 10, 16)
		match := match.DateMatch{Day: day, Month: month, Year: year, Separator: splitV[0][5], I: i, J: j, Token: password[i:j]}
		matches = append(matches, match)
	}

	matcher = regexp.MustCompile(DATE_RX_YEAR_PREFIX)
	for _, v := range matcher.FindAllString(password, len(password)) {
		splitV := matcher.FindAllStringSubmatch(v, len(v))
		i := strings.Index(password, v)
		j := i + len(v)
		day, _ := strconv.ParseInt(splitV[0][4], 10, 16)
		month, _ := strconv.ParseInt(splitV[0][6], 10, 16)
		year, _ := strconv.ParseInt(splitV[0][2], 10, 16)
		match := match.DateMatch{Day: day, Month: month, Year: year, Separator: splitV[0][5], I: i, J: j, Token: password[i:j]}
		matches = append(matches, match)
	}

	var out []match.DateMatch
	for _, match := range matches {
		if valid, day, month, year := checkDate(match.Day, match.Month, match.Year); valid {
			match.Pattern = "date"
			match.Day = day
			match.Month = month
			match.Year = year
			out = append(out, match)
		}
	}
	return out

}

type DateMatchCandidate struct {
	DayMonth string
	Year     string
	I, J     int
}

type DateMatchCandidateTwo struct {
	Day   string
	Month string
	Year  string
	I, J  int
}

func dateWithoutSepMatch(password string) []match.Match {
	dateMatches := dateWithoutSepMatchHelper(password)

	var matches []match.Match
	for _, dateMatch := range dateMatches {
		match := match.Match{
			I:              dateMatch.I,
			J:              dateMatch.J,
			Entropy:        entropy.DateEntropy(dateMatch),
			DictionaryName: "date_match",
			Token:          dateMatch.Token,
		}

		matches = append(matches, match)
	}

	return matches
}

//TODO Has issues with 6 digit dates
func dateWithoutSepMatchHelper(password string) (matches []match.DateMatch) {
	matcher := regexp.MustCompile(DATE_WITHOUT_SEP_MATCH)
	for _, v := range matcher.FindAllString(password, len(password)) {
		i := strings.Index(password, v)
		j := i + len(v)
		length := len(v)
		lastIndex := length - 1
		var candidatesRoundOne []DateMatchCandidate

		if length <= 6 {
			//2-digit year prefix
			candidatesRoundOne = append(candidatesRoundOne, buildDateMatchCandidate(v[2:], v[0:2], i, j))

			//2-digit year suffix
			candidatesRoundOne = append(candidatesRoundOne, buildDateMatchCandidate(v[0:lastIndex-2], v[lastIndex-2:], i, j))
		}
		if length >= 6 {
			//4-digit year prefix
			candidatesRoundOne = append(candidatesRoundOne, buildDateMatchCandidate(v[4:], v[0:4], i, j))

			//4-digit year suffix
			candidatesRoundOne = append(candidatesRoundOne, buildDateMatchCandidate(v[0:lastIndex-3], v[lastIndex-3:], i, j))
		}

		var candidatesRoundTwo []DateMatchCandidateTwo
		for _, c := range candidatesRoundOne {
			if len(c.DayMonth) == 2 {
				candidatesRoundTwo = append(candidatesRoundTwo, buildDateMatchCandidateTwo(c.DayMonth[0:1], c.DayMonth[1:2], c.Year, c.I, c.J))
			} else if len(c.DayMonth) == 3 {
				candidatesRoundTwo = append(candidatesRoundTwo, buildDateMatchCandidateTwo(c.DayMonth[0:2], c.DayMonth[2:3], c.Year, c.I, c.J))
				candidatesRoundTwo = append(candidatesRoundTwo, buildDateMatchCandidateTwo(c.DayMonth[0:1], c.DayMonth[1:3], c.Year, c.I, c.J))
			} else if len(c.DayMonth) == 4 {
				candidatesRoundTwo = append(candidatesRoundTwo, buildDateMatchCandidateTwo(c.DayMonth[0:2], c.DayMonth[2:4], c.Year, c.I, c.J))
			}
		}

		for _, candidate := range candidatesRoundTwo {
			intDay, err := strconv.ParseInt(candidate.Day, 10, 16)
			if err != nil {
				continue
			}

			intMonth, err := strconv.ParseInt(candidate.Month, 10, 16)

			if err != nil {
				continue
			}

			intYear, err := strconv.ParseInt(candidate.Year, 10, 16)
			if err != nil {
				continue
			}

			if ok, _, _, _ := checkDate(intDay, intMonth, intYear); ok {
				matches = append(matches, match.DateMatch{Token: password, Pattern: "date", Day: intDay, Month: intMonth, Year: intYear, I: i, J: j})
			}

		}
	}

	return matches
}

func buildDateMatchCandidate(dayMonth, year string, i, j int) DateMatchCandidate {
	return DateMatchCandidate{DayMonth: dayMonth, Year: year, I: i, J: j}
}

func buildDateMatchCandidateTwo(day, month string, year string, i, j int) DateMatchCandidateTwo {

	return DateMatchCandidateTwo{Day: day, Month: month, Year: year, I: i, J: j}
}
@ -1,54 +0,0 @@
package matching

import (
	"github.com/nbutton23/zxcvbn-go/entropy"
	"github.com/nbutton23/zxcvbn-go/match"
	"strings"
)

func buildDictMatcher(dictName string, rankedDict map[string]int) func(password string) []match.Match {
	return func(password string) []match.Match {
		matches := dictionaryMatch(password, dictName, rankedDict)
		for i := range matches {
			matches[i].DictionaryName = dictName
		}
		return matches
	}

}

func dictionaryMatch(password string, dictionaryName string, rankedDict map[string]int) []match.Match {
	length := len(password)
	var results []match.Match
	pwLower := strings.ToLower(password)

	for i := 0; i < length; i++ {
		for j := i; j < length; j++ {
			word := pwLower[i : j+1]
			if val, ok := rankedDict[word]; ok {
				matchDic := match.Match{Pattern: "dictionary",
					DictionaryName: dictionaryName,
					I:              i,
					J:              j,
					Token:          password[i : j+1],
				}
				matchDic.Entropy = entropy.DictionaryEntropy(matchDic, float64(val))

				results = append(results, matchDic)
			}
		}
	}

	return results
}

func buildRankedDict(unrankedList []string) map[string]int {

	result := make(map[string]int)

	for i, v := range unrankedList {
		result[strings.ToLower(v)] = i + 1
	}

	return result
}
@ -1,68 +0,0 @@
package matching

import (
	"github.com/nbutton23/zxcvbn-go/entropy"
	"github.com/nbutton23/zxcvbn-go/match"
	"strings"
)

func l33tMatch(password string) []match.Match {

	substitutions := relevantL33tSubtable(password)

	permutations := getAllPermutationsOfLeetSubstitutions(password, substitutions)

	var matches []match.Match

	for _, permutation := range permutations {
		for _, matcher := range DICTIONARY_MATCHERS {
			matches = append(matches, matcher(permutation)...)
		}
	}

	for i := range matches {
		matches[i].Entropy += entropy.ExtraLeetEntropy(matches[i], password)
		matches[i].DictionaryName = matches[i].DictionaryName + "_3117"
	}

	return matches
}

func getAllPermutationsOfLeetSubstitutions(password string, substitutionsMap map[string][]string) []string {

	var permutations []string

	for index, char := range password {
		for value, splice := range substitutionsMap {
			for _, sub := range splice {
				if string(char) == sub {
					var permutation string
					permutation = password[:index] + value + password[index+1:]

					permutations = append(permutations, permutation)
					if index < len(permutation) {
						tempPermutations := getAllPermutationsOfLeetSubstitutions(permutation[index+1:], substitutionsMap)
						for _, temp := range tempPermutations {
							permutations = append(permutations, permutation[:index+1]+temp)
						}

					}
				}
			}
		}
	}

	return permutations
}

func relevantL33tSubtable(password string) map[string][]string {
	relevantSubs := make(map[string][]string)
	for key, values := range L33T_TABLE.Graph {
		for _, value := range values {
			if strings.Contains(password, value) {
				relevantSubs[key] = append(relevantSubs[key], value)
			}
		}
	}
	return relevantSubs
}
@ -1,77 +0,0 @@
package matching

import (
	"github.com/nbutton23/zxcvbn-go/adjacency"
	"github.com/nbutton23/zxcvbn-go/frequency"
	"github.com/nbutton23/zxcvbn-go/match"
	"sort"
)

var (
	DICTIONARY_MATCHERS []func(password string) []match.Match
	MATCHERS            []func(password string) []match.Match
	ADJACENCY_GRAPHS    []adjacency.AdjacencyGraph
	L33T_TABLE          adjacency.AdjacencyGraph

	SEQUENCES map[string]string
)

const (
	DATE_RX_YEAR_SUFFIX    string = `((\d{1,2})(\s|-|\/|\\|_|\.)(\d{1,2})(\s|-|\/|\\|_|\.)(19\d{2}|200\d|201\d|\d{2}))`
	DATE_RX_YEAR_PREFIX    string = `((19\d{2}|200\d|201\d|\d{2})(\s|-|/|\\|_|\.)(\d{1,2})(\s|-|/|\\|_|\.)(\d{1,2}))`
	DATE_WITHOUT_SEP_MATCH string = `\d{4,8}`
)

func init() {
	loadFrequencyList()
}

func Omnimatch(password string, userInputs []string) (matches []match.Match) {

	//Can I run into the issue where nil is not equal to nil?
	if DICTIONARY_MATCHERS == nil || ADJACENCY_GRAPHS == nil {
		loadFrequencyList()
	}

	if userInputs != nil {
		userInputMatcher := buildDictMatcher("user_inputs", buildRankedDict(userInputs))
		matches = userInputMatcher(password)
	}

	for _, matcher := range MATCHERS {
		matches = append(matches, matcher(password)...)
	}
	sort.Sort(match.Matches(matches))
	return matches
}

func loadFrequencyList() {

	for n, list := range frequency.FrequencyLists {
		DICTIONARY_MATCHERS = append(DICTIONARY_MATCHERS, buildDictMatcher(n, buildRankedDict(list.List)))
	}

	L33T_TABLE = adjacency.AdjacencyGph["l33t"]

	ADJACENCY_GRAPHS = append(ADJACENCY_GRAPHS, adjacency.AdjacencyGph["qwerty"])
	ADJACENCY_GRAPHS = append(ADJACENCY_GRAPHS, adjacency.AdjacencyGph["dvorak"])
	ADJACENCY_GRAPHS = append(ADJACENCY_GRAPHS, adjacency.AdjacencyGph["keypad"])
	ADJACENCY_GRAPHS = append(ADJACENCY_GRAPHS, adjacency.AdjacencyGph["macKeypad"])

	//l33tFilePath, _ := filepath.Abs("adjacency/L33t.json")
	//L33T_TABLE = adjacency.GetAdjancencyGraphFromFile(l33tFilePath, "l33t")

	SEQUENCES = make(map[string]string)
	SEQUENCES["lower"] = "abcdefghijklmnopqrstuvwxyz"
	SEQUENCES["upper"] = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"
	SEQUENCES["digits"] = "0123456789"

	MATCHERS = append(MATCHERS, DICTIONARY_MATCHERS...)
	MATCHERS = append(MATCHERS, spatialMatch)
	MATCHERS = append(MATCHERS, repeatMatch)
	MATCHERS = append(MATCHERS, sequenceMatch)
	MATCHERS = append(MATCHERS, l33tMatch)
	MATCHERS = append(MATCHERS, dateSepMatcher)
	MATCHERS = append(MATCHERS, dateWithoutSepMatch)

}
@@ -1,59 +0,0 @@
package matching

import (
	"github.com/nbutton23/zxcvbn-go/entropy"
	"github.com/nbutton23/zxcvbn-go/match"
	"strings"
)

func repeatMatch(password string) []match.Match {
	var matches []match.Match

	//Loop through password. if current == prev currentStreak++ else if currentStreak > 2 {buildMatch; currentStreak = 1} prev = current
	var current, prev string
	currentStreak := 1
	var i int
	var char rune
	for i, char = range password {
		current = string(char)
		if i == 0 {
			prev = current
			continue
		}

		if strings.ToLower(current) == strings.ToLower(prev) {
			currentStreak++

		} else if currentStreak > 2 {
			iPos := i - currentStreak
			jPos := i - 1
			matchRepeat := match.Match{
				Pattern:        "repeat",
				I:              iPos,
				J:              jPos,
				Token:          password[iPos : jPos+1],
				DictionaryName: prev}
			matchRepeat.Entropy = entropy.RepeatEntropy(matchRepeat)
			matches = append(matches, matchRepeat)
			currentStreak = 1
		} else {
			currentStreak = 1
		}

		prev = current
	}

	if currentStreak > 2 {
		iPos := i - currentStreak + 1
		jPos := i
		matchRepeat := match.Match{
			Pattern:        "repeat",
			I:              iPos,
			J:              jPos,
			Token:          password[iPos : jPos+1],
			DictionaryName: prev}
		matchRepeat.Entropy = entropy.RepeatEntropy(matchRepeat)
		matches = append(matches, matchRepeat)
	}
	return matches
}
@@ -1,68 +0,0 @@
package matching

import (
	"github.com/nbutton23/zxcvbn-go/entropy"
	"github.com/nbutton23/zxcvbn-go/match"
	"strings"
)

func sequenceMatch(password string) []match.Match {
	var matches []match.Match
	for i := 0; i < len(password); {
		j := i + 1
		var seq string
		var seqName string
		seqDirection := 0
		for seqCandidateName, seqCandidate := range SEQUENCES {
			iN := strings.Index(seqCandidate, string(password[i]))
			var jN int
			if j < len(password) {
				jN = strings.Index(seqCandidate, string(password[j]))
			} else {
				jN = -1
			}

			if iN > -1 && jN > -1 {
				direction := jN - iN
				if direction == 1 || direction == -1 {
					seq = seqCandidate
					seqName = seqCandidateName
					seqDirection = direction
					break
				}
			}

		}

		if seq != "" {
			for {
				var prevN, curN int
				if j < len(password) {
					prevChar, curChar := password[j-1], password[j]
					prevN, curN = strings.Index(seq, string(prevChar)), strings.Index(seq, string(curChar))
				}

				if j == len(password) || curN-prevN != seqDirection {
					if j-i > 2 {
						matchSequence := match.Match{
							Pattern:        "sequence",
							I:              i,
							J:              j - 1,
							Token:          password[i:j],
							DictionaryName: seqName,
						}

						matchSequence.Entropy = entropy.SequenceEntropy(matchSequence, len(seq), (seqDirection == 1))
						matches = append(matches, matchSequence)
					}
					break
				} else {
					j += 1
				}

			}
		}
		i = j
	}
	return matches
}
@@ -1,80 +0,0 @@
package matching

import (
	"github.com/nbutton23/zxcvbn-go/adjacency"
	"github.com/nbutton23/zxcvbn-go/entropy"
	"github.com/nbutton23/zxcvbn-go/match"
	"strings"
)

func spatialMatch(password string) (matches []match.Match) {
	for _, graph := range ADJACENCY_GRAPHS {
		if graph.Graph != nil {
			matches = append(matches, spatialMatchHelper(password, graph)...)
		}
	}
	return matches
}

func spatialMatchHelper(password string, graph adjacency.AdjacencyGraph) (matches []match.Match) {

	for i := 0; i < len(password)-1; {
		j := i + 1
		lastDirection := -99 //an int that it should never be!
		turns := 0
		shiftedCount := 0

		for {
			prevChar := password[j-1]
			found := false
			foundDirection := -1
			curDirection := -1
			//My graphs seem to be wrong. . . and where the hell is qwerty
			adjacents := graph.Graph[string(prevChar)]
			//Consider growing pattern by one character if j hasn't gone over the edge
			if j < len(password) {
				curChar := password[j]
				for _, adj := range adjacents {
					curDirection += 1

					if strings.Index(adj, string(curChar)) != -1 {
						found = true
						foundDirection = curDirection

						if strings.Index(adj, string(curChar)) == 1 {
							//index 1 in the adjacency means the key is shifted, 0 means unshifted: A vs a, % vs 5, etc.
							//for example, 'q' is adjacent to the entry '2@'. @ is shifted w/ index 1, 2 is unshifted.
							shiftedCount += 1
						}

						if lastDirection != foundDirection {
							//adding a turn is correct even in the initial case when last_direction is null:
							//every spatial pattern starts with a turn.
							turns += 1
							lastDirection = foundDirection
						}
						break
					}
				}
			}

			//if the current pattern continued, extend j and try to grow again
			if found {
				j += 1
			} else {
				//otherwise push the pattern discovered so far, if any...
				//don't consider length 1 or 2 chains.
				if j-i > 2 {
					matchSpc := match.Match{Pattern: "spatial", I: i, J: j - 1, Token: password[i:j], DictionaryName: graph.Name}
					matchSpc.Entropy = entropy.SpatialEntropy(matchSpc, turns, shiftedCount)
					matches = append(matches, matchSpc)
				}
				//. . . and then start a new search from the rest of the password
				i = j
				break
			}
		}

	}
	return matches
}
@@ -1,180 +0,0 @@
package scoring

import (
	"fmt"
	"github.com/nbutton23/zxcvbn-go/entropy"
	"github.com/nbutton23/zxcvbn-go/match"
	"github.com/nbutton23/zxcvbn-go/utils/math"
	"math"
	"sort"
)

const (
	START_UPPER string = `^[A-Z][^A-Z]+$`
	END_UPPER   string = `^[^A-Z]+[A-Z]$'`
	ALL_UPPER   string = `^[A-Z]+$`

	//for a hash function like bcrypt/scrypt/PBKDF2, 10ms per guess is a safe lower bound.
	//(usually a guess would take longer -- this assumes fast hardware and a small work factor.)
	//adjust for your site accordingly if you use another hash function, possibly by
	//several orders of magnitude!
	SINGLE_GUESS      float64 = 0.010
	NUM_ATTACKERS     float64 = 100 //Cores used to make guesses
	SECONDS_PER_GUESS float64 = SINGLE_GUESS / NUM_ATTACKERS
)

type MinEntropyMatch struct {
	Password         string
	Entropy          float64
	MatchSequence    []match.Match
	CrackTime        float64
	CrackTimeDisplay string
	Score            int
	CalcTime         float64
}

/*
Returns minimum entropy

Takes a list of overlapping matches, returns the non-overlapping sublist with
minimum entropy. O(nm) dp alg for length-n password with m candidate matches.
*/
func MinimumEntropyMatchSequence(password string, matches []match.Match) MinEntropyMatch {
	bruteforceCardinality := float64(entropy.CalcBruteForceCardinality(password))
	upToK := make([]float64, len(password))
	backPointers := make([]match.Match, len(password))

	for k := 0; k < len(password); k++ {
		upToK[k] = get(upToK, k-1) + math.Log2(bruteforceCardinality)

		for _, match := range matches {
			if match.J != k {
				continue
			}

			i, j := match.I, match.J
			//see if best entropy up to i-1 + entropy of match is less that current min at j
			upTo := get(upToK, i-1)
			candidateEntropy := upTo + match.Entropy

			if candidateEntropy < upToK[j] {
				upToK[j] = candidateEntropy
				match.Entropy = candidateEntropy
				backPointers[j] = match
			}
		}
	}

	//walk backwards and decode the best sequence
	var matchSequence []match.Match
	passwordLen := len(password)
	passwordLen--
	for k := passwordLen; k >= 0; {
		match := backPointers[k]
		if match.Pattern != "" {
			matchSequence = append(matchSequence, match)
			k = match.I - 1

		} else {
			k--
		}

	}
	sort.Sort(match.Matches(matchSequence))

	makeBruteForceMatch := func(i, j int) match.Match {
		return match.Match{Pattern: "bruteforce",
			I:       i,
			J:       j,
			Token:   password[i : j+1],
			Entropy: math.Log2(math.Pow(bruteforceCardinality, float64(j-i)))}

	}

	k := 0
	var matchSequenceCopy []match.Match
	for _, match := range matchSequence {
		i, j := match.I, match.J
		if i-k > 0 {
			matchSequenceCopy = append(matchSequenceCopy, makeBruteForceMatch(k, i-1))
		}
		k = j + 1
		matchSequenceCopy = append(matchSequenceCopy, match)
	}

	if k < len(password) {
		matchSequenceCopy = append(matchSequenceCopy, makeBruteForceMatch(k, len(password)-1))
	}
	var minEntropy float64
	if len(password) == 0 {
		minEntropy = float64(0)
	} else {
		minEntropy = upToK[len(password)-1]
	}

	crackTime := roundToXDigits(entropyToCrackTime(minEntropy), 3)
	return MinEntropyMatch{Password: password,
		Entropy:          roundToXDigits(minEntropy, 3),
		MatchSequence:    matchSequenceCopy,
		CrackTime:        crackTime,
		CrackTimeDisplay: displayTime(crackTime),
		Score:            crackTimeToScore(crackTime)}

}
func get(a []float64, i int) float64 {
	if i < 0 || i >= len(a) {
		return float64(0)
	}

	return a[i]
}

func entropyToCrackTime(entropy float64) float64 {
	crackTime := (0.5 * math.Pow(float64(2), entropy)) * SECONDS_PER_GUESS

	return crackTime
}

func roundToXDigits(number float64, digits int) float64 {
	return zxcvbn_math.Round(number, .5, digits)
}

func displayTime(seconds float64) string {
	formater := "%.1f %s"
	minute := float64(60)
	hour := minute * float64(60)
	day := hour * float64(24)
	month := day * float64(31)
	year := month * float64(12)
	century := year * float64(100)

	if seconds < minute {
		return "instant"
	} else if seconds < hour {
		return fmt.Sprintf(formater, (1 + math.Ceil(seconds/minute)), "minutes")
	} else if seconds < day {
		return fmt.Sprintf(formater, (1 + math.Ceil(seconds/hour)), "hours")
	} else if seconds < month {
		return fmt.Sprintf(formater, (1 + math.Ceil(seconds/day)), "days")
	} else if seconds < year {
		return fmt.Sprintf(formater, (1 + math.Ceil(seconds/month)), "months")
	} else if seconds < century {
		return fmt.Sprintf(formater, (1 + math.Ceil(seconds/century)), "years")
	} else {
		return "centuries"
	}
}

func crackTimeToScore(seconds float64) int {
	if seconds < math.Pow(10, 2) {
		return 0
	} else if seconds < math.Pow(10, 4) {
		return 1
	} else if seconds < math.Pow(10, 6) {
		return 2
	} else if seconds < math.Pow(10, 8) {
		return 3
	}

	return 4
}
@@ -1,40 +0,0 @@
package zxcvbn_math

import "math"

/**
I am surprised that I have to define these. . . Maybe i just didn't look hard enough for a lib.
*/

//http://blog.plover.com/math/choose.html
func NChoseK(n, k float64) float64 {
	if k > n {
		return 0
	} else if k == 0 {
		return 1
	}

	var r float64 = 1

	for d := float64(1); d <= k; d++ {
		r *= n
		r /= d
		n--
	}

	return r
}

func Round(val float64, roundOn float64, places int) (newVal float64) {
	var round float64
	pow := math.Pow(10, float64(places))
	digit := pow * val
	_, div := math.Modf(digit)
	if div >= roundOn {
		round = math.Ceil(digit)
	} else {
		round = math.Floor(digit)
	}
	newVal = round / pow
	return
}
@@ -1,19 +0,0 @@
package zxcvbn

import (
	"github.com/nbutton23/zxcvbn-go/matching"
	"github.com/nbutton23/zxcvbn-go/scoring"
	"github.com/nbutton23/zxcvbn-go/utils/math"
	"time"
)

func PasswordStrength(password string, userInputs []string) scoring.MinEntropyMatch {
	start := time.Now()
	matches := matching.Omnimatch(password, userInputs)
	result := scoring.MinimumEntropyMatchSequence(password, matches)
	end := time.Now()

	calcTime := end.Nanosecond() - start.Nanosecond()
	result.CalcTime = zxcvbn_math.Round(float64(calcTime)*time.Nanosecond.Seconds(), .5, 3)
	return result
}
@@ -1,51 +0,0 @@
package glob

import "strings"

// The character which is treated like a glob
const GLOB = "*"

// Glob will test a string pattern, potentially containing globs, against a
// subject string. The result is a simple true/false, determining whether or
// not the glob pattern matched the subject text.
func Glob(pattern, subj string) bool {
	// Empty pattern can only match empty subject
	if pattern == "" {
		return subj == pattern
	}

	// If the pattern _is_ a glob, it matches everything
	if pattern == GLOB {
		return true
	}

	parts := strings.Split(pattern, GLOB)

	if len(parts) == 1 {
		// No globs in pattern, so test for equality
		return subj == pattern
	}

	leadingGlob := strings.HasPrefix(pattern, GLOB)
	trailingGlob := strings.HasSuffix(pattern, GLOB)
	end := len(parts) - 1

	// Check the first section. Requires special handling.
	if !leadingGlob && !strings.HasPrefix(subj, parts[0]) {
		return false
	}

	// Go over the middle parts and ensure they match.
	for i := 1; i < end; i++ {
		if !strings.Contains(subj, parts[i]) {
			return false
		}

		// Trim evaluated text from subj as we loop over the pattern.
		idx := strings.Index(subj, parts[i]) + len(parts[i])
		subj = subj[idx:]
	}

	// Reached the last section. Requires special handling.
	return trailingGlob || strings.HasSuffix(subj, parts[end])
}
@@ -1,27 +0,0 @@
Copyright (c) 2013 Frederik Zipp. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

   * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
   * Neither the name of the copyright owner nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -1,222 +0,0 @@
// Copyright 2013 Frederik Zipp. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Gocyclo calculates the cyclomatic complexities of functions and
// methods in Go source code.
//
// Usage:
//      gocyclo [<flag> ...] <Go file or directory> ...
//
// Flags
//      -over N   show functions with complexity > N only and
//                return exit code 1 if the output is non-empty
//      -top N    show the top N most complex functions only
//      -avg      show the average complexity
//
// The output fields for each line are:
// <complexity> <package> <function> <file:row:column>
package main

import (
	"flag"
	"fmt"
	"go/ast"
	"go/parser"
	"go/token"
	"io"
	"os"
	"path/filepath"
	"sort"
)

const usageDoc = `Calculate cyclomatic complexities of Go functions.
usage:
        gocyclo [<flag> ...] <Go file or directory> ...

Flags
        -over N   show functions with complexity > N only and
                  return exit code 1 if the set is non-empty
        -top N    show the top N most complex functions only
        -avg      show the average complexity over all functions,
                  not depending on whether -over or -top are set

The output fields for each line are:
<complexity> <package> <function> <file:row:column>
`

func usage() {
	fmt.Fprintf(os.Stderr, usageDoc)
	os.Exit(2)
}

var (
	over = flag.Int("over", 0, "show functions with complexity > N only")
	top  = flag.Int("top", -1, "show the top N most complex functions only")
	avg  = flag.Bool("avg", false, "show the average complexity")
)

func main() {
	flag.Usage = usage
	flag.Parse()
	args := flag.Args()
	if len(args) == 0 {
		usage()
	}

	stats := analyze(args)
	sort.Sort(byComplexity(stats))
	written := writeStats(os.Stdout, stats)

	if *avg {
		showAverage(stats)
	}

	if *over > 0 && written > 0 {
		os.Exit(1)
	}
}

func analyze(paths []string) []stat {
	stats := make([]stat, 0)
	for _, path := range paths {
		if isDir(path) {
			stats = analyzeDir(path, stats)
		} else {
			stats = analyzeFile(path, stats)
		}
	}
	return stats
}

func isDir(filename string) bool {
	fi, err := os.Stat(filename)
	return err == nil && fi.IsDir()
}

func analyzeFile(fname string, stats []stat) []stat {
	fset := token.NewFileSet()
	f, err := parser.ParseFile(fset, fname, nil, 0)
	if err != nil {
		exitError(err)
	}
	return buildStats(f, fset, stats)
}

func analyzeDir(dirname string, stats []stat) []stat {
	files, _ := filepath.Glob(filepath.Join(dirname, "*.go"))
	for _, file := range files {
		stats = analyzeFile(file, stats)
	}
	return stats
}

func exitError(err error) {
	fmt.Fprintln(os.Stderr, err)
	os.Exit(1)
}

func writeStats(w io.Writer, sortedStats []stat) int {
	for i, stat := range sortedStats {
		if i == *top {
			return i
		}
		if stat.Complexity <= *over {
			return i
		}
		fmt.Fprintln(w, stat)
	}
	return len(sortedStats)
}

func showAverage(stats []stat) {
	fmt.Printf("Average: %.3g\n", average(stats))
}

func average(stats []stat) float64 {
	total := 0
	for _, s := range stats {
		total += s.Complexity
	}
	return float64(total) / float64(len(stats))
}

type stat struct {
	PkgName    string
	FuncName   string
	Complexity int
	Pos        token.Position
}

func (s stat) String() string {
	return fmt.Sprintf("%d %s %s %s", s.Complexity, s.PkgName, s.FuncName, s.Pos)
}

type byComplexity []stat

func (s byComplexity) Len() int      { return len(s) }
func (s byComplexity) Swap(i, j int) { s[i], s[j] = s[j], s[i] }
func (s byComplexity) Less(i, j int) bool {
	return s[i].Complexity >= s[j].Complexity
}

func buildStats(f *ast.File, fset *token.FileSet, stats []stat) []stat {
	for _, decl := range f.Decls {
		if fn, ok := decl.(*ast.FuncDecl); ok {
			stats = append(stats, stat{
				PkgName:    f.Name.Name,
				FuncName:   funcName(fn),
				Complexity: complexity(fn),
				Pos:        fset.Position(fn.Pos()),
			})
		}
	}
	return stats
}

// funcName returns the name representation of a function or method:
// "(Type).Name" for methods or simply "Name" for functions.
func funcName(fn *ast.FuncDecl) string {
	if fn.Recv != nil {
		typ := fn.Recv.List[0].Type
		return fmt.Sprintf("(%s).%s", recvString(typ), fn.Name)
	}
	return fn.Name.Name
}

// recvString returns a string representation of recv of the
// form "T", "*T", or "BADRECV" (if not a proper receiver type).
func recvString(recv ast.Expr) string {
	switch t := recv.(type) {
	case *ast.Ident:
		return t.Name
	case *ast.StarExpr:
		return "*" + recvString(t.X)
	}
	return "BADRECV"
}

// complexity calculates the cyclomatic complexity of a function.
func complexity(fn *ast.FuncDecl) int {
	v := complexityVisitor{}
	ast.Walk(&v, fn)
	return v.Complexity
}

type complexityVisitor struct {
	// Complexity is the cyclomatic complexity
	Complexity int
}

// Visit implements the ast.Visitor interface.
func (v *complexityVisitor) Visit(n ast.Node) ast.Visitor {
	switch n := n.(type) {
	case *ast.FuncDecl, *ast.IfStmt, *ast.ForStmt, *ast.RangeStmt, *ast.CaseClause, *ast.CommClause:
		v.Complexity++
	case *ast.BinaryExpr:
		if n.Op == token.LAND || n.Op == token.LOR {
			v.Complexity++
		}
	}
	return v
}
@@ -1,21 +0,0 @@
MIT License

Copyright (c) 2017 Alex Kohler

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -1,310 +0,0 @@
package main

/*

This file holds a direct copy of the import path matching code of
https://github.com/golang/go/blob/master/src/cmd/go/main.go. It can be
replaced when https://golang.org/issue/8768 is resolved.

It has been updated to follow upstream changes in a few ways.

*/

import (
	"fmt"
	"go/build"
	"log"
	"os"
	"path"
	"path/filepath"
	"regexp"
	"runtime"
	"strings"
)

var buildContext = build.Default

var (
	goroot    = filepath.Clean(runtime.GOROOT())
	gorootSrc = filepath.Join(goroot, "src")
)

// importPathsNoDotExpansion returns the import paths to use for the given
// command line, but it does no ... expansion.
func importPathsNoDotExpansion(args []string) []string {
	if len(args) == 0 {
		return []string{"."}
	}
	var out []string
	for _, a := range args {
		// Arguments are supposed to be import paths, but
		// as a courtesy to Windows developers, rewrite \ to /
		// in command-line arguments. Handles .\... and so on.
		if filepath.Separator == '\\' {
			a = strings.Replace(a, `\`, `/`, -1)
		}

		// Put argument in canonical form, but preserve leading ./.
		if strings.HasPrefix(a, "./") {
			a = "./" + path.Clean(a)
			if a == "./." {
				a = "."
			}
		} else {
			a = path.Clean(a)
		}
		if a == "all" || a == "std" {
			out = append(out, allPackages(a)...)
			continue
		}
		out = append(out, a)
	}
	return out
}

// importPaths returns the import paths to use for the given command line.
func importPaths(args []string) []string {
	args = importPathsNoDotExpansion(args)
	var out []string
	for _, a := range args {
		if strings.Contains(a, "...") {
			if build.IsLocalImport(a) {
				out = append(out, allPackagesInFS(a)...)
			} else {
				out = append(out, allPackages(a)...)
			}
			continue
		}
		out = append(out, a)
	}
	return out
}

// matchPattern(pattern)(name) reports whether
// name matches pattern. Pattern is a limited glob
// pattern in which '...' means 'any string' and there
// is no other special syntax.
func matchPattern(pattern string) func(name string) bool {
	re := regexp.QuoteMeta(pattern)
	re = strings.Replace(re, `\.\.\.`, `.*`, -1)
	// Special case: foo/... matches foo too.
	if strings.HasSuffix(re, `/.*`) {
		re = re[:len(re)-len(`/.*`)] + `(/.*)?`
	}
	reg := regexp.MustCompile(`^` + re + `$`)
	return func(name string) bool {
		return reg.MatchString(name)
	}
}

// hasPathPrefix reports whether the path s begins with the
// elements in prefix.
func hasPathPrefix(s, prefix string) bool {
	switch {
	default:
		return false
	case len(s) == len(prefix):
		return s == prefix
	case len(s) > len(prefix):
		if prefix != "" && prefix[len(prefix)-1] == '/' {
			return strings.HasPrefix(s, prefix)
		}
		return s[len(prefix)] == '/' && s[:len(prefix)] == prefix
	}
}

// treeCanMatchPattern(pattern)(name) reports whether
// name or children of name can possibly match pattern.
// Pattern is the same limited glob accepted by matchPattern.
func treeCanMatchPattern(pattern string) func(name string) bool {
	wildCard := false
	if i := strings.Index(pattern, "..."); i >= 0 {
		wildCard = true
		pattern = pattern[:i]
	}
	return func(name string) bool {
		return len(name) <= len(pattern) && hasPathPrefix(pattern, name) ||
			wildCard && strings.HasPrefix(name, pattern)
	}
}

// allPackages returns all the packages that can be found
// under the $GOPATH directories and $GOROOT matching pattern.
// The pattern is either "all" (all packages), "std" (standard packages)
// or a path including "...".
func allPackages(pattern string) []string {
	pkgs := matchPackages(pattern)
	if len(pkgs) == 0 {
		fmt.Fprintf(os.Stderr, "warning: %q matched no packages\n", pattern)
	}
	return pkgs
}

func matchPackages(pattern string) []string {
	match := func(string) bool { return true }
	treeCanMatch := func(string) bool { return true }
	if pattern != "all" && pattern != "std" {
		match = matchPattern(pattern)
		treeCanMatch = treeCanMatchPattern(pattern)
	}

	have := map[string]bool{
		"builtin": true, // ignore pseudo-package that exists only for documentation
	}
	if !buildContext.CgoEnabled {
		have["runtime/cgo"] = true // ignore during walk
	}
	var pkgs []string

	// Commands
	cmd := filepath.Join(goroot, "src/cmd") + string(filepath.Separator)
	filepath.Walk(cmd, func(path string, fi os.FileInfo, err error) error {
		if err != nil || !fi.IsDir() || path == cmd {
			return nil
		}
		name := path[len(cmd):]
		if !treeCanMatch(name) {
			return filepath.SkipDir
		}
		// Commands are all in cmd/, not in subdirectories.
		if strings.Contains(name, string(filepath.Separator)) {
|
|
||||||
return filepath.SkipDir
|
|
||||||
}
|
|
||||||
|
|
||||||
// We use, e.g., cmd/gofmt as the pseudo import path for gofmt.
|
|
||||||
name = "cmd/" + name
|
|
||||||
if have[name] {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
have[name] = true
|
|
||||||
if !match(name) {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
_, err = buildContext.ImportDir(path, 0)
|
|
||||||
if err != nil {
|
|
||||||
if _, noGo := err.(*build.NoGoError); !noGo {
|
|
||||||
log.Print(err)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
pkgs = append(pkgs, name)
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
|
|
||||||
for _, src := range buildContext.SrcDirs() {
|
|
||||||
if (pattern == "std" || pattern == "cmd") && src != gorootSrc {
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
src = filepath.Clean(src) + string(filepath.Separator)
|
|
||||||
root := src
|
|
||||||
if pattern == "cmd" {
|
|
||||||
root += "cmd" + string(filepath.Separator)
|
|
||||||
}
|
|
||||||
filepath.Walk(root, func(path string, fi os.FileInfo, err error) error {
|
|
||||||
if err != nil || !fi.IsDir() || path == src {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// Avoid .foo, _foo, testdata and vendor directory trees.
|
|
||||||
_, elem := filepath.Split(path)
|
|
||||||
if strings.HasPrefix(elem, ".") || strings.HasPrefix(elem, "_") || elem == "testdata" || elem == "vendor" {
|
|
||||||
return filepath.SkipDir
|
|
||||||
}
|
|
||||||
|
|
||||||
name := filepath.ToSlash(path[len(src):])
|
|
||||||
if pattern == "std" && (strings.Contains(name, ".") || name == "cmd") {
|
|
||||||
// The name "std" is only the standard library.
|
|
||||||
// If the name is cmd, it's the root of the command tree.
|
|
||||||
return filepath.SkipDir
|
|
||||||
}
|
|
||||||
if !treeCanMatch(name) {
|
|
||||||
return filepath.SkipDir
|
|
||||||
}
|
|
||||||
if have[name] {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
have[name] = true
|
|
||||||
if !match(name) {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
_, err = buildContext.ImportDir(path, 0)
|
|
||||||
if err != nil {
|
|
||||||
if _, noGo := err.(*build.NoGoError); noGo {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
}
|
|
||||||
pkgs = append(pkgs, name)
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
}
|
|
||||||
return pkgs
|
|
||||||
}
|
|
||||||
|
|
||||||
// allPackagesInFS is like allPackages but is passed a pattern
|
|
||||||
// beginning ./ or ../, meaning it should scan the tree rooted
|
|
||||||
// at the given directory. There are ... in the pattern too.
|
|
||||||
func allPackagesInFS(pattern string) []string {
|
|
||||||
pkgs := matchPackagesInFS(pattern)
|
|
||||||
if len(pkgs) == 0 {
|
|
||||||
fmt.Fprintf(os.Stderr, "warning: %q matched no packages\n", pattern)
|
|
||||||
}
|
|
||||||
return pkgs
|
|
||||||
}
|
|
||||||
|
|
||||||
func matchPackagesInFS(pattern string) []string {
|
|
||||||
// Find directory to begin the scan.
|
|
||||||
// Could be smarter but this one optimization
|
|
||||||
// is enough for now, since ... is usually at the
|
|
||||||
// end of a path.
|
|
||||||
i := strings.Index(pattern, "...")
|
|
||||||
dir, _ := path.Split(pattern[:i])
|
|
||||||
|
|
||||||
// pattern begins with ./ or ../.
|
|
||||||
// path.Clean will discard the ./ but not the ../.
|
|
||||||
// We need to preserve the ./ for pattern matching
|
|
||||||
// and in the returned import paths.
|
|
||||||
prefix := ""
|
|
||||||
if strings.HasPrefix(pattern, "./") {
|
|
||||||
prefix = "./"
|
|
||||||
}
|
|
||||||
match := matchPattern(pattern)
|
|
||||||
|
|
||||||
var pkgs []string
|
|
||||||
filepath.Walk(dir, func(path string, fi os.FileInfo, err error) error {
|
|
||||||
if err != nil || !fi.IsDir() {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if path == dir {
|
|
||||||
// filepath.Walk starts at dir and recurses. For the recursive case,
|
|
||||||
// the path is the result of filepath.Join, which calls filepath.Clean.
|
|
||||||
// The initial case is not Cleaned, though, so we do this explicitly.
|
|
||||||
//
|
|
||||||
// This converts a path like "./io/" to "io". Without this step, running
|
|
||||||
// "cd $GOROOT/src/pkg; go list ./io/..." would incorrectly skip the io
|
|
||||||
// package, because prepending the prefix "./" to the unclean path would
|
|
||||||
// result in "././io", and match("././io") returns false.
|
|
||||||
path = filepath.Clean(path)
|
|
||||||
}
|
|
||||||
|
|
||||||
// Avoid .foo, _foo, testdata and vendor directory trees, but do not avoid "." or "..".
|
|
||||||
_, elem := filepath.Split(path)
|
|
||||||
dot := strings.HasPrefix(elem, ".") && elem != "." && elem != ".."
|
|
||||||
if dot || strings.HasPrefix(elem, "_") || elem == "testdata" || elem == "vendor" {
|
|
||||||
return filepath.SkipDir
|
|
||||||
}
|
|
||||||
|
|
||||||
name := prefix + filepath.ToSlash(path)
|
|
||||||
if !match(name) {
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
if _, err = build.ImportDir(path, 0); err != nil {
|
|
||||||
if _, noGo := err.(*build.NoGoError); !noGo {
|
|
||||||
log.Print(err)
|
|
||||||
}
|
|
||||||
return nil
|
|
||||||
}
|
|
||||||
pkgs = append(pkgs, name)
|
|
||||||
return nil
|
|
||||||
})
|
|
||||||
return pkgs
|
|
||||||
}
|
|
||||||
|
|
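The `...` glob handling above is small enough to exercise on its own. A minimal standalone sketch, reusing the same regexp construction (the `main` driver and the example import paths are illustrative, not from the source):

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

// matchPattern mirrors the construction above: quote the pattern,
// turn "..." into ".*", and let "foo/..." also match "foo" itself.
func matchPattern(pattern string) func(name string) bool {
	re := regexp.QuoteMeta(pattern)
	re = strings.Replace(re, `\.\.\.`, `.*`, -1)
	if strings.HasSuffix(re, `/.*`) {
		re = re[:len(re)-len(`/.*`)] + `(/.*)?`
	}
	reg := regexp.MustCompile(`^` + re + `$`)
	return reg.MatchString
}

func main() {
	m := matchPattern("github.com/foo/...")
	fmt.Println(m("github.com/foo"))     // true: foo/... matches foo too
	fmt.Println(m("github.com/foo/bar")) // true
	fmt.Println(m("github.com/foobar"))  // false: "foobar" is not under "foo"
}
```

The `(/.*)?` rewrite is the interesting design point: without it, `foo/...` would only match strict children of `foo`, never `foo` itself.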
@@ -1,213 +0,0 @@
package main

import (
	"errors"
	"flag"
	"fmt"
	"go/ast"
	"go/build"
	"go/parser"
	"go/token"
	"log"
	"os"
	"path/filepath"
	"strings"
)

const (
	pwd = "./"
)

func init() {
	//TODO allow build tags
	build.Default.UseAllFiles = true
}

func usage() {
	log.Printf("Usage of %s:\n", os.Args[0])
	log.Printf("\nnakedret [flags] # runs on package in current directory\n")
	log.Printf("\nnakedret [flags] [packages]\n")
	log.Printf("Flags:\n")
	flag.PrintDefaults()
}

type returnsVisitor struct {
	f         *token.FileSet
	maxLength uint
}

func main() {
	// Remove log timestamp
	log.SetFlags(0)

	maxLength := flag.Uint("l", 5, "maximum number of lines for a naked return function")
	flag.Usage = usage
	flag.Parse()

	if err := checkNakedReturns(flag.Args(), maxLength); err != nil {
		log.Println(err)
	}
}

func checkNakedReturns(args []string, maxLength *uint) error {
	fset := token.NewFileSet()

	files, err := parseInput(args, fset)
	if err != nil {
		return fmt.Errorf("could not parse input %v", err)
	}

	if maxLength == nil {
		return errors.New("max length nil")
	}

	retVis := &returnsVisitor{
		f:         fset,
		maxLength: *maxLength,
	}

	for _, f := range files {
		ast.Walk(retVis, f)
	}

	return nil
}

func parseInput(args []string, fset *token.FileSet) ([]*ast.File, error) {
	var directoryList []string
	var fileMode bool
	files := make([]*ast.File, 0)

	if len(args) == 0 {
		directoryList = append(directoryList, pwd)
	} else {
		for _, arg := range args {
			if strings.HasSuffix(arg, "/...") && isDir(arg[:len(arg)-len("/...")]) {
				for _, dirname := range allPackagesInFS(arg) {
					directoryList = append(directoryList, dirname)
				}
			} else if isDir(arg) {
				directoryList = append(directoryList, arg)
			} else if exists(arg) {
				if strings.HasSuffix(arg, ".go") {
					fileMode = true
					f, err := parser.ParseFile(fset, arg, nil, 0)
					if err != nil {
						return nil, err
					}
					files = append(files, f)
				} else {
					return nil, fmt.Errorf("invalid file %v specified", arg)
				}
			} else {
				//TODO clean this up a bit
				imPaths := importPaths([]string{arg})
				for _, importPath := range imPaths {
					pkg, err := build.Import(importPath, ".", 0)
					if err != nil {
						return nil, err
					}
					var stringFiles []string
					stringFiles = append(stringFiles, pkg.GoFiles...)
					// files = append(files, pkg.CgoFiles...)
					stringFiles = append(stringFiles, pkg.TestGoFiles...)
					if pkg.Dir != "." {
						for i, f := range stringFiles {
							stringFiles[i] = filepath.Join(pkg.Dir, f)
						}
					}

					fileMode = true
					for _, stringFile := range stringFiles {
						f, err := parser.ParseFile(fset, stringFile, nil, 0)
						if err != nil {
							return nil, err
						}
						files = append(files, f)
					}
				}
			}
		}
	}

	// if we're not in file mode, then we need to grab each and every package in each directory
	// we can to grab all the files
	if !fileMode {
		for _, fpath := range directoryList {
			pkgs, err := parser.ParseDir(fset, fpath, nil, 0)
			if err != nil {
				return nil, err
			}

			for _, pkg := range pkgs {
				for _, f := range pkg.Files {
					files = append(files, f)
				}
			}
		}
	}

	return files, nil
}

func isDir(filename string) bool {
	fi, err := os.Stat(filename)
	return err == nil && fi.IsDir()
}

func exists(filename string) bool {
	_, err := os.Stat(filename)
	return err == nil
}

func (v *returnsVisitor) Visit(node ast.Node) ast.Visitor {
	var namedReturns []*ast.Ident

	funcDecl, ok := node.(*ast.FuncDecl)
	if !ok {
		return v
	}
	var functionLineLength int
	// We've found a function
	if funcDecl.Type != nil && funcDecl.Type.Results != nil {
		for _, field := range funcDecl.Type.Results.List {
			for _, ident := range field.Names {
				if ident != nil {
					namedReturns = append(namedReturns, ident)
				}
			}
		}
		file := v.f.File(funcDecl.Pos())
		functionLineLength = file.Position(funcDecl.End()).Line - file.Position(funcDecl.Pos()).Line
	}

	if len(namedReturns) > 0 && funcDecl.Body != nil {
		// Scan the body for usage of the named returns
		for _, stmt := range funcDecl.Body.List {
			switch s := stmt.(type) {
			case *ast.ReturnStmt:
				if len(s.Results) == 0 {
					file := v.f.File(s.Pos())
					if file != nil && uint(functionLineLength) > v.maxLength {
						if funcDecl.Name != nil {
							log.Printf("%v:%v %v naked returns on %v line function \n", file.Name(), file.Position(s.Pos()).Line, funcDecl.Name.Name, functionLineLength)
						}
					}
					continue
				}
			default:
			}
		}
	}

	return v
}
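For context on what the visitor above reports: a "naked return" is a bare `return` in a function with named results. A tiny illustrative program (not from the source) showing the construct nakedret flags once the function body exceeds the `-l` line limit:

```go
package main

import "fmt"

// split has named results x and y; the bare "return" at the end is a
// naked return, which nakedret reports for functions longer than -l lines.
func split(sum int) (x, y int) {
	x = sum * 4 / 9
	y = sum - x
	return // x and y are returned implicitly
}

func main() {
	x, y := split(17)
	fmt.Println(x, y) // 7 10
}
```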
@@ -1,22 +0,0 @@
The MIT License (MIT)

Copyright (c) 2015-2017 Nick Galbreath

Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal
in the Software without restriction, including without limitation the rights
to use, copy, modify, merge, publish, distribute, sublicense, and/or sell
copies of the Software, and to permit persons to whom the Software is
furnished to do so, subject to the following conditions:

The above copyright notice and this permission notice shall be included in all
copies or substantial portions of the Software.

THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR
IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY,
FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE
AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER
LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM,
OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE
SOFTWARE.
@@ -1,62 +0,0 @@
package misspell

// ByteToUpper converts an ASCII byte to upper case
// using a branchless algorithm
func ByteToUpper(x byte) byte {
	b := byte(0x80) | x
	c := b - byte(0x61)
	d := ^(b - byte(0x7b))
	e := (c & d) & (^x & 0x7f)
	return x - (e >> 2)
}

// ByteToLower converts an ASCII byte to lower case
// using a branchless algorithm
func ByteToLower(eax byte) byte {
	ebx := eax&byte(0x7f) + byte(0x25)
	ebx = ebx&byte(0x7f) + byte(0x1a)
	ebx = ((ebx & ^eax) >> 2) & byte(0x20)
	return eax + ebx
}

// ByteEqualFold does an ASCII compare, case insensitive
func ByteEqualFold(a, b byte) bool {
	return a == b || ByteToLower(a) == ByteToLower(b)
}

// StringEqualFold is an ASCII case-insensitive string comparison.
// Go's toUpper/toLower for both bytes and strings
// is Unicode-based, which is much slower;
// based on https://codereview.appspot.com/5180044/patch/14007/21002
func StringEqualFold(s1, s2 string) bool {
	if len(s1) != len(s2) {
		return false
	}
	for i := 0; i < len(s1); i++ {
		c1 := s1[i]
		c2 := s2[i]
		if c1 != c2 {
			c1 |= 'a' - 'A'
			c2 |= 'a' - 'A'
			if c1 != c2 || c1 < 'a' || c1 > 'z' {
				return false
			}
		}
	}
	return true
}

// StringHasPrefixFold is similar to strings.HasPrefix but comparison
// is done ignoring ASCII case.
func StringHasPrefixFold(s1, s2 string) bool {
	// prefix is bigger than input --> false
	if len(s1) < len(s2) {
		return false
	}
	if len(s1) == len(s2) {
		return StringEqualFold(s1, s2)
	}
	return StringEqualFold(s1[:len(s2)], s2)
}
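The branchless `ByteToLower` above is opaque at first glance; the trick is that the arithmetic produces 0x20 exactly when the input byte is in 'A'..'Z' and 0 otherwise. A quick verification sketch (a standalone copy, checked against naive lowering for every ASCII byte):

```go
package main

import "fmt"

// byteToLower is a copy of the branchless lowering above: for bytes in
// 'A'..'Z' the expression yields 0x20, which is added to reach 'a'..'z';
// for every other ASCII byte it yields 0.
func byteToLower(eax byte) byte {
	ebx := eax&byte(0x7f) + byte(0x25)
	ebx = ebx&byte(0x7f) + byte(0x1a)
	ebx = ((ebx & ^eax) >> 2) & byte(0x20)
	return eax + ebx
}

func main() {
	for b := 0; b < 128; b++ {
		got := byteToLower(byte(b))
		want := byte(b)
		if b >= 'A' && b <= 'Z' {
			want = byte(b) + 0x20
		}
		if got != want {
			fmt.Printf("mismatch at %d: got %d want %d\n", b, got, want)
			return
		}
	}
	fmt.Println("branchless lowering matches naive lowering for all ASCII bytes")
}
```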
@@ -1,59 +0,0 @@
package misspell

import (
	"strings"
)

// WordCase is an enum of various word casing styles
type WordCase int

// Various WordCase types; this list is likely incomplete.
const (
	CaseUnknown WordCase = iota
	CaseLower
	CaseUpper
	CaseTitle
)

// CaseStyle returns what case style a word is in
func CaseStyle(word string) WordCase {
	upperCount := 0
	lowerCount := 0

	// this iterates over BYTES, not runes: only ASCII letters are counted
	for i := 0; i < len(word); i++ {
		ch := word[i]
		switch {
		case ch >= 'a' && ch <= 'z':
			lowerCount++
		case ch >= 'A' && ch <= 'Z':
			upperCount++
		}
	}

	switch {
	case upperCount != 0 && lowerCount == 0:
		return CaseUpper
	case upperCount == 0 && lowerCount != 0:
		return CaseLower
	case upperCount == 1 && lowerCount > 0 && word[0] >= 'A' && word[0] <= 'Z':
		return CaseTitle
	}
	return CaseUnknown
}

// CaseVariations returns the case variants to try:
// for CaseLower, the original plus the title-case and all-upper forms;
// for CaseUpper, only the all-upper form;
// otherwise, the original plus the all-upper form.
func CaseVariations(word string, style WordCase) []string {
	switch style {
	case CaseLower:
		return []string{word, strings.ToUpper(word[0:1]) + word[1:], strings.ToUpper(word)}
	case CaseUpper:
		return []string{strings.ToUpper(word)}
	default:
		return []string{word, strings.ToUpper(word)}
	}
}
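The classification rules above are easiest to see on concrete inputs. A standalone sketch (lower-cased copies of the identifiers, with an illustrative driver):

```go
package main

import "fmt"

type wordCase int

const (
	caseUnknown wordCase = iota
	caseLower
	caseUpper
	caseTitle
)

// caseStyle mirrors CaseStyle above: count ASCII upper/lower bytes and
// classify the word as all-lower, all-upper, Title, or unknown.
func caseStyle(word string) wordCase {
	upper, lower := 0, 0
	for i := 0; i < len(word); i++ {
		switch ch := word[i]; {
		case ch >= 'a' && ch <= 'z':
			lower++
		case ch >= 'A' && ch <= 'Z':
			upper++
		}
	}
	switch {
	case upper != 0 && lower == 0:
		return caseUpper
	case upper == 0 && lower != 0:
		return caseLower
	case upper == 1 && lower > 0 && word[0] >= 'A' && word[0] <= 'Z':
		return caseTitle
	}
	return caseUnknown
}

func main() {
	fmt.Println(caseStyle("http") == caseLower)   // true
	fmt.Println(caseStyle("HTTP") == caseUpper)   // true
	fmt.Println(caseStyle("Http") == caseTitle)   // true
	fmt.Println(caseStyle("hTTP") == caseUnknown) // true: mixed case, not Title
}
```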
@@ -1,325 +0,0 @@
package main

import (
	"bytes"
	"flag"
	"fmt"
	"io"
	"io/ioutil"
	"log"
	"os"
	"path/filepath"
	"runtime"
	"strings"
	"text/template"
	"time"

	"github.com/client9/misspell"
)

var (
	defaultWrite *template.Template
	defaultRead  *template.Template

	stdout *log.Logger
	debug  *log.Logger

	version = "dev"
)

const (
	// Note: for gometalinter it must be "File:Line:Column: Msg"
	// note the space in ": Msg"
	defaultWriteTmpl = `{{ .Filename }}:{{ .Line }}:{{ .Column }}: corrected "{{ .Original }}" to "{{ .Corrected }}"`
	defaultReadTmpl  = `{{ .Filename }}:{{ .Line }}:{{ .Column }}: "{{ .Original }}" is a misspelling of "{{ .Corrected }}"`
	csvTmpl          = `{{ printf "%q" .Filename }},{{ .Line }},{{ .Column }},{{ .Original }},{{ .Corrected }}`
	csvHeader        = `file,line,column,typo,corrected`
	sqliteTmpl       = `INSERT INTO misspell VALUES({{ printf "%q" .Filename }},{{ .Line }},{{ .Column }},{{ printf "%q" .Original }},{{ printf "%q" .Corrected }});`
	sqliteHeader     = `PRAGMA foreign_keys=OFF;
BEGIN TRANSACTION;
CREATE TABLE misspell(
	"file" TEXT, "line" INTEGER, "column" INTEGER, "typo" TEXT, "corrected" TEXT
);`
	sqliteFooter = "COMMIT;"
)

func worker(writeit bool, r *misspell.Replacer, mode string, files <-chan string, results chan<- int) {
	count := 0
	for filename := range files {
		orig, err := misspell.ReadTextFile(filename)
		if err != nil {
			log.Println(err)
			continue
		}
		if len(orig) == 0 {
			continue
		}

		debug.Printf("Processing %s", filename)

		var updated string
		var changes []misspell.Diff

		if mode == "go" {
			updated, changes = r.ReplaceGo(orig)
		} else {
			updated, changes = r.Replace(orig)
		}

		if len(changes) == 0 {
			continue
		}
		count += len(changes)
		for _, diff := range changes {
			// add in filename
			diff.Filename = filename

			// output is produced by multiple goroutines
			// and could clobber os.Stdout.
			//
			// the log package can be used simultaneously from multiple goroutines
			var output bytes.Buffer
			if writeit {
				defaultWrite.Execute(&output, diff)
			} else {
				defaultRead.Execute(&output, diff)
			}

			// goroutine-safe print to os.Stdout
			stdout.Println(output.String())
		}

		if writeit {
			ioutil.WriteFile(filename, []byte(updated), 0)
		}
	}
	results <- count
}

func main() {
	t := time.Now()
	var (
		workers     = flag.Int("j", 0, "Number of workers, 0 = number of CPUs")
		writeit     = flag.Bool("w", false, "Overwrite file with corrections (default is just to display)")
		quietFlag   = flag.Bool("q", false, "Do not emit misspelling output")
		outFlag     = flag.String("o", "stdout", "output file or [stderr|stdout|]")
		format      = flag.String("f", "", "'csv', 'sqlite3' or custom Golang template for output")
		ignores     = flag.String("i", "", "ignore the following corrections, comma separated")
		locale      = flag.String("locale", "", "Correct spellings using locale preferences for US or UK. Default is to use a neutral variety of English. Setting locale to US will correct the British spelling of 'colour' to 'color'")
		mode        = flag.String("source", "auto", "Source mode: auto=guess, go=golang source, text=plain or markdown-like text")
		debugFlag   = flag.Bool("debug", false, "Debug matching, very slow")
		exitError   = flag.Bool("error", false, "Exit with 2 if misspelling found")
		showVersion = flag.Bool("v", false, "Show version and exit")

		showLegal = flag.Bool("legal", false, "Show legal information and exit")
	)
	flag.Parse()

	if *showVersion {
		fmt.Println(version)
		return
	}
	if *showLegal {
		fmt.Println(misspell.Legal)
		return
	}
	if *debugFlag {
		debug = log.New(os.Stderr, "DEBUG ", 0)
	} else {
		debug = log.New(ioutil.Discard, "", 0)
	}

	r := misspell.Replacer{
		Replacements: misspell.DictMain,
		Debug:        *debugFlag,
	}
	//
	// Figure out regional variations
	//
	switch strings.ToUpper(*locale) {
	case "":
		// nothing
	case "US":
		r.AddRuleList(misspell.DictAmerican)
	case "UK", "GB":
		r.AddRuleList(misspell.DictBritish)
	case "NZ", "AU", "CA":
		log.Fatalf("Help wanted. https://github.com/client9/misspell/issues/6")
	default:
		log.Fatalf("Unknown locale: %q", *locale)
	}

	//
	// Stuff to ignore
	//
	if len(*ignores) > 0 {
		r.RemoveRule(strings.Split(*ignores, ","))
	}

	//
	// Source input mode
	//
	switch *mode {
	case "auto":
	case "go":
	case "text":
	default:
		log.Fatalf("Mode must be one of auto=guess, go=golang source, text=plain or markdown-like text")
	}

	//
	// Custom output
	//
	switch {
	case *format == "csv":
		tmpl := template.Must(template.New("csv").Parse(csvTmpl))
		defaultWrite = tmpl
		defaultRead = tmpl
		stdout.Println(csvHeader)
	case *format == "sqlite" || *format == "sqlite3":
		tmpl := template.Must(template.New("sqlite3").Parse(sqliteTmpl))
		defaultWrite = tmpl
		defaultRead = tmpl
		stdout.Println(sqliteHeader)
	case len(*format) > 0:
		t, err := template.New("custom").Parse(*format)
		if err != nil {
			log.Fatalf("Unable to compile log format: %s", err)
		}
		defaultWrite = t
		defaultRead = t
	default: // format == ""
		defaultWrite = template.Must(template.New("defaultWrite").Parse(defaultWriteTmpl))
		defaultRead = template.Must(template.New("defaultRead").Parse(defaultReadTmpl))
	}

	// we can't just write to os.Stdout directly, since multiple goroutines
	// writing at the same time would produce broken output. The log package
	// is goroutine safe; we set it up with no prefix and no time stamp.
	switch {
	case *quietFlag || *outFlag == "/dev/null":
		stdout = log.New(ioutil.Discard, "", 0)
	case *outFlag == "/dev/stderr" || *outFlag == "stderr":
		stdout = log.New(os.Stderr, "", 0)
	case *outFlag == "/dev/stdout" || *outFlag == "stdout":
		stdout = log.New(os.Stdout, "", 0)
	case *outFlag == "" || *outFlag == "-":
		stdout = log.New(os.Stdout, "", 0)
	default:
		fo, err := os.Create(*outFlag)
		if err != nil {
			log.Fatalf("unable to create outfile %q: %s", *outFlag, err)
		}
		defer fo.Close()
		stdout = log.New(fo, "", 0)
	}

	//
	// Number of Workers / CPU to use
	//
	if *workers < 0 {
		log.Fatalf("-j must be >= 0")
	}
	if *workers == 0 {
		*workers = runtime.NumCPU()
	}
	if *debugFlag {
		*workers = 1
	}

	//
	// Done with Flags.
	// Compile the Replacer and process files
	//
	r.Compile()

	args := flag.Args()
	debug.Printf("initialization complete in %v", time.Since(t))

	// stdin/stdout
	if len(args) == 0 {
		// if we are working with pipes/stdin/stdout
		// there is no concurrency, so we can directly
		// send data to the writers
		var fileout io.Writer
		var errout io.Writer
		switch *writeit {
		case true:
			// if we ARE writing the corrected stream,
			// the corrected stream goes to stdout
			// and the misspelling errors go to stderr,
			// so we can do something like this:
			// curl something | misspell -w | gzip > afile.gz
			fileout = os.Stdout
			errout = os.Stderr
		case false:
			// if we are not writing out the corrected stream,
			// then work just like files. Misspelling errors
			// are sent to stdout
			fileout = ioutil.Discard
			errout = os.Stdout
		}
		count := 0
		next := func(diff misspell.Diff) {
			count++

			// don't even evaluate the output templates
			if *quietFlag {
				return
			}
			diff.Filename = "stdin"
			if *writeit {
				defaultWrite.Execute(errout, diff)
			} else {
				defaultRead.Execute(errout, diff)
			}
			errout.Write([]byte{'\n'})
		}
		err := r.ReplaceReader(os.Stdin, fileout, next)
		if err != nil {
			os.Exit(1)
		}
		switch *format {
		case "sqlite", "sqlite3":
			fileout.Write([]byte(sqliteFooter))
		}
		if count != 0 && *exitError {
			// error
			os.Exit(2)
		}
		return
	}

	c := make(chan string, 64)
	results := make(chan int, *workers)

	for i := 0; i < *workers; i++ {
		go worker(*writeit, &r, *mode, c, results)
	}

	for _, filename := range args {
		filepath.Walk(filename, func(path string, info os.FileInfo, err error) error {
			if err == nil && !info.IsDir() {
				c <- path
			}
			return nil
		})
	}
	close(c)

	count := 0
	for i := 0; i < *workers; i++ {
		changed := <-results
		count += changed
	}

	switch *format {
	case "sqlite", "sqlite3":
		stdout.Println(sqliteFooter)
	}

	if count != 0 && *exitError {
		os.Exit(2)
	}
}
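The "File:Line:Column: Msg" constraint noted in the template constants above can be checked in isolation. A sketch with a reduced stand-in for the real diff record (the `diff` struct and `render` helper here are assumptions for illustration, not misspell's API):

```go
package main

import (
	"fmt"
	"strings"
	"text/template"
)

// diff carries only the fields the output templates reference; it is a
// reduced, hypothetical stand-in for the real record type.
type diff struct {
	Filename            string
	Line, Column        int
	Original, Corrected string
}

// defaultReadTmpl is copied from the constants above; it produces the
// "File:Line:Column: Msg" shape that gometalinter parses.
const defaultReadTmpl = `{{ .Filename }}:{{ .Line }}:{{ .Column }}: "{{ .Original }}" is a misspelling of "{{ .Corrected }}"`

// render executes the template against one record.
func render(d diff) string {
	var b strings.Builder
	template.Must(template.New("defaultRead").Parse(defaultReadTmpl)).Execute(&b, d)
	return b.String()
}

func main() {
	fmt.Println(render(diff{
		Filename: "README.md", Line: 3, Column: 14,
		Original: "beleive", Corrected: "believe",
	}))
	// README.md:3:14: "beleive" is a misspelling of "believe"
}
```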
@@ -1,118 +0,0 @@
package ignore

import (
	"bytes"
	"fmt"

	"github.com/gobwas/glob"
)

// Matcher defines an interface for file matchers
type Matcher interface {
	Match(string) bool
	True() bool
	MarshalText() ([]byte, error)
}

// MultiMatch has matching on a list of matchers
type MultiMatch struct {
	matchers []Matcher
}

// NewMultiMatch creates a new MultiMatch instance
func NewMultiMatch(matchers []Matcher) *MultiMatch {
	return &MultiMatch{matchers: matchers}
}

// Match satisfies the Matcher interface
func (mm *MultiMatch) Match(arg string) bool {
	// Normal: OR
	// false, false -> false
	// false, true  -> true
	// true, false  -> true
	// true, true   -> true

	// Invert:
	// false, false -> false
	// false, true  -> false
	// true, false  -> true
	// true, true   -> false
	use := false
	for _, m := range mm.matchers {
		if m.Match(arg) {
			use = m.True()
		}
	}
	return use
}

// True returns true
func (mm *MultiMatch) True() bool { return true }

// MarshalText satisfies the encoding.TextMarshaler interface
func (mm *MultiMatch) MarshalText() ([]byte, error) {
	return []byte("multi"), nil
}

// GlobMatch handles glob matching
type GlobMatch struct {
	orig    string
	matcher glob.Glob
	normal  bool
}

// NewGlobMatch creates a new GlobMatch instance or error
func NewGlobMatch(arg []byte) (*GlobMatch, error) {
	truth := true
	if len(arg) > 0 && arg[0] == '!' {
		truth = false
		arg = arg[1:]
	}
	if bytes.IndexByte(arg, '/') == -1 {
		return NewBaseGlobMatch(string(arg), truth)
	}
	return NewPathGlobMatch(string(arg), truth)
}

// NewBaseGlobMatch compiles a new matcher.
// Arg truth should be set to false if the output is inverted.
func NewBaseGlobMatch(arg string, truth bool) (*GlobMatch, error) {
	g, err := glob.Compile(arg)
	if err != nil {
		return nil, err
	}
	return &GlobMatch{orig: arg, matcher: g, normal: truth}, nil
}

// NewPathGlobMatch compiles a new matcher.
// Arg truth should be set to false if the output is inverted.
func NewPathGlobMatch(arg string, truth bool) (*GlobMatch, error) {
	// if it starts with "/" then the glob only applies to the top level
	if len(arg) > 0 && arg[0] == '/' {
		arg = arg[1:]
	}

	// create path-aware glob
	g, err := glob.Compile(arg, '/')
	if err != nil {
		return nil, err
	}
	return &GlobMatch{orig: arg, matcher: g, normal: truth}, nil
}

// True returns true if this should be evaluated normally ("true is true")
// and false if the result should be inverted ("false is true")
func (g *GlobMatch) True() bool { return g.normal }

// MarshalText is really a debug function
func (g *GlobMatch) MarshalText() ([]byte, error) {
	return []byte(fmt.Sprintf("\"%s: %v %s\"", "GlobMatch", g.normal, g.orig)), nil
}

// Match satisfies the Matcher interface
func (g *GlobMatch) Match(file string) bool {
	return g.matcher.Match(file)
}
@@ -1,35 +0,0 @@
package ignore

import (
	"bytes"
)

// Parse reads in a gitignore file and returns a Matcher
func Parse(src []byte) (Matcher, error) {
	matchers := []Matcher{}
	lines := bytes.Split(src, []byte{'\n'})
	for _, line := range lines {
		if len(line) == 0 || len(bytes.TrimSpace(line)) == 0 {
			continue
		}

		if line[0] == '#' {
			continue
		}

		// TODO: line starts with '!'
		// TODO: line ends with '\ '

		// if it starts with \# or \! then it is escaped
		if len(line) > 1 && line[0] == '\\' && (line[1] == '#' || line[1] == '!') {
			line = line[1:]
		}

		m, err := NewGlobMatch(line)
		if err != nil {
			return nil, err
		}
		matchers = append(matchers, m)
	}
	return NewMultiMatch(matchers), nil
}
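The deleted `Parse`/`MultiMatch` pair implements gitignore-style matching: blank lines and `#` comments are skipped, a leading `!` inverts a pattern, and the *last* matching pattern wins. A minimal stdlib-only sketch of that rule (using `path.Match` as a stand-in for the vendored gobwas/glob; `rule`, `parseRules`, and `ignored` are illustrative names, not part of the deleted code):

```go
package main

import (
	"fmt"
	"path"
	"strings"
)

// rule is one gitignore-style pattern; invert mirrors GlobMatch.normal == false.
type rule struct {
	pattern string
	invert  bool
}

// parseRules mimics Parse: skip blanks and comments, strip a leading '!'.
func parseRules(src string) []rule {
	var rules []rule
	for _, line := range strings.Split(src, "\n") {
		line = strings.TrimSpace(line)
		if line == "" || strings.HasPrefix(line, "#") {
			continue
		}
		r := rule{pattern: line}
		if strings.HasPrefix(line, "!") {
			r.invert = true
			r.pattern = line[1:]
		}
		rules = append(rules, r)
	}
	return rules
}

// ignored mimics MultiMatch.Match: the last matching rule decides.
func ignored(rules []rule, name string) bool {
	use := false
	for _, r := range rules {
		if ok, _ := path.Match(r.pattern, name); ok {
			use = !r.invert // an inverted rule re-includes the file
		}
	}
	return use
}

func main() {
	rules := parseRules("# build output\n*.log\n!keep.log\n")
	fmt.Println(ignored(rules, "debug.log")) // true: matched by *.log only
	fmt.Println(ignored(rules, "keep.log"))  // false: re-included by !keep.log
}
```

Note how the "last match wins" loop gives negated patterns their expected precedence without any special ordering logic.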
@@ -1,47 +0,0 @@
package misspell

// Legal provides licensing info.
const Legal = `
Except where noted below, the source code for misspell is
copyright Nick Galbreath and distribution is allowed under a
MIT license. See the following for details:

* https://github.com/client9/misspell/blob/master/LICENSE
* https://tldrlegal.com/license/mit-license

Misspell makes use of the Golang standard library and
contains a modified version of Golang's strings.Replacer
which are covered under a BSD License.

* https://golang.org/pkg/strings/#Replacer
* https://golang.org/src/strings/replace.go
* https://github.com/golang/go/blob/master/LICENSE

Copyright (c) 2009 The Go Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

* Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
* Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
* Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
`
@@ -1,210 +0,0 @@
package misspell

import (
	"bytes"
	"fmt"
	"io"
	"io/ioutil"
	"net/http"
	"os"
	"path/filepath"
	"strings"
)

// The number of possible binary formats is very large:
// items that might be checked into a repo or be an
// artifact of a build. Additions welcome.
//
// Golang's internal table is very small and can't be
// relied on. Even then, things like ".js" have a mime
// type of "application/javascript" which isn't very helpful.
// "[x]" means we have a sniff test and the suffix test should be eliminated.
var binary = map[string]bool{
	".a":     true, // [ ] archive
	".bin":   true, // [ ] binary
	".bz2":   true, // [ ] compression
	".class": true, // [x] Java class file
	".dll":   true, // [ ] shared library
	".exe":   true, // [ ] binary
	".gif":   true, // [ ] image
	".gpg":   true, // [x] text, but really all base64
	".gz":    true, // [ ] compression
	".ico":   true, // [ ] image
	".jar":   true, // [x] archive
	".jpeg":  true, // [ ] image
	".jpg":   true, // [ ] image
	".mp3":   true, // [ ] audio
	".mp4":   true, // [ ] video
	".mpeg":  true, // [ ] video
	".o":     true, // [ ] object file
	".pdf":   true, // [x] pdf
	".png":   true, // [x] image
	".pyc":   true, // [ ] Python bytecode
	".pyo":   true, // [ ] Python bytecode
	".so":    true, // [x] shared library
	".swp":   true, // [ ] vim swap file
	".tar":   true, // [ ] archive
	".tiff":  true, // [ ] image
	".woff":  true, // [ ] font
	".woff2": true, // [ ] font
	".xz":    true, // [ ] compression
	".z":     true, // [ ] compression
	".zip":   true, // [x] archive
}

// isBinaryFilename returns true if the file is likely to be binary
//
// Better heuristics could be done here, in particular a binary
// file is unlikely to be UTF-8 encoded. However this is cheap
// and will solve the immediate need of making sure common
// binary formats are not corrupted by mistake.
func isBinaryFilename(s string) bool {
	return binary[strings.ToLower(filepath.Ext(s))]
}

var scm = map[string]bool{
	".bzr": true,
	".git": true,
	".hg":  true,
	".svn": true,
	"CVS":  true,
}

// isSCMPath returns true if the path is likely part of a (private) SCM
// directory. E.g. ./git/something = true
func isSCMPath(s string) bool {
	// hack for .git/COMMIT_EDITMSG and .git/TAG_EDITMSG
	// normally we don't look at anything in .git
	// but COMMIT_EDITMSG and TAG_EDITMSG are used as
	// temp files for git commits. Allowing misspell to inspect
	// these files allows for commit-msg hooks
	// https://git-scm.com/book/en/v2/Customizing-Git-Git-Hooks
	if strings.Contains(filepath.Base(s), "EDITMSG") {
		return false
	}
	parts := strings.Split(filepath.Clean(s), string(filepath.Separator))
	for _, dir := range parts {
		if scm[dir] {
			return true
		}
	}
	return false
}

var magicHeaders = [][]byte{
	// Issue #68
	// PGP messages and signatures are "text" but really just
	// blobs of base64-text and should not be misspell-checked
	[]byte("-----BEGIN PGP MESSAGE-----"),
	[]byte("-----BEGIN PGP SIGNATURE-----"),

	// ELF
	{0x7f, 0x45, 0x4c, 0x46},

	// Postscript
	{0x25, 0x21, 0x50, 0x53},

	// PDF
	{0x25, 0x50, 0x44, 0x46},

	// Java class file
	// https://en.wikipedia.org/wiki/Java_class_file
	{0xCA, 0xFE, 0xBA, 0xBE},

	// PNG
	// https://en.wikipedia.org/wiki/Portable_Network_Graphics
	{0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a},

	// ZIP, JAR, ODF, OOXML
	{0x50, 0x4B, 0x03, 0x04},
	{0x50, 0x4B, 0x05, 0x06},
	{0x50, 0x4B, 0x07, 0x08},
}

func isTextFile(raw []byte) bool {
	for _, magic := range magicHeaders {
		if bytes.HasPrefix(raw, magic) {
			return false
		}
	}

	// allow any text/ type with utf-8 encoding
	// DetectContentType sometimes returns charset=utf-16 for XML stuff,
	// in which case ignore.
	mime := http.DetectContentType(raw)
	return strings.HasPrefix(mime, "text/") && strings.HasSuffix(mime, "charset=utf-8")
}

// ReadTextFile returns the contents of a file, first testing if it is a text file
//
//	returns ("", nil) if not a text file
//	returns ("", error) if error
//	returns (string, nil) if text
//
// Unfortunately, in the worst case, this does:
//
//	1 stat
//	1 open, read, close of 512 bytes
//	1 more stat, open, read everything, close (via ioutil.ReadFile)
//
// This could be kinder to the filesystem.
//
// This uses some heuristics of the file's extension (e.g. .zip, .txt) and
// uses a sniffer to determine if the file is text or not.
// Using file extensions isn't great, but probably
// good enough for real-world use.
// Golang's built-in sniffer is problematic for different reasons. It's
// optimized for HTML, and is very limited in detection. It would be good
// to explicitly add some tests for ELF/DWARF formats to make sure we never
// corrupt binary files.
func ReadTextFile(filename string) (string, error) {
	if isBinaryFilename(filename) {
		return "", nil
	}

	if isSCMPath(filename) {
		return "", nil
	}

	fstat, err := os.Stat(filename)

	if err != nil {
		return "", fmt.Errorf("Unable to stat %q: %s", filename, err)
	}

	// directory: nothing to do.
	if fstat.IsDir() {
		return "", nil
	}

	// avoid reading in multi-gig files
	// if input is large, read the first 512 bytes to sniff type
	// if not text, then exit
	isText := false
	if fstat.Size() > 50000 {
		fin, err := os.Open(filename)
		if err != nil {
			return "", fmt.Errorf("Unable to open large file %q: %s", filename, err)
		}
		defer fin.Close()
		buf := make([]byte, 512)
		_, err = io.ReadFull(fin, buf)
		if err != nil {
			return "", fmt.Errorf("Unable to read 512 bytes from %q: %s", filename, err)
		}
		if !isTextFile(buf) {
			return "", nil
		}

		// set so we don't double-check this file
		isText = true
	}

	// read in whole file
	raw, err := ioutil.ReadFile(filename)
	if err != nil {
		return "", fmt.Errorf("Unable to read all %q: %s", filename, err)
	}

	if !isText && !isTextFile(raw) {
		return "", nil
	}
	return string(raw), nil
}
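The sniffing step above boils down to: reject known magic headers first, then trust Go's `http.DetectContentType` only when it reports UTF-8 text. A self-contained sketch of that two-stage check (`isTextLike` and `magics` are illustrative names; the real table above is much larger):

```go
package main

import (
	"bytes"
	"fmt"
	"net/http"
	"strings"
)

// magics is a small subset of the magicHeaders table.
var magics = [][]byte{
	[]byte("%PDF"), // PDF ("%PDF" == 0x25 0x50 0x44 0x46)
	{0x7f, 0x45, 0x4c, 0x46},                         // ELF
	{0x50, 0x4B, 0x03, 0x04},                         // ZIP/JAR
	{0x89, 0x50, 0x4e, 0x47, 0x0d, 0x0a, 0x1a, 0x0a}, // PNG
}

// isTextLike mirrors the deleted isTextFile: reject known magic headers,
// then accept only what Go's sniffer reports as UTF-8 text.
func isTextLike(raw []byte) bool {
	for _, m := range magics {
		if bytes.HasPrefix(raw, m) {
			return false
		}
	}
	mime := http.DetectContentType(raw)
	return strings.HasPrefix(mime, "text/") && strings.HasSuffix(mime, "charset=utf-8")
}

func main() {
	fmt.Println(isTextLike([]byte("package main\n"))) // true
	fmt.Println(isTextLike([]byte("%PDF-1.4")))       // false
}
```

The magic-header pass matters because `DetectContentType` only examines the first 512 bytes and is tuned for HTML; formats like PGP armor would otherwise sniff as plain text.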
@@ -1,85 +0,0 @@
package misspell

import (
	"bytes"
	"regexp"
	"strings"
)

var (
	reEmail     = regexp.MustCompile(`[a-zA-Z0-9_.%+-]+@[a-zA-Z0-9-.]+\.[a-zA-Z]{2,6}[^a-zA-Z]`)
	reHost      = regexp.MustCompile(`[a-zA-Z0-9-.]+\.[a-zA-Z]+`)
	reBackslash = regexp.MustCompile(`\\[a-z]`)
)

// RemovePath attempts to strip away embedded file system paths, e.g.
// /foo/bar or /static/myimg.png
//
// TODO: windows style
func RemovePath(s string) string {
	out := bytes.Buffer{}
	var idx int
	for len(s) > 0 {
		if idx = strings.IndexByte(s, '/'); idx == -1 {
			out.WriteString(s)
			break
		}

		if idx > 0 {
			idx--
		}

		var chclass string
		switch s[idx] {
		case '/', ' ', '\n', '\t', '\r':
			chclass = " \n\r\t"
		case '[':
			chclass = "]\n"
		case '(':
			chclass = ")\n"
		default:
			out.WriteString(s[:idx+2])
			s = s[idx+2:]
			continue
		}

		endx := strings.IndexAny(s[idx+1:], chclass)
		if endx != -1 {
			out.WriteString(s[:idx+1])
			out.Write(bytes.Repeat([]byte{' '}, endx))
			s = s[idx+endx+1:]
		} else {
			out.WriteString(s)
			break
		}
	}
	return out.String()
}

// replaceWithBlanks returns a string with the same number of spaces as the input
func replaceWithBlanks(s string) string {
	return strings.Repeat(" ", len(s))
}

// RemoveEmail removes email-like strings, e.g. "nickg+junk@xfoobar.com", "nickg@xyz.abc123.biz"
func RemoveEmail(s string) string {
	return reEmail.ReplaceAllStringFunc(s, replaceWithBlanks)
}

// RemoveHost removes host-like strings, e.g. "foobar.com", "abc123.fo1231.biz"
func RemoveHost(s string) string {
	return reHost.ReplaceAllStringFunc(s, replaceWithBlanks)
}

// removeBackslashEscapes removes characters that are preceded by a backslash,
// commonly found in printf format strings, e.g. "\nto"
func removeBackslashEscapes(s string) string {
	return reBackslash.ReplaceAllStringFunc(s, replaceWithBlanks)
}

// RemoveNotWords blanks out everything that is not a word
func RemoveNotWords(s string) string {
	// do most selective/specific first
	return removeBackslashEscapes(RemoveHost(RemoveEmail(RemovePath(StripURL(s)))))
}
@@ -1,246 +0,0 @@
package misspell

import (
	"bufio"
	"bytes"
	"io"
	"regexp"
	"strings"
	"text/scanner"
)

func max(x, y int) int {
	if x > y {
		return x
	}
	return y
}

func inArray(haystack []string, needle string) bool {
	for _, word := range haystack {
		if needle == word {
			return true
		}
	}
	return false
}

var wordRegexp = regexp.MustCompile(`[a-zA-Z0-9']+`)

// Diff is a datastructure showing what changed in a single line
type Diff struct {
	Filename  string
	FullLine  string
	Line      int
	Column    int
	Original  string
	Corrected string
}

// Replacer is the main struct for spelling correction
type Replacer struct {
	Replacements []string
	Debug        bool
	engine       *StringReplacer
	corrected    map[string]string
}

// New creates a new default Replacer using the main rule list
func New() *Replacer {
	r := Replacer{
		Replacements: DictMain,
	}
	r.Compile()
	return &r
}

// RemoveRule deletes existing rules.
// TODO: make in-place to save memory
func (r *Replacer) RemoveRule(ignore []string) {
	newwords := make([]string, 0, len(r.Replacements))
	for i := 0; i < len(r.Replacements); i += 2 {
		if inArray(ignore, r.Replacements[i]) {
			continue
		}
		newwords = append(newwords, r.Replacements[i:i+2]...)
	}
	r.engine = nil
	r.Replacements = newwords
}

// AddRuleList appends new rules.
// Input is in the same form as strings.Replacer: [ old1, new1, old2, new2, ... ]
// Note: does not check for duplicates
func (r *Replacer) AddRuleList(additions []string) {
	r.engine = nil
	r.Replacements = append(r.Replacements, additions...)
}

// Compile compiles the rules. Required before using the Replace functions
func (r *Replacer) Compile() {

	r.corrected = make(map[string]string, len(r.Replacements)/2)
	for i := 0; i < len(r.Replacements); i += 2 {
		r.corrected[r.Replacements[i]] = r.Replacements[i+1]
	}
	r.engine = NewStringReplacer(r.Replacements...)
}

/*
line1 and line2 are different:
extract words from each line

replace word -> newword
	if word == newword
		continue
	if newword in list of replacements
		continue
	newword not original, and not in list of replacements:
		some substring got mixed up. Undo.
*/
func (r *Replacer) recheckLine(s string, lineNum int, buf io.Writer, next func(Diff)) {
	first := 0
	redacted := RemoveNotWords(s)

	idx := wordRegexp.FindAllStringIndex(redacted, -1)
	for _, ab := range idx {
		word := s[ab[0]:ab[1]]
		newword := r.engine.Replace(word)
		if newword == word {
			// no replacement done
			continue
		}

		// ignore camelCase words
		// https://github.com/client9/misspell/issues/113
		if CaseStyle(word) == CaseUnknown {
			continue
		}

		if StringEqualFold(r.corrected[strings.ToLower(word)], newword) {
			// word got corrected into something we know
			io.WriteString(buf, s[first:ab[0]])
			io.WriteString(buf, newword)
			first = ab[1]
			next(Diff{
				FullLine:  s,
				Line:      lineNum,
				Original:  word,
				Corrected: newword,
				Column:    ab[0],
			})
			continue
		}
		// Word got corrected into something unknown. Ignore it.
	}
	io.WriteString(buf, s[first:])
}

// ReplaceGo is a specialized routine for correcting Golang source
// files. Currently it only checks comments, not identifiers, for
// spelling.
func (r *Replacer) ReplaceGo(input string) (string, []Diff) {
	var s scanner.Scanner
	s.Init(strings.NewReader(input))
	s.Mode = scanner.ScanIdents | scanner.ScanFloats | scanner.ScanChars | scanner.ScanStrings | scanner.ScanRawStrings | scanner.ScanComments
	lastPos := 0
	output := ""
Loop:
	for {
		switch s.Scan() {
		case scanner.Comment:
			origComment := s.TokenText()
			newComment := r.engine.Replace(origComment)

			if origComment != newComment {
				// s.Pos().Offset is the end of the current token;
				// subtract len(origComment) to get the start of the token
				offset := s.Pos().Offset
				output = output + input[lastPos:offset-len(origComment)] + newComment
				lastPos = offset
			}
		case scanner.EOF:
			break Loop
		}
	}

	if lastPos == 0 {
		// no changes, no copies
		return input, nil
	}
	if lastPos < len(input) {
		output = output + input[lastPos:]
	}
	diffs := make([]Diff, 0, 8)
	buf := bytes.NewBuffer(make([]byte, 0, max(len(input), len(output))+100))
	// faster than making a bytes.Buffer and bufio.ReadString
	outlines := strings.SplitAfter(output, "\n")
	inlines := strings.SplitAfter(input, "\n")
	for i := 0; i < len(inlines); i++ {
		if inlines[i] == outlines[i] {
			buf.WriteString(outlines[i])
			continue
		}
		r.recheckLine(inlines[i], i+1, buf, func(d Diff) {
			diffs = append(diffs, d)
		})
	}

	return buf.String(), diffs
}

// Replace corrects misspellings in input, returning the corrected version
// along with a list of diffs.
func (r *Replacer) Replace(input string) (string, []Diff) {
	output := r.engine.Replace(input)
	if input == output {
		return input, nil
	}
	diffs := make([]Diff, 0, 8)
	buf := bytes.NewBuffer(make([]byte, 0, max(len(input), len(output))+100))
	// faster than making a bytes.Buffer and bufio.ReadString
	outlines := strings.SplitAfter(output, "\n")
	inlines := strings.SplitAfter(input, "\n")
	for i := 0; i < len(inlines); i++ {
		if inlines[i] == outlines[i] {
			buf.WriteString(outlines[i])
			continue
		}
		r.recheckLine(inlines[i], i+1, buf, func(d Diff) {
			diffs = append(diffs, d)
		})
	}

	return buf.String(), diffs
}

// ReplaceReader applies spelling corrections to a reader stream. Diffs are
// emitted through a callback.
func (r *Replacer) ReplaceReader(raw io.Reader, w io.Writer, next func(Diff)) error {
	var (
		err     error
		line    string
		lineNum int
	)
	reader := bufio.NewReader(raw)
	for err == nil {
		lineNum++
		line, err = reader.ReadString('\n')

		// if it's EOF, then line has the last line;
		// don't like the check of err here and
		// in the for loop
		if err != nil && err != io.EOF {
			return err
		}
		// easily 5x faster than regexp+map
		if line == r.engine.Replace(line) {
			io.WriteString(w, line)
			continue
		}
		// but it can be inaccurate, so we need to double-check
		r.recheckLine(line, lineNum, w, next)
	}
	return nil
}
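`Replacer.Replacements` uses the same flat pair layout as the standard library's `strings.NewReplacer`: `[old1, new1, old2, new2, ...]`. The vendored `StringReplacer` below adds case-insensitive trie matching on top, but the pair semantics can be sketched with the stdlib type alone (`pairs` is an illustrative two-rule dictionary, not `DictMain`):

```go
package main

import (
	"fmt"
	"strings"
)

// pairs uses the same layout as Replacer.Replacements / DictMain:
// [misspelling1, correction1, misspelling2, correction2, ...]
var pairs = []string{"teh", "the", "recieve", "receive"}

// demo is the stdlib analogue of StringReplacer (case-sensitive,
// unlike the trie-based version below).
var demo = strings.NewReplacer(pairs...)

func main() {
	fmt.Println(demo.Replace("teh code will recieve input"))
	// prints "the code will receive input"
}
```

This flat layout is also why `RemoveRule` and `Compile` above iterate with `i += 2`: even indices are the misspellings, odd indices the corrections.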
@@ -1,336 +0,0 @@
// Copyright 2011 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

package misspell

import (
	"io"
	// "log"
	"strings"
)

// StringReplacer replaces a list of strings with replacements.
// It is safe for concurrent use by multiple goroutines.
type StringReplacer struct {
	r replacer
}

// replacer is the interface that a replacement algorithm needs to implement.
type replacer interface {
	Replace(s string) string
	WriteString(w io.Writer, s string) (n int, err error)
}

// NewStringReplacer returns a new Replacer from a list of old, new string pairs.
// Replacements are performed in order, without overlapping matches.
func NewStringReplacer(oldnew ...string) *StringReplacer {
	if len(oldnew)%2 == 1 {
		panic("strings.NewReplacer: odd argument count")
	}

	return &StringReplacer{r: makeGenericReplacer(oldnew)}
}

// Replace returns a copy of s with all replacements performed.
func (r *StringReplacer) Replace(s string) string {
	return r.r.Replace(s)
}

// WriteString writes s to w with all replacements performed.
func (r *StringReplacer) WriteString(w io.Writer, s string) (n int, err error) {
	return r.r.WriteString(w, s)
}

// trieNode is a node in a lookup trie for prioritized key/value pairs. Keys
// and values may be empty. For example, the trie containing keys "ax", "ay",
// "bcbc", "x" and "xy" could have eight nodes:
//
//	n0  -
//	n1  a-
//	n2  .x+
//	n3  .y+
//	n4  b-
//	n5  .cbc+
//	n6  x+
//	n7  .y+
//
// n0 is the root node, and its children are n1, n4 and n6; n1's children are
// n2 and n3; n4's child is n5; n6's child is n7. Nodes n0, n1 and n4 (marked
// with a trailing "-") are partial keys, and nodes n2, n3, n5, n6 and n7
// (marked with a trailing "+") are complete keys.
type trieNode struct {
	// value is the value of the trie node's key/value pair. It is empty if
	// this node is not a complete key.
	value string
	// priority is the priority (higher is more important) of the trie node's
	// key/value pair; keys are not necessarily matched shortest- or longest-
	// first. Priority is positive if this node is a complete key, and zero
	// otherwise. In the example above, positive/zero priorities are marked
	// with a trailing "+" or "-".
	priority int

	// A trie node may have zero, one or more child nodes:
	//  * if the remaining fields are zero, there are no children.
	//  * if prefix and next are non-zero, there is one child in next.
	//  * if table is non-zero, it defines all the children.
	//
	// Prefixes are preferred over tables when there is one child, but the
	// root node always uses a table for lookup efficiency.

	// prefix is the difference in keys between this trie node and the next.
	// In the example above, node n4 has prefix "cbc" and n4's next node is n5.
	// Node n5 has no children and so has zero prefix, next and table fields.
	prefix string
	next   *trieNode

	// table is a lookup table indexed by the next byte in the key, after
	// remapping that byte through genericReplacer.mapping to create a dense
	// index. In the example above, the keys only use 'a', 'b', 'c', 'x' and
	// 'y', which remap to 0, 1, 2, 3 and 4. All other bytes remap to 5, and
	// genericReplacer.tableSize will be 5. Node n0's table will be
	// []*trieNode{ 0:n1, 1:n4, 3:n6 }, where the 0, 1 and 3 are the remapped
	// 'a', 'b' and 'x'.
	table []*trieNode
}

func (t *trieNode) add(key, val string, priority int, r *genericReplacer) {
	if key == "" {
		if t.priority == 0 {
			t.value = val
			t.priority = priority
		}
		return
	}

	if t.prefix != "" {
		// Need to split the prefix among multiple nodes.
		var n int // length of the longest common prefix
		for ; n < len(t.prefix) && n < len(key); n++ {
			if t.prefix[n] != key[n] {
				break
			}
		}
		if n == len(t.prefix) {
			t.next.add(key[n:], val, priority, r)
		} else if n == 0 {
			// First byte differs, start a new lookup table here. Looking up
			// what is currently t.prefix[0] will lead to prefixNode, and
			// looking up key[0] will lead to keyNode.
			var prefixNode *trieNode
			if len(t.prefix) == 1 {
				prefixNode = t.next
			} else {
				prefixNode = &trieNode{
					prefix: t.prefix[1:],
					next:   t.next,
				}
			}
			keyNode := new(trieNode)
			t.table = make([]*trieNode, r.tableSize)
			t.table[r.mapping[t.prefix[0]]] = prefixNode
			t.table[r.mapping[key[0]]] = keyNode
			t.prefix = ""
			t.next = nil
			keyNode.add(key[1:], val, priority, r)
		} else {
			// Insert new node after the common section of the prefix.
			next := &trieNode{
				prefix: t.prefix[n:],
				next:   t.next,
			}
			t.prefix = t.prefix[:n]
			t.next = next
			next.add(key[n:], val, priority, r)
		}
	} else if t.table != nil {
		// Insert into existing table.
		m := r.mapping[key[0]]
		if t.table[m] == nil {
			t.table[m] = new(trieNode)
		}
		t.table[m].add(key[1:], val, priority, r)
	} else {
		t.prefix = key
		t.next = new(trieNode)
		t.next.add("", val, priority, r)
	}
}

func (r *genericReplacer) lookup(s string, ignoreRoot bool) (val string, keylen int, found bool) {
	// Iterate down the trie to the end, and grab the value and keylen with
	// the highest priority.
	bestPriority := 0
	node := &r.root
	n := 0
	for node != nil {
		if node.priority > bestPriority && !(ignoreRoot && node == &r.root) {
			bestPriority = node.priority
			val = node.value
			keylen = n
			found = true
		}

		if s == "" {
			break
		}
		if node.table != nil {
			index := r.mapping[ByteToLower(s[0])]
			if int(index) == r.tableSize {
				break
			}
			node = node.table[index]
			s = s[1:]
			n++
		} else if node.prefix != "" && StringHasPrefixFold(s, node.prefix) {
			n += len(node.prefix)
			s = s[len(node.prefix):]
			node = node.next
		} else {
			break
		}
	}
	return
}

// genericReplacer is the fully generic algorithm.
// It's used as a fallback when nothing faster can be used.
type genericReplacer struct {
	root trieNode
	// tableSize is the size of a trie node's lookup table. It is the number
	// of unique key bytes.
	tableSize int
	// mapping maps from key bytes to a dense index for trieNode.table.
	mapping [256]byte
}

func makeGenericReplacer(oldnew []string) *genericReplacer {
	r := new(genericReplacer)
	// Find each byte used, then assign them each an index.
	for i := 0; i < len(oldnew); i += 2 {
		key := strings.ToLower(oldnew[i])
		for j := 0; j < len(key); j++ {
			r.mapping[key[j]] = 1
		}
	}

	for _, b := range r.mapping {
		r.tableSize += int(b)
	}

	var index byte
	for i, b := range r.mapping {
|
|
||||||
if b == 0 {
|
|
||||||
r.mapping[i] = byte(r.tableSize)
|
|
||||||
} else {
|
|
||||||
r.mapping[i] = index
|
|
||||||
index++
|
|
||||||
}
|
|
||||||
}
|
|
||||||
// Ensure root node uses a lookup table (for performance).
|
|
||||||
r.root.table = make([]*trieNode, r.tableSize)
|
|
||||||
|
|
||||||
for i := 0; i < len(oldnew); i += 2 {
|
|
||||||
r.root.add(strings.ToLower(oldnew[i]), oldnew[i+1], len(oldnew)-i, r)
|
|
||||||
}
|
|
||||||
return r
|
|
||||||
}
|
|
||||||
|
|
||||||
type appendSliceWriter []byte
|
|
||||||
|
|
||||||
// Write writes to the buffer to satisfy io.Writer.
|
|
||||||
func (w *appendSliceWriter) Write(p []byte) (int, error) {
|
|
||||||
*w = append(*w, p...)
|
|
||||||
return len(p), nil
|
|
||||||
}
|
|
||||||
|
|
||||||
// WriteString writes to the buffer without string->[]byte->string allocations.
|
|
||||||
func (w *appendSliceWriter) WriteString(s string) (int, error) {
|
|
||||||
*w = append(*w, s...)
|
|
||||||
return len(s), nil
|
|
||||||
}
|
|
||||||
|
|
||||||
type stringWriterIface interface {
|
|
||||||
WriteString(string) (int, error)
|
|
||||||
}
|
|
||||||
|
|
||||||
type stringWriter struct {
|
|
||||||
w io.Writer
|
|
||||||
}
|
|
||||||
|
|
||||||
func (w stringWriter) WriteString(s string) (int, error) {
|
|
||||||
return w.w.Write([]byte(s))
|
|
||||||
}
|
|
||||||
|
|
||||||
func getStringWriter(w io.Writer) stringWriterIface {
|
|
||||||
sw, ok := w.(stringWriterIface)
|
|
||||||
if !ok {
|
|
||||||
sw = stringWriter{w}
|
|
||||||
}
|
|
||||||
return sw
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *genericReplacer) Replace(s string) string {
|
|
||||||
buf := make(appendSliceWriter, 0, len(s))
|
|
||||||
r.WriteString(&buf, s)
|
|
||||||
return string(buf)
|
|
||||||
}
|
|
||||||
|
|
||||||
func (r *genericReplacer) WriteString(w io.Writer, s string) (n int, err error) {
|
|
||||||
sw := getStringWriter(w)
|
|
||||||
var last, wn int
|
|
||||||
var prevMatchEmpty bool
|
|
||||||
for i := 0; i <= len(s); {
|
|
||||||
// Fast path: s[i] is not a prefix of any pattern.
|
|
||||||
if i != len(s) && r.root.priority == 0 {
|
|
||||||
index := int(r.mapping[ByteToLower(s[i])])
|
|
||||||
if index == r.tableSize || r.root.table[index] == nil {
|
|
||||||
i++
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
// Ignore the empty match iff the previous loop found the empty match.
|
|
||||||
val, keylen, match := r.lookup(s[i:], prevMatchEmpty)
|
|
||||||
prevMatchEmpty = match && keylen == 0
|
|
||||||
if match {
|
|
||||||
orig := s[i : i+keylen]
|
|
||||||
switch CaseStyle(orig) {
|
|
||||||
case CaseUnknown:
|
|
||||||
// pretend we didn't match
|
|
||||||
// i++
|
|
||||||
// continue
|
|
||||||
case CaseUpper:
|
|
||||||
val = strings.ToUpper(val)
|
|
||||||
case CaseLower:
|
|
||||||
val = strings.ToLower(val)
|
|
||||||
case CaseTitle:
|
|
||||||
if len(val) < 2 {
|
|
||||||
val = strings.ToUpper(val)
|
|
||||||
} else {
|
|
||||||
val = strings.ToUpper(val[:1]) + strings.ToLower(val[1:])
|
|
||||||
}
|
|
||||||
}
|
|
||||||
wn, err = sw.WriteString(s[last:i])
|
|
||||||
n += wn
|
|
||||||
if err != nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
//log.Printf("%d: Going to correct %q with %q", i, s[i:i+keylen], val)
|
|
||||||
wn, err = sw.WriteString(val)
|
|
||||||
n += wn
|
|
||||||
if err != nil {
|
|
||||||
return
|
|
||||||
}
|
|
||||||
i += keylen
|
|
||||||
last = i
|
|
||||||
continue
|
|
||||||
}
|
|
||||||
i++
|
|
||||||
}
|
|
||||||
if last != len(s) {
|
|
||||||
wn, err = sw.WriteString(s[last:])
|
|
||||||
n += wn
|
|
||||||
}
|
|
||||||
return
|
|
||||||
}
|
|
||||||
|
|
@ -1,17 +0,0 @@
package misspell

import (
	"regexp"
)

// Regexp for URL https://mathiasbynens.be/demo/url-regex
//
// original @imme_emosol (54 chars) has trouble with dashes in hostname
// @(https?|ftp)://(-\.)?([^\s/?\.#-]+\.?)+(/[^\s]*)?$@iS
var reURL = regexp.MustCompile(`(?i)(https?|ftp)://(-\.)?([^\s/?\.#]+\.?)+(/[^\s]*)?`)

// StripURL attempts to replace URLs with blank spaces, e.g.
// "xxx http://foo.com/ yyy" -> "xxx                 yyy"
func StripURL(s string) string {
	return reURL.ReplaceAllStringFunc(s, replaceWithBlanks)
}
@ -1,44 +0,0 @@
package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
	"unicode/utf8"

	"github.com/gobwas/glob"
	"github.com/gobwas/glob/match"
	"github.com/gobwas/glob/match/debug"
)

func main() {
	pattern := flag.String("p", "", "pattern to draw")
	sep := flag.String("s", "", "comma separated list of separators characters")
	flag.Parse()

	if *pattern == "" {
		flag.Usage()
		os.Exit(1)
	}

	var separators []rune
	if len(*sep) > 0 {
		for _, c := range strings.Split(*sep, ",") {
			if r, w := utf8.DecodeRuneInString(c); len(c) > w {
				fmt.Println("only single charactered separators are allowed")
				os.Exit(1)
			} else {
				separators = append(separators, r)
			}
		}
	}

	glob, err := glob.Compile(*pattern, separators...)
	if err != nil {
		fmt.Println("could not compile pattern:", err)
		os.Exit(1)
	}

	matcher := glob.(match.Matcher)
	fmt.Fprint(os.Stdout, debug.Graphviz(*pattern, matcher))
}
@ -1,82 +0,0 @@
package main

import (
	"flag"
	"fmt"
	"os"
	"strings"
	"testing"
	"unicode/utf8"

	"github.com/gobwas/glob"
)

func benchString(r testing.BenchmarkResult) string {
	nsop := r.NsPerOp()
	ns := fmt.Sprintf("%10d ns/op", nsop)
	allocs := "0"
	if r.N > 0 {
		if nsop < 100 {
			// The format specifiers here make sure that
			// the ones digits line up for all three possible formats.
			if nsop < 10 {
				ns = fmt.Sprintf("%13.2f ns/op", float64(r.T.Nanoseconds())/float64(r.N))
			} else {
				ns = fmt.Sprintf("%12.1f ns/op", float64(r.T.Nanoseconds())/float64(r.N))
			}
		}

		allocs = fmt.Sprintf("%d", r.MemAllocs/uint64(r.N))
	}

	return fmt.Sprintf("%8d\t%s\t%s allocs", r.N, ns, allocs)
}

func main() {
	pattern := flag.String("p", "", "pattern to draw")
	sep := flag.String("s", "", "comma separated list of separators")
	fixture := flag.String("f", "", "fixture")
	verbose := flag.Bool("v", false, "verbose")
	flag.Parse()

	if *pattern == "" {
		flag.Usage()
		os.Exit(1)
	}

	var separators []rune
	for _, c := range strings.Split(*sep, ",") {
		if r, w := utf8.DecodeRuneInString(c); len(c) > w {
			fmt.Println("only single charactered separators are allowed")
			os.Exit(1)
		} else {
			separators = append(separators, r)
		}
	}

	g, err := glob.Compile(*pattern, separators...)
	if err != nil {
		fmt.Println("could not compile pattern:", err)
		os.Exit(1)
	}

	if !*verbose {
		fmt.Println(g.Match(*fixture))
		return
	}

	fmt.Printf("result: %t\n", g.Match(*fixture))

	cb := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			glob.Compile(*pattern, separators...)
		}
	})
	fmt.Println("compile:", benchString(cb))

	mb := testing.Benchmark(func(b *testing.B) {
		for i := 0; i < b.N; i++ {
			g.Match(*fixture)
		}
	})
	fmt.Println("match:  ", benchString(mb))
}
@ -1,519 +0,0 @@
package compiler

// TODO use constructor with all matchers, and make their structs private
// TODO glue multiple Text nodes (like after QuoteMeta)

import (
	"fmt"
	"reflect"

	"github.com/gobwas/glob/match"
	"github.com/gobwas/glob/syntax/ast"
	"github.com/gobwas/glob/util/runes"
)

func optimizeMatcher(matcher match.Matcher) match.Matcher {
	switch m := matcher.(type) {

	case match.Any:
		if len(m.Separators) == 0 {
			return match.NewSuper()
		}

	case match.AnyOf:
		if len(m.Matchers) == 1 {
			return m.Matchers[0]
		}

		return m

	case match.List:
		if m.Not == false && len(m.List) == 1 {
			return match.NewText(string(m.List))
		}

		return m

	case match.BTree:
		m.Left = optimizeMatcher(m.Left)
		m.Right = optimizeMatcher(m.Right)

		r, ok := m.Value.(match.Text)
		if !ok {
			return m
		}

		leftNil := m.Left == nil
		rightNil := m.Right == nil

		if leftNil && rightNil {
			return match.NewText(r.Str)
		}

		_, leftSuper := m.Left.(match.Super)
		lp, leftPrefix := m.Left.(match.Prefix)

		_, rightSuper := m.Right.(match.Super)
		rs, rightSuffix := m.Right.(match.Suffix)

		if leftSuper && rightSuper {
			return match.NewContains(r.Str, false)
		}

		if leftSuper && rightNil {
			return match.NewSuffix(r.Str)
		}

		if rightSuper && leftNil {
			return match.NewPrefix(r.Str)
		}

		if leftNil && rightSuffix {
			return match.NewPrefixSuffix(r.Str, rs.Suffix)
		}

		if rightNil && leftPrefix {
			return match.NewPrefixSuffix(lp.Prefix, r.Str)
		}

		return m
	}

	return matcher
}

func compileMatchers(matchers []match.Matcher) (match.Matcher, error) {
	if len(matchers) == 0 {
		return nil, fmt.Errorf("compile error: need at least one matcher")
	}
	if len(matchers) == 1 {
		return matchers[0], nil
	}
	if m := glueMatchers(matchers); m != nil {
		return m, nil
	}

	idx := -1
	maxLen := -1
	var val match.Matcher
	for i, matcher := range matchers {
		if l := matcher.Len(); l != -1 && l >= maxLen {
			maxLen = l
			idx = i
			val = matcher
		}
	}

	if val == nil { // no matcher with a static length was found
		r, err := compileMatchers(matchers[1:])
		if err != nil {
			return nil, err
		}
		return match.NewBTree(matchers[0], nil, r), nil
	}

	left := matchers[:idx]
	var right []match.Matcher
	if len(matchers) > idx+1 {
		right = matchers[idx+1:]
	}

	var l, r match.Matcher
	var err error
	if len(left) > 0 {
		l, err = compileMatchers(left)
		if err != nil {
			return nil, err
		}
	}

	if len(right) > 0 {
		r, err = compileMatchers(right)
		if err != nil {
			return nil, err
		}
	}

	return match.NewBTree(val, l, r), nil
}

func glueMatchers(matchers []match.Matcher) match.Matcher {
	if m := glueMatchersAsEvery(matchers); m != nil {
		return m
	}
	if m := glueMatchersAsRow(matchers); m != nil {
		return m
	}
	return nil
}

func glueMatchersAsRow(matchers []match.Matcher) match.Matcher {
	if len(matchers) <= 1 {
		return nil
	}

	var (
		c []match.Matcher
		l int
	)
	for _, matcher := range matchers {
		if ml := matcher.Len(); ml == -1 {
			return nil
		} else {
			c = append(c, matcher)
			l += ml
		}
	}
	return match.NewRow(l, c...)
}

func glueMatchersAsEvery(matchers []match.Matcher) match.Matcher {
	if len(matchers) <= 1 {
		return nil
	}

	var (
		hasAny    bool
		hasSuper  bool
		hasSingle bool
		min       int
		separator []rune
	)

	for i, matcher := range matchers {
		var sep []rune

		switch m := matcher.(type) {
		case match.Super:
			sep = []rune{}
			hasSuper = true

		case match.Any:
			sep = m.Separators
			hasAny = true

		case match.Single:
			sep = m.Separators
			hasSingle = true
			min++

		case match.List:
			if !m.Not {
				return nil
			}
			sep = m.List
			hasSingle = true
			min++

		default:
			return nil
		}

		// initialize
		if i == 0 {
			separator = sep
		}

		if runes.Equal(sep, separator) {
			continue
		}

		return nil
	}

	if hasSuper && !hasAny && !hasSingle {
		return match.NewSuper()
	}

	if hasAny && !hasSuper && !hasSingle {
		return match.NewAny(separator)
	}

	if (hasAny || hasSuper) && min > 0 && len(separator) == 0 {
		return match.NewMin(min)
	}

	every := match.NewEveryOf()

	if min > 0 {
		every.Add(match.NewMin(min))

		if !hasAny && !hasSuper {
			every.Add(match.NewMax(min))
		}
	}

	if len(separator) > 0 {
		every.Add(match.NewContains(string(separator), true))
	}

	return every
}

func minimizeMatchers(matchers []match.Matcher) []match.Matcher {
	var done match.Matcher
	var left, right, count int

	for l := 0; l < len(matchers); l++ {
		for r := len(matchers); r > l; r-- {
			if glued := glueMatchers(matchers[l:r]); glued != nil {
				var swap bool

				if done == nil {
					swap = true
				} else {
					cl, gl := done.Len(), glued.Len()
					swap = cl > -1 && gl > -1 && gl > cl
					swap = swap || count < r-l
				}

				if swap {
					done = glued
					left = l
					right = r
					count = r - l
				}
			}
		}
	}

	if done == nil {
		return matchers
	}

	next := append(append([]match.Matcher{}, matchers[:left]...), done)
	if right < len(matchers) {
		next = append(next, matchers[right:]...)
	}

	if len(next) == len(matchers) {
		return next
	}

	return minimizeMatchers(next)
}

// minimizeTree tries to apply some heuristics to minimize the number of nodes in the given tree
func minimizeTree(tree *ast.Node) *ast.Node {
	switch tree.Kind {
	case ast.KindAnyOf:
		return minimizeTreeAnyOf(tree)
	default:
		return nil
	}
}

// minimizeTreeAnyOf tries to find common children of the given AnyOf node.
// It searches for common children from the left and from the right;
// if any common children are found it returns a new, optimized ast tree,
// otherwise it returns nil.
func minimizeTreeAnyOf(tree *ast.Node) *ast.Node {
	if !areOfSameKind(tree.Children, ast.KindPattern) {
		return nil
	}

	commonLeft, commonRight := commonChildren(tree.Children)
	commonLeftCount, commonRightCount := len(commonLeft), len(commonRight)
	if commonLeftCount == 0 && commonRightCount == 0 { // there are no common parts
		return nil
	}

	var result []*ast.Node
	if commonLeftCount > 0 {
		result = append(result, ast.NewNode(ast.KindPattern, nil, commonLeft...))
	}

	var anyOf []*ast.Node
	for _, child := range tree.Children {
		reuse := child.Children[commonLeftCount : len(child.Children)-commonRightCount]
		var node *ast.Node
		if len(reuse) == 0 {
			// this pattern is completely reduced by the commonLeft and commonRight patterns,
			// so it becomes nothing
			node = ast.NewNode(ast.KindNothing, nil)
		} else {
			node = ast.NewNode(ast.KindPattern, nil, reuse...)
		}
		anyOf = appendIfUnique(anyOf, node)
	}
	switch {
	case len(anyOf) == 1 && anyOf[0].Kind != ast.KindNothing:
		result = append(result, anyOf[0])
	case len(anyOf) > 1:
		result = append(result, ast.NewNode(ast.KindAnyOf, nil, anyOf...))
	}

	if commonRightCount > 0 {
		result = append(result, ast.NewNode(ast.KindPattern, nil, commonRight...))
	}

	return ast.NewNode(ast.KindPattern, nil, result...)
}

func commonChildren(nodes []*ast.Node) (commonLeft, commonRight []*ast.Node) {
	if len(nodes) <= 1 {
		return
	}

	// find the node that has the fewest children
	idx := leastChildren(nodes)
	if idx == -1 {
		return
	}
	tree := nodes[idx]
	treeLength := len(tree.Children)

	// allocate the maximum possible size for the commonRight slice
	// so that elements can be inserted in reverse order (from end to start)
	// without sorting
	commonRight = make([]*ast.Node, treeLength)
	lastRight := treeLength // will use this to get results as commonRight[lastRight:]

	var (
		breakLeft   bool
		breakRight  bool
		commonTotal int
	)
	for i, j := 0, treeLength-1; commonTotal < treeLength && j >= 0 && !(breakLeft && breakRight); i, j = i+1, j-1 {
		treeLeft := tree.Children[i]
		treeRight := tree.Children[j]

		for k := 0; k < len(nodes) && !(breakLeft && breakRight); k++ {
			// skip the node with the fewest children
			if k == idx {
				continue
			}

			restLeft := nodes[k].Children[i]
			restRight := nodes[k].Children[j+len(nodes[k].Children)-treeLength]

			breakLeft = breakLeft || !treeLeft.Equal(restLeft)

			// disable searching for right common parts if the left part is already overlapping
			breakRight = breakRight || (!breakLeft && j <= i)
			breakRight = breakRight || !treeRight.Equal(restRight)
		}

		if !breakLeft {
			commonTotal++
			commonLeft = append(commonLeft, treeLeft)
		}
		if !breakRight {
			commonTotal++
			lastRight = j
			commonRight[j] = treeRight
		}
	}

	commonRight = commonRight[lastRight:]

	return
}

func appendIfUnique(target []*ast.Node, val *ast.Node) []*ast.Node {
	for _, n := range target {
		if reflect.DeepEqual(n, val) {
			return target
		}
	}
	return append(target, val)
}

func areOfSameKind(nodes []*ast.Node, kind ast.Kind) bool {
	for _, n := range nodes {
		if n.Kind != kind {
			return false
		}
	}
	return true
}

func leastChildren(nodes []*ast.Node) int {
	min := -1
	idx := -1
	for i, n := range nodes {
		if idx == -1 || (len(n.Children) < min) {
			min = len(n.Children)
			idx = i
		}
	}
	return idx
}

func compileTreeChildren(tree *ast.Node, sep []rune) ([]match.Matcher, error) {
	var matchers []match.Matcher
	for _, desc := range tree.Children {
		m, err := compile(desc, sep)
		if err != nil {
			return nil, err
		}
		matchers = append(matchers, optimizeMatcher(m))
	}
	return matchers, nil
}

func compile(tree *ast.Node, sep []rune) (m match.Matcher, err error) {
	switch tree.Kind {
	case ast.KindAnyOf:
		// todo this could be faster on pattern_alternatives_combine_lite (see glob_test.go)
		if n := minimizeTree(tree); n != nil {
			return compile(n, sep)
		}
		matchers, err := compileTreeChildren(tree, sep)
		if err != nil {
			return nil, err
		}
		return match.NewAnyOf(matchers...), nil

	case ast.KindPattern:
		if len(tree.Children) == 0 {
			return match.NewNothing(), nil
		}
		matchers, err := compileTreeChildren(tree, sep)
		if err != nil {
			return nil, err
		}
		m, err = compileMatchers(minimizeMatchers(matchers))
		if err != nil {
			return nil, err
		}

	case ast.KindAny:
		m = match.NewAny(sep)

	case ast.KindSuper:
		m = match.NewSuper()

	case ast.KindSingle:
		m = match.NewSingle(sep)

	case ast.KindNothing:
		m = match.NewNothing()

	case ast.KindList:
		l := tree.Value.(ast.List)
		m = match.NewList([]rune(l.Chars), l.Not)

	case ast.KindRange:
		r := tree.Value.(ast.Range)
		m = match.NewRange(r.Lo, r.Hi, r.Not)

	case ast.KindText:
		t := tree.Value.(ast.Text)
		m = match.NewText(t.Text)

	default:
		return nil, fmt.Errorf("could not compile tree: unknown node type")
	}

	return optimizeMatcher(m), nil
}

func Compile(tree *ast.Node, sep []rune) (match.Matcher, error) {
	m, err := compile(tree, sep)
	if err != nil {
		return nil, err
	}

	return m, nil
}
@ -1,80 +0,0 @@
package glob

import (
	"github.com/gobwas/glob/compiler"
	"github.com/gobwas/glob/syntax"
)

// Glob represents a compiled glob pattern.
type Glob interface {
	Match(string) bool
}

// Compile creates a Glob for the given pattern and strings (if any are present after the pattern) as separators.
// The pattern syntax is:
//
//    pattern:
//        { term }
//
//    term:
//        `*`         matches any sequence of non-separator characters
//        `**`        matches any sequence of characters
//        `?`         matches any single non-separator character
//        `[` [ `!` ] { character-range } `]`
//                    character class (must be non-empty)
//        `{` pattern-list `}`
//                    pattern alternatives
//        c           matches character c (c != `*`, `**`, `?`, `\`, `[`, `{`, `}`)
//        `\` c       matches character c
//
//    character-range:
//        c           matches character c (c != `\\`, `-`, `]`)
//        `\` c       matches character c
//        lo `-` hi   matches character c for lo <= c <= hi
//
//    pattern-list:
//        pattern { `,` pattern }
//                    comma-separated (without spaces) patterns
//
func Compile(pattern string, separators ...rune) (Glob, error) {
	ast, err := syntax.Parse(pattern)
	if err != nil {
		return nil, err
	}

	matcher, err := compiler.Compile(ast, separators)
	if err != nil {
		return nil, err
	}

	return matcher, nil
}

// MustCompile is the same as Compile, except that if Compile returns an error, this will panic.
func MustCompile(pattern string, separators ...rune) Glob {
	g, err := Compile(pattern, separators...)
	if err != nil {
		panic(err)
	}

	return g
}

// QuoteMeta returns a string that quotes all glob pattern meta characters
// inside the argument text; For example, QuoteMeta(`{foo*}`) returns `\{foo\*\}`.
func QuoteMeta(s string) string {
	b := make([]byte, 2*len(s))

	// a byte loop is correct because all meta characters are ASCII
	j := 0
	for i := 0; i < len(s); i++ {
		if syntax.Special(s[i]) {
			b[j] = '\\'
			j++
		}
		b[j] = s[i]
		j++
	}

	return string(b[0:j])
}
@ -1,45 +0,0 @@
package match

import (
	"fmt"

	"github.com/gobwas/glob/util/strings"
)

type Any struct {
	Separators []rune
}

func NewAny(s []rune) Any {
	return Any{s}
}

func (self Any) Match(s string) bool {
	return strings.IndexAnyRunes(s, self.Separators) == -1
}

func (self Any) Index(s string) (int, []int) {
	found := strings.IndexAnyRunes(s, self.Separators)
	switch found {
	case -1:
	case 0:
		return 0, segments0
	default:
		s = s[:found]
	}

	segments := acquireSegments(len(s))
	for i := range s {
		segments = append(segments, i)
	}
	segments = append(segments, len(s))

	return 0, segments
}

func (self Any) Len() int {
	return lenNo
}

func (self Any) String() string {
	return fmt.Sprintf("<any:![%s]>", string(self.Separators))
}
@ -1,84 +0,0 @@
package match

import (
	"fmt"
)

type AnyOf struct {
	Matchers Matchers
}

func NewAnyOf(m ...Matcher) AnyOf {
	return AnyOf{Matchers(m)}
}

func (self *AnyOf) Add(m Matcher) error {
	self.Matchers = append(self.Matchers, m)
	return nil
}

func (self AnyOf) Match(s string) bool {
	for _, m := range self.Matchers {
		if m.Match(s) {
			return true
		}
	}

	return false
}

func (self AnyOf) Index(s string) (int, []int) {
	index := -1

	segments := acquireSegments(len(s))
	for _, m := range self.Matchers {
		idx, seg := m.Index(s)
		if idx == -1 {
			continue
		}

		if index == -1 || idx < index {
			index = idx
			segments = append(segments[:0], seg...)
			continue
		}

		if idx > index {
			continue
		}

		// here idx == index
		segments = appendMerge(segments, seg)
	}

	if index == -1 {
		releaseSegments(segments)
		return -1, nil
	}

	return index, segments
}

func (self AnyOf) Len() (l int) {
	l = -1
	for _, m := range self.Matchers {
		ml := m.Len()
		switch {
		case l == -1:
			l = ml
			continue

		case ml == -1:
			return -1

		case l != ml:
			return -1
		}
	}

	return
}

func (self AnyOf) String() string {
	return fmt.Sprintf("<any_of:[%s]>", self.Matchers)
}
@@ -1,146 +0,0 @@
package match

import (
	"fmt"
	"unicode/utf8"
)

type BTree struct {
	Value            Matcher
	Left             Matcher
	Right            Matcher
	ValueLengthRunes int
	LeftLengthRunes  int
	RightLengthRunes int
	LengthRunes      int
}

func NewBTree(Value, Left, Right Matcher) (tree BTree) {
	tree.Value = Value
	tree.Left = Left
	tree.Right = Right

	lenOk := true
	if tree.ValueLengthRunes = Value.Len(); tree.ValueLengthRunes == -1 {
		lenOk = false
	}

	if Left != nil {
		if tree.LeftLengthRunes = Left.Len(); tree.LeftLengthRunes == -1 {
			lenOk = false
		}
	}

	if Right != nil {
		if tree.RightLengthRunes = Right.Len(); tree.RightLengthRunes == -1 {
			lenOk = false
		}
	}

	if lenOk {
		tree.LengthRunes = tree.LeftLengthRunes + tree.ValueLengthRunes + tree.RightLengthRunes
	} else {
		tree.LengthRunes = -1
	}

	return tree
}

func (self BTree) Len() int {
	return self.LengthRunes
}

// todo?
func (self BTree) Index(s string) (int, []int) {
	return -1, nil
}

func (self BTree) Match(s string) bool {
	inputLen := len(s)

	// self.Length, self.RLen and self.LLen hold the rune lengths of each part;
	// here we manipulate byte lengths for better optimization,
	// but these checks still work, because the minimum byte length of a 1-rune string is 1.
	if self.LengthRunes != -1 && self.LengthRunes > inputLen {
		return false
	}

	// try to cut unnecessary parts
	// by knowledge of length of right and left part
	var offset, limit int
	if self.LeftLengthRunes >= 0 {
		offset = self.LeftLengthRunes
	}
	if self.RightLengthRunes >= 0 {
		limit = inputLen - self.RightLengthRunes
	} else {
		limit = inputLen
	}

	for offset < limit {
		// search for matching part in substring
		index, segments := self.Value.Index(s[offset:limit])
		if index == -1 {
			releaseSegments(segments)
			return false
		}

		l := s[:offset+index]
		var left bool
		if self.Left != nil {
			left = self.Left.Match(l)
		} else {
			left = l == ""
		}

		if left {
			for i := len(segments) - 1; i >= 0; i-- {
				length := segments[i]

				var right bool
				var r string
				// if there is no string for the right branch
				if inputLen <= offset+index+length {
					r = ""
				} else {
					r = s[offset+index+length:]
				}

				if self.Right != nil {
					right = self.Right.Match(r)
				} else {
					right = r == ""
				}

				if right {
					releaseSegments(segments)
					return true
				}
			}
		}

		_, step := utf8.DecodeRuneInString(s[offset+index:])
		offset += index + step

		releaseSegments(segments)
	}

	return false
}

func (self BTree) String() string {
	const n string = "<nil>"
	var l, r string
	if self.Left == nil {
		l = n
	} else {
		l = self.Left.String()
	}
	if self.Right == nil {
		r = n
	} else {
		r = self.Right.String()
	}

	return fmt.Sprintf("<btree:[%s<-%s->%s]>", l, self.Value, r)
}
@@ -1,58 +0,0 @@
package match

import (
	"fmt"
	"strings"
)

type Contains struct {
	Needle string
	Not    bool
}

func NewContains(needle string, not bool) Contains {
	return Contains{needle, not}
}

func (self Contains) Match(s string) bool {
	return strings.Contains(s, self.Needle) != self.Not
}

func (self Contains) Index(s string) (int, []int) {
	var offset int

	idx := strings.Index(s, self.Needle)

	if !self.Not {
		if idx == -1 {
			return -1, nil
		}

		offset = idx + len(self.Needle)
		if len(s) <= offset {
			return 0, []int{offset}
		}
		s = s[offset:]
	} else if idx != -1 {
		s = s[:idx]
	}

	segments := acquireSegments(len(s) + 1)
	for i := range s {
		segments = append(segments, offset+i)
	}

	return 0, append(segments, offset+len(s))
}

func (self Contains) Len() int {
	return lenNo
}

func (self Contains) String() string {
	var not string
	if self.Not {
		not = "!"
	}
	return fmt.Sprintf("<contains:%s[%s]>", not, self.Needle)
}
@@ -1,55 +0,0 @@
package debug

import (
	"bytes"
	"fmt"
	"github.com/gobwas/glob/match"
	"math/rand"
)

func Graphviz(pattern string, m match.Matcher) string {
	return fmt.Sprintf(`digraph G {graph[label="%s"];%s}`, pattern, graphviz_internal(m, fmt.Sprintf("%x", rand.Int63())))
}

func graphviz_internal(m match.Matcher, id string) string {
	buf := &bytes.Buffer{}

	switch matcher := m.(type) {
	case match.BTree:
		fmt.Fprintf(buf, `"%s"[label="%s"];`, id, matcher.Value.String())
		for _, m := range []match.Matcher{matcher.Left, matcher.Right} {
			switch n := m.(type) {
			case nil:
				rnd := rand.Int63()
				fmt.Fprintf(buf, `"%x"[label="<nil>"];`, rnd)
				fmt.Fprintf(buf, `"%s"->"%x";`, id, rnd)

			default:
				sub := fmt.Sprintf("%x", rand.Int63())
				fmt.Fprintf(buf, `"%s"->"%s";`, id, sub)
				fmt.Fprintf(buf, graphviz_internal(n, sub))
			}
		}

	case match.AnyOf:
		fmt.Fprintf(buf, `"%s"[label="AnyOf"];`, id)
		for _, m := range matcher.Matchers {
			rnd := rand.Int63()
			fmt.Fprintf(buf, graphviz_internal(m, fmt.Sprintf("%x", rnd)))
			fmt.Fprintf(buf, `"%s"->"%x";`, id, rnd)
		}

	case match.EveryOf:
		fmt.Fprintf(buf, `"%s"[label="EveryOf"];`, id)
		for _, m := range matcher.Matchers {
			rnd := rand.Int63()
			fmt.Fprintf(buf, graphviz_internal(m, fmt.Sprintf("%x", rnd)))
			fmt.Fprintf(buf, `"%s"->"%x";`, id, rnd)
		}

	default:
		fmt.Fprintf(buf, `"%s"[label="%s"];`, id, m.String())
	}

	return buf.String()
}
@@ -1,99 +0,0 @@
package match

import (
	"fmt"
)

type EveryOf struct {
	Matchers Matchers
}

func NewEveryOf(m ...Matcher) EveryOf {
	return EveryOf{Matchers(m)}
}

func (self *EveryOf) Add(m Matcher) error {
	self.Matchers = append(self.Matchers, m)
	return nil
}

func (self EveryOf) Len() (l int) {
	for _, m := range self.Matchers {
		if ml := m.Len(); l > 0 {
			l += ml
		} else {
			return -1
		}
	}

	return
}

func (self EveryOf) Index(s string) (int, []int) {
	var index int
	var offset int

	// make `in` with cap as len(s),
	// cause it is the maximum size of output segments values
	next := acquireSegments(len(s))
	current := acquireSegments(len(s))

	sub := s
	for i, m := range self.Matchers {
		idx, seg := m.Index(sub)
		if idx == -1 {
			releaseSegments(next)
			releaseSegments(current)
			return -1, nil
		}

		if i == 0 {
			// we use copy here instead of `current = seg`
			// cause seg is a slice from reusable buffer `in`
			// and it could be overwritten in next iteration
			current = append(current, seg...)
		} else {
			// clear the next
			next = next[:0]

			delta := index - (idx + offset)
			for _, ex := range current {
				for _, n := range seg {
					if ex+delta == n {
						next = append(next, n)
					}
				}
			}

			if len(next) == 0 {
				releaseSegments(next)
				releaseSegments(current)
				return -1, nil
			}

			current = append(current[:0], next...)
		}

		index = idx + offset
		sub = s[index:]
		offset += idx
	}

	releaseSegments(next)

	return index, current
}

func (self EveryOf) Match(s string) bool {
	for _, m := range self.Matchers {
		if !m.Match(s) {
			return false
		}
	}

	return true
}

func (self EveryOf) String() string {
	return fmt.Sprintf("<every_of:[%s]>", self.Matchers)
}
@@ -1,49 +0,0 @@
package match

import (
	"fmt"
	"github.com/gobwas/glob/util/runes"
	"unicode/utf8"
)

type List struct {
	List []rune
	Not  bool
}

func NewList(list []rune, not bool) List {
	return List{list, not}
}

func (self List) Match(s string) bool {
	r, w := utf8.DecodeRuneInString(s)
	if len(s) > w {
		return false
	}

	inList := runes.IndexRune(self.List, r) != -1
	return inList == !self.Not
}

func (self List) Len() int {
	return lenOne
}

func (self List) Index(s string) (int, []int) {
	for i, r := range s {
		if self.Not == (runes.IndexRune(self.List, r) == -1) {
			return i, segmentsByRuneLength[utf8.RuneLen(r)]
		}
	}

	return -1, nil
}

func (self List) String() string {
	var not string
	if self.Not {
		not = "!"
	}

	return fmt.Sprintf("<list:%s[%s]>", not, string(self.List))
}
@@ -1,81 +0,0 @@
package match

// todo common table of rune's length

import (
	"fmt"
	"strings"
)

const lenOne = 1
const lenZero = 0
const lenNo = -1

type Matcher interface {
	Match(string) bool
	Index(string) (int, []int)
	Len() int
	String() string
}

type Matchers []Matcher

func (m Matchers) String() string {
	var s []string
	for _, matcher := range m {
		s = append(s, fmt.Sprint(matcher))
	}

	return fmt.Sprintf("%s", strings.Join(s, ","))
}

// appendMerge merges and sorts given already SORTED and UNIQUE segments.
func appendMerge(target, sub []int) []int {
	lt, ls := len(target), len(sub)
	out := make([]int, 0, lt+ls)

	for x, y := 0, 0; x < lt || y < ls; {
		if x >= lt {
			out = append(out, sub[y:]...)
			break
		}

		if y >= ls {
			out = append(out, target[x:]...)
			break
		}

		xValue := target[x]
		yValue := sub[y]

		switch {

		case xValue == yValue:
			out = append(out, xValue)
			x++
			y++

		case xValue < yValue:
			out = append(out, xValue)
			x++

		case yValue < xValue:
			out = append(out, yValue)
			y++

		}
	}

	target = append(target[:0], out...)

	return target
}

func reverseSegments(input []int) {
	l := len(input)
	m := l / 2

	for i := 0; i < m; i++ {
		input[i], input[l-i-1] = input[l-i-1], input[i]
	}
}
@@ -1,49 +0,0 @@
package match

import (
	"fmt"
	"unicode/utf8"
)

type Max struct {
	Limit int
}

func NewMax(l int) Max {
	return Max{l}
}

func (self Max) Match(s string) bool {
	var l int
	for range s {
		l += 1
		if l > self.Limit {
			return false
		}
	}

	return true
}

func (self Max) Index(s string) (int, []int) {
	segments := acquireSegments(self.Limit + 1)
	segments = append(segments, 0)
	var count int
	for i, r := range s {
		count++
		if count > self.Limit {
			break
		}
		segments = append(segments, i+utf8.RuneLen(r))
	}

	return 0, segments
}

func (self Max) Len() int {
	return lenNo
}

func (self Max) String() string {
	return fmt.Sprintf("<max:%d>", self.Limit)
}
@@ -1,57 +0,0 @@
package match

import (
	"fmt"
	"unicode/utf8"
)

type Min struct {
	Limit int
}

func NewMin(l int) Min {
	return Min{l}
}

func (self Min) Match(s string) bool {
	var l int
	for range s {
		l += 1
		if l >= self.Limit {
			return true
		}
	}

	return false
}

func (self Min) Index(s string) (int, []int) {
	var count int

	c := len(s) - self.Limit + 1
	if c <= 0 {
		return -1, nil
	}

	segments := acquireSegments(c)
	for i, r := range s {
		count++
		if count >= self.Limit {
			segments = append(segments, i+utf8.RuneLen(r))
		}
	}

	if len(segments) == 0 {
		return -1, nil
	}

	return 0, segments
}

func (self Min) Len() int {
	return lenNo
}

func (self Min) String() string {
	return fmt.Sprintf("<min:%d>", self.Limit)
}
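`Min.Match` above counts runes and returns as soon as the limit is reached, instead of counting the whole string with `utf8.RuneCountInString`. A standalone sketch of that early-exit rune count (an illustrative re-implementation, not the vendored function):

```go
package main

import "fmt"

// atLeastRunes reports whether s contains at least n runes,
// stopping the scan as soon as n runes have been seen
// (mirrors the early return in Min.Match).
func atLeastRunes(s string, n int) bool {
	count := 0
	for range s { // ranging over a string yields runes, not bytes
		count++
		if count >= n {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(atLeastRunes("héllo", 5)) // true: 5 runes, 6 bytes
	fmt.Println(atLeastRunes("hé", 3))    // false: only 2 runes
}
```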
@@ -1,27 +0,0 @@
package match

import (
	"fmt"
)

type Nothing struct{}

func NewNothing() Nothing {
	return Nothing{}
}

func (self Nothing) Match(s string) bool {
	return len(s) == 0
}

func (self Nothing) Index(s string) (int, []int) {
	return 0, segments0
}

func (self Nothing) Len() int {
	return lenZero
}

func (self Nothing) String() string {
	return fmt.Sprintf("<nothing>")
}
@@ -1,50 +0,0 @@
package match

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

type Prefix struct {
	Prefix string
}

func NewPrefix(p string) Prefix {
	return Prefix{p}
}

func (self Prefix) Index(s string) (int, []int) {
	idx := strings.Index(s, self.Prefix)
	if idx == -1 {
		return -1, nil
	}

	length := len(self.Prefix)
	var sub string
	if len(s) > idx+length {
		sub = s[idx+length:]
	} else {
		sub = ""
	}

	segments := acquireSegments(len(sub) + 1)
	segments = append(segments, length)
	for i, r := range sub {
		segments = append(segments, length+i+utf8.RuneLen(r))
	}

	return idx, segments
}

func (self Prefix) Len() int {
	return lenNo
}

func (self Prefix) Match(s string) bool {
	return strings.HasPrefix(s, self.Prefix)
}

func (self Prefix) String() string {
	return fmt.Sprintf("<prefix:%s>", self.Prefix)
}
@@ -1,62 +0,0 @@
package match

import (
	"fmt"
	"strings"
)

type PrefixSuffix struct {
	Prefix, Suffix string
}

func NewPrefixSuffix(p, s string) PrefixSuffix {
	return PrefixSuffix{p, s}
}

func (self PrefixSuffix) Index(s string) (int, []int) {
	prefixIdx := strings.Index(s, self.Prefix)
	if prefixIdx == -1 {
		return -1, nil
	}

	suffixLen := len(self.Suffix)
	if suffixLen <= 0 {
		return prefixIdx, []int{len(s) - prefixIdx}
	}

	if (len(s) - prefixIdx) <= 0 {
		return -1, nil
	}

	segments := acquireSegments(len(s) - prefixIdx)
	for sub := s[prefixIdx:]; ; {
		suffixIdx := strings.LastIndex(sub, self.Suffix)
		if suffixIdx == -1 {
			break
		}

		segments = append(segments, suffixIdx+suffixLen)
		sub = sub[:suffixIdx]
	}

	if len(segments) == 0 {
		releaseSegments(segments)
		return -1, nil
	}

	reverseSegments(segments)

	return prefixIdx, segments
}

func (self PrefixSuffix) Len() int {
	return lenNo
}

func (self PrefixSuffix) Match(s string) bool {
	return strings.HasPrefix(s, self.Prefix) && strings.HasSuffix(s, self.Suffix)
}

func (self PrefixSuffix) String() string {
	return fmt.Sprintf("<prefix_suffix:[%s,%s]>", self.Prefix, self.Suffix)
}
@@ -1,48 +0,0 @@
package match

import (
	"fmt"
	"unicode/utf8"
)

type Range struct {
	Lo, Hi rune
	Not    bool
}

func NewRange(lo, hi rune, not bool) Range {
	return Range{lo, hi, not}
}

func (self Range) Len() int {
	return lenOne
}

func (self Range) Match(s string) bool {
	r, w := utf8.DecodeRuneInString(s)
	if len(s) > w {
		return false
	}

	inRange := r >= self.Lo && r <= self.Hi

	return inRange == !self.Not
}

func (self Range) Index(s string) (int, []int) {
	for i, r := range s {
		if self.Not != (r >= self.Lo && r <= self.Hi) {
			return i, segmentsByRuneLength[utf8.RuneLen(r)]
		}
	}

	return -1, nil
}

func (self Range) String() string {
	var not string
	if self.Not {
		not = "!"
	}
	return fmt.Sprintf("<range:%s[%s,%s]>", not, string(self.Lo), string(self.Hi))
}
@@ -1,77 +0,0 @@
package match

import (
	"fmt"
)

type Row struct {
	Matchers    Matchers
	RunesLength int
	Segments    []int
}

func NewRow(len int, m ...Matcher) Row {
	return Row{
		Matchers:    Matchers(m),
		RunesLength: len,
		Segments:    []int{len},
	}
}

func (self Row) matchAll(s string) bool {
	var idx int
	for _, m := range self.Matchers {
		length := m.Len()

		var next, i int
		for next = range s[idx:] {
			i++
			if i == length {
				break
			}
		}

		if i < length || !m.Match(s[idx:idx+next+1]) {
			return false
		}

		idx += next + 1
	}

	return true
}

func (self Row) lenOk(s string) bool {
	var i int
	for range s {
		i++
		if i > self.RunesLength {
			return false
		}
	}
	return self.RunesLength == i
}

func (self Row) Match(s string) bool {
	return self.lenOk(s) && self.matchAll(s)
}

func (self Row) Len() (l int) {
	return self.RunesLength
}

func (self Row) Index(s string) (int, []int) {
	for i := range s {
		if len(s[i:]) < self.RunesLength {
			break
		}
		if self.matchAll(s[i:]) {
			return i, self.Segments
		}
	}
	return -1, nil
}

func (self Row) String() string {
	return fmt.Sprintf("<row_%d:[%s]>", self.RunesLength, self.Matchers)
}
@@ -1,91 +0,0 @@
package match

import (
	"sync"
)

type SomePool interface {
	Get() []int
	Put([]int)
}

var segmentsPools [1024]sync.Pool

func toPowerOfTwo(v int) int {
	v--
	v |= v >> 1
	v |= v >> 2
	v |= v >> 4
	v |= v >> 8
	v |= v >> 16
	v++

	return v
}

const (
	cacheFrom             = 16
	cacheToAndHigher      = 1024
	cacheFromIndex        = 15
	cacheToAndHigherIndex = 1023
)

var (
	segments0 = []int{0}
	segments1 = []int{1}
	segments2 = []int{2}
	segments3 = []int{3}
	segments4 = []int{4}
)

var segmentsByRuneLength [5][]int = [5][]int{
	0: segments0,
	1: segments1,
	2: segments2,
	3: segments3,
	4: segments4,
}

func init() {
	for i := cacheToAndHigher; i >= cacheFrom; i >>= 1 {
		func(i int) {
			segmentsPools[i-1] = sync.Pool{New: func() interface{} {
				return make([]int, 0, i)
			}}
		}(i)
	}
}

func getTableIndex(c int) int {
	p := toPowerOfTwo(c)
	switch {
	case p >= cacheToAndHigher:
		return cacheToAndHigherIndex
	case p <= cacheFrom:
		return cacheFromIndex
	default:
		return p - 1
	}
}

func acquireSegments(c int) []int {
	// make []int with less capacity than cacheFrom
	// is faster than acquiring it from pool
	if c < cacheFrom {
		return make([]int, 0, c)
	}

	return segmentsPools[getTableIndex(c)].Get().([]int)[:0]
}

func releaseSegments(s []int) {
	c := cap(s)

	// make []int with less capacity than cacheFrom
	// is faster than acquiring it from pool
	if c < cacheFrom {
		return
	}

	segmentsPools[getTableIndex(c)].Put(s)
}
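`toPowerOfTwo` above rounds a requested capacity up to the next power of two by smearing the highest set bit into every lower position and then adding one; the pool table is then indexed by that rounded size. A quick standalone check of the rounding behavior (hypothetical sample values):

```go
package main

import "fmt"

// toPowerOfTwo rounds v up to the next power of two:
// v-1 ensures exact powers of two map to themselves, the
// shift-or cascade fills all bits below the highest set bit,
// and v+1 carries into the next power of two.
func toPowerOfTwo(v int) int {
	v--
	v |= v >> 1
	v |= v >> 2
	v |= v >> 4
	v |= v >> 8
	v |= v >> 16
	v++
	return v
}

func main() {
	for _, v := range []int{15, 16, 17, 1000} {
		fmt.Println(v, "->", toPowerOfTwo(v))
	}
	// 15 -> 16, 16 -> 16, 17 -> 32, 1000 -> 1024
}
```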
@@ -1,43 +0,0 @@
package match

import (
	"fmt"
	"github.com/gobwas/glob/util/runes"
	"unicode/utf8"
)

// single represents ?
type Single struct {
	Separators []rune
}

func NewSingle(s []rune) Single {
	return Single{s}
}

func (self Single) Match(s string) bool {
	r, w := utf8.DecodeRuneInString(s)
	if len(s) > w {
		return false
	}

	return runes.IndexRune(self.Separators, r) == -1
}

func (self Single) Len() int {
	return lenOne
}

func (self Single) Index(s string) (int, []int) {
	for i, r := range s {
		if runes.IndexRune(self.Separators, r) == -1 {
			return i, segmentsByRuneLength[utf8.RuneLen(r)]
		}
	}

	return -1, nil
}

func (self Single) String() string {
	return fmt.Sprintf("<single:![%s]>", string(self.Separators))
}
@@ -1,35 +0,0 @@
package match

import (
	"fmt"
	"strings"
)

type Suffix struct {
	Suffix string
}

func NewSuffix(s string) Suffix {
	return Suffix{s}
}

func (self Suffix) Len() int {
	return lenNo
}

func (self Suffix) Match(s string) bool {
	return strings.HasSuffix(s, self.Suffix)
}

func (self Suffix) Index(s string) (int, []int) {
	idx := strings.Index(s, self.Suffix)
	if idx == -1 {
		return -1, nil
	}

	return 0, []int{idx + len(self.Suffix)}
}

func (self Suffix) String() string {
	return fmt.Sprintf("<suffix:%s>", self.Suffix)
}
@@ -1,33 +0,0 @@
package match

import (
	"fmt"
)

type Super struct{}

func NewSuper() Super {
	return Super{}
}

func (self Super) Match(s string) bool {
	return true
}

func (self Super) Len() int {
	return lenNo
}

func (self Super) Index(s string) (int, []int) {
	segments := acquireSegments(len(s) + 1)
	for i := range s {
		segments = append(segments, i)
	}
	segments = append(segments, len(s))

	return 0, segments
}

func (self Super) String() string {
	return fmt.Sprintf("<super>")
}
@@ -1,45 +0,0 @@
package match

import (
	"fmt"
	"strings"
	"unicode/utf8"
)

// raw represents raw string to match
type Text struct {
	Str         string
	RunesLength int
	BytesLength int
	Segments    []int
}

func NewText(s string) Text {
	return Text{
		Str:         s,
		RunesLength: utf8.RuneCountInString(s),
		BytesLength: len(s),
		Segments:    []int{len(s)},
	}
}

func (self Text) Match(s string) bool {
	return self.Str == s
}

func (self Text) Len() int {
	return self.RunesLength
}

func (self Text) Index(s string) (int, []int) {
	index := strings.Index(s, self.Str)
	if index == -1 {
		return -1, nil
	}

	return index, self.Segments
}

func (self Text) String() string {
	return fmt.Sprintf("<text:`%v`>", self.Str)
}
@@ -1,72 +0,0 @@
package ast

type Node struct {
	Parent   *Node
	Children []*Node
	Value    interface{}
	Kind     Kind
}

func NewNode(k Kind, v interface{}, ch ...*Node) *Node {
	n := &Node{
		Kind:  k,
		Value: v,
	}
	for _, c := range ch {
		Insert(n, c)
	}
	return n
}

func (a *Node) Equal(b *Node) bool {
	if a.Kind != b.Kind {
		return false
	}
	if a.Value != b.Value {
		return false
	}
	if len(a.Children) != len(b.Children) {
		return false
	}
	for i, c := range a.Children {
		if !c.Equal(b.Children[i]) {
			return false
		}
	}
	return true
}

func Insert(parent *Node, children ...*Node) {
	parent.Children = append(parent.Children, children...)
	for _, ch := range children {
		ch.Parent = parent
	}
}

type List struct {
	Not   bool
	Chars string
}

type Range struct {
	Not    bool
	Lo, Hi rune
}

type Text struct {
	Text string
}

type Kind int

const (
	KindNothing Kind = iota
	KindPattern
	KindList
	KindRange
	KindText
	KindAny
	KindSuper
	KindSingle
	KindAnyOf
)
@@ -1,157 +0,0 @@
package ast

import (
	"errors"
	"fmt"
	"github.com/gobwas/glob/syntax/lexer"
	"unicode/utf8"
)

type Lexer interface {
	Next() lexer.Token
}

type parseFn func(*Node, Lexer) (parseFn, *Node, error)

func Parse(lexer Lexer) (*Node, error) {
	var parser parseFn

	root := NewNode(KindPattern, nil)

	var (
		tree *Node
		err  error
	)
	for parser, tree = parserMain, root; parser != nil; {
		parser, tree, err = parser(tree, lexer)
		if err != nil {
			return nil, err
		}
	}

	return root, nil
}

func parserMain(tree *Node, lex Lexer) (parseFn, *Node, error) {
	for {
		token := lex.Next()
		switch token.Type {
		case lexer.EOF:
			return nil, tree, nil

		case lexer.Error:
			return nil, tree, errors.New(token.Raw)

		case lexer.Text:
			Insert(tree, NewNode(KindText, Text{token.Raw}))
			return parserMain, tree, nil

		case lexer.Any:
			Insert(tree, NewNode(KindAny, nil))
			return parserMain, tree, nil

		case lexer.Super:
			Insert(tree, NewNode(KindSuper, nil))
			return parserMain, tree, nil

		case lexer.Single:
			Insert(tree, NewNode(KindSingle, nil))
			return parserMain, tree, nil

		case lexer.RangeOpen:
			return parserRange, tree, nil

		case lexer.TermsOpen:
			a := NewNode(KindAnyOf, nil)
			Insert(tree, a)

			p := NewNode(KindPattern, nil)
			Insert(a, p)

			return parserMain, p, nil

		case lexer.Separator:
			p := NewNode(KindPattern, nil)
			Insert(tree.Parent, p)

			return parserMain, p, nil

		case lexer.TermsClose:
			return parserMain, tree.Parent.Parent, nil

		default:
			return nil, tree, fmt.Errorf("unexpected token: %s", token)
		}
	}
	return nil, tree, fmt.Errorf("unknown error")
}

func parserRange(tree *Node, lex Lexer) (parseFn, *Node, error) {
	var (
		not   bool
		lo    rune
		hi    rune
		chars string
	)
	for {
		token := lex.Next()
		switch token.Type {
		case lexer.EOF:
			return nil, tree, errors.New("unexpected end")

		case lexer.Error:
			return nil, tree, errors.New(token.Raw)

		case lexer.Not:
			not = true

		case lexer.RangeLo:
			r, w := utf8.DecodeRuneInString(token.Raw)
			if len(token.Raw) > w {
				return nil, tree, fmt.Errorf("unexpected length of lo character")
			}
			lo = r

		case lexer.RangeBetween:
			//

		case lexer.RangeHi:
			r, w := utf8.DecodeRuneInString(token.Raw)
			if len(token.Raw) > w {
				return nil, tree, fmt.Errorf("unexpected length of hi character")
			}

			hi = r

			if hi < lo {
				return nil, tree, fmt.Errorf("hi character '%s' should be greater than lo '%s'", string(hi), string(lo))
			}

		case lexer.Text:
			chars = token.Raw

		case lexer.RangeClose:
			isRange := lo != 0 && hi != 0
			isChars := chars != ""

			if isChars == isRange {
				return nil, tree, fmt.Errorf("could not parse range")
			}

			if isRange {
				Insert(tree, NewNode(KindRange, Range{
					Lo:  lo,
					Hi:  hi,
					Not: not,
				}))
			} else {
				Insert(tree, NewNode(KindList, List{
					Chars: chars,
					Not:   not,
				}))
			}

			return parserMain, tree, nil
		}
	}
}
@@ -1,273 +0,0 @@
package lexer

import (
	"bytes"
	"fmt"
	"github.com/gobwas/glob/util/runes"
	"unicode/utf8"
)

const (
	char_any           = '*'
	char_comma         = ','
	char_single        = '?'
	char_escape        = '\\'
	char_range_open    = '['
	char_range_close   = ']'
	char_terms_open    = '{'
	char_terms_close   = '}'
	char_range_not     = '!'
	char_range_between = '-'
)

var specials = []byte{
	char_any,
	char_single,
	char_escape,
	char_range_open,
	char_range_close,
	char_terms_open,
	char_terms_close,
}

func Special(c byte) bool {
	return bytes.IndexByte(specials, c) != -1
}

type tokens []Token

func (i *tokens) shift() (ret Token) {
	ret = (*i)[0]
	copy(*i, (*i)[1:])
	*i = (*i)[:len(*i)-1]
	return
}

func (i *tokens) push(v Token) {
	*i = append(*i, v)
}

func (i *tokens) empty() bool {
	return len(*i) == 0
}

var eof rune = 0

type lexer struct {
	data string
	pos  int
	err  error

	tokens     tokens
	termsLevel int

	lastRune     rune
	lastRuneSize int
	hasRune      bool
}

func NewLexer(source string) *lexer {
	l := &lexer{
		data:   source,
		tokens: tokens(make([]Token, 0, 4)),
	}
	return l
}

func (l *lexer) Next() Token {
	if l.err != nil {
		return Token{Error, l.err.Error()}
	}
	if !l.tokens.empty() {
		return l.tokens.shift()
	}

	l.fetchItem()
	return l.Next()
}

func (l *lexer) peek() (r rune, w int) {
	if l.pos == len(l.data) {
		return eof, 0
	}

	r, w = utf8.DecodeRuneInString(l.data[l.pos:])
	if r == utf8.RuneError {
		l.errorf("could not read rune")
		r = eof
		w = 0
	}

	return
}

func (l *lexer) read() rune {
	if l.hasRune {
		l.hasRune = false
		l.seek(l.lastRuneSize)
		return l.lastRune
	}

	r, s := l.peek()
	l.seek(s)

	l.lastRune = r
	l.lastRuneSize = s

	return r
}

func (l *lexer) seek(w int) {
	l.pos += w
}

func (l *lexer) unread() {
	if l.hasRune {
		l.errorf("could not unread rune")
		return
	}
	l.seek(-l.lastRuneSize)
	l.hasRune = true
}

func (l *lexer) errorf(f string, v ...interface{}) {
	l.err = fmt.Errorf(f, v...)
}

func (l *lexer) inTerms() bool {
	return l.termsLevel > 0
}

func (l *lexer) termsEnter() {
	l.termsLevel++
}

func (l *lexer) termsLeave() {
	l.termsLevel--
}

var inTextBreakers = []rune{char_single, char_any, char_range_open, char_terms_open}
var inTermsBreakers = append(inTextBreakers, char_terms_close, char_comma)

func (l *lexer) fetchItem() {
	r := l.read()
	switch {
	case r == eof:
		l.tokens.push(Token{EOF, ""})

	case r == char_terms_open:
		l.termsEnter()
		l.tokens.push(Token{TermsOpen, string(r)})

	case r == char_comma && l.inTerms():
		l.tokens.push(Token{Separator, string(r)})

	case r == char_terms_close && l.inTerms():
		l.tokens.push(Token{TermsClose, string(r)})
		l.termsLeave()

	case r == char_range_open:
		l.tokens.push(Token{RangeOpen, string(r)})
		l.fetchRange()

	case r == char_single:
		l.tokens.push(Token{Single, string(r)})

	case r == char_any:
		if l.read() == char_any {
			l.tokens.push(Token{Super, string(r) + string(r)})
		} else {
			l.unread()
			l.tokens.push(Token{Any, string(r)})
		}

	default:
		l.unread()

		var breakers []rune
		if l.inTerms() {
			breakers = inTermsBreakers
		} else {
			breakers = inTextBreakers
		}
		l.fetchText(breakers)
	}
}

func (l *lexer) fetchRange() {
	var wantHi bool
	var wantClose bool
	var seenNot bool
	for {
		r := l.read()
		if r == eof {
			l.errorf("unexpected end of input")
			return
		}

		if wantClose {
			if r != char_range_close {
				l.errorf("expected close range character")
			} else {
				l.tokens.push(Token{RangeClose, string(r)})
			}
			return
		}

		if wantHi {
			l.tokens.push(Token{RangeHi, string(r)})
			wantClose = true
			continue
		}

		if !seenNot && r == char_range_not {
			l.tokens.push(Token{Not, string(r)})
			seenNot = true
			continue
		}

		if n, w := l.peek(); n == char_range_between {
			l.seek(w)
			l.tokens.push(Token{RangeLo, string(r)})
			l.tokens.push(Token{RangeBetween, string(n)})
			wantHi = true
			continue
		}

		l.unread() // unread first peek and fetch as text
		l.fetchText([]rune{char_range_close})
		wantClose = true
	}
}

func (l *lexer) fetchText(breakers []rune) {
	var data []rune
	var escaped bool

reading:
	for {
		r := l.read()
		if r == eof {
			break
		}

		if !escaped {
			if r == char_escape {
				escaped = true
				continue
			}

			if runes.IndexRune(breakers, r) != -1 {
				l.unread()
				break reading
			}
		}

		escaped = false
		data = append(data, r)
	}

	if len(data) > 0 {
		l.tokens.push(Token{Text, string(data)})
	}
}
@@ -1,88 +0,0 @@
package lexer

import "fmt"

type TokenType int

const (
	EOF TokenType = iota
	Error
	Text
	Char
	Any
	Super
	Single
	Not
	Separator
	RangeOpen
	RangeClose
	RangeLo
	RangeHi
	RangeBetween
	TermsOpen
	TermsClose
)

func (tt TokenType) String() string {
	switch tt {
	case EOF:
		return "eof"

	case Error:
		return "error"

	case Text:
		return "text"

	case Char:
		return "char"

	case Any:
		return "any"

	case Super:
		return "super"

	case Single:
		return "single"

	case Not:
		return "not"

	case Separator:
		return "separator"

	case RangeOpen:
		return "range_open"

	case RangeClose:
		return "range_close"

	case RangeLo:
		return "range_lo"

	case RangeHi:
		return "range_hi"

	case RangeBetween:
		return "range_between"

	case TermsOpen:
		return "terms_open"

	case TermsClose:
		return "terms_close"

	default:
		return "undef"
	}
}

type Token struct {
	Type TokenType
	Raw  string
}

func (t Token) String() string {
	return fmt.Sprintf("%v<%q>", t.Type, t.Raw)
}
@@ -1,14 +0,0 @@
package syntax

import (
	"github.com/gobwas/glob/syntax/ast"
	"github.com/gobwas/glob/syntax/lexer"
)

func Parse(s string) (*ast.Node, error) {
	return ast.Parse(lexer.NewLexer(s))
}

func Special(b byte) bool {
	return lexer.Special(b)
}
@@ -1,154 +0,0 @@
package runes

func Index(s, needle []rune) int {
	ls, ln := len(s), len(needle)

	switch {
	case ln == 0:
		return 0
	case ln == 1:
		return IndexRune(s, needle[0])
	case ln == ls:
		if Equal(s, needle) {
			return 0
		}
		return -1
	case ln > ls:
		return -1
	}

head:
	for i := 0; i < ls && ls-i >= ln; i++ {
		for y := 0; y < ln; y++ {
			if s[i+y] != needle[y] {
				continue head
			}
		}

		return i
	}

	return -1
}

func LastIndex(s, needle []rune) int {
	ls, ln := len(s), len(needle)

	switch {
	case ln == 0:
		if ls == 0 {
			return 0
		}
		return ls
	case ln == 1:
		return IndexLastRune(s, needle[0])
	case ln == ls:
		if Equal(s, needle) {
			return 0
		}
		return -1
	case ln > ls:
		return -1
	}

head:
	for i := ls - 1; i >= 0 && i >= ln; i-- {
		for y := ln - 1; y >= 0; y-- {
			if s[i-(ln-y-1)] != needle[y] {
				continue head
			}
		}

		return i - ln + 1
	}

	return -1
}

// IndexAny returns the index of the first instance of any Unicode code point
// from chars in s, or -1 if no Unicode code point from chars is present in s.
func IndexAny(s, chars []rune) int {
	if len(chars) > 0 {
		for i, c := range s {
			for _, m := range chars {
				if c == m {
					return i
				}
			}
		}
	}
	return -1
}

func Contains(s, needle []rune) bool {
	return Index(s, needle) >= 0
}

func Max(s []rune) (max rune) {
	for _, r := range s {
		if r > max {
			max = r
		}
	}

	return
}

func Min(s []rune) rune {
	min := rune(-1)
	for _, r := range s {
		if min == -1 {
			min = r
			continue
		}

		if r < min {
			min = r
		}
	}

	return min
}

func IndexRune(s []rune, r rune) int {
	for i, c := range s {
		if c == r {
			return i
		}
	}
	return -1
}

func IndexLastRune(s []rune, r rune) int {
	for i := len(s) - 1; i >= 0; i-- {
		if s[i] == r {
			return i
		}
	}

	return -1
}

func Equal(a, b []rune) bool {
	if len(a) == len(b) {
		for i := 0; i < len(a); i++ {
			if a[i] != b[i] {
				return false
			}
		}

		return true
	}

	return false
}

// HasPrefix tests whether the string s begins with prefix.
func HasPrefix(s, prefix []rune) bool {
	return len(s) >= len(prefix) && Equal(s[0:len(prefix)], prefix)
}

// HasSuffix tests whether the string s ends with suffix.
func HasSuffix(s, suffix []rune) bool {
	return len(s) >= len(suffix) && Equal(s[len(s)-len(suffix):], suffix)
}
@@ -1,13 +0,0 @@
package strings

import "strings"

func IndexAnyRunes(s string, rs []rune) int {
	for _, r := range rs {
		if i := strings.IndexRune(s, r); i != -1 {
			return i
		}
	}

	return -1
}

File diff suppressed because it is too large
@@ -1,27 +0,0 @@
Copyright (c) 2009 The Go Authors. All rights reserved.

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions are
met:

   * Redistributions of source code must retain the above copyright
notice, this list of conditions and the following disclaimer.
   * Redistributions in binary form must reproduce the above
copyright notice, this list of conditions and the following disclaimer
in the documentation and/or other materials provided with the
distribution.
   * Neither the name of Google Inc. nor the names of its
contributors may be used to endorse or promote products derived from
this software without specific prior written permission.

THIS SOFTWARE IS PROVIDED BY THE COPYRIGHT HOLDERS AND CONTRIBUTORS
"AS IS" AND ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT
LIMITED TO, THE IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR
A PARTICULAR PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE COPYRIGHT
OWNER OR CONTRIBUTORS BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL,
SPECIAL, EXEMPLARY, OR CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT
LIMITED TO, PROCUREMENT OF SUBSTITUTE GOODS OR SERVICES; LOSS OF USE,
DATA, OR PROFITS; OR BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY
THEORY OF LIABILITY, WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT
(INCLUDING NEGLIGENCE OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE
OF THIS SOFTWARE, EVEN IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
@@ -1,682 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

// Identify mismatches between assembly files and Go func declarations.

package main

import (
	"bytes"
	"fmt"
	"go/ast"
	"go/token"
	"regexp"
	"strconv"
	"strings"
)

// 'kind' is a kind of assembly variable.
// The kinds 1, 2, 4, 8 stand for values of that size.
type asmKind int

// These special kinds are not valid sizes.
const (
	asmString asmKind = 100 + iota
	asmSlice
	asmInterface
	asmEmptyInterface
)

// An asmArch describes assembly parameters for an architecture
type asmArch struct {
	name      string
	ptrSize   int
	intSize   int
	maxAlign  int
	bigEndian bool
	stack     string
	lr        bool
}

// An asmFunc describes the expected variables for a function on a given architecture.
type asmFunc struct {
	arch        *asmArch
	size        int // size of all arguments
	vars        map[string]*asmVar
	varByOffset map[int]*asmVar
}

// An asmVar describes a single assembly variable.
type asmVar struct {
	name  string
	kind  asmKind
	typ   string
	off   int
	size  int
	inner []*asmVar
}

var (
	asmArch386      = asmArch{"386", 4, 4, 4, false, "SP", false}
	asmArchArm      = asmArch{"arm", 4, 4, 4, false, "R13", true}
	asmArchArm64    = asmArch{"arm64", 8, 8, 8, false, "RSP", true}
	asmArchAmd64    = asmArch{"amd64", 8, 8, 8, false, "SP", false}
	asmArchAmd64p32 = asmArch{"amd64p32", 4, 4, 8, false, "SP", false}
	asmArchMips64   = asmArch{"mips64", 8, 8, 8, true, "R29", true}
	asmArchMips64LE = asmArch{"mips64", 8, 8, 8, false, "R29", true}
	asmArchPpc64    = asmArch{"ppc64", 8, 8, 8, true, "R1", true}
	asmArchPpc64LE  = asmArch{"ppc64le", 8, 8, 8, false, "R1", true}

	arches = []*asmArch{
		&asmArch386,
		&asmArchArm,
		&asmArchArm64,
		&asmArchAmd64,
		&asmArchAmd64p32,
		&asmArchMips64,
		&asmArchMips64LE,
		&asmArchPpc64,
		&asmArchPpc64LE,
	}
)

var (
	re           = regexp.MustCompile
	asmPlusBuild = re(`//\s+\+build\s+([^\n]+)`)
	asmTEXT      = re(`\bTEXT\b.*·([^\(]+)\(SB\)(?:\s*,\s*([0-9A-Z|+]+))?(?:\s*,\s*\$(-?[0-9]+)(?:-([0-9]+))?)?`)
	asmDATA      = re(`\b(DATA|GLOBL)\b`)
	asmNamedFP   = re(`([a-zA-Z0-9_\xFF-\x{10FFFF}]+)(?:\+([0-9]+))\(FP\)`)
	asmUnnamedFP = re(`[^+\-0-9](([0-9]+)\(FP\))`)
	asmSP        = re(`[^+\-0-9](([0-9]+)\(([A-Z0-9]+)\))`)
	asmOpcode    = re(`^\s*(?:[A-Z0-9a-z_]+:)?\s*([A-Z]+)\s*([^,]*)(?:,\s*(.*))?`)
	ppc64Suff    = re(`([BHWD])(ZU|Z|U|BR)?$`)
)

func asmCheck(pkg *Package) {
	if !vet("asmdecl") {
		return
	}

	// No work if no assembly files.
	if !pkg.hasFileWithSuffix(".s") {
		return
	}

	// Gather declarations. knownFunc[name][arch] is func description.
	knownFunc := make(map[string]map[string]*asmFunc)

	for _, f := range pkg.files {
		if f.file != nil {
			for _, decl := range f.file.Decls {
				if decl, ok := decl.(*ast.FuncDecl); ok && decl.Body == nil {
					knownFunc[decl.Name.Name] = f.asmParseDecl(decl)
				}
			}
		}
	}

Files:
	for _, f := range pkg.files {
		if !strings.HasSuffix(f.name, ".s") {
			continue
		}
		Println("Checking file", f.name)

		// Determine architecture from file name if possible.
		var arch string
		var archDef *asmArch
		for _, a := range arches {
			if strings.HasSuffix(f.name, "_"+a.name+".s") {
				arch = a.name
				archDef = a
				break
			}
		}

		lines := strings.SplitAfter(string(f.content), "\n")
		var (
			fn                 *asmFunc
			fnName             string
			localSize, argSize int
			wroteSP            bool
			haveRetArg         bool
			retLine            []int
		)

		flushRet := func() {
			if fn != nil && fn.vars["ret"] != nil && !haveRetArg && len(retLine) > 0 {
				v := fn.vars["ret"]
				for _, line := range retLine {
					f.Badf(token.NoPos, "%s:%d: [%s] %s: RET without writing to %d-byte ret+%d(FP)", f.name, line, arch, fnName, v.size, v.off)
				}
			}
			retLine = nil
		}
		for lineno, line := range lines {
			lineno++

			badf := func(format string, args ...interface{}) {
				f.Badf(token.NoPos, "%s:%d: [%s] %s: %s", f.name, lineno, arch, fnName, fmt.Sprintf(format, args...))
			}

			if arch == "" {
				// Determine architecture from +build line if possible.
				if m := asmPlusBuild.FindStringSubmatch(line); m != nil {
				Fields:
					for _, fld := range strings.Fields(m[1]) {
						for _, a := range arches {
							if a.name == fld {
								arch = a.name
								archDef = a
								break Fields
							}
						}
					}
				}
			}

			if m := asmTEXT.FindStringSubmatch(line); m != nil {
				flushRet()
				if arch == "" {
					f.Warnf(token.NoPos, "%s: cannot determine architecture for assembly file", f.name)
					continue Files
				}
				fnName = m[1]
				fn = knownFunc[m[1]][arch]
				if fn != nil {
					size, _ := strconv.Atoi(m[4])
					if size != fn.size && (m[2] != "7" && !strings.Contains(m[2], "NOSPLIT") || size != 0) {
						badf("wrong argument size %d; expected $...-%d", size, fn.size)
					}
				}
				localSize, _ = strconv.Atoi(m[3])
				localSize += archDef.intSize
				if archDef.lr {
					// Account for caller's saved LR
					localSize += archDef.intSize
				}
				argSize, _ = strconv.Atoi(m[4])
				if fn == nil && !strings.Contains(fnName, "<>") {
					badf("function %s missing Go declaration", fnName)
				}
				wroteSP = false
				haveRetArg = false
				continue
			} else if strings.Contains(line, "TEXT") && strings.Contains(line, "SB") {
				// function, but not visible from Go (didn't match asmTEXT), so stop checking
				flushRet()
				fn = nil
				fnName = ""
				continue
			}

			if strings.Contains(line, "RET") {
				retLine = append(retLine, lineno)
			}

			if fnName == "" {
				continue
			}

			if asmDATA.FindStringSubmatch(line) != nil {
				fn = nil
			}

			if archDef == nil {
				continue
			}

			if strings.Contains(line, ", "+archDef.stack) || strings.Contains(line, ",\t"+archDef.stack) {
				wroteSP = true
				continue
			}

			for _, m := range asmSP.FindAllStringSubmatch(line, -1) {
				if m[3] != archDef.stack || wroteSP {
					continue
				}
				off := 0
				if m[1] != "" {
					off, _ = strconv.Atoi(m[2])
				}
				if off >= localSize {
					if fn != nil {
						v := fn.varByOffset[off-localSize]
						if v != nil {
							badf("%s should be %s+%d(FP)", m[1], v.name, off-localSize)
							continue
						}
					}
					if off >= localSize+argSize {
						badf("use of %s points beyond argument frame", m[1])
						continue
					}
					badf("use of %s to access argument frame", m[1])
				}
			}

			if fn == nil {
				continue
			}

			for _, m := range asmUnnamedFP.FindAllStringSubmatch(line, -1) {
				off, _ := strconv.Atoi(m[2])
				v := fn.varByOffset[off]
				if v != nil {
					badf("use of unnamed argument %s; offset %d is %s+%d(FP)", m[1], off, v.name, v.off)
				} else {
					badf("use of unnamed argument %s", m[1])
				}
			}

			for _, m := range asmNamedFP.FindAllStringSubmatch(line, -1) {
				name := m[1]
				off := 0
				if m[2] != "" {
					off, _ = strconv.Atoi(m[2])
				}
				if name == "ret" || strings.HasPrefix(name, "ret_") {
					haveRetArg = true
				}
				v := fn.vars[name]
				if v == nil {
					// Allow argframe+0(FP).
					if name == "argframe" && off == 0 {
						continue
					}
					v = fn.varByOffset[off]
					if v != nil {
						badf("unknown variable %s; offset %d is %s+%d(FP)", name, off, v.name, v.off)
					} else {
						badf("unknown variable %s", name)
					}
					continue
				}
				asmCheckVar(badf, fn, line, m[0], off, v)
			}
		}
		flushRet()
	}
}

// asmParseDecl parses a function decl for expected assembly variables.
func (f *File) asmParseDecl(decl *ast.FuncDecl) map[string]*asmFunc {
	var (
		arch   *asmArch
		fn     *asmFunc
		offset int
		failed bool
	)

	addVar := func(outer string, v asmVar) {
		if vo := fn.vars[outer]; vo != nil {
			vo.inner = append(vo.inner, &v)
		}
		fn.vars[v.name] = &v
		for i := 0; i < v.size; i++ {
			fn.varByOffset[v.off+i] = &v
		}
	}

	addParams := func(list []*ast.Field) {
		for i, fld := range list {
			// Determine alignment, size, and kind of type in declaration.
			var align, size int
			var kind asmKind
			names := fld.Names
			typ := f.gofmt(fld.Type)
			switch t := fld.Type.(type) {
			default:
				switch typ {
				default:
					f.Warnf(fld.Type.Pos(), "unknown assembly argument type %s", typ)
					failed = true
					return
				case "int8", "uint8", "byte", "bool":
					size = 1
				case "int16", "uint16":
					size = 2
				case "int32", "uint32", "float32":
					size = 4
				case "int64", "uint64", "float64":
					align = arch.maxAlign
					size = 8
				case "int", "uint":
					size = arch.intSize
				case "uintptr", "iword", "Word", "Errno", "unsafe.Pointer":
					size = arch.ptrSize
				case "string", "ErrorString":
					size = arch.ptrSize * 2
					align = arch.ptrSize
					kind = asmString
				}
			case *ast.ChanType, *ast.FuncType, *ast.MapType, *ast.StarExpr:
				size = arch.ptrSize
			case *ast.InterfaceType:
				align = arch.ptrSize
				size = 2 * arch.ptrSize
				if len(t.Methods.List) > 0 {
					kind = asmInterface
				} else {
					kind = asmEmptyInterface
				}
			case *ast.ArrayType:
				if t.Len == nil {
					size = arch.ptrSize + 2*arch.intSize
					align = arch.ptrSize
					kind = asmSlice
					break
				}
				f.Warnf(fld.Type.Pos(), "unsupported assembly argument type %s", typ)
				failed = true
			case *ast.StructType:
				f.Warnf(fld.Type.Pos(), "unsupported assembly argument type %s", typ)
				failed = true
			}
			if align == 0 {
				align = size
			}
			if kind == 0 {
				kind = asmKind(size)
			}
			offset += -offset & (align - 1)

			// Create variable for each name being declared with this type.
			if len(names) == 0 {
				name := "unnamed"
				if decl.Type.Results != nil && len(decl.Type.Results.List) > 0 && &list[0] == &decl.Type.Results.List[0] && i == 0 {
					// Assume assembly will refer to single unnamed result as r.
					name = "ret"
				}
				names = []*ast.Ident{{Name: name}}
			}
			for _, id := range names {
				name := id.Name
				addVar("", asmVar{
					name: name,
					kind: kind,
					typ:  typ,
					off:  offset,
					size: size,
				})
				switch kind {
				case 8:
					if arch.ptrSize == 4 {
						w1, w2 := "lo", "hi"
						if arch.bigEndian {
							w1, w2 = w2, w1
						}
						addVar(name, asmVar{
							name: name + "_" + w1,
							kind: 4,
							typ:  "half " + typ,
							off:  offset,
							size: 4,
						})
						addVar(name, asmVar{
							name: name + "_" + w2,
							kind: 4,
							typ:  "half " + typ,
							off:  offset + 4,
							size: 4,
						})
					}

				case asmEmptyInterface:
					addVar(name, asmVar{
						name: name + "_type",
						kind: asmKind(arch.ptrSize),
						typ:  "interface type",
						off:  offset,
						size: arch.ptrSize,
					})
					addVar(name, asmVar{
						name: name + "_data",
						kind: asmKind(arch.ptrSize),
						typ:  "interface data",
						off:  offset + arch.ptrSize,
						size: arch.ptrSize,
					})

				case asmInterface:
					addVar(name, asmVar{
						name: name + "_itable",
						kind: asmKind(arch.ptrSize),
						typ:  "interface itable",
						off:  offset,
						size: arch.ptrSize,
					})
					addVar(name, asmVar{
						name: name + "_data",
						kind: asmKind(arch.ptrSize),
						typ:  "interface data",
||||||
off: offset + arch.ptrSize,
|
|
||||||
size: arch.ptrSize,
|
|
||||||
})
|
|
||||||
|
|
||||||
case asmSlice:
|
|
||||||
addVar(name, asmVar{
|
|
||||||
name: name + "_base",
|
|
||||||
kind: asmKind(arch.ptrSize),
|
|
||||||
typ: "slice base",
|
|
||||||
off: offset,
|
|
||||||
size: arch.ptrSize,
|
|
||||||
})
|
|
||||||
addVar(name, asmVar{
|
|
||||||
name: name + "_len",
|
|
||||||
kind: asmKind(arch.intSize),
|
|
||||||
typ: "slice len",
|
|
||||||
off: offset + arch.ptrSize,
|
|
||||||
size: arch.intSize,
|
|
||||||
})
|
|
||||||
addVar(name, asmVar{
|
|
||||||
name: name + "_cap",
|
|
||||||
kind: asmKind(arch.intSize),
|
|
||||||
typ: "slice cap",
|
|
||||||
off: offset + arch.ptrSize + arch.intSize,
|
|
||||||
size: arch.intSize,
|
|
||||||
})
|
|
||||||
|
|
||||||
case asmString:
|
|
||||||
addVar(name, asmVar{
|
|
||||||
name: name + "_base",
|
|
||||||
kind: asmKind(arch.ptrSize),
|
|
||||||
typ: "string base",
|
|
||||||
off: offset,
|
|
||||||
size: arch.ptrSize,
|
|
||||||
})
|
|
||||||
addVar(name, asmVar{
|
|
||||||
name: name + "_len",
|
|
||||||
kind: asmKind(arch.intSize),
|
|
||||||
typ: "string len",
|
|
||||||
off: offset + arch.ptrSize,
|
|
||||||
size: arch.intSize,
|
|
||||||
})
|
|
||||||
}
|
|
||||||
offset += size
|
|
||||||
}
|
|
||||||
}
|
|
||||||
}
|
|
||||||
|
|
||||||
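An aside on the recurring `offset += -offset & (align - 1)` lines above: for a power-of-two `align`, this rounds `offset` up to the next multiple of `align`. A minimal standalone sketch of the same bit trick (the `alignUp` helper name is ours, not from the vendored file):

```go
package main

import "fmt"

// alignUp rounds offset up to the next multiple of align,
// which must be a power of two. This is the same bit trick
// the asmdecl check uses as `offset += -offset & (align - 1)`:
// -offset & (align-1) is the distance to the next boundary.
func alignUp(offset, align int) int {
	return offset + (-offset & (align - 1))
}

func main() {
	fmt.Println(alignUp(5, 4)) // 8
	fmt.Println(alignUp(8, 4)) // 8 (already aligned)
	fmt.Println(alignUp(9, 8)) // 16
}
```

The check relies on this so that, e.g., an `int64` field on an architecture with `maxAlign == 8` never straddles an 8-byte boundary in the argument frame.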
	m := make(map[string]*asmFunc)
	for _, arch = range arches {
		fn = &asmFunc{
			arch:        arch,
			vars:        make(map[string]*asmVar),
			varByOffset: make(map[int]*asmVar),
		}
		offset = 0
		addParams(decl.Type.Params.List)
		if decl.Type.Results != nil && len(decl.Type.Results.List) > 0 {
			offset += -offset & (arch.maxAlign - 1)
			addParams(decl.Type.Results.List)
		}
		fn.size = offset
		m[arch.name] = fn
	}

	if failed {
		return nil
	}
	return m
}

// asmCheckVar checks a single variable reference.
func asmCheckVar(badf func(string, ...interface{}), fn *asmFunc, line, expr string, off int, v *asmVar) {
	m := asmOpcode.FindStringSubmatch(line)
	if m == nil {
		if !strings.HasPrefix(strings.TrimSpace(line), "//") {
			badf("cannot find assembly opcode")
		}
		return
	}

	// Determine operand sizes from instruction.
	// Typically the suffix suffices, but there are exceptions.
	var src, dst, kind asmKind
	op := m[1]
	switch fn.arch.name + "." + op {
	case "386.FMOVLP":
		src, dst = 8, 4
	case "arm.MOVD":
		src = 8
	case "arm.MOVW":
		src = 4
	case "arm.MOVH", "arm.MOVHU":
		src = 2
	case "arm.MOVB", "arm.MOVBU":
		src = 1

	// LEA* opcodes don't really read the second arg.
	// They just take the address of it.
	case "386.LEAL":
		dst = 4
	case "amd64.LEAQ":
		dst = 8
	case "amd64p32.LEAL":
		dst = 4
	default:
		switch fn.arch.name {
		case "386", "amd64":
			if strings.HasPrefix(op, "F") && (strings.HasSuffix(op, "D") || strings.HasSuffix(op, "DP")) {
				// FMOVDP, FXCHD, etc
				src = 8
				break
			}
			if strings.HasPrefix(op, "P") && strings.HasSuffix(op, "RD") {
				// PINSRD, PEXTRD, etc
				src = 4
				break
			}
			if strings.HasPrefix(op, "F") && (strings.HasSuffix(op, "F") || strings.HasSuffix(op, "FP")) {
				// FMOVFP, FXCHF, etc
				src = 4
				break
			}
			if strings.HasSuffix(op, "SD") {
				// MOVSD, SQRTSD, etc
				src = 8
				break
			}
			if strings.HasSuffix(op, "SS") {
				// MOVSS, SQRTSS, etc
				src = 4
				break
			}
			if strings.HasPrefix(op, "SET") {
				// SETEQ, etc
				src = 1
				break
			}
			switch op[len(op)-1] {
			case 'B':
				src = 1
			case 'W':
				src = 2
			case 'L':
				src = 4
			case 'D', 'Q':
				src = 8
			}
		case "ppc64", "ppc64le":
			// Strip standard suffixes to reveal size letter.
			m := ppc64Suff.FindStringSubmatch(op)
			if m != nil {
				switch m[1][0] {
				case 'B':
					src = 1
				case 'H':
					src = 2
				case 'W':
					src = 4
				case 'D':
					src = 8
				}
			}
		case "mips64", "mips64le":
			switch op {
			case "MOVB", "MOVBU":
				src = 1
			case "MOVH", "MOVHU":
				src = 2
			case "MOVW", "MOVWU", "MOVF":
				src = 4
			case "MOVV", "MOVD":
				src = 8
			}
		}
	}
	if dst == 0 {
		dst = src
	}

	// Determine whether the match we're holding
	// is the first or second argument.
	if strings.Index(line, expr) > strings.Index(line, ",") {
		kind = dst
	} else {
		kind = src
	}

	vk := v.kind
	vt := v.typ
	switch vk {
	case asmInterface, asmEmptyInterface, asmString, asmSlice:
		// allow reference to first word (pointer)
		vk = v.inner[0].kind
		vt = v.inner[0].typ
	}

	if off != v.off {
		var inner bytes.Buffer
		for i, vi := range v.inner {
			if len(v.inner) > 1 {
				fmt.Fprintf(&inner, ",")
			}
			fmt.Fprintf(&inner, " ")
			if i == len(v.inner)-1 {
				fmt.Fprintf(&inner, "or ")
			}
			fmt.Fprintf(&inner, "%s+%d(FP)", vi.name, vi.off)
		}
		badf("invalid offset %s; expected %s+%d(FP)%s", expr, v.name, v.off, inner.String())
		return
	}
	if kind != 0 && kind != vk {
		var inner bytes.Buffer
		if len(v.inner) > 0 {
			fmt.Fprintf(&inner, " containing")
			for i, vi := range v.inner {
				if i > 0 && len(v.inner) > 2 {
					fmt.Fprintf(&inner, ",")
				}
				fmt.Fprintf(&inner, " ")
				if i > 0 && i == len(v.inner)-1 {
					fmt.Fprintf(&inner, "and ")
				}
				fmt.Fprintf(&inner, "%s+%d(FP)", vi.name, vi.off)
			}
		}
		badf("invalid %s of %s; %s is %d-byte value%s", op, expr, vt, vk, inner.String())
	}
}

@@ -1,49 +0,0 @@
// Copyright 2013 The Go Authors. All rights reserved.
// Use of this source code is governed by a BSD-style
// license that can be found in the LICENSE file.

/*
This file contains the code to check for useless assignments.
*/

package main

import (
	"go/ast"
	"go/token"
	"reflect"
)

func init() {
	register("assign",
		"check for useless assignments",
		checkAssignStmt,
		assignStmt)
}

// TODO: should also check for assignments to struct fields inside methods
// that are on T instead of *T.

// checkAssignStmt checks for assignments of the form "<expr> = <expr>".
// These are almost always useless, and even when they aren't they are usually a mistake.
func checkAssignStmt(f *File, node ast.Node) {
	stmt := node.(*ast.AssignStmt)
	if stmt.Tok != token.ASSIGN {
		return // ignore :=
	}
	if len(stmt.Lhs) != len(stmt.Rhs) {
		// If LHS and RHS have different cardinality, they can't be the same.
		return
	}
	for i, lhs := range stmt.Lhs {
		rhs := stmt.Rhs[i]
		if reflect.TypeOf(lhs) != reflect.TypeOf(rhs) {
			continue // short-circuit the heavy-weight gofmt check
		}
		le := f.gofmt(lhs)
		re := f.gofmt(rhs)
		if le == re {
			f.Badf(stmt.Pos(), "self-assignment of %s to %s", re, le)
		}
	}
}