Add support for token dictionaries #336

Open · wants to merge 5 commits into master
Conversation

disconnect3d

Hey,

We recently had an internship project at Trail of Bits to improve go-fuzz, which was done by @vfsrfs.

We are aware of the ongoing official work on native fuzzing support, but since we still rely on go-fuzz we went ahead and improved some of its pain points, hence this PR. Feel free to drop it if you feel it is too much or you do not want to introduce further changes in go-fuzz.

Below I am pasting the description from the original PR merged into our fork of go-fuzz (trail-of-forks#2).


This PR adds support for dictionaries containing interesting keywords (tokens) that are useful for mutating inputs while fuzzing, particularly when fuzzing syntax-aware programs (#174). The change adds a -dict flag to go-fuzz, so the user can provide a dictionary file with useful tokens for the fuzzing campaign, e.g.:

-dict /path/dictionary.dict

The tokens parsed from the dictionary are stored in ROData.strLits, as those are the string literals that are used by the mutator engine when generating new fuzzing inputs.

The dictionary format accepted by the -dict flag is the same one used by AFL/libFuzzer (see https://github.com/google/AFL/tree/master/dictionaries).

This dictionary format defines one token per line. Each line consists of a name followed by an equals sign and the token in double quotes (e.g. name="token"). Binary sequences can be embedded by writing their hex values (e.g. \xNN) inside the token. To insert a backslash or a double quote within a token, it has to be escaped with a backslash (e.g. \\ or \"). \n and \t are recognized as well, since they can be useful for text-based protocols. Other problematic characters can be encoded with their hex values.
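For illustration, a small dictionary in this format might look like the following (the names and tokens here are made-up examples, not part of the PR):

```
# Lines starting with '#' and empty lines are ignored.
kw_select="SELECT"
kw_header="Content-Type:\ttext/html"
kw_quote="say \"hi\""
kw_magic="\x7fELF"
```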

To make this implementation fully compatible with AFL/libFuzzer dictionaries, token levels are supported. A level can be attached to any token by appending @<num> to its name, e.g.:

keyword@1="token"

Such tokens are loaded only if the dictionary level is greater than or equal to the specified number. The default dictionary level is 0, but it can be increased by appending @<num> to the dictionary path. E.g.:

-dict /path/dictionary.dict@1
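
For example, with the two hypothetical entries below, -dict /path/dictionary.dict loads only kw_basic, while -dict /path/dictionary.dict@1 loads both:

```
kw_basic="GET"
kw_rare@1="TRACE"
```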

@josharian (Collaborator)

@thepudds I'm going to leave this to you to review, if you want.

Note that there is a similar outstanding PR at #315. It might be worth looking at it, and at the comments there.

@CityOfLight77

@thepudds @dvyukov

```go
_, err := os.Stat(*flagDict)
if err != nil {
	// If not it might be because a dictLevel was provided by appending @<num> to the dict path
	atIndex := strings.LastIndex(*flagDict, "@")
```
Owner:

It seems that this logic could be simpler if we split on @ first. Otherwise we have too many branches and duplicated error handling. If there is no specific reason to try to open the file with the "@" still in the path, please split on @ before the first stat.
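A minimal sketch of the suggested shape; the helper name parseDictFlag and its exact signature are assumptions for illustration, not code from the PR:

```go
import (
	"fmt"
	"os"
	"strconv"
	"strings"
)

// parseDictFlag splits an optional "@<level>" suffix off the dictionary
// path before touching the filesystem, so there is a single stat and a
// single error path. If the suffix after the last "@" is not a number,
// the whole argument is treated as the path (it may simply contain "@").
func parseDictFlag(arg string) (path string, level int, err error) {
	path = arg
	if i := strings.LastIndex(arg, "@"); i != -1 {
		if n, convErr := strconv.Atoi(arg[i+1:]); convErr == nil {
			path, level = arg[:i], n
		}
	}
	if _, err = os.Stat(path); err != nil {
		return "", 0, fmt.Errorf("cannot open dictionary: %v", err)
	}
	return path, level, nil
}
```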

```go
}
dictLevel, err = strconv.Atoi((*flagDict)[atIndex+1:])
if err != nil {
	log.Printf("could not convert dict level using dict level 0 instead")
```
Owner:
Why not Fatalf? That's incorrect user input.

```
@@ -57,6 +63,32 @@ func main() {
	log.Fatalf("both -http and -worker are specified")
}

if *flagDict != "" {
```
Owner:
Please move this logic to a helper function. It's too low-level for main.
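The call site in main might then shrink to something like this sketch, reusing the hypothetical parseDictFlag helper from above and assuming dictPath and dictLevel are declared in the enclosing scope:

```go
if *flagDict != "" {
	var err error
	// All flag parsing and validation lives in the helper, not in main.
	dictPath, dictLevel, err = parseDictFlag(*flagDict)
	if err != nil {
		log.Fatalf("failed to load dictionary: %v", err) // invalid user input is fatal
	}
}
```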

```
@@ -116,6 +216,35 @@ func newHub(metadata MetaData) *Hub {
		ro.intLits = append(ro.intLits, []byte(lit.Val))
	}
}

if dictPath != "" {
	/*
```
Owner:
Please use C-style single-line // comments.

```go
}
token := parseDictTokenLine(&tokenLine, tokenLineNo)
if token != nil {
	// add token to ro.strLits
```
Owner:
This does not look useful. Remove.

```
@@ -73,6 +77,102 @@ type Stats struct {
	restarts uint64
}

func parseDictTokenLine(tokenLine *[]byte, tokenLineNo int) *[]byte {
```
Owner:
Please move this function somewhere near the bottom of the file (after its uses). It's not the most important function in the file, so it shouldn't come first.

```go
if bytes.HasPrefix(bytes.TrimSpace(tokenLine), []byte("#")) || len(tokenLine) == 0 {
	continue
}
token := parseDictTokenLine(&tokenLine, tokenLineNo)
```
Owner:
Why pass pointer to tokenLine?

```
@@ -73,6 +77,102 @@ type Stats struct {
	restarts uint64
}

func parseDictTokenLine(tokenLine *[]byte, tokenLineNo int) *[]byte {
```
Owner:
Why return pointer to []byte?
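
Both comments point at the same Go idiom: a slice header is already a small value (a pointer, a length, and a capacity), so []byte can be passed and returned by value, with nil still available to signal "no token". A sketch of the adjusted signature, with the decoding body elided:

```go
// parseDictTokenLine parses one dictionary line and returns the decoded
// token, or nil if the line is malformed. Passing and returning []byte
// by value is cheap because only the slice header is copied.
func parseDictTokenLine(tokenLine []byte, tokenLineNo int) []byte {
	token := make([]byte, 0, len(tokenLine))
	// ... decode the name="token" syntax and escapes into token ...
	return token
}
```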

```go
token = append(token, (*tokenLine)[index])
break

case byte('x'):
```
Owner:
Do these need to be cast to byte?
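They don't: 'x' is an untyped rune constant, so in a switch over a byte it converts implicitly. A standalone sketch:

```go
package main

import "fmt"

func main() {
	var b byte = 'x'
	switch b {
	case 'x': // untyped constant; no explicit byte('x') conversion needed
		fmt.Println("matched")
	}
}
```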
