Detect fuzzing issues by considering past results #2054
I like these ideas a lot and would be more than happy to review PRs. Regarding third-party code, my personal position is that any third-party code in your target is, from a security standpoint, the same as your own code, as long as it's reachable/triggerable from untrusted input. So I think it's a bit more nuanced than just excluding third-party code. In general I like the direction of these ideas and would be happy to land them. I think these would require most changes to be done in the webapp rather than core, but I'm happy in either case to review and get PRs landed.
Happy to hear you are interested. It will take a bit before I have some real results as I'm still getting familiar with the code.
I understand your point to be that third-party code included in the project can have the same impact on security as the project's own code. I definitely agree; however, what I am not quite sure about is who is responsible for testing/fuzzing the third-party code. So maybe we can discuss this a bit. Thinking about this some more, we could differentiate between:

1. third-party code that is vendored (copied) into the project's repository, and
2. third-party dependencies that are built from their upstream sources.
I would only exclude code coverage for category 2. I guess the alternative would be to duplicate the fuzzer harnesses for this dependency, which seems wasteful to me. There is, however, the argument that the project might use the library code in a specific way that is not already tested for. For me the big reason to exclude code coverage of these dependencies is to make the coverage metric more meaningful.

Coming back to the grpc-httpjson-transcoding example, I actually made a mistake: the code is not vendored but should be of category 2. So if the "real" coverage of this project drops, we would not really know. A current introspector report also seems to suggest that there is hardly any fuzzing going on. Is this just because the runtime coverage is higher than the statically reachable code?
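To illustrate the metric argument, here is a minimal Python sketch of recomputing line coverage with dependency code excluded. It is purely illustrative: the per-file data layout, the `THIRD_PARTY_PREFIXES` list, and the function name are assumptions for this example, not fuzz-introspector's actual data model or API.

```python
# Hypothetical sketch: recompute line coverage with third-party code excluded.
# The input format (file path -> (covered_lines, total_lines)) and the prefix
# list are illustrative assumptions, not fuzz-introspector's real data model.
from typing import Dict, Iterable, Tuple

THIRD_PARTY_PREFIXES = ("third_party/", "external/", "vendor/")  # assumed layout


def line_coverage(per_file: Dict[str, Tuple[int, int]],
                  exclude_prefixes: Iterable[str] = ()) -> float:
    """Return the covered/total line ratio, skipping files under excluded prefixes."""
    covered = total = 0
    for path, (cov, tot) in per_file.items():
        if any(path.startswith(prefix) for prefix in exclude_prefixes):
            continue
        covered += cov
        total += tot
    return covered / total if total else 0.0


if __name__ == "__main__":
    report = {
        "src/transcoder.cc": (120, 300),
        "external/protobuf/parser.cc": (900, 1000),  # category 2 dependency
    }
    print("overall coverage:", line_coverage(report))
    print("project-only coverage:", line_coverage(report, THIRD_PARTY_PREFIXES))
```

With the dependency included, the large `external/` file makes the overall number look healthy even if the project's own code is barely exercised; excluding it surfaces that gap.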
Hello, as part of some research we analyzed fuzzer performance degradation by looking at the reasons why fuzzing coverage decreases over time for C/C++ projects in OSS-Fuzz. We found that there are several types of issues that are easier to detect by comparing against past reports.
I would be happy to implement these metrics if you are interested.
This is also related to diffing runs: #734
I can also provide more examples if you want; I just wanted to keep it short.
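As a rough illustration of the kind of comparison against past reports I have in mind, here is a minimal Python sketch that flags metrics which dropped between two runs. The report layout, the threshold, and the function name are hypothetical placeholders, not an existing OSS-Fuzz or fuzz-introspector interface.

```python
# Hypothetical sketch: flag fuzzing issues by comparing the current report
# to a past one. The report layout (metric name -> value) and the threshold
# are illustrative assumptions, not an existing OSS-Fuzz data format.
from typing import Dict, List


def detect_regressions(past: Dict[str, float],
                       current: Dict[str, float],
                       max_relative_drop: float = 0.10) -> List[str]:
    """Return human-readable warnings for metrics that dropped noticeably."""
    warnings = []
    for metric, old_value in past.items():
        new_value = current.get(metric)
        if new_value is None:
            warnings.append(f"{metric}: present in past report but missing now")
            continue
        if old_value > 0 and (old_value - new_value) / old_value > max_relative_drop:
            warnings.append(
                f"{metric}: dropped from {old_value:.2f} to {new_value:.2f}")
    return warnings


if __name__ == "__main__":
    past_report = {"runtime_line_coverage": 0.62, "reached_functions": 1450}
    current_report = {"runtime_line_coverage": 0.41, "reached_functions": 1460}
    for warning in detect_regressions(past_report, current_report):
        print("WARNING:", warning)
```

The same kind of comparison could be extended to per-fuzzer or per-file metrics, which is where the overlap with the run-diffing idea in #734 comes in.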