Commit messages (newest first):
* Poll the tree and consider it during patch series triage. Note that
  the media lists are not monitored as of now, so the tree will only be
  used when we receive media patches through other mailing lists.
* Add some initial #syz invalid support to syz-cluster. For now, mark
  all findings as invalid and don't display that such series have
  findings on the web dashboard.
* Add the drm tree to the list of trees considered during patch series
  triage. Currently we sometimes hit false positives because of not
  considering this tree, see e.g.
  https://ci.syzbot.org/series/1ecba15f-79c3-40ff-99c6-8f3540bddf65
* A number of mm patches apply neither on top of torvalds nor on top of
  linux-next. The mm/mm-new branch seems to be a more reliable base.
* Instead of a predefined set of manually written syz-manager configs,
  construct the config dynamically from different bits. During triage,
  select not just one but all matching fuzzer configurations and then
  merge them together.
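The merge step described above can be sketched as follows. `FuzzConfig` and its fields are illustrative stand-ins, not the actual syz-cluster types:

```go
package main

import (
	"fmt"
	"sort"
)

// FuzzConfig is a hypothetical, simplified stand-in for the bits a
// syz-manager config could be assembled from.
type FuzzConfig struct {
	Name     string
	Syscalls []string // enabled syscalls
	Corpus   string   // corpus location, if any
}

// mergeConfigs merges all matching fuzzer configurations into one:
// syscall lists are united and deduplicated; the first non-empty
// corpus wins.
func mergeConfigs(configs []FuzzConfig) FuzzConfig {
	merged := FuzzConfig{Name: "merged"}
	seen := map[string]bool{}
	for _, c := range configs {
		for _, call := range c.Syscalls {
			if !seen[call] {
				seen[call] = true
				merged.Syscalls = append(merged.Syscalls, call)
			}
		}
		if merged.Corpus == "" {
			merged.Corpus = c.Corpus
		}
	}
	sort.Strings(merged.Syscalls) // keep the result deterministic
	return merged
}

func main() {
	net := FuzzConfig{Name: "net", Syscalls: []string{"socket", "sendmsg"}}
	fs := FuzzConfig{Name: "fs", Syscalls: []string{"open", "socket"}, Corpus: "fs-corpus"}
	merged := mergeConfigs([]FuzzConfig{net, fs})
	fmt.Println(merged.Syscalls) // [open sendmsg socket]
	fmt.Println(merged.Corpus)   // fs-corpus
}
```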
* During triage, process each fuzzing campaign separately, as they may
  have different base kernel revisions (e.g. if the newest revisions of
  the kernel no longer build/boot under the specific kernel
  configuration). Refactor the representation of the fuzzing targets in
  api.go.
* Specify a track name for each fuzzing campaign. It will help
  distinguish them once there are multiple ones.
* Instead of just checking whether the bug was also observed on the
  base kernel, additionally accept a regexp of expected bug titles.
* Adjust the workflow template and the API to run multiple fuzzing
  campaigns as part of a single patch series processing.
* There are a number of patch series that don't apply to torvalds, but
  do apply to linux-next. Since we don't fetch all maintainer trees,
  use linux-next as the last resort.
* Some series don't apply to torvalds, see e.g.
  https://ci.syzbot.org/series/f429573e-7862-4f1e-97dd-3215a235b1d3
* When fuzzing fs-related series, enable fs syscalls and use the fs
  corpus.
* If all symbol hashes between the base and the patched kernel match,
  there's no reason to spend time fuzzing the series. Add a 'skipped'
  status to the enum of possible session test results and set it from
  the fuzz-step.
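The skip decision can be sketched like this; the per-symbol hash representation (symbol name mapped to a hash string) is an assumption made for illustration:

```go
package main

import "fmt"

// sameSymbols reports whether the base and the patched kernels have
// identical per-symbol hashes, in which case fuzzing the series can be
// skipped entirely.
func sameSymbols(base, patched map[string]string) bool {
	if len(base) != len(patched) {
		return false
	}
	for name, hash := range base {
		if patched[name] != hash {
			return false
		}
	}
	return true
}

// statusSkipped illustrates the new value added to the enum of
// possible session test results (the real constant may be named
// differently).
const statusSkipped = "skipped"

func main() {
	base := map[string]string{"do_sys_open": "aa11", "vfs_read": "bb22"}
	fmt.Println(sameSymbols(base, map[string]string{"do_sys_open": "aa11", "vfs_read": "bb22"})) // true
	fmt.Println(sameSymbols(base, map[string]string{"do_sys_open": "cc33", "vfs_read": "bb22"})) // false
}
```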
* There are cases when we do not need the "if the patched code is not
  reached within 30 minutes, abort fuzzing" check. This is the case,
  e.g., for mm/ code, which is not fully instrumented by KCOV.
* Keep the fuzz-step parameters in a separate structure to minimize
  field duplication. It will also facilitate the reuse of the same
  syzkaller config in several fuzzing configurations.
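In Go, this kind of deduplication is naturally expressed via struct embedding. The field names below are hypothetical, not the actual syz-cluster ones:

```go
package main

import "fmt"

// FuzzStepParams keeps the parameters shared by all fuzzing
// configurations in one place.
type FuzzStepParams struct {
	CorpusURL      string
	FocusOnPatched bool // abort if the patched code is never reached
	TimeoutMin     int
}

// FuzzTarget reuses the same syzkaller config in several fuzzing
// configurations by embedding the shared parameters.
type FuzzTarget struct {
	Name string
	FuzzStepParams
}

func main() {
	shared := FuzzStepParams{CorpusURL: "ci-corpus", TimeoutMin: 90}
	net := FuzzTarget{Name: "net", FuzzStepParams: shared}
	bpf := FuzzTarget{Name: "bpf", FuzzStepParams: shared}
	// Embedded fields are promoted, so both targets expose the shared
	// parameters directly.
	fmt.Println(net.CorpusURL == bpf.CorpusURL) // true
}
```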
* Use a custom set of enabled syscalls.
* Add a config to fuzz kvm patches. Listen on the kvm mailing list.
* Fuzzing targets are not always well represented by their own kernel
  trees, so let's select a kernel tree and a fuzzing config separately.
  Drop explicit priorities and instead just sort the lists of trees and
  configs.
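A sketch of priority-free selection: order the candidates by name so the choice is deterministic. The `Tree` type and the ordering key are assumptions; the real code may sort by a different attribute:

```go
package main

import (
	"fmt"
	"sort"
)

// Tree is a minimal stand-in for a kernel tree candidate.
type Tree struct {
	Name string
}

// sortTrees replaces explicit priorities with a deterministic
// lexicographic order.
func sortTrees(trees []Tree) {
	sort.Slice(trees, func(i, j int) bool { return trees[i].Name < trees[j].Name })
}

func main() {
	trees := []Tree{{"net-next"}, {"bpf"}, {"mainline"}}
	sortTrees(trees)
	for _, t := range trees {
		fmt.Println(t.Name) // bpf, mainline, net-next
	}
}
```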
* We used to only upload them on triage failure, but let's improve the
  inspectability even for successfully finished triage jobs. Slightly
  refactor the controller API around the triage result submission.
* Fuzz bpf patches differently from net patches. Monitor the netfilter
  and bpf mailing lists.
* Current results show that way too many series do not apply to the
  non-next versions of their corresponding trees. So let's make -next
  the default choice unless the opposite was specified in the series
  subject.
* Add several more network-related trees, including those that will
  only be selected if mentioned directly.
* Sometimes patch series directly hint at the kernel tree they should
  be applied to. Extract and remember this information.
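One possible way to extract such a hint from a subject tag like `[PATCH net-next v2 0/3]`. The tag parsing and the list of known trees are simplified assumptions:

```go
package main

import (
	"fmt"
	"regexp"
	"strings"
)

var tagRe = regexp.MustCompile(`\[([^\]]*)\]`)

// treeHint returns the kernel tree named in the bracketed subject tag,
// or "" if there is none.
func treeHint(subject string) string {
	m := tagRe.FindStringSubmatch(subject)
	if m == nil {
		return ""
	}
	// Check longer names first so that "net-next" is not reported as "net".
	known := []string{"net-next", "bpf-next", "net", "bpf", "mm"}
	for _, tree := range known {
		for _, word := range strings.Fields(m[1]) {
			if word == tree {
				return tree
			}
		}
	}
	return ""
}

func main() {
	fmt.Println(treeHint("[PATCH net-next v2 0/3] netlink fixes")) // net-next
	fmt.Println(treeHint("[PATCH 1/2] mm: fix something"))         // prints an empty line
}
```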
* The Cc list used to be both within the Series object and within the
  SessionReport object that also encloses Series. And since only one of
  them was actually filled, we were unable to actually Cc the people
  from the series. Keep only the Cc list in the Series object and
  adjust the tests.
* Share not just the tree name (mainline, net, etc.), but also the full
  URL to check out the repository. For that, add one more field to the
  Build entity and adjust the email reporting templates.
* Fill in build details for each finding and display that information
  in the report email. Extend the test that verifies how
  api.SessionReport is filled.
* For now, only share it for the skipped series.
* Refactor the Tree structure to host both the kernel config and the
  fuzzer config. Add some basic net fuzzing configs.
* The first revision of the email template that will be used for
  reporting the findings. This PR adds more fields to the pkg/api
  package, but these are not filled by the implementation yet. That
  will be done separately.
* Once a new kernel revision becomes available, build it to figure out
  whether it's buildable. This information will be used in the triage
  step to determine the right base kernel revision.
* Instead of giving several base commits to try, make a more concrete
  decision at the triage step and return only one option. This relies
  on the triager always having information about the current state of
  each tree, which will be added in the following commit. As a result,
  the workflow script becomes much simpler.
* Leave a TODO with a better Linux repository mirror. We still need to
  double-check that it works fine on syz-cluster before switching to
  it.
* Provide an API to set up the reporting of finished sessions for which
  syz-cluster collected reportable findings.
  The actual sending of the results is to be done in a separate
  component that would:
  1) Call Next() to get the next report to send.
  2) Call Confirm() to confirm that the report has been sent.
  3) Call Upstream() if the report has been moderated and needs to be
     sent to e.g. public mailing lists.
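The intended call sequence can be sketched with a hypothetical client loop. The method names mirror the commit message; the `Report` shape, signatures, and the in-memory fake are assumptions:

```go
package main

import "fmt"

// Report is an illustrative stand-in for a pending session report.
type Report struct {
	ID       string
	Findings []string
}

// ReporterAPI sketches the API from the commit message.
type ReporterAPI interface {
	Next() (*Report, error)     // fetch the next report to send
	Confirm(id string) error    // mark the report as sent
	Upstream(id string) error   // forward a moderated report upstream
}

// fakeAPI is an in-memory stand-in used to show the intended call order.
type fakeAPI struct {
	queue     []*Report
	confirmed []string
}

func (f *fakeAPI) Next() (*Report, error) {
	if len(f.queue) == 0 {
		return nil, nil // nothing left to send
	}
	r := f.queue[0]
	f.queue = f.queue[1:]
	return r, nil
}

func (f *fakeAPI) Confirm(id string) error {
	f.confirmed = append(f.confirmed, id)
	return nil
}

func (f *fakeAPI) Upstream(id string) error { return nil }

// sendAll drains the queue: for every report, deliver it and confirm.
func sendAll(api ReporterAPI) ([]string, error) {
	var sent []string
	for {
		r, err := api.Next()
		if err != nil {
			return sent, err
		}
		if r == nil {
			return sent, nil
		}
		// ... deliver the report (e.g. by email) here ...
		if err := api.Confirm(r.ID); err != nil {
			return sent, err
		}
		sent = append(sent, r.ID)
	}
}

func main() {
	api := &fakeAPI{queue: []*Report{{ID: "a"}, {ID: "b"}}}
	sent, _ := sendAll(api)
	fmt.Println(sent) // [a b]
}
```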
* In the previous version of the code, series-tracker was directly
  pushing patch series into the DB, and the controller auto-created
  fuzzing sessions. Mediate these via the controller API instead.
  Instead of creating Session objects on the fly, pre-create them and
  let the processor take them one by one.
  The approach has multiple benefits:
  1) The same API might be used for patch series sources other than
     LKML.
  2) Since the existence of a Session object is no longer a sign that
     we have started working on it, it allows for a more precise status
     display (not created/waiting/running/finished).
  3) We could manually push older patch series and manually trigger
     fuzzing sessions to experimentally measure the bug detection
     rates.
  4) The controller tests can be organized by relying only on the API
     offered by the component.
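Benefit 2) above can be illustrated with a hypothetical status function; the `Session` fields are invented for this sketch and do not reflect the real schema:

```go
package main

import "fmt"

// Session captures only what the sketch needs: whether the object was
// pre-created, picked up by the processor, and finished.
type Session struct {
	Created  bool
	Started  bool
	Finished bool
}

// displayStatus maps a (possibly pre-created) session onto the four
// states the dashboard can now distinguish.
func displayStatus(s Session) string {
	switch {
	case !s.Created:
		return "not created"
	case !s.Started:
		return "waiting"
	case !s.Finished:
		return "running"
	default:
		return "finished"
	}
}

func main() {
	fmt.Println(displayStatus(Session{}))                             // not created
	fmt.Println(displayStatus(Session{Created: true}))                // waiting
	fmt.Println(displayStatus(Session{Created: true, Started: true})) // running
}
```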
* It makes it possible to immediately distinguish the series that were
  actually processed from the series that were skipped early on. By
  storing a string, we also make it apparent why exactly the series was
  skipped.
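A sketch of the idea, assuming the skip reason is stored as a nullable string on the Series entity (the field name is hypothetical):

```go
package main

import "fmt"

// Series keeps an optional skip reason: nil means the series went
// through the full pipeline; a non-nil string records why it was
// skipped early on.
type Series struct {
	SkipReason *string
}

func status(s Series) string {
	if s.SkipReason != nil {
		return "skipped: " + *s.SkipReason
	}
	return "processed"
}

func main() {
	reason := "does not apply to any known tree"
	fmt.Println(status(Series{SkipReason: &reason})) // skipped: does not apply to any known tree
	fmt.Println(status(Series{}))                    // processed
}
```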
* Findings are crashes and build/boot/test errors that happened during
  the patch series processing.
* The basic code of a K8S-based cluster that:
  * Aggregates new LKML patch series.
  * Determines the kernel trees to apply them to.
  * Builds the base and the patched kernel.
  * Displays the results on a web dashboard.
  This is a very rudimentary version with a lot of TODOs that provides
  a skeleton for further work. The project makes use of Argo workflows
  and Spanner DB. Bootstrap is used for the web interface.
  Overall structure:
  * syz-cluster/dashboard: a web dashboard listing patch series and
    their test results.
  * syz-cluster/series-tracker: polls Lore archives and submits the new
    patch series to the DB.
  * syz-cluster/controller: schedules workflows and provides an API for
    them.
  * syz-cluster/kernel-disk: a cron job that keeps a kernel checkout up
    to date.
  * syz-cluster/workflow/*: workflow steps.
  For the DB structure, see syz-cluster/pkg/db/migrations/*.