Copy everything into the build context.
Add a .dockerignore file to avoid copying files and folders that are
definitely unnecessary.
Check that copyright notices are present in Dockerfiles.
Remap remote branches to local ones both when polling remote
repositories and when cloning the distributed repository.
This ensures that the branches remain accessible via
TreeName/BranchName (this was broken by the latest changes).
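A minimal sketch of such a remap using a git-style fetch refspec; the tree-name prefix scheme here is an assumption, not necessarily the exact one the project uses:

```go
package main

import "fmt"

// remapRefspec builds a fetch refspec that stores a remote's branches
// under a tree-name prefix, so that a branch of e.g. the "net" tree stays
// addressable as net/<branch> after fetching or cloning. The exact naming
// scheme used by the project is an assumption here.
func remapRefspec(tree string) string {
	return fmt.Sprintf("+refs/heads/*:refs/heads/%s/*", tree)
}

func main() {
	fmt.Println(remapRefspec("net")) // +refs/heads/*:refs/heads/net/*
}
```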
Instead of a complicated overlayfs setup, do a lightweight git clone
in a way that the cloned local copy keeps referencing the git object
storage on the NFS.
It's simpler code-wise and will hopefully be less susceptible to
failures when local git operations coincide with a git fetch on the
shared repository.
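Git supports this kind of lightweight local clone natively via `--shared` (or `--reference`), which makes the clone borrow objects from the source repository's object store. A hedged sketch of building such an invocation; the paths and exact flags are illustrative:

```go
package main

import "fmt"

// cloneArgs builds the arguments for "git clone --shared": the resulting
// working copy records the shared repository in
// .git/objects/info/alternates and keeps reading objects from it instead
// of copying them. The exact flags used by the project may differ.
func cloneArgs(sharedRepo, workDir string) []string {
	return []string{"clone", "--shared", sharedRepo, workDir}
}

func main() {
	fmt.Println(cloneArgs("/nfs/kernels/linux.git", "/tmp/linux-work"))
}
```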
When a triage or build step coincides with a cron job that polls new
kernel trees, it often fails because the git command notices that the
repository is being updated.
In this case, the step logs an error and exits with status=1. Argo
Workflows offers functionality to retry such steps up to a configured
number of times with exponentially increasing backoffs.
Configure the build and triage step templates to retry 3 times, with 5
and then 10 minutes between the retries.
During triage, process each fuzzing campaign separately, as they may
have different base kernel revisions (e.g. if the newest revisions of
the kernel no longer build/boot under a specific kernel configuration).
Refactor the representation of the fuzzing targets in api.go.
After this change, it fits more naturally into Go's error-handling
functionality.
The image is to be deprecated.
Closes #6350.
When determining whether a patch series is worth fuzzing, consider not
only the hashes of .text symbols, but also the hashes of the global
(static and non-static) variables.
As before, calculate the hashes during build and process them at the
beginning of the fuzz step.
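A hedged sketch of the comparison step, assuming the build step emits a symbol-name → content-hash map that now covers both .text functions and global variables (names and format are illustrative):

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
	"sort"
)

// hashContents hashes a symbol's bytes (function body or variable data).
func hashContents(data []byte) string {
	sum := sha256.Sum256(data)
	return hex.EncodeToString(sum[:])
}

// changedSymbols returns the names whose hashes differ between the base
// and the patched build, plus names present on only one side.
func changedSymbols(base, patched map[string]string) []string {
	var diff []string
	for name, h := range patched {
		if base[name] != h {
			diff = append(diff, name)
		}
	}
	for name := range base {
		if _, ok := patched[name]; !ok {
			diff = append(diff, name)
		}
	}
	sort.Strings(diff)
	return diff
}

func main() {
	base := map[string]string{
		"do_fork":    hashContents([]byte("old body")),
		"sysctl_foo": hashContents([]byte("old value")), // a global variable
	}
	patched := map[string]string{
		"do_fork":    hashContents([]byte("old body")),
		"sysctl_foo": hashContents([]byte("new value")),
	}
	fmt.Println(changedSymbols(base, patched)) // [sysctl_foo]
}
```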
Fix the following error:
runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x70 pc=0x16e3c5e]
at main.reportResults ( /syz-cluster/workflow/build-step/main.go:146 )
at main.main ( /syz-cluster/workflow/build-step/main.go:84 )
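The commit doesn't include the diff, but the usual fix for such a crash is a nil check before dereferencing; a hypothetical sketch of the pattern (the real reportResults signature is not shown in the log):

```go
package main

import "fmt"

// BuildResult is a stand-in for whatever struct main.go:146 dereferenced.
type BuildResult struct{ Log string }

// reportResults guards against the nil result that an earlier failed step
// can hand over; dereferencing it unconditionally is what produced the
// SIGSEGV in the trace above.
func reportResults(res *BuildResult) string {
	if res == nil {
		return "no build result"
	}
	return res.Log
}

func main() {
	fmt.Println(reportResults(nil))                     // no build result
	fmt.Println(reportResults(&BuildResult{Log: "ok"})) // ok
}
```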
In case of an infrastructure-related error, do not report the build as
failed, but rather just report the error status for the particular
session test.
During smoke builds, we may have a tree name/branch name pair instead
of just a commit hash (which is what normal kernel build requests
carry). Support both types of requests.
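The two request flavours can be sketched as follows; the struct and field names are hypothetical, not the project's actual API:

```go
package main

import "fmt"

// BuildRequest sketches the two request flavours described above; field
// names are hypothetical.
type BuildRequest struct {
	CommitHash string // set for normal kernel build requests
	TreeName   string // set (with BranchName) for smoke builds
	BranchName string
}

// checkoutTarget picks what to check out: an exact commit if we have one,
// otherwise the tip of the given tree/branch pair.
func checkoutTarget(r BuildRequest) string {
	if r.CommitHash != "" {
		return r.CommitHash
	}
	return r.TreeName + "/" + r.BranchName
}

func main() {
	fmt.Println(checkoutTarget(BuildRequest{CommitHash: "deadbeef"}))
	fmt.Println(checkoutTarget(BuildRequest{TreeName: "net", BranchName: "main"}))
}
```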
For smoke builds, move the corresponding code below the error check for
the kernel checkout.
Hash the code section of the individual symbols from vmlinux.o and use
it to determine which functions changed their bodies between the base
and the patched build.
If the number of affected symbols is reasonable (<5%), fuzz the series
with the highest priority.
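The <5% threshold can be sketched as a simple predicate; the cut-off comes from the message above, while the function name is made up:

```go
package main

import "fmt"

// worthFuzzing returns true when the share of symbols whose body hashes
// changed between the base and the patched build is below 5%; a much
// larger diff usually indicates the comparison itself is unreliable.
func worthFuzzing(changed, total int) bool {
	if total == 0 {
		return false
	}
	return float64(changed)/float64(total) < 0.05
}

func main() {
	fmt.Println(worthFuzzing(120, 50000))   // true: a focused patch series
	fmt.Println(worthFuzzing(10000, 50000)) // false: 20% of symbols changed
}
```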
Share not just the tree name (mainline, net, etc.), but also the full
URL to check out the repository.
To that end, add one more field to the Build entity and adjust the
email reporting templates.
Extract Report/Log from the errors returned by build.Image().
Before incorporating it into the process, let's see how reliable this
value is at the moment.
Accept IMAGE_PREFIX and IMAGE_TAG parameters that allow reusing the
Makefile and much of the k8s configuration in both local and prod
environments.
Refactor the Makefile: define build-* and push-* rules and use
templates to avoid repetition.
Once a new kernel revision becomes available, build it to determine
whether it's buildable. The triage step will use this information to
pick the right base kernel revision.
Provide an API to set up the reporting of finished sessions for which
syz-cluster collected reportable findings.
The actual sending of the results is to be done in a separate component
that would:
1) Call Next() to get the next report to send.
2) Call Confirm() to confirm that the report has been sent.
3) Call Upstream() if the report has been moderated and needs to be sent
to e.g. public mailing lists.
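A minimal in-memory sketch of the Next/Confirm contract (Upstream is analogous and omitted here; all the types below are illustrative, not the real API):

```go
package main

import "fmt"

// Report is a stand-in for a reportable finding of a finished session.
type Report struct{ ID string }

// reportQueue mimics the polling contract: Next hands out the next report
// to send, Confirm acknowledges it was sent and removes it.
type reportQueue struct{ pending []*Report }

func (q *reportQueue) Next() *Report {
	if len(q.pending) == 0 {
		return nil
	}
	return q.pending[0]
}

func (q *reportQueue) Confirm(id string) {
	if len(q.pending) > 0 && q.pending[0].ID == id {
		q.pending = q.pending[1:]
	}
}

func main() {
	q := &reportQueue{pending: []*Report{{ID: "r1"}, {ID: "r2"}}}
	for r := q.Next(); r != nil; r = q.Next() {
		fmt.Println("sent", r.ID)
		q.Confirm(r.ID)
	}
}
```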
In the previous version of the code, series-tracker directly pushed
patch series into the DB and the controller auto-created fuzzing
sessions.
Mediate these via the controller API instead.
Instead of creating Session objects on the fly, pre-create them and
let the processor take them one by one.
The approach has multiple benefits:
1) The same API can be used for patch series sources other than LKML.
2) Since the existence of a Session object no longer implies that we
have started working on it, it allows for a more precise status
display (not created/waiting/running/finished).
3) We can manually push older patch series and manually trigger
fuzzing sessions to experimentally measure the bug detection rates.
4) Controller tests can be written purely against the API offered by
the component.
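The more precise status display from point 2 could be modelled with an explicit state enum; the set of states matches the message above, while the Go names are made up:

```go
package main

import "fmt"

// SessionStatus enumerates the lifecycle stages a pre-created Session can
// be displayed in; the set of states follows the commit message, the Go
// identifiers are illustrative.
type SessionStatus int

const (
	StatusNotCreated SessionStatus = iota
	StatusWaiting
	StatusRunning
	StatusFinished
)

func (s SessionStatus) String() string {
	return [...]string{"not created", "waiting", "running", "finished"}[s]
}

func main() {
	fmt.Println(StatusWaiting, "->", StatusRunning, "->", StatusFinished)
}
```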
It will be important once we deploy to GKE.
For now, let's just set some limits; we'll adjust them over time.
Use the syzbot container as the base.
Use ADD instead of wget.
Record the logs from the build and fuzzing steps.
Run a smoke test on the base kernel build and report back the results.
It's not necessary - submit the results from the individual steps
instead.
Report patched kernel build failures as findings.
The basic code of a K8S-based cluster that:
* Aggregates new LKML patch series.
* Determines the kernel trees to apply them to.
* Builds the base and the patched kernel.
* Displays the results on a web dashboard.
This is a very rudimentary version with a lot of TODOs that
provides a skeleton for further work.
The project makes use of Argo workflows and Spanner DB.
Bootstrap is used for the web interface.
Overall structure:
* syz-cluster/dashboard: a web dashboard listing patch series
and their test results.
* syz-cluster/series-tracker: polls Lore archives and submits
the new patch series to the DB.
* syz-cluster/controller: schedules workflows and provides API for them.
* syz-cluster/kernel-disk: a cron job that keeps a kernel checkout up to date.
* syz-cluster/workflow/*: workflow steps.
For the DB structure see syz-cluster/pkg/db/migrations/*.