Commit log

Copy everything into the build context.
Add a .dockerignore file to avoid copying clearly unnecessary
files and folders.
Check that copyright notices are present in the Dockerfiles.
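The ignore list itself is not shown in the log; a minimal sketch of what such a .dockerignore might contain (every entry below is an assumption, not the actual file):

```
# Hypothetical .dockerignore sketch: keep the build context small.
.git
docs/
bin/
**/*.md
```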
It's become necessary after #6533.
Print a bit more info to the logs to make them more understandable.
For some reason, it does not download the newer toolchain versions
automatically.
Take the web dashboard URL from the config and use it to generate links
for logs, reproducers, etc.
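A minimal sketch of such link generation, assuming the base URL comes from the config (the function and path layout are assumptions, not the actual code):

```go
// Hypothetical sketch: build dashboard links from a configured base URL.
package main

import (
	"fmt"
	"net/url"
)

// dashboardLink joins the configured dashboard base URL with a
// resource kind (e.g. "logs") and an object ID.
func dashboardLink(base, resource, id string) (string, error) {
	return url.JoinPath(base, resource, id)
}

func main() {
	link, _ := dashboardLink("https://dashboard.example.com", "logs", "abc123")
	fmt.Println(link) // https://dashboard.example.com/logs/abc123
}
```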
Make blob store URIs depend on the IDs explicitly passed into the
Write() function. In many cases this removes the need to distinguish
between overwriting an already saved object and saving it for the
first time.
Keep storing the object to the blob storage first and only then
submitting the entities to Spanner. This will lead to some wasted space,
but we'll add garbage collection at some point.
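The idea can be sketched as follows: deriving the storage key purely from the caller-supplied ID makes Write() idempotent, so rewriting the same object simply overwrites the same key (all names here are assumptions):

```go
// Hypothetical sketch of ID-derived blob URIs.
package main

import "fmt"

// blobURI maps an object kind and ID to a deterministic storage key.
func blobURI(bucket, kind, id string) string {
	return fmt.Sprintf("gs://%s/%s/%s", bucket, kind, id)
}

func main() {
	// The same ID always yields the same URI, so a repeated Write()
	// overwrites the previous blob instead of creating a new one.
	fmt.Println(blobURI("blobs", "build-log", "session-42"))
}
```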
As the cluster is private, use the ClusterIP type to request only a
cluster-internal IP.
Since the web dashboard will need to be exposed via a load balancer,
set the necessary metadata annotation.
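A minimal sketch of the corresponding Service manifest (the name, selector, and ports are assumptions; the log does not name the exact dashboard annotation, so it is not reproduced here):

```yaml
# Hypothetical sketch; names and ports are assumptions.
apiVersion: v1
kind: Service
metadata:
  name: controller
spec:
  type: ClusterIP      # only a cluster-internal IP is allocated
  selector:
    app: controller
  ports:
    - port: 80
      targetPort: 8080
```

The dashboard Service would instead use `type: LoadBalancer` together with the provider-specific metadata annotation mentioned above.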
Environment variables are convenient for storing values like DB or GCS
bucket names, but structured formats are more convenient for the actual
service configuration.
Separate global-config from global-config-env and add the functionality
that queries and parses the config options.
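The split described above might look like the following sketch: flat values stay in the environment, while the service configuration is parsed from a structured document (JSON is used here purely for illustration; all field and variable names are assumptions):

```go
// Hypothetical sketch of env-based vs. structured configuration.
package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// ServiceConfig holds the structured part of the configuration.
type ServiceConfig struct {
	ParallelWorkflows int    `json:"parallel_workflows"`
	DashboardURL      string `json:"dashboard_url"`
}

// loadConfig parses the structured config document.
func loadConfig(raw []byte) (*ServiceConfig, error) {
	var cfg ServiceConfig
	if err := json.Unmarshal(raw, &cfg); err != nil {
		return nil, err
	}
	return &cfg, nil
}

func main() {
	// Flat values such as bucket names remain environment variables.
	os.Setenv("BLOB_BUCKET", "blobs")
	cfg, _ := loadConfig([]byte(`{"parallel_workflows": 4, "dashboard_url": "http://localhost:8080"}`))
	fmt.Println(os.Getenv("BLOB_BUCKET"), cfg.ParallelWorkflows)
}
```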
If a workflow step crashed or timed out, such steps used to keep the
Running status even though the session itself may have long since
finished.
To prevent this inconsistency, when finishing each session, go through
all remaining running steps and update their status to Error.
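The cleanup can be sketched as follows (the type and status names are assumptions):

```go
// Hypothetical sketch: when a session finishes, any step still marked
// running must have crashed or timed out, so flip it to error.
package main

import "fmt"

type Step struct {
	Name   string
	Status string
}

// finishSession closes out a session's steps so statuses stay consistent.
func finishSession(steps []Step) {
	for i := range steps {
		if steps[i].Status == "running" {
			steps[i].Status = "error"
		}
	}
}

func main() {
	steps := []Step{{"build", "ok"}, {"fuzz", "running"}}
	finishSession(steps)
	fmt.Println(steps[1].Status) // error
}
```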
Add simple Previous/Next navigation for the list of series.
For now, just rely on SQL's LIMIT/OFFSET functionality.
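A minimal sketch of the LIMIT/OFFSET mapping (the page size and the extra-row trick are assumptions, not necessarily the actual code):

```go
// Hypothetical sketch: map a 1-based page number to SQL LIMIT/OFFSET.
package main

import "fmt"

const pageSize = 20

// pageBounds returns the LIMIT and OFFSET for a page; it asks for one
// extra row so the caller can tell whether a Next page exists.
func pageBounds(page int) (limit, offset int) {
	if page < 1 {
		page = 1
	}
	return pageSize + 1, (page - 1) * pageSize
}

func main() {
	limit, offset := pageBounds(3)
	fmt.Println(limit, offset) // 21 40
}
```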
If the Loop() was restarted between the moment we marked the session
as started in the DB and the moment we actually started the workflow,
there was no way back to normal operation.
That was the cause of the sporadic TestProcessor failures we've seen in
the presubmit tests.
Handle this case in the code by simply continuing the non-finished calls.
Closes #5776.
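The recovery condition can be sketched roughly like this (all field and function names are assumptions):

```go
// Hypothetical sketch: detect sessions interrupted between being marked
// started and their workflow actually being created, so the loop can
// pick them up again after a restart.
package main

import "fmt"

type Session struct {
	ID         string
	Started    bool
	Finished   bool
	WorkflowID string
}

// needsResume reports whether a session is stuck in the half-started state.
func needsResume(s Session) bool {
	return s.Started && !s.Finished && s.WorkflowID == ""
}

func main() {
	stuck := Session{ID: "s1", Started: true}
	fmt.Println(needsResume(stuck)) // true
}
```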
For each series, display the Cc'd email list and let users filter the
patch series list by those addresses.
For minikube, it changes nothing, but it will make it easier to plug it
into GKE.
Accept IMAGE_PREFIX and IMAGE_TAG parameters that allow reusing the
Makefile and much of the k8s configuration in both local and prod
environments.
Refactor the Makefile: define build-* and push-* rules and use templates
to avoid repetition.
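The templated rules might look like the following sketch (variable, target, and directory names are all assumptions; recipe lines must be tab-indented in a real Makefile):

```make
# Hypothetical sketch of the templated build-*/push-* rules.
IMAGE_PREFIX ?= local/
IMAGE_TAG ?= latest

define image-rules
build-$(1):
	docker build -t $$(IMAGE_PREFIX)$(1):$$(IMAGE_TAG) $(2)
push-$(1): build-$(1)
	docker push $$(IMAGE_PREFIX)$(1):$$(IMAGE_TAG)
endef

$(eval $(call image-rules,controller,./controller))
$(eval $(call image-rules,dashboard,./dashboard))
```

Setting IMAGE_PREFIX and IMAGE_TAG on the command line then retargets the same rules at either the local registry or the prod one.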
That should hopefully shed more light on #5776.
This will facilitate its reuse in tests.
This will facilitate the reuse of the code.
Split off SessionService from SeriesService.
In the previous version of the code, series-tracker was directly pushing
patch series into the DB and the controller auto-created fuzzing
sessions.
Mediate these via the controller API instead.
Instead of creating Session objects on the fly, pre-create them and
let the processor take them one by one.
The approach has multiple benefits:
1) The same API might be used for patch series sources other than
LKML.
2) Since the existence of a Session object is no longer a sign that we
have started working on it, it allows for a more precise status display
(not created/waiting/running/finished).
3) We could manually push older patch series and manually trigger
fuzzing sessions to experimentally measure the bug detection rates.
4) The controller tests can be organized by relying only on the API
offered by the component.
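Benefit 2) can be sketched as a small status derivation (the type and status strings are assumptions):

```go
// Hypothetical sketch: with pre-created sessions, the display status can
// distinguish all four stages listed above.
package main

import "fmt"

type Session struct {
	Created  bool
	Started  bool
	Finished bool
}

// displayStatus derives a human-readable status for the dashboard.
func displayStatus(s *Session) string {
	switch {
	case s == nil || !s.Created:
		return "not created"
	case !s.Started:
		return "waiting"
	case !s.Finished:
		return "running"
	default:
		return "finished"
	}
}

func main() {
	fmt.Println(displayStatus(&Session{Created: true})) // waiting
}
```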
It will be important once we deploy to GKE.
For now, let's set just some initial limits; we'll adjust them over time.
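Such a limits block might look like the following sketch (the numbers are placeholders, not the actual values used):

```yaml
# Hypothetical sketch; the requests/limits below are placeholders.
resources:
  requests:
    cpu: "250m"
    memory: "256Mi"
  limits:
    cpu: "1"
    memory: "1Gi"
```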
We already use a GCS emulator for the dev environment; use a separate
bucket there for blobs.
Keep using the local storage driver for unit tests.
Record the logs from the build and fuzzing steps.
If we don't account for this possibility, we risk processing a nil value
and causing a nil pointer dereference.
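The guard can be sketched as follows (the type and field names are assumptions):

```go
// Hypothetical sketch: tolerate a missing object instead of
// dereferencing a nil pointer.
package main

import "fmt"

type Finding struct {
	Title string
}

// findingTitle returns a placeholder when the finding is absent.
func findingTitle(f *Finding) string {
	if f == nil {
		return "(none)"
	}
	return f.Title
}

func main() {
	fmt.Println(findingTitle(nil)) // (none)
}
```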
This gives a more natural order than just the names.
It lets us immediately distinguish the series that were actually
processed from the series that were skipped early on.
By storing a string, we also make it apparent why exactly a series was
skipped.
Configure the number of patch series processed in parallel via an env
variable.
Findings are crashes and build/boot/test errors that happened during the
patch series processing.
The basic code of a K8S-based cluster that:
* Aggregates new LKML patch series.
* Determines the kernel trees to apply them to.
* Builds the base and the patched kernels.
* Displays the results on a web dashboard.
This is a very rudimentary version with a lot of TODOs that
provides a skeleton for further work.
The project makes use of Argo workflows and Spanner DB.
Bootstrap is used for the web interface.
Overall structure:
* syz-cluster/dashboard: a web dashboard listing patch series
and their test results.
* syz-cluster/series-tracker: polls Lore archives and submits
the new patch series to the DB.
* syz-cluster/controller: schedules workflows and provides API for them.
* syz-cluster/kernel-disk: a cron job that keeps a kernel checkout up to date.
* syz-cluster/workflow/*: workflow steps.
For the DB structure see syz-cluster/pkg/db/migrations/*.