Commit message | Author | Age | Files | Lines
Print a bit more info to the logs to make them more understandable.
Make blob store URIs depend on the IDs explicitly passed into the
Write() function. In many cases this removes the need to distinguish
between the case when the object has already been saved and must be
overwritten and the case when it is being saved for the first time.
Keep storing the object to the blob storage first and only then
submitting the entities to Spanner. This will lead to some wasted space,
but we'll add garbage collection at some point.
Environment variables are convenient for storing values like DB or GCS
bucket names, but structured formats are more convenient for the actual
service configuration.
Separate global-config from global-config-env and add the functionality
that queries and parses the config options.
If a workflow step crashed or timed out, it used to keep the Running
status even though the session itself may have long been finished.
To prevent this inconsistency, on finishing each session go through all
remaining running steps and update their status to Error.
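A minimal sketch of the sweep, assuming hypothetical status constants and a Step type (the real schema may differ):

```go
package main

import "fmt"

// Hypothetical status values for illustration.
const (
	StatusRunning  = "running"
	StatusFinished = "finished"
	StatusError    = "error"
)

type Step struct {
	Name   string
	Status string
}

// finishSession sweeps any steps still marked Running (e.g. crashed or
// timed-out ones) into the Error state, so that no step's status can
// outlive its session.
func finishSession(steps []Step) []Step {
	for i := range steps {
		if steps[i].Status == StatusRunning {
			steps[i].Status = StatusError
		}
	}
	return steps
}

func main() {
	steps := []Step{
		{"build", StatusFinished},
		{"fuzz", StatusRunning}, // a crashed worker never reported back
	}
	for _, s := range finishSession(steps) {
		fmt.Println(s.Name, s.Status)
	}
}
```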
If the Loop() was restarted between the moment we marked the session as
started in the DB and the moment we actually started the workflow, there
was no way back to normal operation.
That was the reason for the sporadic TestProcessor failures we've seen
in the presubmit tests.
Handle this case in the code by simply continuing the non-finished calls.
Closes #5776.
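The recovery condition can be sketched as follows; the Session fields are assumptions, not the actual DB schema:

```go
package main

import "fmt"

// Session state as it might look in the DB; field names are assumptions.
type Session struct {
	ID       string
	Started  bool
	Finished bool
}

// resumable reports whether a session must be picked up again after a
// restart: it was marked started in the DB, but its workflow never ran
// to completion (e.g. the process died between the two steps).
func resumable(s Session) bool {
	return s.Started && !s.Finished
}

func main() {
	sessions := []Session{
		{ID: "a", Started: true, Finished: true},
		{ID: "b", Started: true, Finished: false}, // crashed mid-way
		{ID: "c", Started: false, Finished: false},
	}
	for _, s := range sessions {
		if resumable(s) {
			fmt.Println("resuming", s.ID)
		}
	}
}
```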
That should hopefully shed more light on #5776.
In the previous version of the code, series-tracker was directly pushing
patch series into the DB and the controller auto-created fuzzing
sessions.
Mediate these via the controller API instead.
Instead of creating Session objects on the fly, pre-create them and let
the processor take them one by one.
The approach has multiple benefits:
1) The same API might be used for patch series sources other than LKML.
2) Since the existence of a Session object is no longer a sign that we
have started working on it, it allows for a more precise status display
(not created/waiting/running/finished).
3) We could manually push older patch series and manually trigger
fuzzing sessions to experimentally measure the bug detection rates.
4) The controller tests can be organized by relying only on the API
offered by the component.
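The pre-creation scheme can be sketched like this; the Session type and the status strings are illustrative assumptions:

```go
package main

import "fmt"

// Session is pre-created in the "waiting" state; a processor later
// picks sessions up one by one, so mere existence of a Session no
// longer implies that fuzzing has started.
type Session struct {
	Series string
	Status string // "waiting" -> "running" -> "finished"
}

// nextWaiting returns the first pre-created session that has not been
// picked up yet, or nil if there is none.
func nextWaiting(sessions []*Session) *Session {
	for _, s := range sessions {
		if s.Status == "waiting" {
			return s
		}
	}
	return nil
}

func main() {
	sessions := []*Session{
		{Series: "series-1", Status: "finished"},
		{Series: "series-2", Status: "waiting"},
	}
	if s := nextWaiting(sessions); s != nil {
		s.Status = "running"
		fmt.Println("processing", s.Series)
		s.Status = "finished"
	}
}
```

Separating "exists" from "picked up" is what enables both the finer-grained status display and manually injected sessions for experiments.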
If we don't consider the possibility, we risk processing a nil value and
causing a nil pointer dereference.
Configure the number of patch series processed in parallel via an env
variable.
The basic code of a K8S-based cluster that:
* Aggregates new LKML patch series.
* Determines the kernel trees to apply them to.
* Builds the base and the patched kernel.
* Displays the results on a web dashboard.
This is a very rudimentary version with a lot of TODOs that
provides a skeleton for further work.
The project makes use of Argo workflows and Spanner DB.
Bootstrap is used for the web interface.
Overall structure:
* syz-cluster/dashboard: a web dashboard listing patch series
and their test results.
* syz-cluster/series-tracker: polls Lore archives and submits
the new patch series to the DB.
* syz-cluster/controller: schedules workflows and provides an API for them.
* syz-cluster/kernel-disk: a cron job that keeps a kernel checkout up to date.
* syz-cluster/workflow/*: workflow steps.
For the DB structure see syz-cluster/pkg/db/migrations/*.