There are a bunch of bugs now:
1. The emulator binary is killed when the first test finishes,
before subsequent tests start.
2. The child emulator binary (the actual emulator "emulator_main") is leaked.
These subprocesses are never killed and live past the tests
(that's why we get away with the first problem).
3. Errors are not handled well if the emulator setup fails.
We leave spannerHost empty and subsequent tests ignore the original error
(since only the first test executes setupSpannerOnce).
4. NewTransientDB duplicates a bunch of work that needs to happen only once
(in particular os.Setenv("SPANNER_EMULATOR_HOST")).
Fix all of that.
While we are here, also support the spanner emulator distributed as part of
google-cloud-sdk, so that tests can be run locally.
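A minimal sketch of how the fixed setup could look (the helper names, the binary lookup, and the port are assumptions, not the actual syzkaller code): start the emulator once per test binary, cache any setup error so that every test reports it, set SPANNER_EMULATOR_HOST exactly once, and kill the emulator only after all tests have run.

```go
package db

import (
	"os"
	"os/exec"
	"sync"
	"testing"
)

var (
	setupOnce   sync.Once
	spannerHost string
	setupErr    error
	emulatorCmd *exec.Cmd
)

// setupSpanner runs at most once per test binary and remembers its result,
// so a setup failure is reported by every test instead of being silently
// ignored after the first one.
func setupSpanner() (string, error) {
	setupOnce.Do(func() {
		// Hypothetical binary name; a real lookup could also check the
		// google-cloud-sdk location of the emulator.
		path, err := exec.LookPath("spanner_emulator")
		if err != nil {
			setupErr = err
			return
		}
		emulatorCmd = exec.Command(path)
		if setupErr = emulatorCmd.Start(); setupErr != nil {
			return
		}
		spannerHost = "localhost:9010" // default emulator gRPC port
		// Needs to happen only once, not in every NewTransientDB call.
		setupErr = os.Setenv("SPANNER_EMULATOR_HOST", spannerHost)
	})
	return spannerHost, setupErr
}

func TestMain(m *testing.M) {
	code := m.Run()
	if emulatorCmd != nil && emulatorCmd.Process != nil {
		// Kill the emulator only after all tests, so it is neither stopped
		// too early nor leaked past the test run.
		emulatorCmd.Process.Kill()
	}
	os.Exit(code)
}
```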
|
This is not necessary, makes things slower, and is not working anyway:
=== RUN TestSeriesRepositoryUpdate
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestSeriesRepositoryUpdate (0.07s)
=== RUN TestSeriesInsertSession
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestSeriesInsertSession (0.11s)
=== RUN TestQueryWaitingSessions
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestQueryWaitingSessions (0.12s)
=== RUN TestSessionTestRepository
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestSessionTestRepository (0.09s)
=== RUN TestMigrations
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestMigrations (0.15s)
=== RUN TestStatsSQLs
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
|
any is now preferred over interface{} in Go.
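For illustration (the function below is made up): since Go 1.18, any is a built-in alias for interface{}, so the change is a pure spelling cleanup.

```go
package example

import "encoding/json"

// any and interface{} are interchangeable; the shorter spelling is preferred.
func marshal(v any) ([]byte, error) { // previously: func marshal(v interface{}) ...
	return json.Marshal(v)
}
```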
|
This is a better choice than context.Background().
|
Extract the common "Query - ReadOne - close iterator" pattern into a
separate method.
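A sketch of what such a helper might look like with the Cloud Spanner Go client; the name readOne and the exact signature are assumptions:

```go
package db

import (
	"context"

	"cloud.google.com/go/spanner"
)

// readOne runs a query, reads a single row, and always stops the iterator so
// that its resources are released even on the error path.
func readOne(ctx context.Context, txn *spanner.ReadOnlyTransaction, stmt spanner.Statement) (*spanner.Row, error) {
	iter := txn.Query(ctx, stmt)
	defer iter.Stop()
	return iter.Next()
}
```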
|
Fill in build details for each finding and display that information in
the report email.
Extend the test that verifies how api.SessionReport is filled.
|
Since these may be included in email addresses, keep them short.
|
Replace an UpdateReport() call with a RecordReply() call. This will
eventually allow us to support email sender implementations for which
we do not immediately know the MessageID of the reported message.
|
Don't restart the job if it returned a non-zero exit code.
Don't treat the ErrNoChange error as a failure.
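A minimal sketch of the second point, assuming the golang-migrate library (which is where an error named ErrNoChange would come from): "no change" means the schema is already up to date and must not fail the job.

```go
package migrations

import (
	"errors"

	"github.com/golang-migrate/migrate/v4"
)

// runMigrations applies all pending migrations; an already up-to-date schema
// (migrate.ErrNoChange) is treated as success rather than a failure.
func runMigrations(m *migrate.Migrate) error {
	if err := m.Up(); err != nil && !errors.Is(err, migrate.ErrNoChange) {
		return err
	}
	return nil
}
```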
|
Add a test that verifies that we have correct down migrations.
Fix the down migrations sql file.
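One way such a test can be structured, sketched here under the assumption of golang-migrate; the source and database URLs are placeholders. Apply all migrations, roll them all back, and apply them again, so that an incorrect down migration makes the second pass fail.

```go
package migrations

import (
	"testing"

	"github.com/golang-migrate/migrate/v4"
	_ "github.com/golang-migrate/migrate/v4/database/spanner" // Spanner driver
	_ "github.com/golang-migrate/migrate/v4/source/file"      // file:// source
)

// Migrate all the way up, all the way down, and up again.
func TestDownMigrations(t *testing.T) {
	m, err := migrate.New(
		"file://migrations",
		"spanner://projects/test/instances/test/databases/test",
	)
	if err != nil {
		t.Fatal(err)
	}
	if err := m.Up(); err != nil {
		t.Fatalf("up: %v", err)
	}
	if err := m.Down(); err != nil {
		t.Fatalf("down: %v", err)
	}
	if err := m.Up(); err != nil {
		t.Fatalf("up after down: %v", err)
	}
}
```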
|
Add an API to record replies under reports; it allows determining the
original report by its MessageID alone.
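A hypothetical shape of such an API; only the MessageID-based lookup comes from the description, the interface and parameter names are assumptions:

```go
package api

import "context"

// ReplyRecorder attaches an incoming reply to the report it answers; the
// original report is located purely via the MessageID under which the
// report was originally sent.
type ReplyRecorder interface {
	RecordReply(ctx context.Context, reportMessageID, replyMessageID string) error
}
```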
|
The job can be created by a CD pipeline to apply the missing DB migrations.
Add a Makefile target to prepare the job's description.
|
Provide an API to set up the reporting of finished sessions for which
syz-cluster collected reportable findings.
The actual sending of the results is to be done in a separate component
that would:
1) Call Next() to get the next report to send.
2) Call Confirm() to confirm that the report has been sent.
3) Call Upstream() if the report has been moderated and needs to be sent
to e.g. public mailing lists.
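A sketch of how the sending component might see this API; only the three method names come from the description above, the types and signatures are assumptions:

```go
package api

import "context"

// NextReport is a placeholder for whatever a pending report looks like.
type NextReport struct {
	ID string
}

// ReportSenderAPI is the loop that the separate sending component would drive.
type ReportSenderAPI interface {
	// Next returns the next report to send, or nil when there is nothing to do.
	Next(ctx context.Context) (*NextReport, error)
	// Confirm records that the report has actually been sent out.
	Confirm(ctx context.Context, reportID string) error
	// Upstream marks a moderated report as ready to go to public mailing lists.
	Upstream(ctx context.Context, reportID string) error
}
```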
|
In the previous version of the code, series-tracker was directly pushing
patch series into the DB and the controller auto-created fuzzing sessions.
Mediate these via the controller API instead.
Instead of creating Session objects on the fly, pre-create them and let
the processor take them one by one (a sketch of this interaction is below).
The approach has multiple benefits:
1) The same API might be used for patch series sources other than LKML.
2) Since the existence of a Session object is no longer a sign that we have
started working on it, this allows for a more precise status display
(not created/waiting/running/finished).
3) We could manually push older patch series and manually trigger fuzzing
sessions to experimentally measure the bug detection rates.
4) The controller tests can be organized to rely only on the API offered
by the component.
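An illustrative shape of the mediation; all identifiers below are assumptions, only the idea of uploading series through the controller and handing out pre-created sessions one by one comes from the description:

```go
package api

import "context"

// Series and Session are placeholders for the real entities.
type Series struct{ ID string }
type Session struct{ ID string }

// ControllerClient is roughly what series-tracker and the session processor
// would talk to instead of writing to the DB directly.
type ControllerClient interface {
	// UploadSeries is called by series-tracker for every new patch series.
	UploadSeries(ctx context.Context, series *Series) error
	// NextSession hands a pre-created session to the processor, one at a time.
	NextSession(ctx context.Context) (*Session, error)
}
```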
|
It lets us immediately distinguish the series that were actually processed
from those that were skipped early on.
By storing a string, we also make it apparent why exactly the series was
skipped.
|
Start a spanner emulator binary (if it's present in the image).
|
|
The basic code of a K8S-based cluster that:
* Aggregates new LKML patch series.
* Determines the kernel trees to apply them to.
* Builds the base and the patched kernels.
* Displays the results on a web dashboard.
This is a very rudimentary version with a lot of TODOs that
provides a skeleton for further work.
The project makes use of Argo workflows and Spanner DB.
Bootstrap is used for the web interface.
Overall structure:
* syz-cluster/dashboard: a web dashboard listing patch series
and their test results.
* syz-cluster/series-tracker: polls Lore archives and submits
the new patch series to the DB.
* syz-cluster/controller: schedules workflows and provides API for them.
* syz-cluster/kernel-disk: a cron job that keeps a kernel checkout up to date.
* syz-cluster/workflow/*: workflow steps.
For the DB structure see syz-cluster/pkg/db/migrations/*.