Display a list of other versions of the same series on the series details page.
Fetch all series sharing the same title and render them in a new "Series Versions" table, allowing navigation between
different versions of a patch series.
|
Add unit tests for the series and patch name filtering functionality in the database repository.
|
Update SeriesFilter in pkg/db to include PatchName and SeriesName fields, implement the SQL logic to filter by these
fields case-insensitively, and expose these filters in the dashboard UI.
|
There are a bunch of bugs now:
1. The emulator binary is killed when the first test finishes,
before subsequent tests start.
2. The child emulator binary (the actual emulator "emulator_main") is leaked.
These subprocesses are never killed and outlive the tests.
(That's why we get away with the first problem.)
3. Errors are not handled well if emulator setup fails.
We leave spannerHost empty, and subsequent tests ignore the original error
(since only the first test executes setupSpannerOnce).
4. NewTransientDB duplicates a bunch of work that needs to happen only once
(in particular, os.Setenv("SPANNER_EMULATOR_HOST")).
Fix all of that.
While we are here, also support the Spanner emulator distributed as part of
google-cloud-sdk, so that tests can be run locally.
|
This is not necessary, makes things slower, and is not working anyway:
=== RUN TestSeriesRepositoryUpdate
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestSeriesRepositoryUpdate (0.07s)
=== RUN TestSeriesInsertSession
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestSeriesInsertSession (0.11s)
=== RUN TestQueryWaitingSessions
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestQueryWaitingSessions (0.12s)
=== RUN TestSessionTestRepository
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestSessionTestRepository (0.09s)
=== RUN TestMigrations
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
--- PASS: TestMigrations (0.15s)
=== RUN TestStatsSQLs
spanner.go:168: failed to drop the test DB: rpc error: code = Canceled desc = context canceled
|
any is now preferred over interface{} in Go.
|
Add some initial #syz invalid support to syz-cluster. For now, mark all
findings as invalid and don't display that such series have findings on
the web dashboard.
|
Apart from just the total number of findings (some of which may end up
being non-reported), also display specifically the number of reports
that have found their way to the mailing lists.
|
This is a better choice than context.Background().
|
Track base crashes for (commit hash, config, arch) tuples.
|
It is the session steps with the "error" status that must be counted.
|
Display the share of sessions that have failed and skipped tests.
|
If all symbol hashes between the base and the patched kernel match,
there's no reason to spend time fuzzing the series.
Add a 'skipped' status to the enum of possible session test results and
set it from the fuzz-step.
|
We could not generate the stats page when there were findings for a
not-yet-finished session.
Fix it and adjust the tests.
|
Add a web dashboard page with the main statistics concerning patch
series fuzzing.
Improve the navigation bar on top of the page.
|
Extract the common "Query - ReadOne - close iterator" pattern into a
separate method.
|
Permit the following scenario: a finding is first submitted without a C
reproducer and then resubmitted again, now with one.
Ensure that it's only possible as long as the session is still in
progress.
Refactor Finding repository and service and adjust the tests.
|
Sometimes the patch series directly hint at the kernel tree they should
be applied to. Extract and remember this information.
|
Share not just the tree name (mainline, net, etc), but also the full URL
to check out the repository.
For that, add one more field to the Build entity and adjust email
reporting templates.
|
Fill in build details for each finding and display that information in
the report email.
Extend the test that verifies how api.SessionReport is filled.
|
Since these may be included in email addresses, keep them short.
|
Replace an UpdateReport() call with a RecordReply() call. This will
eventually allow us to support email sender implementations for
which we do not immediately know the MessageID of the reported message.
|
Don't restart the job if it returned a non-zero exit code.
Don't treat the ErrNoChange error as a failure.
|
Add a test that verifies that we have correct down migrations.
Fix the down migrations sql file.
|
Add an API to record replies under reports; it allows determining the
original report from the MessageID alone.
|
Introduce a Reporter column in the SessionReport.
For finished reports, store a MessageID instead of a Link.
|
If a workflow step crashed or timed out, it used to keep the Running
status even though the session itself may have long since finished.
To prevent this inconsistency, when finishing each session, go through
all remaining running steps and update their status to Error.
|
A raw InsertOrUpdate method is not very reliable in case of concurrent
update requests. Add a callback inside which the modified fields would
be set.
Refactor the existing code that used to call the old method.
|
If the series was skipped during triage, show that in the status and let
users filter by it.
|
For now, only share it for the skipped series.
|
The archive would be a useful source of debugging information.
Provide an HTTP endpoint that accepts a multipart form request with
the archived data.
Provide an *api.Client method to encapsulate the encoding of the data.
Add a test.
|
Add a checkbox to only display the series for which there are findings.
|
It will help identify the series to highlight.
|
In the tests, we often spawn different dummy objects.
Add a separate helper class to avoid duplicating this code.
|
Add simple Previous/Next navigation for the list of series.
For now, just rely on SQL's LIMIT/OFFSET functionality.
|
Update the Web UI to have a filter form on top of the index page.
|
For each series, display the Cc'd email list and let users filter the
patch series list by those addresses.
|
The job can be created by a CD pipeline to apply the missing DB migrations.
Add a Makefile target to prepare the job's description.
|
Once a new kernel revision becomes available, build it to figure out
whether it's buildable. This information will be used in the triage step
to figure out the right base kernel revision.
|
It's a bit more concise than uuid.New().String().
|
Provide an API to set up the reporting of finished sessions for which
syz-cluster collected reportable findings.
The actual sending of the results is to be done in a separate component
that would:
1) Call Next() to get the next report to send.
2) Call Confirm() to confirm that the report has been sent.
3) Call Upstream() if the report has been moderated and needs to be sent
to e.g. public mailing lists.
|
In the previous version of the code, series-tracker was directly pushing
patch series into the DB, and the controller auto-created fuzzing
sessions.
Mediate these via the controller API instead.
Instead of creating Session objects on the fly, pre-create them and
let the processor take them one by one.
The approach has multiple benefits:
1) The same API might be used for patch series sources other than
LKML.
2) Since the existence of a Session object is no longer a sign that we
have started working on it, it allows for a more precise status display
(not created/waiting/running/finished).
3) We could manually push older patch series and manually trigger
fuzzing sessions to experimentally measure the bug detection rates.
4) The controller tests can be organized by relying only on the API
offered by the component.
|
Record the logs from the build and fuzzing steps.
|
It looks convenient (as it automatically enforces the uniqueness
constraint), but once we start referencing findings in URLs and from
other DB entities, it becomes increasingly problematic.
Let's use UUIDs and a separate uniqueness constraint.