| Commit message | Author | Age | Files | Lines |
| |
|
|
|
|
| |
Lay down foundation for spanner DB migrations by adding Jobs.Correct field.
This will allow us to test deployment of such changes.
The field will be used to record results of manual assessment of AI job results.
|
| |
|
|
|
|
|
|
| |
Support for:
- polling for AI jobs
- handling completion of AI jobs
- submitting job trajectory logs
- basic visualization for AI jobs
|
| |
|
|
|
|
| |
Start spanner emulator for tests.
Create isolated per-test instance+database.
Test that DDL migration scripts work.
|
| |
|
|
| |
`any` is now preferred over `interface{}` in Go.
|
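Since Go 1.18, `any` is a built-in alias for `interface{}`, so the rename is purely cosmetic and the two spellings are interchangeable. A minimal illustration:

```go
package main

import "fmt"

// describe accepts any value; `any` is an alias for `interface{}` (Go 1.18+),
// so this signature is identical to describe(v interface{}).
func describe(v any) string {
	return fmt.Sprintf("%T: %v", v, v)
}

func main() {
	fmt.Println(describe(42))      // int: 42
	fmt.Println(describe("hello")) // string: hello
}
```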
| |
|
|
|
| |
Set aetest startup timeout to 120 seconds to prevent flaky tests from failing due to timeouts. This is especially
helpful on slower machines or when the system is under heavy load.
|
| |
|
|
|
|
| |
Don't specify the subsystem revision in the dashboard config and instead
let it be nested in the registered subsystems. This reduces the amount
of the manual work needed to switch syzbot to a newer subsystem list.
|
| |
|
|
| |
./tools/syz-env bin/golangci-lint run ./... --fix
|
| |
|
|
|
|
| |
For some specified inboxes, forward the emails that contain syz
commands.
Add tests to verify the behavior.
|
| |
|
|
| |
getSpannerClient returns the prod client by default.
|
| |
|
|
|
|
|
|
| |
1. Init coveragedb client once and propagate it through context to enable mocking.
2. Always init coverage handlers. It simplifies testing.
3. Read webGit and coveragedb client from ctx to make it mockable.
4. Use int for file line number and int64 for merged coverage.
5. Add tests.
|
| | |
|
| | |
|
| |
|
|
|
|
|
|
|
|
|
| |
The test has recently become broken, but we didn't notice it in our
presubmit testing.
Fix the problem (ReproLog being set to 0).
Run TestAccess also in -short mode, but limit it to ensuring that
non-public URLs are not accessible publicly. The short test now takes 60
seconds compared to 104 seconds without -short.
|
| |
|
|
|
|
|
| |
As it is problematic to set up automatic bidirectional sharing of
reproducer files between namespaces, let's support the ability to
manually request a reproduction attempt on a specific syz-manager
instance. That should help for the time being.
|
| |
|
|
|
|
|
|
|
| |
Linter warns:
dashboard/app/util_test.go:680:47: SA1029: should not use built-in type string
as key for value; define your own type to avoid collisions
newContext := context.WithValue(r.Context(), requestIDKey, requestNum)
^
|
| | |
|
| | |
|
| | |
|
| |
|
|
|
| |
In case of build/boot/test errors, there will never be a reproducer.
It's better to report them right away, without waiting.
|
| |
|
|
|
| |
To ensure service stability, let's rate limit incoming requests to our
web endpoints.
|
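One common approach (a sketch, not necessarily what the dashboard uses; golang.org/x/time/rate provides a production-grade version) is a token bucket: requests consume tokens that refill at a fixed rate, so bursts are bounded. The core idea fits in a few lines of stdlib Go:

```go
package main

import (
	"fmt"
	"sync"
	"time"
)

// bucket is a minimal token-bucket rate limiter: up to `capacity` tokens
// refill at `rate` tokens per second; each request consumes one token or
// is rejected.
type bucket struct {
	mu       sync.Mutex
	tokens   float64
	capacity float64
	rate     float64 // tokens added per second
	last     time.Time
}

func newBucket(capacity, perSecond float64) *bucket {
	return &bucket{tokens: capacity, capacity: capacity, rate: perSecond, last: time.Now()}
}

func (b *bucket) Allow() bool {
	b.mu.Lock()
	defer b.mu.Unlock()
	now := time.Now()
	b.tokens += now.Sub(b.last).Seconds() * b.rate
	if b.tokens > b.capacity {
		b.tokens = b.capacity
	}
	b.last = now
	if b.tokens < 1 {
		return false
	}
	b.tokens--
	return true
}

func main() {
	b := newBucket(2, 1) // burst of 2, refill 1 token/sec
	fmt.Println(b.Allow(), b.Allow(), b.Allow()) // true true false
}
```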
| |
|
|
|
|
|
| |
Instead of reminding users to mention our mailing lists, forward their
emails there automatically.
Closes #4260.
|
| |
|
|
|
| |
Marshal the config at install time and during getConfig() to ensure
that it's not accidentally changed during code execution.
|
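The marshal-and-compare idea can be sketched like this (hypothetical names; the real dashboard code differs): serialize the config once at startup, re-serialize on each access, and fail loudly if the two ever diverge.

```go
package main

import (
	"encoding/json"
	"fmt"
)

type Config struct {
	Namespaces []string
}

var (
	config     = &Config{Namespaces: []string{"upstream"}}
	configJSON []byte // snapshot taken at install time
)

func installConfig() {
	configJSON, _ = json.Marshal(config)
}

// getConfig re-marshals the config and panics if it no longer matches the
// snapshot, i.e. if some code mutated it in place.
func getConfig() *Config {
	now, _ := json.Marshal(config)
	if string(now) != string(configJSON) {
		panic("config was mutated during execution")
	}
	return config
}

func main() {
	installConfig()
	fmt.Println(getConfig().Namespaces[0]) // upstream
	defer func() { fmt.Println(recover()) }()
	config.Namespaces[0] = "oops" // illegal in-place mutation...
	getConfig()                   // ...is now detected
}
```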
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
We used to have a single global `config` variable and access it
throughout the whole dashboard application.
However, this approach has made test writing more and more complicated
-- sometimes we want the config to be only slightly different, so that
it's not worth adding new namespaces, and sometimes we have to test how
the dashboard handles config changes over time.
This has already led to a number of hacky contextWithXXX methods that
mocked various parts of the global variable. The rest of the code still
had to sometimes use `config` directly and sometimes invoke getXXX(c)
methods. This is very inconsistent and prone to errors.
With more and more situations where we need to patch the config
appearing (see #4118), let's refactor the application to always access
config via the getConfig(c) method. This allows us to uniformly patch
the config and be sure that the non-patched copy is not accessible from
anywhere else.
|
| | |
|
| | |
|
| |
|
|
|
|
|
|
|
|
| |
Race detector reports a race between
dashboard/app/reporting_test.go:1102
and
dashboard/app/handler.go:183
Fix this by storing decommission updates in the context rather than by
directly modifying the global config variable.
|
| |
|
|
|
| |
Let presubmit_dashboard run dashboard tests and presubmit_build run all
others.
|
| |
|
|
| |
Use just env instead of that.
|
| |
|
|
|
| |
Display high prio bugs on top of the list and low prio bugs at the
bottom.
|
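Ordering bugs by priority is a plain stable sort; a sketch with invented field names:

```go
package main

import (
	"fmt"
	"sort"
)

type uiBug struct {
	Title string
	Prio  int // higher means more important (hypothetical field)
}

// sortByPrio puts high-priority bugs on top and low-priority ones at the
// bottom; SliceStable keeps the existing order within the same priority.
func sortByPrio(bugs []uiBug) {
	sort.SliceStable(bugs, func(i, j int) bool {
		return bugs[i].Prio > bugs[j].Prio
	})
}

func main() {
	bugs := []uiBug{{"low", 1}, {"high", 3}, {"mid", 2}}
	sortByPrio(bugs)
	for _, b := range bugs {
		fmt.Println(b.Title) // high, mid, low
	}
}
```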
| |
|
|
|
|
|
|
|
|
| |
Changes to our rootfs, compilers or bisection logic regularly cause
regressions in our bisection accuracy. Retrying them currently entails
fiddling with the GCP datastore directly or mass deleting all failed
bisections.
This change will allow us to retry specific bisections with a single
click.
|
| |
|
|
| |
This allows us to write cleaner tests.
|
| |
|
|
|
|
|
|
|
|
|
|
|
|
|
| |
Run reproducers on other trees in order to determine:
1) The tree where a bug originates (i.e. the reproducer triggers the bug
in none of the further trees from which commits flow).
2) The trees to which the bug has already spread (i.e. the reproducer
works on none of the trees that receive commits from the current one).
3) The absence of a backport (i.e. the reproducer used to work on some
upstream tree and then stopped).
For (1) the bot assigns the LabelIntroduced from KernelRepo.
For (2) -- the value of LabelReached.
For better understanding see sample configs in tree_test.go.
|
| |
|
|
|
|
|
|
|
|
|
|
|
| |
Refactor patch testing functionality.
Instead of creating patch testing jobs via a cron job, create them on
demand during pollJob(). This allows us to create jobs only for managers
that support jobs and also simplifies further changes related to patch
testing.
As there are too many bugs and the poll period is quite small, don't
query the whole database each time. Consider a random subset of bugs
each time; this should be enough given the high poll rate.
|
| |
|
|
|
|
|
|
|
| |
Collect the information about the discussions under bug reports and
patches that target the reported bugs.
For now only Lore discussions are supported.
Display the discussion list on the bug info page.
|
| |
|
|
|
|
|
|
|
| |
Once a month, collect for each subsystem the list of bugs that are
still open on the syzbot dashboard and send an email to the
corresponding mailing list.
Support manual moderation of such reminders; we'll need that at least
for the time being.
|
| | |
|
| |
|
|
|
|
|
|
|
| |
This will both help set subsystems for bugs that existed before these
changes and keep subsystems up to date with changes to the subsystem
list.
Do the update once a month for open bugs and also update all open and
fixed bugs every time a revision changes.
|
| |
|
|
|
| |
Instead of exposing a map with the list of variables to set, let users
specify a callback to transform context.Context before making a request.
|
| |
|
|
|
| |
Currently it's the only cron job that still occupies a top-level URL.
|
| |
|
|
|
|
| |
This lets us keep only one access rule and relieves us of the need to
keep handlers and app.yaml in sync for cron jobs that'll be added in
the future.
|
| |
|
|
|
| |
Now it's quite difficult to recognize it in the dashboard testing logs,
especially when the error message itself is not very clear.
|
| |
|
|
|
| |
In order to do that, we need to tweak the subsystem extraction code. Use
context values, as this should simplify the flow.
|
| |
|
|
|
|
|
| |
Currently we use context.Context values themselves, but that limits the
ability to derive contexts during the request processing. Also, the race
detector is reporting a data race during the `reflect.DeepEqual`
comparison.
|
| |
|
|
|
|
|
| |
If the underlying request fails, we get the `panic: runtime error:
invalid memory address or nil pointer dereference` error.
Gracefully handle the possible edge cases.
|
| |
|
|
|
|
|
|
|
|
|
| |
Currently, if one of the ctx.Close asserts fails, we never close the
connection to the dev_appserver instance we use.
As a result, the test process is left with lots of orphaned
dev_appserver children, and `go test` endlessly waits for them to stop.
Defer the aetest.Instance.Close() call to prevent this from happening.
|
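The fix pattern in general form: register the cleanup with defer immediately after a successful setup, so it runs even when a later assertion panics. A generic sketch with invented names:

```go
package main

import "fmt"

type instance struct{ closed bool }

func (i *instance) Close() { i.closed = true }

// runTest mimics a test body: Close is deferred right after creation, so the
// instance is shut down even if the assertions in body panic.
func runTest(inst *instance, body func()) {
	defer inst.Close()
	body()
}

func main() {
	inst := &instance{}
	func() {
		defer func() { recover() }() // swallow the failing assertion
		runTest(inst, func() { panic("assert failed") })
	}()
	fmt.Println(inst.closed) // true
}
```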
| |
|
| |
* dashboard/app/graphs.go: return 400 if bad input
|
| |
|
|
|
|
|
|
|
|
|
| |
This test is skipped during short testing, but during the full testing
it takes most of the time. At the same time, for testing purposes it's
actually irrelevant whether we keep 40 or 20 crashes, so we're just
wasting time.
Make maxCrashes() mockable and set it to 20 during testing. This makes
full testing 1.5x faster and also gives a small speed improvement for
-short testing.
|
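Making a constant mockable typically means turning it into a package-level function value that tests can override; a sketch (function and field names invented):

```go
package main

import "fmt"

// maxCrashes is a function value rather than a constant so that tests can
// swap in a smaller limit.
var maxCrashes = func() int { return 40 }

// purgeCrashes trims the stored crash count down to the current limit.
func purgeCrashes(stored int) int {
	if limit := maxCrashes(); stored > limit {
		return limit
	}
	return stored
}

func main() {
	fmt.Println(purgeCrashes(100)) // 40
	old := maxCrashes
	maxCrashes = func() int { return 20 } // what the tests would do
	defer func() { maxCrashes = old }()
	fmt.Println(purgeCrashes(100)) // 20
}
```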
| |
|
|
|
| |
After the previous commit, our existing patch testing jobs began to
fail. Fix this by switching them to another test namespace.
|
| |
|
|
|
|
|
| |
Explicitly tell users about the situations when the incoming request
could be related to several different syzbot bugs.
Test this behavior.
|
| |
|
|
|
|
|
|
| |
The test config changes needed for the recent changes in incoming email
handling testing accidentally broke the AccessTest.
Fix this by introducing a separate test namespace for public email
testing.
|