Add the race:harmful/benign label.
Set it automatically based on confirmed AI jobs.
`any` is now preferred over `interface{}` in Go.
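Since Go 1.18, `any` is a predeclared alias for `interface{}`, so the two are fully interchangeable; a minimal sketch:

```go
package main

import "fmt"

// printValue accepts a value of any type. Since Go 1.18, `any` is a
// predeclared alias for interface{}, so this is identical to taking an
// interface{} parameter, just shorter to read and write.
func printValue(v any) string {
	return fmt.Sprintf("%v", v)
}

func main() {
	fmt.Println(printValue(42))     // 42
	fmt.Println(printValue("text")) // text
}
```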
Use a common number of attempts (10) everywhere in dashboard/app
for db.RunInTransaction() via the custom wrapper runInTransaction().
This should fix the issues where ErrConcurrentTransaction occurs
after the default number of attempts (3). If the maximum limit (10) is
not enough, that should hint at a problem in the transaction function
itself and aid further debugging.
For the most valuable transactions, createBugForCrash() and reportCrash(),
keep the current 30 attempts.
Fixes: https://github.com/google/syzkaller/issues/5441
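A minimal sketch of what such a retry wrapper could look like; `runInTransaction`, `errConcurrent`, and `defaultAttempts` here are simplified stand-ins for illustration, not the actual dashboard/app code:

```go
package main

import (
	"errors"
	"fmt"
)

// errConcurrent mimics datastore's ErrConcurrentTransaction.
var errConcurrent = errors.New("concurrent transaction")

// defaultAttempts is the hypothetical common attempt count (10 in the
// commit above; 30 for the most valuable transactions).
const defaultAttempts = 10

// runInTransaction retries fn up to attempts times on concurrency
// errors and gives up with a wrapped error afterwards.
func runInTransaction(fn func() error, attempts int) error {
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); !errors.Is(err, errConcurrent) {
			return err
		}
	}
	return fmt.Errorf("after %d attempts: %w", attempts, err)
}

func main() {
	calls := 0
	err := runInTransaction(func() error {
		calls++
		if calls < 3 { // succeed on the third try
			return errConcurrent
		}
		return nil
	}, defaultAttempts)
	fmt.Println(err, calls) // <nil> 3
}
```

If the wrapper still fails after 10 attempts, the error itself points at the transaction function as the likely culprit.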
Signed-off-by: cui fliter <imcusg@gmail.com>
In many cases we just want to access the namespace's config.
Introduce a special helper function to keep the code shorter and more concise.
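A sketch of the idea; the `Config`/`NamespaceConfig` shapes and the `getNsConfig` name are hypothetical stand-ins, not the real dashboard types:

```go
package main

import "fmt"

// NamespaceConfig is a simplified stand-in for a per-namespace config.
type NamespaceConfig struct {
	DisplayTitle string
}

// Config is a simplified stand-in for the global dashboard config.
type Config struct {
	Namespaces map[string]*NamespaceConfig
}

var cfg = &Config{Namespaces: map[string]*NamespaceConfig{
	"upstream": {DisplayTitle: "syzbot"},
}}

// getNsConfig shortens the frequent cfg.Namespaces[ns] access pattern.
func getNsConfig(ns string) *NamespaceConfig {
	return cfg.Namespaces[ns]
}

func main() {
	fmt.Println(getNsConfig("upstream").DisplayTitle) // syzbot
}
```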
We used to have a single global `config` variable and access it
throughout the whole dashboard application.
However, this approach has increasingly complicated test writing
-- sometimes we want the config to be only slightly different, so that
it's not worth adding new namespaces; sometimes we have to test how the
dashboard handles config changes over time.
This has already led to a number of hacky contextWithXXX methods that
mocked various parts of the global variable. The rest of the code still
had to sometimes use `config` directly and sometimes invoke getXXX(c)
methods, which is inconsistent and error-prone.
With more and more situations where we need to patch the config
(see #4118), let's refactor the application to always access the
config via the getConfig(c) method. This lets us patch the config
uniformly and be sure that the non-patched copy is not accessible from
anywhere else.
The existing code fails if it's executed before we have finished the
tree origin job for the tree to which the fix is supposed to be applied.
Add a test.
If a higher-priority crash has become available, re-run bug origin
tests.
Currently, since fix candidate testing reuses tree origin testing
results, we remain bound to the manager used back then, which may not
support bisection (e.g. qemu) or may be quite problematic (arm64).
Determine "expensive" lookups correctly; previously a few DB lookups
were overlooked.
Also increase the limit, as a limit of 2 clearly discriminates against
such jobs.
Currently these jobs are heavily underrepresented -- we rarely get to
generate them and, when we do, we only look up 3 random bugs.
At the same time, for the majority of issues the check is rather cheap:
we don't even have to query anything from the DB.
Split the job generation results into expensive (= extra DB queries
needed) and non-expensive (= examining *Bug was enough) ones, and limit
only the number of the former.
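The splitting-and-limiting idea can be sketched like this; the `job` struct and `limitExpensive` helper are hypothetical names for illustration:

```go
package main

import "fmt"

// job is a stand-in for a generated dashboard job; Expensive marks
// jobs whose generation needed extra DB queries.
type job struct {
	ID        int
	Expensive bool
}

// limitExpensive keeps all cheap jobs but at most max expensive ones,
// so cheap checks are never throttled by the expensive-job budget.
func limitExpensive(jobs []job, max int) []job {
	var out []job
	expensive := 0
	for _, j := range jobs {
		if j.Expensive {
			if expensive >= max {
				continue // budget for expensive jobs exhausted
			}
			expensive++
		}
		out = append(out, j)
	}
	return out
}

func main() {
	jobs := []job{{1, true}, {2, false}, {3, true}, {4, true}, {5, false}}
	// Job 4 is dropped (third expensive one); cheap jobs always pass.
	fmt.Println(len(limitExpensive(jobs, 2))) // 4
}
```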
If a bug's origin has not yet been determined, the dashboard app might
crash with "panic: runtime error: invalid memory address or nil pointer
dereference" at dashboard/app/tree.go:740.
Fix this and add a test.
| |
If tree origin assessment code has identified that the bug is not
reproducible in a tree from which we merge/cherry-pick commits, use fix
bisection to identify that commit. This can e.g. be used to find fixing
commits that were not backported from Linux kernel mainline into LTS
branches.
In case of bisection errors, re-do such jobs every 30 days.
Remember in the Bug structure whether there's a fix candidate and return
the details in the full bug info API query.
It's more flexible if it's not tied to tree origin detection context.
Also, make runOnMergeBase contain the actual merge base repo and branch.
We don't always need a consistent view of the data when we're inside a
transaction. Moreover, querying less can help us avoid the "too much
contention on these datastore entities" error.
Let's see how it all behaves if we pass both a transaction context and
a global context to the tree.go machinery.
There are cases when e.g. an LTS kernel does not build if provided with
some downstream kernel config.
Introduce a special AppendConfig option to KernelRepo that can help in
this case.
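A sketch of how such an option could be applied; the `KernelRepo` fields and the `buildConfig` helper here are simplified assumptions, not the exact syzkaller types:

```go
package main

import "fmt"

// KernelRepo is a simplified stand-in for the dashboard's repo config.
// AppendConfig holds a kernel config fragment appended to the base
// config when building kernels from this repo.
type KernelRepo struct {
	URL          string
	Branch       string
	AppendConfig string
}

// buildConfig returns the effective kernel config for a repo: the base
// (e.g. downstream) config plus the repo-specific fragment, if any.
func buildConfig(base string, repo KernelRepo) string {
	if repo.AppendConfig == "" {
		return base
	}
	return base + "\n" + repo.AppendConfig
}

func main() {
	lts := KernelRepo{
		URL:          "git://git.kernel.org/pub/scm/linux/kernel/git/stable/linux.git",
		Branch:       "linux-6.1.y",
		// Hypothetical fragment disabling an option the LTS tree lacks.
		AppendConfig: "# CONFIG_SOME_DOWNSTREAM_FEATURE is not set",
	}
	fmt.Println(buildConfig("CONFIG_KASAN=y", lts))
}
```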
There's no need to query the same entity twice; it was done just for
convenience. It might also contribute to the `too much contention on these
datastore entities` error.
Not all bugs will get a C repro.
Display them as a collapsible block on the bug info page. Use colors to
distinguish results.
If the tested tree is build/boot/test broken, repeat the job in 2 weeks.
If we are waiting for a bug to get a fix, repeat every 45 days.
It might be the case that the same repo is mentioned both as Merge=false
and via some chain of Merge=true links. Adjust the code to properly
handle such scenarios and add a test.
Run reproducers on other trees in order to determine:
1) Trees where a bug originates (i.e. the reproducer can trigger the bug
in none of the further trees, from which commits flow).
2) Trees to which the bug has already spread (i.e. the reproducer works
on none of the trees that receive commits from the current one).
3) The absence of a backport (= the reproducer used to work on some
upstream tree and then stopped).
For (1) the bot assigns LabelIntroduced from KernelRepo.
For (2) -- the value of LabelReached.
For a better understanding, see the sample configs in tree_test.go.
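A toy model of rule (1) above; the `tree` struct and `originates` helper are illustrative assumptions only, the real logic lives in dashboard/app/tree.go:

```go
package main

import "fmt"

// tree is a toy model of a kernel tree in the commit-flow graph.
type tree struct {
	Name            string
	LabelIntroduced string  // label to assign if the bug originates here
	Crashes         bool    // did the reproducer trigger the bug here?
	Upstream        []*tree // trees this one receives commits from
}

// originates reports whether the bug originates in t: the reproducer
// crashes t, but none of the trees from which commits flow into t.
func originates(t *tree) bool {
	if !t.Crashes {
		return false
	}
	for _, up := range t.Upstream {
		if up.Crashes {
			return false
		}
	}
	return true
}

func main() {
	mainline := &tree{Name: "mainline", Crashes: false}
	lts := &tree{
		Name:            "lts",
		LabelIntroduced: "lts",
		Crashes:         true,
		Upstream:        []*tree{mainline},
	}
	if originates(lts) {
		fmt.Println("label:", lts.LabelIntroduced) // label: lts
	}
}
```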