Having an LLM model per agent is even more flexible than per flow.
We can run the most complex tasks during patch generation with the most elaborate model,
but handle the simpler ones with less elaborate models.
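A minimal sketch of the idea, assuming a per-agent model override with a flow-wide default; the type and field names here are illustrative, not syzkaller's actual API:

```go
package main

import "fmt"

// Agent is one step in a flow; Model, if set, overrides the flow default.
type Agent struct {
	Name  string
	Model string // per-agent model; empty means "use the flow default"
}

// Flow groups agents and carries the fallback model.
type Flow struct {
	DefaultModel string
	Agents       []Agent
}

// ModelFor returns the model an agent should use: its own override
// if set, otherwise the flow-wide default.
func (f *Flow) ModelFor(a Agent) string {
	if a.Model != "" {
		return a.Model
	}
	return f.DefaultModel
}

func main() {
	flow := Flow{
		DefaultModel: "small-model",
		Agents: []Agent{
			// Complex patch-generation step gets the most elaborate model.
			{Name: "patch-writer", Model: "large-model"},
			// Simpler step falls back to the cheaper flow default.
			{Name: "log-summarizer"},
		},
	}
	for _, a := range flow.Agents {
		fmt.Printf("%s -> %s\n", a.Name, flow.ModelFor(a))
	}
}
```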
We may want to use a weaker model for some workflows.
Allow using different models for different workflows.
Add a race:harmful/benign label.
Have confirmed AI jobs set it automatically.
Add a workflow that can be used for moderation of UAF bugs (consistent/actionable reports).
Such UAF bugs can be upstreamed automatically, even if they happened only once
and don't have a reproducer.
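The decision described above can be sketched as follows. This is a hypothetical illustration under stated assumptions: the `Report` fields and the stricter default policy for non-UAF bugs are invented for the example, only the relaxed UAF rule comes from the text.

```go
package main

import "fmt"

// Report is an illustrative stand-in for a moderated crash report.
type Report struct {
	Type       string // e.g. "KASAN: use-after-free"
	Consistent bool   // LLM judged the report internally consistent
	Actionable bool   // LLM judged the report actionable for a developer
	Crashes    int
	HasRepro   bool
}

// ShouldUpstream applies the relaxed rule for UAF bugs: a consistent and
// actionable report is enough, with no reproducer and even a single crash.
func ShouldUpstream(r Report) bool {
	if r.Type != "KASAN: use-after-free" {
		// Other bug types keep a stricter policy (sketched here).
		return r.HasRepro && r.Crashes > 1
	}
	return r.Consistent && r.Actionable
}

func main() {
	uaf := Report{
		Type:       "KASAN: use-after-free",
		Consistent: true,
		Actionable: true,
		Crashes:    1, // happened only once, no reproducer
	}
	fmt.Println("upstream:", ShouldUpstream(uaf))
}
```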
Rephrase the prompt to be only about KCSAN;
currently it has some leftovers from the more generic assessment prompt
that covered KASAN bugs as well (actionability).
Also add a Confident bool output.
We may want to act on both benign and non-benign verdicts,
so we need to know when the LLM wasn't actually sure either way.
This should also be useful for manual verification/statistics.
If the LLM is not confident and can admit that, it's much better
than giving a wrong answer. But we will likely want to track
the percentage of non-confident answers.