| Commit message | Author | Age | Files | Lines |
|
There is no point in using Provide more than once,
or anywhere other than as the first action of a flow.
So it's not really an action, but more of a flow property.
Add a Flow.Consts field to handle this case better.
Also provide slightly less verbose syntax by using a map
instead of a struct, and add tests.
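The shape of this change can be sketched as follows. The `Flow` and `Consts` definitions below are illustrative stand-ins, not the real syzkaller types: the point is that constant inputs become a declarative map on the flow rather than a special first action.

```go
package main

import "fmt"

// Hypothetical sketch: instead of a Provide action that is only legal
// as the first action of a flow, the flow declares its constant inputs
// up front in a map, which is also less verbose than a struct.
type Flow struct {
	Name   string
	Consts map[string]any // constant inputs available to every action
}

// constVal looks up a declared constant by name.
func (f *Flow) constVal(name string) (any, bool) {
	v, ok := f.Consts[name]
	return v, ok
}

func main() {
	f := &Flow{
		Name:   "build",
		Consts: map[string]any{"arch": "amd64", "jobs": 8},
	}
	v, _ := f.constVal("arch")
	fmt.Println(v)
}
```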
|
|
This will be needed for an MCP server.
|
|
|
Sometimes LLM requests just hang dead for tens of minutes,
so abort them after 10 minutes and retry.
|
|
Factor out the initialization function to make functions shorter.
|
|
|
|
|
Handle LLM tool input token overflow by removing the last tool reply
and replacing it with an order to answer right now.
I've seen an LLM go too deep into research and in the end
just overflow the input tokens. It could have provided
at least some answer instead.
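A minimal sketch of that recovery step, with a hypothetical `Message` type mirroring typical chat APIs (the real conversation types are not shown in this log):

```go
package main

import "fmt"

// Message is an illustrative chat-history entry.
type Message struct {
	Role    string
	Content string
}

// onInputOverflow drops the last tool reply (the one that blew the input
// budget) and replaces it with an instruction to answer immediately.
func onInputOverflow(history []Message) []Message {
	if n := len(history); n > 0 && history[n-1].Role == "tool" {
		history = history[:n-1]
	}
	return append(history, Message{
		Role:    "user",
		Content: "The previous tool output was too large. Answer now with what you already know.",
	})
}

func main() {
	h := []Message{{"user", "find the bug"}, {"tool", "huge output..."}}
	h = onInputOverflow(h)
	fmt.Println(len(h), h[len(h)-1].Role)
}
```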
|
|
|
|
If LLMAgent.Temperature is assigned an untyped float constant (0.5),
it will be typed as float64 rather than float32, so recast such values.
Cap Temperature at the model's supported MaxTemperature.
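Both points can be demonstrated in a few lines. `capTemperature` is an illustrative helper, not the real syzkaller function; the typing behavior itself is standard Go (untyped float constants default to float64):

```go
package main

import "fmt"

// capTemperature limits a temperature to the model's supported maximum.
func capTemperature(t, max float32) float32 {
	if t > max {
		return max
	}
	return t
}

func main() {
	// An untyped float constant defaults to float64 when nothing forces
	// a conversion, e.g. when it ends up in an interface value.
	var v any = 0.5
	fmt.Printf("%T\n", v) // float64
	// Recasting yields the intended float32.
	var w any = float32(0.5)
	fmt.Printf("%T\n", w) // float32
	fmt.Println(capTemperature(1.5, 1.0))
}
```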
|
|
|
|
Using cached replies is faster, cheaper, and more reliable.
Especially handy during development, when the same workflows
are retried lots of times with some changes.
|
Use object caching instead of file caching.
This removes the need for explicit file writing/reading
and reduces the amount of error handling. Also, any future
changes should be easier to make.
Add support for flow temp dirs to implement this
(repro needs a temp manager workdir).
Previously repro abused the cache dir for that.
|
For things like a kernel build, we want to cache an actual file system dir,
but in some cases we want to cache an in-memory object.
Using the file system dir interface is inconvenient in these cases.
Add a helper that allows caching an object directly (via JSON serialization).
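The helper can be sketched as a thin JSON layer over a byte-level store. `ObjectCache` and the in-memory `store` map are assumptions; the real backend (and its keying) is not shown in this log:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// ObjectCache caches arbitrary values via JSON serialization, so callers
// need no explicit file writing/reading.
type ObjectCache struct {
	store map[string][]byte // stand-in for the real cache backend
}

func NewObjectCache() *ObjectCache {
	return &ObjectCache{store: make(map[string][]byte)}
}

// Put serializes obj and stores it under key.
func (c *ObjectCache) Put(key string, obj any) error {
	data, err := json.Marshal(obj)
	if err != nil {
		return err
	}
	c.store[key] = data
	return nil
}

// Get unmarshals the cached value into obj and reports whether it was present.
func (c *ObjectCache) Get(key string, obj any) (bool, error) {
	data, ok := c.store[key]
	if !ok {
		return false, nil
	}
	return true, json.Unmarshal(data, obj)
}

func main() {
	type Result struct{ Answer string }
	c := NewObjectCache()
	if err := c.Put("k", Result{Answer: "42"}); err != nil {
		panic(err)
	}
	var r Result
	hit, _ := c.Get("k", &r)
	fmt.Println(hit, r.Answer)
}
```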
|
Detect model quota violations (assumed to be RPD).
Make syz-agent not request jobs that use the model
until the next quota reset time.
Fixes #6573
|
Having an LLM model per agent is even more flexible than per flow.
We can run some more complex tasks during patch generation with the
most elaborate model, but also some simpler ones with less elaborate models.
|
Flow errors denote failure of the flow itself,
rather than an infrastructure error. A flow error means an expected
condition in the flow when it cannot continue and cannot produce
the expected outputs. For example, if we are doing something with the
kernel, but the kernel build fails. Flow errors shouldn't be flagged
as infrastructure errors.
Fixes #6610
|
We may want to use a weaker model for some workflows.
Allow using different models for different workflows.