Commit log (newest first)
There is no point in using Provide more than once,
or anywhere other than as the first action of a flow.
So it's not really an action, but rather a flow property.
Add a Flow.Consts field to handle this case better.
Also make the syntax slightly less verbose by using a map
instead of a struct, and add tests.
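A minimal sketch of how the resulting declarations could look; the Flow and Action definitions and field names below are assumptions for illustration, not the package's actual API:

```go
package flow

// Flow describes a workflow. Consts replaces a Provide action that could
// only meaningfully appear first; a map keeps the call-site syntax terse.
type Flow struct {
	Name    string
	Consts  map[string]any // constant inputs, available to every action
	Actions []Action
}

// Action is a single step of a flow.
type Action interface {
	Run(consts map[string]any) error
}
```

A flow can then carry its constants inline, e.g. Flow{Consts: map[string]any{"target": "linux"}, ...}, instead of starting with a Provide action.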
Currently we crash with a nil dereference if the LLM specifies an explicit
'nil' for an optional (pointer) argument. Handle such cases properly.
Fixes #6811
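The crash pattern itself is plain Go JSON decoding, so a self-contained illustration is easy; the Args type and the default value below are invented for the example, not code from this change:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Args models tool-call arguments where Limit is optional.
type Args struct {
	Query string `json:"query"`
	Limit *int   `json:"limit"` // optional: the model may pass an explicit null
}

func limit(a *Args) int {
	// An explicit JSON null decodes to a nil pointer; dereferencing it
	// unconditionally is exactly the crash described above.
	if a.Limit == nil {
		return 10 // fall back to a default
	}
	return *a.Limit
}

func main() {
	var a Args
	if err := json.Unmarshal([]byte(`{"query":"q","limit":null}`), &a); err != nil {
		panic(err)
	}
	fmt.Println(limit(&a)) // 10, instead of a nil-deref crash
}
```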
In some cases there may be no final text reply, only structured outputs
(e.g. a single bool). Don't require a final reply if structured outputs are specified.
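A sketch of what the relaxed check might look like; the function and parameter names are assumptions, not the actual code:

```go
package flow

// turnComplete: a turn is complete if the model produced a final text
// reply, or if the flow asked only for structured outputs (e.g. a single
// bool verdict) and those were filled in.
func turnComplete(finalText string, wantStructured bool, structured map[string]any) bool {
	if finalText != "" {
		return true
	}
	return wantStructured && structured != nil
}
```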
If the LLM calls the set-results tool to set structured results,
and then calls another, unrelated tool, we currently lose the structured
results (they get overwritten with nil). Don't do that; keep the structured results.
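The fix amounts to not clobbering earlier results; a hypothetical sketch:

```go
package flow

// updateResults records structured results from a set-results tool call.
// A later, unrelated tool call yields a nil update here; keep what we
// already have instead of overwriting it.
func updateResults(current, update map[string]any) map[string]any {
	if update == nil {
		return current // preserve earlier set-results output
	}
	return update
}
```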
Introduce an abstract "task type" for LLM agents instead of specifying
the temperature explicitly for each agent. This has 2 advantages:
- we don't hardcode the temperature everywhere, and can change it centrally
as our understanding of the right value evolves
- we can control other LLM parameters (top-p/top-k) via the task type as well
Update #6576
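A sketch of the idea, with made-up task-type names and parameter values; the point is that the mapping lives in one place and can be tuned centrally:

```go
package llm

// TaskType abstracts sampling parameters so individual agents don't
// hardcode them. Names and values below are illustrative assumptions.
type TaskType int

const (
	TaskDeterministic TaskType = iota // e.g. structured extraction
	TaskAnalysis                      // e.g. bug triage
	TaskCreative                      // e.g. hypothesis generation
)

// Params maps a task type to concrete sampling parameters in one place.
func (t TaskType) Params() (temperature, topP float32, topK int32) {
	switch t {
	case TaskDeterministic:
		return 0.0, 0.95, 40
	case TaskCreative:
		return 0.9, 0.95, 40
	default:
		return 0.3, 0.95, 40
	}
}
```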
It's very inconvenient to hardcode exact LLM replies in this test,
because it's hard to understand when exactly the agent will be asked to summarize.
It's easy to introduce a bug into the test and provide the summary reply when it wasn't asked for.
Instead, support providing a full generateContent callback,
and just model what an LLM would do -- provide the summary only when asked to.
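A rough sketch of such a callback-based fake; all type and function names here are invented for illustration:

```go
package flow_test

import "strings"

// Request/Response stand in for the real LLM types.
type Request struct{ Messages []string }
type Response struct{ Text string }

// generateContentFunc is the full callback a test can now install
// instead of a fixed list of canned replies.
type generateContentFunc func(req *Request) (*Response, error)

// fakeLLM models what a real model would do: it returns a summary only
// when the last message actually asks for one, so the test no longer
// depends on knowing exactly when summarization is triggered.
var fakeLLM generateContentFunc = func(req *Request) (*Response, error) {
	last := req.Messages[len(req.Messages)-1]
	if strings.Contains(strings.ToLower(last), "summarize") {
		return &Response{Text: "summary of the conversation so far"}, nil
	}
	return &Response{Text: "ordinary reply"}, nil
}
```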
Don't memorize repeated request configs.

This adds a flow feature (and creates a new flow using it) called
"sliding window summary".
It works by asking the AI to always summarize the latest knowledge,
and then tossing old messages once they fall outside the sliding
context window.
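A minimal sketch of the windowing step, assuming messages are plain strings and the rolling summary is prepended to the retained tail:

```go
package flow

// slideWindow keeps a rolling summary plus only the most recent messages;
// anything older is assumed to be covered by the summary and is dropped.
func slideWindow(summary string, msgs []string, keep int) []string {
	if len(msgs) > keep {
		msgs = msgs[len(msgs)-keep:] // toss messages outside the window
	}
	return append([]string{summary}, msgs...)
}
```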
Add DoWhile.MaxIterations and make it mandatory.
I think it's useful to make the workflow implementer think
explicitly about a reasonable cap on the number of iterations.
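A hypothetical sketch of the mandatory cap; only the DoWhile and MaxIterations names come from the commit text, the rest is guesswork:

```go
package flow

import "fmt"

type DoWhile struct {
	Body          func() error
	Cond          func() bool
	MaxIterations int // mandatory: an explicit, conscious iteration cap
}

func (d *DoWhile) Run() error {
	if d.MaxIterations <= 0 {
		return fmt.Errorf("DoWhile.MaxIterations must be set")
	}
	for i := 0; i < d.MaxIterations; i++ {
		if err := d.Body(); err != nil {
			return err
		}
		if !d.Cond() {
			return nil // condition became false: normal exit
		}
	}
	return fmt.Errorf("DoWhile exceeded %v iterations", d.MaxIterations)
}
```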
Handle LLM tool input token overflow by removing the last tool reply
and replacing it with an instruction to answer right now.
I've seen an LLM tool go too deep into research and, in the end,
simply overflow the input tokens. It could have provided at least some answer instead.
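A sketch of the recovery logic under assumed types; the wording of the injected instruction is invented:

```go
package flow

type Message struct {
	Role string
	Text string
}

// capInput: if the next request would overflow the input token limit,
// drop the last (typically huge) tool reply and replace it with an
// instruction to answer immediately with what the model already has.
func capInput(msgs []Message, tokens, limit int) []Message {
	if tokens <= limit || len(msgs) == 0 {
		return msgs
	}
	msgs = msgs[:len(msgs)-1]
	return append(msgs, Message{
		Role: "user",
		Text: "The last tool output was dropped because the context limit" +
			" was reached. Answer now with the information you already have.",
	})
}
```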
If LLMAgent.Temperature is assigned from an untyped float constant (0.5),
the constant defaults to float64 rather than float32, so recast such values.
Also cap Temperature at the model's supported MaxTemperature.
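The underlying Go rule is easy to demonstrate; the MaxTemperature value below is an assumed per-model cap, not a real constant from the code:

```go
package main

import "fmt"

func main() {
	const temp = 0.5             // untyped float constant
	inferred := temp             // default type: float64, not float32
	fmt.Printf("%T\n", inferred) // float64

	// Recast to the field's type and cap at the model's limit.
	const maxTemperature float32 = 2.0 // assumed per-model cap
	t := float32(temp)
	if t > maxTemperature {
		t = maxTemperature
	}
	fmt.Println(t) // 0.5
}
```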
DoWhile represents a "do { body } while (cond)" loop.
See the added test for an example.
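Reusing the hypothetical DoWhile shape sketched earlier in this log, a usage example in the spirit of the added test could look like:

```go
package flow

import "fmt"

// The body always runs at least once; the loop continues while the
// condition holds, just like "do { body } while (cond)".
func ExampleDoWhile() {
	attempts := 0
	loop := &DoWhile{
		Body:          func() error { attempts++; return nil },
		Cond:          func() bool { return attempts < 3 },
		MaxIterations: 10,
	}
	if err := loop.Run(); err != nil {
		panic(err)
	}
	fmt.Println(attempts)
	// Output: 3
}
```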
LLMTool acts like a tool for the parent LLM, but is itself implemented as an LLM agent.
It can have its own tools, different from the parent LLM agent's.
It can do complex multi-step research and provide a concise answer to the parent LLM
without polluting the parent's context window.
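A sketch of how such nesting could be typed; the Tool interface and LLMTool fields are assumptions, not the actual definitions:

```go
package llm

// Tool is what the parent agent sees; the child agent hides behind it.
type Tool interface {
	Name() string
	Call(query string) (string, error)
}

// LLMTool is a tool that is itself an LLM agent with its own tool set.
// Only its final, concise answer reaches the parent, so the child's
// multi-step research never enters the parent's context window.
type LLMTool struct {
	AgentName string
	Tools     []Tool // the child's own tools, independent of the parent's
	// runAgent runs the full child agent loop (hypothetical signature).
	runAgent func(query string, tools []Tool) (string, error)
}

func (t *LLMTool) Name() string { return t.AgentName }

func (t *LLMTool) Call(query string) (string, error) {
	return t.runAgent(query, t.Tools)
}
```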
Add a helper function that executes test workflows,
compares the results (trajectory, LLM requests) against golden files,
and, if requested, updates these golden files.
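This is the common Go golden-file pattern; a generic sketch (the helper name and -update flag are conventions, not necessarily this repo's exact API):

```go
package flow_test

import (
	"flag"
	"os"
	"path/filepath"
	"testing"
)

var update = flag.Bool("update", false, "update golden files instead of comparing")

// runGolden compares got against the golden file, or rewrites the golden
// file when the test is run with -update.
func runGolden(t *testing.T, name string, got []byte) {
	t.Helper()
	path := filepath.Join("testdata", name+".golden")
	if *update {
		if err := os.WriteFile(path, got, 0644); err != nil {
			t.Fatal(err)
		}
		return
	}
	want, err := os.ReadFile(path)
	if err != nil {
		t.Fatal(err)
	}
	if string(got) != string(want) {
		t.Fatalf("mismatch with %v:\ngot:\n%s\nwant:\n%s", path, got, want)
	}
}
```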