Introduce an abstract "task type" for LLM agents instead of specifying
the temperature explicitly for each agent. This has two advantages:
- we don't hardcode it everywhere, and can change it centrally
  as our understanding of the right temperature evolves
- we can control other LLM parameters (topn/topk) via the task type as well
Update #6576
It's very inconvenient to hardcode exact LLM replies in this test,
because it's hard to understand when exactly it will be asked to summarize.
It's easy to introduce a bug in the test and provide a summary reply when one wasn't asked for.
Instead, support providing a full generateContent callback,
and just model what an LLM would do: provide a summary only when asked to.
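The callback approach could be sketched as follows. This is a minimal illustration, not the project's real test harness: `fake_generate_content` and `run_agent` are hypothetical names, and the real callback would inspect a structured request rather than a string.

```python
# A fake generateContent callback for tests: instead of a fixed script of
# canned replies, it inspects each request and behaves like a minimal
# model of an LLM, returning a summary only when one was actually
# requested. This makes the test robust to changes in when/how often
# the code under test asks for a summary.

def fake_generate_content(request: str) -> str:
    if "summarize" in request.lower():
        return "SUMMARY: all previous findings condensed."
    return "OK: continuing with the task."

# Stand-in for the code under test: it forwards each request to the
# injected callback rather than to a real LLM.
def run_agent(requests, generate_content):
    return [generate_content(r) for r in requests]

replies = run_agent(
    ["analyze the crash log", "Summarize what you know so far"],
    fake_generate_content,
)
```

The test then asserts on the agent's behavior, not on a brittle fixed sequence of replies.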
Don't memoize repeated request configs.
This adds a flow feature (and creates a new flow using it) called
"sliding window summary".
It works by asking the AI to always summarize the latest knowledge;
old messages are then dropped once they fall outside the sliding
context window.
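The mechanism above can be sketched roughly as below. Everything here is an illustrative assumption (`WINDOW`, `summarize`, `add_message` are invented names, and the real flow talks to an LLM rather than concatenating strings); it only shows the shape of the technique: fold messages that no longer fit into a fresh summary, keep the most recent messages verbatim.

```python
# Sketch of a "sliding window summary" flow.

WINDOW = 4  # assumed maximum number of messages kept in context

def summarize(messages):
    # Stand-in for an LLM call that condenses the given messages.
    return "summary(" + "; ".join(messages) + ")"

def add_message(history, msg):
    history.append(msg)
    if len(history) > WINDOW:
        # Everything that falls outside the window (including any earlier
        # summary) is folded into a new summary; the most recent messages
        # stay verbatim.
        old = history[:-(WINDOW - 1)]
        recent = history[-(WINDOW - 1):]
        history[:] = [summarize(old)] + recent
    return history
```

Because the old summary itself gets folded into each new summary, the model is in effect always re-summarizing the latest knowledge, as the commit message describes.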