path: root/pkg/aflow/testdata/TestSummaryWindow.llm.json
Commit log (newest first):
* pkg/aflow: abstract away LLM temperature
  Author: Dmitry Vyukov, 2026-02-02 (1 file changed, -1/+1)

  Introduce an abstract "task type" for LLM agents instead of specifying
  the temperature explicitly for each agent. This has two advantages:
  - we don't hardcode the temperature everywhere, and can change it
    centrally as our understanding of the right temperature evolves;
  - we can control other LLM parameters (topn/topk) using the task type
    as well.

  Update #6576
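  The mapping from task type to sampling parameters could look roughly
  like the sketch below. All names (TaskType, GenConfig, configFor) and
  the concrete parameter values are hypothetical illustrations, not the
  actual aflow API:

  ```go
  package main

  import "fmt"

  // TaskType abstracts per-agent LLM sampling behavior so that agents
  // never hardcode a temperature themselves (hypothetical names).
  type TaskType int

  const (
  	TaskDeterministic TaskType = iota // e.g. mechanical transforms
  	TaskCreative                      // e.g. free-form summarization
  )

  // GenConfig holds the generation parameters derived from a task type.
  type GenConfig struct {
  	Temperature float64
  	TopK        int
  }

  // configFor maps a task type to concrete parameters in one central
  // place, so the values can be tuned without touching every agent.
  func configFor(t TaskType) GenConfig {
  	switch t {
  	case TaskDeterministic:
  		return GenConfig{Temperature: 0.0, TopK: 1}
  	case TaskCreative:
  		return GenConfig{Temperature: 0.8, TopK: 40}
  	}
  	panic("unknown task type")
  }

  func main() {
  	fmt.Println(configFor(TaskCreative).Temperature)
  }
  ```

  An agent then declares only its task type; the central table decides
  the rest, which is what makes later tuning of topn/topk possible.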
* pkg/aflow: fix role in test replies
  Author: Dmitry Vyukov, 2026-01-30 (1 file changed, -9/+9)
* pkg/aflow: refactor the LLM summarization test
  Author: Dmitry Vyukov, 2026-01-30 (1 file changed, -6/+108)

  It's very inconvenient to hardcode exact LLM replies in this test,
  because it's hard to understand when exactly the model will be asked
  to summarize. It's easy to introduce a bug in the test and provide a
  summary reply when one wasn't asked for. Instead, support providing a
  full generateContent callback, and just model what an LLM would do:
  provide a summary only when it is asked for.
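  A minimal sketch of that callback style, assuming simplified
  stand-ins (Request, Reply, generateContent) rather than the real test
  harness types:

  ```go
  package main

  import (
  	"fmt"
  	"strings"
  )

  // Request and Reply are simplified stand-ins for the harness types.
  type Request struct{ Prompt string }
  type Reply struct{ Text string }

  // generateContent is the fake-LLM callback: instead of a fixed list
  // of canned replies, it inspects each request and behaves like a
  // model would, returning a summary only when one is actually asked for.
  func generateContent(req Request) Reply {
  	if strings.Contains(req.Prompt, "summarize") {
  		return Reply{Text: "summary of the conversation"}
  	}
  	return Reply{Text: "regular reply"}
  }

  func main() {
  	fmt.Println(generateContent(Request{Prompt: "please summarize the above"}).Text)
  	fmt.Println(generateContent(Request{Prompt: "what is 2+2?"}).Text)
  }
  ```

  The test no longer depends on knowing in advance at which turn the
  summarization request arrives, which removes the class of bugs the
  commit message describes.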
* pkg/aflow: reduce size of golden test files
  Author: Dmitry Vyukov, 2026-01-30 (1 file changed, -205/+0)

  Don't memorize repeated request configs.
* pkg/aflow: adding sliding window summary feature
  Author: Yulong Zhang, 2026-01-30 (1 file changed, -0/+471)

  This adds a flow feature called "sliding window summary" (and creates
  a new flow using it). It works by asking the model to always
  summarize the latest knowledge, and then dropping old messages once
  they fall outside the sliding context window.
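  The windowing step described above can be sketched as follows; the
  function name, the string-based messages, and the injected summarize
  callback (a stand-in for the LLM call) are all hypothetical, not the
  actual aflow implementation:

  ```go
  package main

  import "fmt"

  // slidingWindowSummary keeps at most window recent messages; older
  // messages are dropped after being folded into a summary produced by
  // the summarize callback (in aflow this would be an LLM call).
  func slidingWindowSummary(msgs []string, window int, summarize func([]string) string) (summary string, kept []string) {
  	if len(msgs) <= window {
  		return "", msgs // everything still fits; no summary needed
  	}
  	old := msgs[:len(msgs)-window]
  	return summarize(old), msgs[len(msgs)-window:]
  }

  func main() {
  	summarize := func(old []string) string {
  		return fmt.Sprintf("summary of %d messages", len(old))
  	}
  	sum, kept := slidingWindowSummary([]string{"m1", "m2", "m3", "m4", "m5"}, 2, summarize)
  	fmt.Println(sum)
  	fmt.Println(kept)
  }
  ```

  The key design point is that the summary replaces the evicted prefix,
  so the context sent to the model stays bounded while earlier
  knowledge is still represented.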