Introduce an abstract "task type" for LLM agents instead of specifying
the temperature explicitly for each agent. This has two advantages:
- we don't hardcode it everywhere, and can change it centrally
as our understanding of the right temperature evolves
- we can control other LLM parameters (top-p/top-k) via the task type as well
Update #6576
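The idea above is a single central mapping from task type to sampling parameters. A minimal sketch in Python, where the task-type names and parameter values are purely illustrative assumptions, not the actual ones used:

```python
from dataclasses import dataclass
from enum import Enum


class TaskType(Enum):
    # Hypothetical task types; the real set lives in the agent code.
    DETERMINISTIC = "deterministic"  # e.g. mechanical code transformation
    CREATIVE = "creative"            # e.g. hypothesis generation


@dataclass(frozen=True)
class LLMParams:
    temperature: float
    top_p: float


# Central mapping: tune once here instead of hardcoding in every agent.
PARAMS_BY_TASK = {
    TaskType.DETERMINISTIC: LLMParams(temperature=0.0, top_p=1.0),
    TaskType.CREATIVE: LLMParams(temperature=0.8, top_p=0.95),
}


def params_for(task: TaskType) -> LLMParams:
    """Resolve all sampling parameters from the abstract task type."""
    return PARAMS_BY_TASK[task]
```

Agents then pass only a `TaskType`, so adding or retuning a parameter touches one table rather than every call site.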
Don't memoize repeated request configs.
Handle LLM tool input token overflow by removing the last tool reply
and replacing it with an instruction to answer right now.
I've seen an LLM tool go too deep into research and in the end
just overflow the input tokens. It could have provided at least some answer instead.
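The recovery step can be sketched as follows; the message shape (`role`/`content` dicts) and the exact "answer now" wording are assumptions for illustration, not the actual implementation:

```python
def handle_token_overflow(messages, answer_now="Answer now with what you have."):
    """On input-token overflow, drop the most recent tool reply and
    replace it with an instruction to answer immediately.

    `messages` is a list of {"role": ..., "content": ...} dicts
    (a hypothetical conversation format)."""
    # Scan from the end for the last tool reply.
    for i in range(len(messages) - 1, -1, -1):
        if messages[i]["role"] == "tool":
            # Everything before the oversized reply stays; the reply itself
            # is swapped for an order to produce a final answer.
            return messages[:i] + [{"role": "user", "content": answer_now}]
    return messages  # no tool reply to drop; nothing to trim here
```

This trades the last (oversized) tool result for a partial but usable answer instead of a hard failure.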
LLMTool acts as a tool for the parent LLM, but is itself implemented as an LLM agent.
It can have its own tools, different from those of the parent LLM agent.
It can do complex multi-step research and provide a concise answer to the parent LLM
without polluting its context window.
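A rough sketch of the pattern, assuming a hypothetical sub-agent interface (the real API surely differs):

```python
class ResearchAgent:
    """Stand-in sub-agent; a real one would run its own LLM loop,
    with its own tools, inside its own context window."""

    def run(self, query: str) -> str:
        # Imagine many intermediate tool calls happening here; only the
        # final condensed summary escapes this agent's context.
        return f"summary of research on: {query}"


class LLMTool:
    """Exposes a nested LLM agent as an ordinary tool to a parent agent.

    The parent sees a single tool call and a concise final answer; all the
    multi-step research stays in the sub-agent's context window."""

    def __init__(self, name: str, sub_agent):
        self.name = name
        self.sub_agent = sub_agent

    def call(self, query: str) -> str:
        return self.sub_agent.run(query)
```

The design point is isolation: the sub-agent's intermediate steps never enter the parent's context, only the distilled result does.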