How can artifacts/files be passed as on-demand context, based on the LLM's decision to use the artifact, at the agent level? #4430
Response from ADK Answering Agent (experimental, answer may be inaccurate)

TL;DR: The recommended approach is to use the Artifact Service to store your files and load them on demand. A sample agent called "Sales Assistant Agent with Context Offloading" demonstrates this exact pattern.

Hello! That's a great question. You are describing a pattern similar to Retrieval-Augmented Generation (RAG), where information is retrieved and provided to the LLM on demand, and ADK has good support for it.

The recommended approach is to use the Artifact Service [1], which is designed to manage large data blobs such as your process files. Instead of loading every file's contents into the context, you store the files as artifacts and load each one only when it is needed.

A good example for your use case is the "Sales Assistant Agent with Context Offloading" sample agent [2]: it stores large documents as artifacts and pulls their contents into the model's context only when the agent decides they are needed.
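To make the "decide from file names" part concrete, here is a minimal sketch of the prompt side of that pattern. Everything below is illustrative, not ADK's actual API: only the artifact names are placed in the agent instruction, and the (hypothetical) load_process_file tool is what fetches a file's contents when the model asks for it.

```python
def build_instruction(filenames: list[str]) -> str:
    """Build an agent instruction that lists available artifact names only.

    The file contents stay out of the prompt; the model is told to call a
    tool (here a hypothetical `load_process_file`) when it needs one.
    """
    listing = "\n".join(f"- {name}" for name in filenames)
    return (
        "You are a process assistant. These process files are available "
        "as artifacts. Do not assume their contents; call the "
        "load_process_file tool with a filename when you need one:\n"
        + listing
    )


print(build_instruction(["onboarding.md", "refund_process.md"]))
```

This way the per-turn context cost is a short file listing rather than the full text of every process file.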
This approach prevents the context from being overloaded with information the model does not currently need. Within a custom tool, the core operations are the artifact load and save methods exposed on the tool context (`load_artifact` / `save_artifact`). For more advanced RAG capabilities, ADK also offers built-in retrieval tools.
I hope this helps you build your agent!

[1] https://google.github.io/adk-docs/artifacts/
I understand the Artifact Service helps to load/save artifacts at the user/session scope.
But my use case is this: I have several different process files, and based on the file names the LLM should use a file's contents only on demand, so the context is not overloaded every time I use that agent.
What's the recommended approach to building this use case?