Thanks for your great work! I'm wondering how to use Mem-alpha for inference in an agent workload.
I noticed that during evaluation, the Mem-alpha model first visits the whole knowledge store and generates a memory over all the information. Then a frozen RAG pipeline and LLM are used for the downstream tasks.
In my understanding, the Mem-alpha model acts like a stand-alone summarization model. If so, does that mean that in agent inference cases we would use a standard LLM to reason and take actions, and then use Mem-alpha to summarize/compress when necessary?
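To make the question concrete, here is a minimal sketch of the pattern I have in mind. All names (`compress_with_memalpha`, `AgentWithMemory`, `buffer_limit`) are illustrative placeholders I made up, not the actual Mem-alpha API: the agent accumulates raw observations, and once the buffer grows past a threshold, a summarizer (where Mem-alpha would sit) compresses it into long-term memory that the frozen LLM later reads as context.

```python
def compress_with_memalpha(observations):
    """Placeholder for the Mem-alpha summarizer: compress a batch of
    raw observations into one compact memory entry. A real call would
    invoke the trained Mem-alpha model instead of this string join."""
    return " | ".join(observations)


class AgentWithMemory:
    """Hypothetical agent loop: a standard LLM reasons over
    (compressed memory + recent buffer); the summarizer is only
    invoked when the raw buffer exceeds buffer_limit."""

    def __init__(self, buffer_limit=4):
        self.memory = []   # compressed long-term memory entries
        self.buffer = []   # raw recent observations
        self.buffer_limit = buffer_limit

    def observe(self, observation):
        self.buffer.append(observation)
        # When the raw buffer grows too large, hand it to the
        # summarizer and keep only the compressed entry.
        if len(self.buffer) >= self.buffer_limit:
            self.memory.append(compress_with_memalpha(self.buffer))
            self.buffer = []

    def context_for_llm(self):
        # The frozen reasoning LLM would receive this as its context.
        return {"memory": self.memory, "recent": self.buffer}


agent = AgentWithMemory(buffer_limit=2)
for step in ["saw door", "opened door", "found key", "picked up key"]:
    agent.observe(step)
```

Is this roughly the intended division of labor, or does Mem-alpha also participate in the action-selection loop itself?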