Agent Memory
The mechanisms (short-term, episodic, and semantic) by which an agent stores and recalls information from past interactions across sessions.
Definition
Agent memory refers to the systems and techniques that allow AI agents to store, retrieve, and leverage information from past interactions, enabling continuity and personalization across sessions. Without memory, each agent invocation starts from scratch, losing valuable context about user preferences, prior decisions, and learned patterns.
Key characteristics of agent memory include:
- Short-Term (Working) Memory: The immediate conversation context held within the model's context window. This is ephemeral and lost when the session ends, but essential for coherent multi-turn interactions.
- Long-Term (Episodic) Memory: Persistent storage of past conversations, decisions, and outcomes, typically in a vector database. Agents retrieve relevant episodes to inform current behavior, similar to how humans recall past experiences.
- Semantic Memory: Structured knowledge extracted from interactions, such as user preferences, project conventions, or domain facts. This distilled information is more efficient to retrieve than raw conversation logs.
- Memory Management: Effective agents implement strategies for summarization, importance scoring, and garbage collection to prevent memory stores from growing unbounded while retaining the most useful information.
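The interplay of these components can be sketched in a few lines of Python. This is a minimal, illustrative toy, not a production design: the `AgentMemory` class, its method names, and the word-overlap scoring are all assumptions made for the example. A real system would store embeddings in a vector database for episodic recall; here simple keyword overlap stands in for similarity search.

```python
from dataclasses import dataclass, field
import time

@dataclass
class Episode:
    text: str
    importance: float  # higher = more worth retaining
    timestamp: float = field(default_factory=time.time)

class AgentMemory:
    """Toy memory: an episodic log plus a semantic key-value fact store."""

    def __init__(self, capacity: int = 100):
        self.capacity = capacity
        self.episodes: list[Episode] = []  # episodic memory
        self.facts: dict[str, str] = {}    # semantic memory: distilled facts

    def remember(self, text: str, importance: float = 0.5) -> None:
        self.episodes.append(Episode(text, importance))
        if len(self.episodes) > self.capacity:
            # Garbage collection: evict the least important episode
            # so the store stays bounded.
            self.episodes.remove(min(self.episodes, key=lambda e: e.importance))

    def recall(self, query: str, k: int = 3) -> list[str]:
        # Stand-in for vector similarity search: rank episodes by
        # how many query words they share.
        q = set(query.lower().split())
        scored = sorted(
            self.episodes,
            key=lambda e: len(q & set(e.text.lower().split())),
            reverse=True,
        )
        return [e.text for e in scored[:k]]

    def set_fact(self, key: str, value: str) -> None:
        # Semantic memory: cheaper to retrieve than raw conversation logs.
        self.facts[key] = value

mem = AgentMemory(capacity=3)
mem.remember("User asked to refactor the billing module", importance=0.9)
mem.remember("User prefers tabs over spaces", importance=0.7)
mem.remember("Small talk about the weather", importance=0.1)
mem.remember("User deploys with Docker on Fridays", importance=0.8)  # evicts small talk
mem.set_fact("preferred_language", "Python")
print(mem.recall("refactor billing"))
```

The eviction step in `remember` is the simplest possible memory-management policy (drop the lowest importance score); real agents often combine importance with recency and periodically summarize old episodes into semantic facts instead of discarding them outright.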
Agent memory is a key differentiator between simple chatbots and sophisticated assistants that improve over time.