

Agent Memory with Mastra
Agent memory is the difference between chatbots that forget everything and AI assistants that truly understand you. This workshop will teach you how to implement Mastra's newly improved memory system, which achieved 80% accuracy on the LongMemEval benchmark, outperforming other frameworks by 8 points.
We'll cover both types of memory in Mastra: working memory for tracking user preferences and characteristics, and semantic recall (RAG) for long-term conversation history. You'll learn the practical techniques we discovered while spending $8k and burning through 3.8 billion tokens to optimize these systems.
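As a preview of the configuration you'll build in the workshop, here's a minimal sketch of a Mastra Memory instance with both working memory and semantic recall enabled. The option names (workingMemory, semanticRecall, topK, messageRange, lastMessages) follow the Mastra docs, but exact names and any required storage, vector, or embedder settings vary by version, so treat this as illustrative rather than copy-paste ready.

```ts
import { Memory } from "@mastra/memory";

// Sketch of a Memory configuration; option names follow the Mastra docs
// but may differ slightly in your installed version.
const memory = new Memory({
  options: {
    // Keep the last N raw messages in the prompt.
    lastMessages: 10,
    // Working memory: a persistent, agent-updated profile of the user.
    workingMemory: {
      enabled: true,
      template: `# User Profile
- Name:
- Preferred language:
- Communication style:
- Ongoing projects:`,
    },
    // Semantic recall (RAG over past messages): retrieve the topK most
    // relevant older messages plus surrounding context.
    semanticRecall: {
      topK: 5,
      messageRange: 2,
    },
  },
});
```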
Build agents that remember user preferences across sessions, recall relevant context from months of conversation history, and handle temporal reasoning correctly. We'll walk through real implementation examples including how to configure memory templates, optimize retrieval settings, and format recalled information for better LLM understanding.
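To make that concrete, here's how a memory instance like the one above might be attached to an agent and scoped to a user so recall carries across sessions. The resourceId and threadId parameters reflect how Mastra scopes memory in its documented API; the model choice and identifiers are placeholders.

```ts
import { Agent } from "@mastra/core/agent";
import { openai } from "@ai-sdk/openai";

// Attach the memory instance from the previous sketch to an agent.
const agent = new Agent({
  name: "memory-agent",
  instructions: "You are a helpful assistant that remembers the user.",
  model: openai("gpt-4o"),
  memory,
});

// resourceId identifies the user (memory persists across their sessions);
// threadId identifies one conversation within that user's history.
const result = await agent.generate("Which editor did I say I prefer?", {
  resourceId: "user-123",
  threadId: "thread-456",
});

console.log(result.text);
```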
You'll learn why RAG is very much alive for agent memory (despite claims otherwise), when to use working memory versus semantic recall, and how proper formatting can dramatically improve accuracy. We'll also cover the performance considerations and cost optimizations that matter in production.
This event is open to all devs and aspiring AI engineers, regardless of background, so feel free to share the invite link. It's recommended that you have a code editor and Node.js v20+ installed before the session, and that you're comfortable with basic JavaScript and the command line. This isn't a talk; it's a live workshop where you'll walk away with agents that have state-of-the-art memory capabilities.