Building Better Agents: Evaluation Frameworks & Feedback Loops for Automated Improvement
Join us at the Google office in Chicago for a special evening focused on building and improving LLM-powered agents.
As developers push the boundaries of what agents can do, a key challenge emerges: how do you evaluate performance, identify failure modes, and enable agents to improve over time—autonomously? In this session, we’ll explore practical strategies for tracing agent behavior, running structured evaluations, optimizing prompts, and closing the loop with experimentation and monitoring.
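To give a flavor of the kind of feedback loop we'll discuss, here is a minimal sketch of a structured evaluation in Python: a small fixed suite of test cases, a pass-rate metric, and a stubbed agent call. Every name in it (run_agent, EvalCase, the sample cases) is a hypothetical placeholder rather than the API of any specific framework.

    # Minimal sketch of a structured evaluation loop for an LLM agent.
    # All names (run_agent, EvalCase) are illustrative placeholders,
    # not the API of any particular framework.

    from dataclasses import dataclass

    @dataclass
    class EvalCase:
        prompt: str    # task handed to the agent
        expected: str  # substring we expect in a correct answer

    def run_agent(prompt: str) -> str:
        """Stand-in for your agent call; swap in your framework or model API."""
        return "Paris is the capital of France."  # canned demo response

    def evaluate(cases: list[EvalCase]) -> float:
        """Run the suite and return the pass rate as a simple quality signal."""
        passed = sum(
            case.expected.lower() in run_agent(case.prompt).lower()
            for case in cases
        )
        return passed / len(cases)

    if __name__ == "__main__":
        suite = [
            EvalCase("What is the capital of France?", "Paris"),
            EvalCase("Name one city in Illinois.", "Chicago"),
        ]
        # Track this number over releases; a drop flags a regression to investigate.
        print(f"pass rate: {evaluate(suite):.0%}")

In practice, the same loop gets wired into tracing, experimentation, and monitoring so that prompt or tool changes are judged against the suite before they ship.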
Whether you're building with open-source frameworks or rolling your own, you’ll leave with actionable techniques and ideas to level up your agentic systems.
Come for the technical deep dive, stay for the food, drinks, and great conversations with fellow builders.