


AI Evals Mixer
Join us at the AI Evals Mixer in Bangalore on 21st May.
We’re bringing together product and engineering leaders building and scaling AI applications for an evening of hands-on learning and cross-domain insights on evaluating the quality and reliability of LLM applications.
🗓️ Date and time: 21st May, 6:30 pm - 9 pm
📍 Venue: Koramangala
✨ Learn and network over great food
What to expect?
🛠 Workshop (45 mins + Q&A)
This will be a hands-on workshop covering how AI teams can build reliable, high-quality products by placing evaluation at the core of their product development process. It will explore how top teams experiment and test before launch, monitor AI behavior and quality in the wild, and close the loop between data and decisions, driving faster iteration and greater confidence in what they ship.
Key questions it will explore:
How do teams measure and achieve 'good enough' AI output quality before release?
How can teams capture the right signals to understand and effectively debug AI behavior in production?
When does monitoring become meaningful, and what should trigger concern?
Where and how should human review be added for quality control and alignment?
How can data be curated to better align with real-world usage and expectations?
What makes an iteration loop truly effective, driven by evaluation rather than guesswork and vibes?
Speakers:
Rajaswa Patil, AI evals & DevTooling at Maxim, Postman, & Microsoft PROSE
Rachitt Shah, applied AI consultant & evals specialist
Expect practical tips, hard-won lessons, and patterns from successful AI practitioners and products at scale, plus many more of the questions teams face when shipping AI in the real world.
🎙 Expert talks
Hear from leading PMs and engineers across domains on how they’re approaching AI evaluation and navigating real-world quality challenges.
Notable speakers:
Adhiraj Somani, evals at Glean
Nehal Gajraj, code-focused evals at CodeRabbit
Arkajit Datta, multi-modal evals at Atomicwork
Rishabh Dahale, voice evals at Smallest.ai
💡 Why attend?
Learn how to implement evals from hands-on sessions and discover how leading teams are evaluating LLM applications.
Compare notes with peers solving similar AI and product challenges, and exchange shortcuts and strategies you won’t find in blogs.
Network with PMs and engineers over candid conversations and great food. Build quality connections.
🤝 About us
Maxim AI is the evals platform of choice for AI teams, from start-ups to enterprises, empowering them to ship AI applications with the quality and speed required for real-world use.
RSVP now to join. Hope to see you soon!