

ai that works: Eval-ing multiple models for each prompt
ai that works
A weekly conversation about how we can all get the most juice out of today's models with @hellovai & @dexhorthy
https://www.github.com/hellovai/ai-that-works
AI That Works #16 will be a super-practical deep dive into real-world examples and techniques for evaluating a single prompt against multiple models. This is a commonly cited use case for evals ("how do we know if the new model is better?", "how do we know if the new model breaks anything?"), but there aren't many practical, real-world examples out there.
On this episode we'll do a ton of hands-on live coding to look at different ways to slice and dice your prompt library to test and evolve it while understanding performance across different models.
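To give a feel for the core loop before the episode, here is a minimal Python sketch (not the episode's actual code; the live coding will use BAML and the tools listed under Pre-reading below): run the same prompt against several models and apply the same check to each result. The model names and the toy pass/fail check are placeholders.

    # Minimal sketch: run one prompt against several models and apply the
    # same pass/fail check to each result. Model names and the check are
    # placeholders, not recommendations.
    from openai import OpenAI

    client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

    MODELS = ["gpt-4o", "gpt-4o-mini"]  # swap in the models you want to compare
    PROMPT = "Extract the invoice total from: 'Total due: $1,234.56'"

    def passes(output: str) -> bool:
        # Toy eval: a real suite would use structured outputs and richer assertions.
        return "1,234.56" in output or "1234.56" in output

    for model in MODELS:
        response = client.chat.completions.create(
            model=model,
            messages=[{"role": "user", "content": PROMPT}],
        )
        output = response.choices[0].message.content or ""
        print(f"{model}: {'PASS' if passes(output) else 'FAIL'} -> {output!r}")

Even a loop this small surfaces the two questions above: does the new model pass the checks the old one passed, and does it break anything that used to work?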
Pre-reading
To avoid repeating the basics, we recommend coming in already familiar with some of the tooling we will be using:
Discord
Cursor (a VS Code replacement)
Programming languages
Application Logic: Python, TypeScript, or Go
Prompting: BAML (recommended video)
Meet the Speakers
βββMeet Vaibhav Gupta, one of the creators of BAML and YC alum. He spent 10 years in AI performance optimization at places like Google, Microsoft, and D. E. Shaw. He loves diving deep and chatting about anything related to Gen AI and Computer Vision!Β
Meet Dex Horthy, founder at HumanLayer, a YC company. He spent 10+ years building DevOps tools at Replicated, Sprout Social, and JPL. DevOps junkie turned AI engineer.