Ensuring high-quality data requires engineering robust, scalable, production-grade retrieval pipelines.
In this event, we’ll demonstrate how to use hierarchical embeddings to return the most relevant context from a large dataset, as measured by context precision and context recall via the RAG Assessment (RAGAS) framework.
We will also show the impact that fine-tuning our embedding model has on our retrieval metrics. Finally, we will take a look at how our generations improve across the advanced retrieval techniques discussed.
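To build intuition for the two RAGAS metrics named above, here is a minimal sketch of their scoring logic. Note this is a simplification for illustration only: the actual RAGAS framework uses an LLM to judge chunk relevance and statement attribution, whereas this sketch assumes those binary judgments are already available.

```python
def context_precision(relevance: list[int]) -> float:
    """Simplified context precision: the mean of precision@k taken at
    each rank where a relevant chunk appears, given binary relevance
    labels for the retrieved chunks in rank order. Rewards pipelines
    that rank relevant context near the top."""
    if not any(relevance):
        return 0.0
    precisions = []
    hits = 0
    for k, rel in enumerate(relevance, start=1):
        if rel:
            hits += 1
            precisions.append(hits / k)  # precision@k at this relevant rank
    return sum(precisions) / len(precisions)


def context_recall(supported_statements: int, total_statements: int) -> float:
    """Simplified context recall: the fraction of ground-truth answer
    statements that can be attributed to the retrieved context. Rewards
    pipelines that retrieve everything needed to answer the question."""
    if total_statements == 0:
        return 0.0
    return supported_statements / total_statements


# Example: relevant chunks at ranks 1 and 3 out of 3 retrieved
print(context_precision([1, 0, 1]))  # (1/1 + 2/3) / 2 ≈ 0.833
print(context_recall(3, 4))          # 3 of 4 ground-truth statements supported
```

Improving retrieval (e.g., with hierarchical embeddings or a fine-tuned embedding model) should move both numbers toward 1.0, which is the effect we will measure live.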
As always, all GitHub repositories and Colab notebooks will be shared live for you to follow along with during the event!
Who should attend the event?
AI engineers who want to build performant RAG applications for production
Learners who want to understand advanced retrieval methods and techniques
LLM practitioners who want to baseline retrieval using industry-standard metrics
Dr. Greg Loughnane is the Founder & CEO of AI Makerspace, where he serves as lead instructor for their LLM Ops: LLMs in Production course. Since 2021, he has built and led industry-leading Machine Learning & AI bootcamp programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.
Chris Alexiuk is the Head of LLMs at AI Makerspace, where he serves as a programming instructor, curriculum developer, and thought leader for their flagship LLM Ops: LLMs in Production course. During the day, he’s a Founding Machine Learning Engineer at Ox. He is also a solo YouTube creator and a Dungeons & Dragons enthusiast, and is based in Toronto, Canada.