Steps to Production: Evaluating RAG Pipelines
Evaluation is a critical step before bringing your LLM-based application to production. Join our livestream with experts from Ragas and Qdrant as we discuss the details of evaluating RAG systems 🚀
📆 Agenda
Talk 1: Integrating User Feedback into Your Evaluation Stack with ragas by Shahul Es
How to leverage AI to automate the tedious parts of evaluating LLM applications
Detailed methods for incorporating user feedback from production into your evaluation stack
The ragas roadmap
Talk 2: RAG Evaluation in Action: Building, Tackling Cold Start Challenges, and Optimizing Your RAG with Qdrant and RAGAS by Atita Arora
As Retrieval Augmented Generation (RAG) continues to make significant strides across diverse industries, the need to elevate its performance has become paramount. In this session, we'll dive deep into building a documentation RAG application using Qdrant as the knowledge store, evaluating the RAG pipeline, and leveraging experimentation to make well-informed decisions. We'll demonstrate how the open-source evaluation framework RAGAS serves as a powerful ally in refining RAG solutions. But wait, there's more! We'll tackle head-on a persistent challenge in building evaluation datasets: the cold start problem. Join us as we discuss proven strategies to navigate this obstacle, ensuring your RAG journey is successful. This engaging, workshop-style session includes a code walkthrough so you can immerse yourself in practical learning.
Sessions will be recorded.