Presented by
Vectara
The Trusted GenAI Platform for All Builders - Put Generative AI into Action

RAG WARS - Advancing AI: Enhancing LLMs and RAG for Improved Performance & Reliability

About Event

🚀 Join Us for a Dynamic Event (Food will be served 6-7 pm)!

🔍 Topic: "Advancing AI: Enhancing LLMs and RAG for Improved Performance & Reliability"
📅 June 19th, Time: 6:00 PM PST / 9:00 PM CET

This meetup explores advanced techniques to enhance the utility and reliability of Large Language Models (LLMs) across diverse applications. From structured outputs and external function integration to robust enterprise data architecture and strategies for reducing hallucinations, the talks cover a spectrum of methods to optimize both the performance and accuracy of LLM and RAG-based systems in real-world settings.

Talks are listed in order of presentation:

[1] Structured Output and Function Calling for Large Language Models

Suleman Kazi, ML @ Vectara

Ever wanted your LLM to produce output in a particular format (JSON, CSV, XML…) so you can easily parse it or use it in a downstream task? How about giving it access to external functions that perform a task or return information the LLM does not have? In this talk, you'll learn how to do both, known as structured output and function calling, respectively. We'll discuss why they are useful and how you can enable them with open-source LLMs on Hugging Face.
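To make this concrete, here is a minimal, illustrative sketch (not necessarily the approach covered in the talk): it prompts an open-source instruct model from the Hugging Face Hub to answer only in JSON, parses that structured output, and dispatches the function call the model requests. The model name and the get_weather tool are placeholders chosen for illustration.

```python
import json
from transformers import pipeline

def get_weather(city: str) -> str:
    """Toy external function returning information the LLM cannot know on its own."""
    return f"72F and sunny in {city}"  # stand-in for a real API call

TOOLS = {"get_weather": get_weather}

# Any open-source instruct model on the Hugging Face Hub works here.
generator = pipeline("text-generation", model="mistralai/Mistral-7B-Instruct-v0.2")

prompt = (
    "You can call the function get_weather(city). Respond ONLY with JSON like "
    '{"function": "get_weather", "arguments": {"city": "..."}}.\n'
    "User: What's the weather in Palo Alto?"
)

raw = generator(prompt, max_new_tokens=64, return_full_text=False)[0]["generated_text"]
call = json.loads(raw)                                # structured output: machine-parseable
print(TOOLS[call["function"]](**call["arguments"]))   # function calling: dispatch the request
```

In practice you would enforce the format with constrained decoding or a model's chat-template tool-calling support rather than trusting the model to emit valid JSON.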


[2] Enterprise Data Architecture in Machine Learning and RAG Systems

Nikhil Bysani, Engineering @ Vectara

  • Best practices for storing and consuming data in ML systems, such as data lakes/warehouses, S3, and event-driven systems

  • The data lifecycle and ingestion best practices with Vectara

  • Managing state and synchronizing data between Vectara and other data systems (a minimal sketch follows this list)
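
To make the last bullet concrete, below is a minimal sketch of an event-driven sync loop that keeps an index consistent with a source-of-truth store. The ChangeEvent shape and the upsert_to_vectara / delete_from_vectara helpers are hypothetical placeholders, not Vectara's actual API.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ChangeEvent:
    doc_id: str
    text: Optional[str]  # None signals a deletion in the source system
    version: int

# Last version indexed per document, so replays and out-of-order events are ignored.
indexed_versions: dict[str, int] = {}

def upsert_to_vectara(doc_id: str, text: str) -> None:
    """Hypothetical placeholder for a call to an indexing API."""
    print(f"indexing {doc_id}")

def delete_from_vectara(doc_id: str) -> None:
    """Hypothetical placeholder for a call to a delete-document API."""
    print(f"deleting {doc_id}")

def handle_event(event: ChangeEvent) -> None:
    # Skip stale events so the index converges to the source system's state.
    if indexed_versions.get(event.doc_id, -1) >= event.version:
        return
    if event.text is None:
        delete_from_vectara(event.doc_id)
    else:
        upsert_to_vectara(event.doc_id, event.text)
    indexed_versions[event.doc_id] = event.version
```

Tracking the last indexed version per document keeps the handler idempotent, which is what makes consuming from queues or change-data-capture streams safe.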


[3] Strategies for Mitigating Hallucination in Large Language Models

Rogger Luo, ML @ Vectara

Hallucination poses a significant challenge to the usability and reliability of LLM applications. In this presentation, we offer an overview of contemporary methods aimed at mitigating hallucination in summarization, drawing from our own practical experience with these techniques. Our examination reveals that these methods can be broadly categorized into three main approaches: Alignment with Fine-tuning (DPO), Control at Inference (DoLa), and Post-Editing (FAVA).
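
As a small illustration of the first category (alignment via DPO), here is a sketch using the open-source trl library on toy preference pairs, where the "chosen" summary is grounded in the source text and the "rejected" one hallucinates. The model name and data are placeholders, and argument names vary across trl versions (older releases take tokenizer= instead of processing_class=).

```python
from datasets import Dataset
from transformers import AutoModelForCausalLM, AutoTokenizer
from trl import DPOConfig, DPOTrainer

model_name = "Qwen/Qwen2-0.5B-Instruct"  # any small open instruct model for demonstration
model = AutoModelForCausalLM.from_pretrained(model_name)
tokenizer = AutoTokenizer.from_pretrained(model_name)

# Preference pairs: "chosen" stays faithful to the source, "rejected" hallucinates.
# Real training would use thousands of such pairs.
pairs = Dataset.from_list([
    {
        "prompt": "Summarize: The meetup is on June 19th at 6 PM in Palo Alto.",
        "chosen": "The meetup takes place on June 19th at 6 PM in Palo Alto.",
        "rejected": "The meetup takes place on July 4th in San Francisco.",
    },
])

trainer = DPOTrainer(
    model=model,
    args=DPOConfig(output_dir="dpo-faithful-summarizer", beta=0.1),
    train_dataset=pairs,
    processing_class=tokenizer,
)
trainer.train()
```

DoLa-style decoding and post-editing approaches like FAVA operate at inference time instead, so they do not require retraining the model.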

The whole conversation will be moderated by:
- Ofer Mendelevitch, Head of Developer Relations at Vectara

This event is open for everyone to join, so save the date and meet us at 6 PM PST on June 19th. Let's explore the cutting edge of RAG together while networking and enjoying food and drinks! 🚀

Location
Procopio, Cory, Hargreaves & Savitch LLP
3000 El Camino Real, 5 Palo Alto Square Suite 400, Palo Alto, CA 94306, USA
Suite 400. Enter lobby, take elevator to 4th floor. Follow the fun!