Presented by Arize AI
Advancing Open Source LLM Evaluation, Testing, and Debugging with Arize and Ragas

San Francisco, California
About Event

Join our recurring meetup dedicated to exploring and advancing open source tools and best practices for evaluating and debugging large language models (LLMs). In these meetups, we will:

  • Discuss the latest open source tools and frameworks for assessing LLM performance, safety, and robustness

  • Share case studies and insights from researchers and practitioners working on LLM evaluation and debugging

  • Collaborate on developing new open source resources, such as datasets, benchmarks, and tools to support the LLM community

  • Establish best practices and guidelines for rigorous, transparent, and reproducible LLM evaluation and debugging

Whether you are a researcher, engineer, or enthusiast interested in understanding and improving LLMs, this meetup series provides a platform to learn, share, and contribute to the growing ecosystem of open source LLM evaluation and debugging tools. Together, we can work towards building more reliable, unbiased, and trustworthy language models.

Speakers for our upcoming meetup (4/16) include:

  • Ragas Co-Founders: Shahul Es & Jithin James

  • LlamaIndex CEO & Co-Founder: Jerry Liu

  • Arize AI CEO & Co-Founder: Jason Lopatecki

  • Community Member: Manas Singh (SQL Gen Evals showcase)

  • Arize AI ML Solutions Architect: Hakan Tekgul

Location
San Francisco, California