Presented by
Arize AI

LLM Observability: Evaluations

Zoom
About Event

Join us on November 7th and November 14th for this free, two-part virtual workshop where participants will gain hands-on experience with LLM evaluation metrics.

In Part 1 of this workshop, we will discuss how implementing LLM evaluations provides scalability, flexibility, and consistency for your LLM orchestration framework. In Part 2, we will dive into a code-along Google Colab notebook to add evaluations to your LLM outputs. Attendees will walk away with the ability to implement LLM observability for their LLM applications.

Key Objectives:

  • Take a deep dive into how performance metrics can make LLMs more ethical, safe, and reliable.

  • Use custom and predefined metrics, such as accuracy, fluency, and coherence, to measure the model’s performance.

  • Gain hands-on experience leveraging open source tools like Phoenix, LlamaIndex, and LangChain to build and maintain LLM applications (see the sketch after this list).
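
To give a flavor of the code-along portion, here is a minimal sketch of an LLM-as-a-judge evaluation using Phoenix's evals module. The names used (llm_classify, OpenAIModel, the hallucination prompt template and rails) follow one version of the arize-phoenix API and may differ in the workshop notebook or in newer releases; the sample data and judge model are illustrative assumptions, not workshop materials.

```python
# Minimal sketch: grade LLM outputs for hallucination with Phoenix evals.
# Assumes `pip install arize-phoenix` and an OpenAI API key in the environment.
import pandas as pd
from phoenix.evals import (
    HALLUCINATION_PROMPT_RAILS_MAP,
    HALLUCINATION_PROMPT_TEMPLATE,
    OpenAIModel,
    llm_classify,
)

# Each row supplies the columns the template references: input, reference, output.
# These example rows are made up for illustration.
df = pd.DataFrame(
    {
        "input": ["What is Phoenix?"],
        "reference": ["Phoenix is an open source LLM observability library."],
        "output": ["Phoenix is an open source tool for tracing and evaluating LLM apps."],
    }
)

# Allowed labels for the judge, e.g. "factual" / "hallucinated".
rails = list(HALLUCINATION_PROMPT_RAILS_MAP.values())

# The judge model classifies each row; the result is a DataFrame with a label column.
results = llm_classify(
    dataframe=df,
    model=OpenAIModel(model="gpt-4o-mini"),  # judge model chosen for illustration
    template=HALLUCINATION_PROMPT_TEMPLATE,
    rails=rails,
)
print(results["label"])
```

The same pattern extends to other predefined evaluators (relevance, toxicity, Q&A correctness) or to custom prompt templates, which is the flexibility the workshop focuses on.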
