
Deploying, Evaluating, and Tracing LLM Apps with Arize Phoenix and BentoML

 
 
San Francisco, California
About Event

LLM use cases are growing fast: chatbots, summarization, Q&A assistants, code generation, and more. As these LLM apps move into production, teams need to evaluate how each use case performs and drill down into individual traces and spans to see where the application breaks. In this hands-on workshop, you will learn how to build and deploy a complex LLM app with BentoML’s OpenLLM, then troubleshoot, evaluate, and trace it with Arize Phoenix.

Learning Objectives:

  • Build a powerful LLM application using OpenLLM’s native LangChain integration, and serve and deploy it with ease.

  • Use the Phoenix LLM Evals library, designed for simple, fast, and accurate LLM-based evaluations.

  • Troubleshoot and debug your LLM app with Phoenix traces and spans to find where the application breaks when LangChain is used (see the sketch below).
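
To give a concrete sense of how these pieces fit together, here is a minimal sketch (not the workshop’s code) of tracing a LangChain app backed by an OpenLLM server with Phoenix. It assumes an OpenLLM server is already running locally (the http://localhost:3000 URL is illustrative) and that the arize-phoenix and langchain packages are installed; module paths such as phoenix.trace.langchain.LangChainInstrumentor and langchain_community.llms.OpenLLM may differ across versions.

```python
# Minimal sketch: trace a LangChain + OpenLLM app with Phoenix.
# Assumes an OpenLLM server is already running at the URL below.
import phoenix as px
from phoenix.trace.langchain import LangChainInstrumentor
from langchain_community.llms import OpenLLM
from langchain_core.prompts import PromptTemplate

# Start the local Phoenix app, which collects traces and spans.
px.launch_app()

# Instrument LangChain so every chain and LLM call is exported to Phoenix.
LangChainInstrumentor().instrument()

# Point LangChain's OpenLLM integration at the running OpenLLM server
# (the URL is a placeholder for this sketch).
llm = OpenLLM(server_url="http://localhost:3000")

prompt = PromptTemplate.from_template("Summarize in one sentence: {text}")
chain = prompt | llm

# Each invocation now appears in the Phoenix UI as a trace, with spans
# for the prompt formatting and the LLM call.
print(chain.invoke({"text": "LLM apps need tracing and evaluation."}))
```

From there, the spans collected in Phoenix can be exported and scored with the Phoenix LLM Evals library to evaluate the application’s outputs.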