Workshop: Fine-tune and Analyze LLMs
 
 
San Francisco, California
Registration Closed
This event is not currently taking registrations. You may contact the host or subscribe to receive updates.
About Event

Slow iteration cycles, inaccurate responses, and poor retrieval are just a few of the issues that plague large language models (LLMs) today. So how can you efficiently fine-tune and evaluate the performance of an LLM that powers your application?

In this hands-on workshop, you will fine-tune a popular open-source LLM (Google's Flan-T5) at scale using Ray and HuggingFace, along the lines of the sketch below. Additionally, you will visualize and inspect your model's predictions using Arize Phoenix.
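
To give a concrete flavor, here is a minimal sketch of that pipeline: fine-tuning Flan-T5 with Hugging Face Transformers, scaled out with Ray Train. It assumes a recent Ray release that ships the ray.train.huggingface.transformers integration; the billsum dataset, model size, and hyperparameters are illustrative choices, not the workshop's actual configuration.

```python
# Minimal sketch: fine-tune Flan-T5 with Hugging Face, scaled out via Ray Train.
# Dataset and hyperparameters are placeholders, not the workshop's exact setup.
from datasets import load_dataset
from transformers import (
    AutoModelForSeq2SeqLM,
    AutoTokenizer,
    DataCollatorForSeq2Seq,
    Seq2SeqTrainer,
    Seq2SeqTrainingArguments,
)
from ray.train import ScalingConfig
from ray.train.torch import TorchTrainer
from ray.train.huggingface.transformers import RayTrainReportCallback, prepare_trainer

MODEL_NAME = "google/flan-t5-small"  # small variant so it also fits in Colab

def train_func():
    """Per-worker training loop; Ray runs one copy on each worker."""
    tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
    model = AutoModelForSeq2SeqLM.from_pretrained(MODEL_NAME)

    # Illustrative summarization data; swap in your own task's dataset.
    data = load_dataset("billsum", split="ca_test").train_test_split(test_size=0.2)

    def preprocess(batch):
        inputs = tokenizer(
            ["summarize: " + t for t in batch["text"]], max_length=512, truncation=True
        )
        inputs["labels"] = tokenizer(
            text_target=batch["summary"], max_length=128, truncation=True
        )["input_ids"]
        return inputs

    tokenized = data.map(
        preprocess, batched=True, remove_columns=data["train"].column_names
    )

    trainer = Seq2SeqTrainer(
        model=model,
        args=Seq2SeqTrainingArguments(
            output_dir="flan-t5-finetuned",
            per_device_train_batch_size=8,
            num_train_epochs=1,
            learning_rate=5e-5,
        ),
        train_dataset=tokenized["train"],
        eval_dataset=tokenized["test"],
        data_collator=DataCollatorForSeq2Seq(tokenizer, model=model),
    )
    trainer.add_callback(RayTrainReportCallback())  # report metrics/checkpoints to Ray
    trainer = prepare_trainer(trainer)  # patch the Trainer for distributed training
    trainer.train()

# Run the same loop on multiple workers; drop to num_workers=1, use_gpu=False
# for a single Colab GPU or CPU.
result = TorchTrainer(
    train_func, scaling_config=ScalingConfig(num_workers=2, use_gpu=True)
).fit()
```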

You'll learn flexible, scalable approaches to fine-tuning, and how to visualize and analyze clusters of data points inside a notebook, with the ultimate goal of extracting tangible insights for fine-tuning an LLM on your own data, task, or desired response style.
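
As a rough sketch of what that analysis looks like, the snippet below loads two sets of embeddings into Phoenix for side-by-side exploration. The random vectors and column names are placeholders, and px.Dataset reflects the Schema/Dataset/launch_app API of early Phoenix releases.

```python
# Sketch: compare embeddings before vs. after fine-tuning in Arize Phoenix.
# Random vectors stand in for real model embeddings; in practice you would
# extract encoder embeddings for each example from both checkpoints.
import numpy as np
import pandas as pd
import phoenix as px

def fake_embeddings_df(seed: int) -> pd.DataFrame:
    """Placeholder: one row per example, with raw text and its embedding."""
    rng = np.random.default_rng(seed)
    return pd.DataFrame({
        "text": [f"example {i}" for i in range(200)],
        "embedding": list(rng.normal(size=(200, 768))),
    })

# Tell Phoenix which columns hold the embedding vector and the raw text.
schema = px.Schema(
    embedding_feature_column_names={
        "text_embedding": px.EmbeddingColumnNames(
            vector_column_name="embedding",
            raw_data_column_name="text",
        )
    },
)

# Primary = fine-tuned model, reference = base model; the Phoenix UI overlays
# the two distributions (UMAP projection, clustering) for inspection.
primary = px.Dataset(dataframe=fake_embeddings_df(0), schema=schema, name="fine-tuned")
reference = px.Dataset(dataframe=fake_embeddings_df(1), schema=schema, name="baseline")
session = px.launch_app(primary=primary, reference=reference)
```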

Learning Objectives:

  • Hands-on demonstration of scalable fine-tuning and inference of an open-source language model (Google's Flan-T5) in 40 minutes using Ray and HuggingFace; a quick inference check is sketched after this list. Workshop participants will have the opportunity to fine-tune their own Flan-T5 model in Colab.

  • Once fine-tuning is complete, using Phoenix to visualize your model's embedding distribution before and after fine-tuning.
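
For the inference side, a quick generation check might look like the sketch below. The checkpoint path is a placeholder: point it at your Trainer's output directory, or use the base model name as shown.

```python
# Sketch: a quick generation check after fine-tuning. The checkpoint path is
# illustrative; swap in your fine-tuned output directory when you have one.
from transformers import AutoModelForSeq2SeqLM, AutoTokenizer

checkpoint = "google/flan-t5-small"  # or e.g. "flan-t5-finetuned/checkpoint-500"
tokenizer = AutoTokenizer.from_pretrained(checkpoint)
model = AutoModelForSeq2SeqLM.from_pretrained(checkpoint)

prompt = ("summarize: Ray and Hugging Face make it straightforward to "
          "fine-tune open-source LLMs at scale.")
inputs = tokenizer(prompt, return_tensors="pt")
output_ids = model.generate(**inputs, max_new_tokens=64)
print(tokenizer.decode(output_ids[0], skip_special_tokens=True))
```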

Please bring your laptop to follow along with this hands-on workshop.

---

Hosts: Arize AI & Anyscale

Agenda:
5:30pm - 6:00pm: Arrival
6:00pm - 7:30pm: Hands-on workshop
7:30pm - 8:00pm: Networking

---

Note: By RSVP'ing to this event, you are opting in to receive event updates and marketing communications from Arize AI and Anyscale according to their respective privacy policies: https://arize.com/privacy-policy/ and https://www.anyscale.com/privacy-policy