
MLOps Community San Francisco Fall & Winter Workshops - Part V

Hosted by Rahul Parundekar
Past Event
About Event

In Part V of our "Fall & Winter Workshops" series, the MLOps Community is organizing a half-day workshop where you'll learn how to fine-tune LLMs and take them to production. These workshops are for Software Engineers, AI/ML Engineers, and Product Managers who want to learn more about building with AI.

NOTE: PLEASE RSVP BY MONDAY, MARCH 11th, 5:00 pm. Due to building security updates, we might not be able to allow walk-ins or registrations on Tuesday.

This will be a half-day event with two sessions:

1:00pm - 3:00pm: Introduction to fine-tuning LLMs + Fine-tuning for RAG by Rahul Parundekar at the MLOps Community.
3:00pm - 3:30pm: Break
3:30pm - 5:30pm: Lessons learned fine-tuning Llama2 + Advanced fine-tuning LLMOps by Rahul Parundekar at the MLOps Community.

We're excited to have Stan Lee from Upstage present "The Art of Fine-tuning for Exceptional sLLMs": discover the untapped power of fine-tuning small LLMs (sLLMs) to surpass GPT-4 in specific domains or tasks. This talk delves into two compelling case studies: building a custom sLLM for a commercial company, and enhancing an sLLM's ability to comprehend and solve complex mathematical problems.

Sessions I and II: Introduction and advanced fine-tuning by Rahul Parundekar at the MLOps Community.

Objectives: As we learn to integrate LLMs into our products and the tools we build, we frequently wonder if fine-tuning is the right technique for us.

The first session covers an introduction to fine-tuning LLMs and discusses when the right time to fine-tune is. We'll then do hands-on fine-tuning for Retrieval-Augmented Generation in a Google Colab notebook to understand how fine-tuning can effectively improve model performance.

In the second session, we'll cover lessons learned fine-tuning Llama2 and the gotchas you'll need to know before embarking on that journey. Lastly, we'll walk through an LLMOps pipeline for fine-tuning at scale and deploying LLMs in your private cloud on Kubernetes.

Prerequisites:

  • Proficiency in Python

  • A basic understanding of RAG


Resources: We'll use Google Colab and OpenAI.


**PLEASE BRING YOUR OWN LAPTOPS**


Thanks to the amazing folks at Microsoft Reactor for being a community partner and hosting our upcoming events!

Note: To comply with venue requirements, we've added a few registration questions that have been requested of us. Thank you for understanding.

Location
555 California St
San Francisco, CA 94104, USA