102 Going

MLOps Community San Francisco Fall & Winter Workshops - Part 1

Hosted by Rahul Parundekar & 5 others
Registration Closed
This event is not currently taking registrations. You may contact the host or subscribe to receive updates.
About Event

The MLOps Community in San Francisco is putting together a new series of technical workshops for AI/ML Engineers, in partnership with some of the most cutting-edge companies 🎉

In this first part of our "Fall & Winter Workshops" series, we are partnering with Union AI and Weaviate to create workshops on fine-tuning LLMs and multimodal RAG, two of the hottest skills in demand right now.

This will be a half-day event with two sessions:

1:00pm - 3:00pm: Fine-tuning Large Language Models with Declarative AI Orchestration - by Union.ai
3:00pm - 3:30pm: Break
3:30pm - 5:30pm: Vector Databases: From Fundamentals to Multimodal RAG Systems - by Weaviate

You can register for either one or both of the workshops, but spots are limited, so please apply in advance.


Session I: Fine-tuning Large Language Models with Declarative AI Orchestration - by Union.ai

Instructor: Eduardo Apolinario, Engineering Manager at Union.ai
Bio: Eduardo is an engineering manager at Union.ai where he leads the Open-source team. He's also one of the maintainers of Flyte. Before working at Union, Eduardo was an engineer at Lyft working on the intersection of ML and infrastructure.

Description: The use of language models has become more widespread in recent years, thanks in part to the broader accessibility of datasets and the ML frameworks needed to train these models. Many of these models are large – hence the term Large Language Models (LLMs) – and serve as so-called foundation models, trained by organizations with the compute resources to do so. These foundation models, in turn, can be fine-tuned by the broader machine learning community for specific use cases, perhaps on proprietary data. One of the barriers to fine-tuning these models is infrastructure: even with cloud tools like Google Colab and the wider availability of consumer-grade GPUs, putting together a runtime environment to fine-tune these models is still a major challenge. This workshop will give attendees hands-on experience using Flyte, the Kubernetes-native AI orchestrator for building production-grade data and ML pipelines, to declaratively specify infrastructure so that they can configure training jobs to run on the compute resources required to fine-tune language models on their own data.
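To give a flavor of what "declaratively specify infrastructure" means, here is a toy, pure-Python sketch. The names (`@task`, `Resources`) mirror Flyte's style, but this is a hand-rolled stand-in rather than the real flytekit library, and `fine_tune` and its arguments are made up for illustration; in the workshop the orchestrator reads the declared requests and schedules the job on matching compute.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass(frozen=True)
class Resources:
    """Declared compute requirements, expressed as Kubernetes-style strings."""
    cpu: str = "1"
    mem: str = "1Gi"
    gpu: str = "0"

def task(requests: Resources):
    """Toy decorator: attach resource requests to a function as metadata,
    the way a declarative orchestrator would read them to place the job."""
    def wrap(fn: Callable) -> Callable:
        fn.requests = requests
        return fn
    return wrap

@task(requests=Resources(cpu="4", mem="24Gi", gpu="1"))
def fine_tune(base_model: str, dataset_uri: str) -> str:
    # Real training code would run here, on a node satisfying the requests.
    return f"fine-tuned::{base_model}"

print(fine_tune.requests)                          # the declared infrastructure
print(fine_tune("llama-base", "s3://bucket/data"))  # the training logic itself
```

The point of the pattern is separation of concerns: the function body holds the training logic, while the decorator holds the infrastructure declaration, so the same code can run locally or on a GPU cluster without modification.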

Who is it for: Whether you're new to the intersection of AI and infrastructure or an experienced practitioner, this workshop will give you the conceptual understanding to fine-tune LLMs using modern ML libraries.

Key takeaways/what you will learn: This workshop has two main learning goals. First, attendees will learn the main concepts behind Flyte, a workflow orchestrator for data and machine learning; many of these concepts are orchestrator-agnostic, such as containerization for reproducibility, declarative infrastructure, and type safety. Second, attendees will learn how to leverage the latest deep learning frameworks that optimize the memory and compute required to fine-tune language models as economically as possible.

Prerequisites: Intermediate Python, working knowledge of Docker, and intermediate knowledge of machine learning.

**PLEASE BRING YOUR OWN LAPTOPS**

Resources:

  • GitHub repository with relevant resources, including setup instructions and a README with an overview (Coming Soon)

  • Access to a UnionCloud account to run the workshop examples


Session II: Vector Databases: From Fundamentals to Multimodal RAG Systems - by Weaviate

Description: The recent rise of LLMs has completely changed the conversation around vector databases and their necessity. Additionally, "Multimodality" and "Retrieval-Augmented Generation" (RAG) are two very important concepts that come up constantly when building real-life AI applications.

But what are they, really? How do you get the most out of your vector database? How can you build cutting-edge AI applications leveraging these concepts? Join us for a hands-on workshop to learn the fundamentals of vector databases and how to build an application that leverages multimodal RAG search (AI that combines video, text, images, and audio in the context of your own data).

Who is it for: This workshop is intended for intermediate to professional-level developers/data scientists (programming skills highly recommended) who want to learn how to leverage vector databases for multimodality in a RAG environment.

Key takeaways/what you will learn:

  • Fundamental concepts of vector databases and RAG

  • Using vector databases in a multimodal RAG environment

  • How to build a first application using a vector database for multimodal search with context data
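The takeaways above can be sketched in miniature: retrieve the nearest objects by vector similarity, then hand them to a language model as context. This toy example uses a tiny in-memory store and brute-force cosine similarity in place of a real vector database like Weaviate; the data, vectors, and function names are all made up for illustration.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

# Tiny in-memory "vector database": each object stores a modality tag,
# the raw payload, and its embedding (hand-made 3-d vectors for the demo;
# a real system would use a multimodal embedding model).
objects = [
    {"modality": "text",  "payload": "a cat on a sofa", "vector": [0.9, 0.1, 0.0]},
    {"modality": "image", "payload": "cat.jpg",         "vector": [0.8, 0.2, 0.1]},
    {"modality": "audio", "payload": "dog_bark.wav",    "vector": [0.0, 0.2, 0.9]},
]

def search(query_vector, k=2):
    """Brute-force top-k nearest neighbors by cosine similarity."""
    ranked = sorted(objects, key=lambda o: cosine(query_vector, o["vector"]),
                    reverse=True)
    return ranked[:k]

def build_rag_prompt(question, retrieved):
    """The 'G' in RAG: stuff the retrieved objects into the model prompt."""
    context = "\n".join(f"- [{o['modality']}] {o['payload']}" for o in retrieved)
    return f"Answer using only this context:\n{context}\nQuestion: {question}\n"

hits = search([1.0, 0.0, 0.0])  # pretend this is the embedding of "cat"
print(build_rag_prompt("What is in the picture?", hits))
```

Because all modalities live in the same embedding space, a single query vector retrieves text and images together, which is the core idea behind multimodal RAG; production systems replace the brute-force scan with an approximate nearest-neighbor index.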

Prerequisites:

  • Working knowledge of Python (or intermediate JS/TypeScript or Go to follow along)

  • Recommended: set up your preferred client library for interacting with Weaviate in advance

  • Good energy!

**PLEASE BRING YOUR OWN LAPTOPS**

Resources:

  • Coming Soon


Thanks to the amazing folks at Microsoft Reactor for being a community partner and hosting our upcoming events!

Note: To comply with the venue, we've added a few questions that have been requested of us. Thank you for understanding.

Location
555 California St
San Francisco, CA 94104, USA