
LLMOps Micro-Summit: Live Stream

Hosted by Michael Ortega, Kathryn Adams & Shohil Kothari
Zoom
Registration Closed
This event is not currently taking registrations. You may contact the host or subscribe to receive updates.
About Event

A small summit with big ideas. Learn what it takes to build the GenAI stack of the future!


Many developers have felt the pain of huge OpenAI bills or the challenge of building large-model infrastructure on their own. The LLMOps world is changing, and the future looks much smaller.

Pioneered by Apple, the new GenAI stack is built on small language models (SLMs) and cost-effective inference without sacrificing performance. So what does it take to build like Apple?

Join us Aug. 22nd in San Francisco to hear from AI leaders on what it takes to build the next-gen LLM architecture. We’ll cover the latest techniques from data through deployment.

This is a live event that will be streamed online. If you'd like to attend in person in San Francisco, register here.


🎙️ Talks & Speakers

Small is the New Big: Why Apple and Other AI Leaders are Betting Big on Small Language Models
Dev Rishi, Cofounder & CEO, Predibase
Piero Molino, Cofounder & Chief Scientific Officer, Predibase, and creator of open-source Ludwig

GenAI at Production Scale with SLMs that Beat GPT-4
Vlad Bukhin, Staff ML Engineer, Checkr

Next Gen LLM Inference: Blazing Fast + Cost-Effective
Arnav Garg, ML Eng Lead, Predibase, and maintainer of open-source LoRAX and Ludwig

Fine-Tuning SLMs for Enterprise-Grade Evaluation & Observability
Atin Sanyal, Co-founder & CTO, Galileo

Build Better Models Faster with Synthetic Data
Maarten Van Segbroeck, Head of Applied Science, Gretel
