LLMOps Micro-Summit: Small Models, Big Results
A small summit with big ideas for ML engineers. Come learn about the GenAI stack of the future!
Many developers have felt the pain of huge OpenAI bills or the challenge of building big model infra on their own. The LLMOps world is changing and the future looks much smaller.
Pioneered by Apple, the new GenAI stack is built on small language models (SLMs) and cost-effective inference without sacrificing performance. So what does it take to build like Apple?
Join us Aug. 22nd in San Francisco to hear from AI leaders on what it takes to build the next-gen LLM architecture. We’ll cover the latest techniques from data through deployment.
Space is limited. For those we cannot accommodate or who live outside the area, we are providing a live stream of the event.
🎙️Talks & Speakers
Small is the New Big: Why Apple and Other AI Leaders are Betting Big on Small Language Models
Dev Rishi, Cofounder & CEO, Predibase
Piero Molino, Cofounder & Chief Scientific Officer, Predibase, and creator of open-source Ludwig
GenAI at Production Scale with SLMs that Beat GPT-4
Vlad Bukhin, Staff ML Engineer, Checkr
Next Gen LLM Inference: Blazing Fast + Cost-Effective
Arnav Garg, ML Eng Lead, Predibase, and maintainer of open-source LoRAX and Ludwig
Fine-Tuning SLMs for Enterprise-Grade Evaluation & Observability
Atin Sanyal, Co-founder & CTO, Galileo
Build Better Models Faster with Synthetic Data
Maarten Van Segbroeck, Head of Applied Science, Gretel
📅 Agenda
4:00: Doors Open
4:30: Lightning Talks
7:00: Networking, food, and drinks
8:00: Event concludes