AI Systems at Scale: Embeddings, LLMs, and Beyond
Join us for an evening of technical deep dives and practical insights into the challenges of deploying AI systems at scale. This meetup brings together engineers and researchers who are pushing the boundaries of what's possible with embeddings, large language models, and distributed AI infrastructure.
This is the first event in a monthly series hosted by crackedsf.com.
Featured Talks
Effortlessly Infinite Embeddings with Modal
By Charles Frye, AI Engineer @ Modal Labs
Large-Scale Data Analysis with LLM Batch Inference Optimization
By Amog Kamsetty, ex-Founding ML Engineer @ Anyscale
Designing a Scalable Distributed Cache for ML Training Workloads in the Cloud
By Bin Fan, Founding Member @ Alluxio
Temporal Similarity Search at Scale
By Michael Ryaboy, Developer Advocate @ KDB.AI
Why Attend?
Gain practical insights from engineers tackling real-world scaling challenges
Learn about cutting-edge techniques in embedding applications, LLM optimization, and distributed caching for ML
Connect with peers who are passionate about solving complex AI problems at scale
Engage in technical discussions that go beyond the usual buzzwords and hype
Agenda
5:45 PM - 6:00 PM: Registration and Networking
6:15 PM - 7:30 PM: Technical Presentations
7:30 PM - 9:00 PM: Q&A and Extended Networking
Each talk will include time for Q&A, allowing for in-depth technical discussions.
What to Expect
Technical deep dives with code examples and architecture discussions
Honest conversations about the challenges of scaling AI systems
Opportunities to network with speakers and attendees facing similar technical hurdles
A collaborative atmosphere focused on knowledge sharing and problem-solving
Who Should Attend?
This meetup is tailored for:
ML/AI engineers working on production systems
Researchers interested in the practical aspects of scaling AI
Data scientists looking to optimize their workflows
Software engineers curious about the infrastructure powering modern AI
While we welcome attendees at all levels, our talks will assume some understanding of machine learning concepts and distributed systems principles.
Complimentary refreshments, including pizza, will be provided to fuel our discussions.
Don't miss this opportunity to deepen your understanding of AI systems at scale and connect with fellow engineers tackling these challenges firsthand.
Space is limited by our venue's capacity, so register now to secure your spot!
Call for speakers! If you have valuable insights to share with the AI/ML infra community, please fill out this form: