Accelerating AI: Optimizing LLM Training and Cloud Infrastructure @ Dolby Laboratories
Join us for an exciting fireside chat with three innovators at the forefront of AI acceleration and infrastructure optimization. This meetup brings together experts who are revolutionizing the way we train large language models and manage cloud resources for AI workloads.
The meetup is being hosted at Dolby Laboratories where we will also have a demo of Dolby Vision and Dolby Atmos technologies in their state-of-the-art Dolby Cinema.
Event Description
In this engaging panel discussion, we'll dive deep into cutting-edge advancements in AI compute optimization. Our distinguished speakers will share insights on:
Faster LLM Training: Learn how innovative software and mathematical techniques can dramatically speed up the training of large language models like Llama-3, Mistral, Phi-3, and Gemma, while significantly reducing memory usage.
Cloud Infrastructure for AI: Discover how to streamline AI compute procurement, command, and control across multiple cloud providers, optimizing costs and improving resource utilization.
Scaling AI Assets: Explore strategies for rapidly scaling AI resources to meet dynamic needs, from multi-node training to efficient inference deployment.
Future of AI Infrastructure: Get a glimpse into the future of AI compute, including advancements in container loading, data movement, and workload migration.
Whether you're an AI researcher, a CIO managing enterprise AI initiatives, or a startup founder looking to optimize your AI infrastructure, this meetup offers valuable insights into maximizing the efficiency and effectiveness of your AI projects.
Our Speakers:
Daniel Han, Unsloth Co-founder: A trailblazer in LLM optimization, achieving 30x faster finetuning and 85% VRAM reduction through innovative software and mathematical techniques.
Ben Sand, Strong Compute Founder: An expert in AI compute infrastructure, offering solutions for enterprise-scale AI deployment, multi-cloud management, and high-speed tooling for AI workloads.
Piero Molino, Predibase Co-founder & Chief Scientific Officer: A pioneer in efficient AI model deployment, leading the development of cost-effective LLM fine-tuning and serving solutions that rival GPT-4 performance at a fraction of the cost.
Sunil Mallya, Flip.ai Co-founder: Former NLP lead at AWS, pioneer behind AWS Bedrock, RL guru, and the brains behind AWS DeepRacer. Currently building domain-specific autonomous LLMs for observability.
Our Hosts:
Paul Conyngham, Spagbol Co-founder: 15 years of experience in AI, including 5 years working with GPT models. Startup founder. Currently working on LLM dataset curation at Spagbol.co
Ari Chanen, Lead AI Consultant at AI Ari Consulting: 20+ years of experience in AI, ML, GenAI, and NLP. AI consultant, former VP of AI, and lead AI engineer. MIT grad.
Don't miss this opportunity to learn from and engage with these industry leaders as they discuss the latest trends and technologies shaping the future of AI computation and infrastructure.
Join us for an evening of insightful discussion, networking, and a glimpse into the cutting edge of AI optimization!