Presented by
Tune AI

vLLM: Easy, Fast, and Cheap LLM Serving for Everyone

About Event

For developers, builders, AI enthusiasts, and anyone looking to optimize LLM serving, and an opportunity to contribute to an open-source project.

📅 When: July 18th, 2024

โ€‹โฐ Time: 10:30 AM PST

โ€‹๐Ÿ“ Where: Zoom Webinar

📝 Agenda:

This presentation introduces vLLM, a high-performance open-source LLM inference engine. We'll cover:

  • Overview of vLLM and its key benefits

  • Deep dive into features like pipeline parallelism and speculative decoding

  • Live demonstration of setting up and optimizing LLM inference

  • Contribution opportunities and current development priorities
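As a taste of what the demo covers, here is a minimal sketch of launching vLLM's OpenAI-compatible server. The model names are placeholders and the exact flags depend on your vLLM version; this is an illustration, not the presenters' exact demo.

```shell
# Install vLLM (requires a CUDA-capable GPU for serving).
pip install vllm

# Launch the OpenAI-compatible API server, splitting the model
# across 2 GPUs with pipeline parallelism.
python -m vllm.entrypoints.openai.api_server \
    --model meta-llama/Meta-Llama-3-8B-Instruct \
    --pipeline-parallel-size 2

# Alternatively, enable speculative decoding with a small draft model:
# python -m vllm.entrypoints.openai.api_server \
#     --model meta-llama/Meta-Llama-3-8B-Instruct \
#     --speculative-model facebook/opt-125m \
#     --num-speculative-tokens 5

# Once the server is up, query it like any OpenAI-compatible endpoint:
curl http://localhost:8000/v1/completions \
    -H "Content-Type: application/json" \
    -d '{"model": "meta-llama/Meta-Llama-3-8B-Instruct",
         "prompt": "Hello", "max_tokens": 32}'
```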

🎤 Speakers:

  • Woosuk Kwon - Ph.D. student at UC Berkeley, creator of vLLM

  • Kaichao You - Ph.D. student at Tsinghua University, vLLM contributor

💫 What's the Deal?

We promise one hour of cutting-edge AI technology, insightful demonstrations, and the chance to connect with the creators of an amazing open-source project. Learn how to significantly improve your AI infrastructure's performance and cost-efficiency!

🔗 Resources:
