The Second vLLM Meetup @ The AI Alliance
vLLM Meetups and Events
Join the vLLM community to discuss optimizing LLM inference!
Registration
Past Event
Join the waitlist to be notified if additional spots become available.
About Event

Welcome to the second vLLM Meetup @ The AI Alliance!

We are at capacity and registration is now closed. If you are still interested in attending, please email us at vllm-questions at lists.berkeley.edu.

We are thrilled to invite you to the second vLLM meetup. This event is for the growing community of vLLM users and developers to connect, share, and learn together. The vLLM team will share recent updates on the project, and vLLM collaborators from IBM will take the stage to discuss their insights on LLM optimization.

Tentative Agenda:

5:00pm - 6:00pm: Doors open and check-in. Mingling and discussion; food and refreshments will be available.

6:00pm - 6:40pm: Talks

  • Introduction: AI Alliance - Alexy Khrabrov, IBM Research

  • vLLM Project Update - Zhuohan Li, Woosuk Kwon, Simon Mo, UC Berkeley

  • torch.compile and vLLM - Antoni Viros i Martin, IBM Research

6:40pm - 7:30pm: Q&A and Social Hour - This is your chance to mingle, share your experiences, ask questions, and get to know the vLLM team on a personal level.

Note from the vLLM Team:

We, the vLLM team, are incredibly excited to meet each and every one of you. This meetup is not just about sharing updates but also about celebrating the community that makes vLLM what it is today. Let's come together, share our stories, and envision the future of vLLM.

Special Thanks:

A heartfelt thank you to IBM for generously providing the venue for this event. Their continued support has been invaluable to the vLLM open-source community.

Location
425 Market Street
425 Market St, San Francisco, CA 94105, USA