

vLLM Inference Meetup: New Delhi, India
We are excited to invite you to the vLLM meetup in Delhi hosted by Red Hat.
This is your chance to connect with a growing community of vLLM users, developers, maintainers, and engineers from Red Hat. We'll dive into technical talks, share insights, and discuss our journey in optimizing LLM inference for performance and efficiency.
What to expect:
Technical insights
Networking with industry experts
Hands-on learning & demos
Agenda
09:30-10:00: Kick-off and Opening Remarks
10:00-10:30: How AI Inference Works
10:30-11:00: vLLM and Advanced Inference Techniques
11:00-11:30: LLM Compressor
11:30-12:00: Running GenAI Models on vLLM
12:00-12:30: Break
12:30-14:00: Hands-on Lab
Bring your laptop with an SSH client installed; GPU instances will be provided by the organizers.
Hosts:
eprasad96@gmail.com
jpathani@redhat.com