VESSL AI x Pinecone Meetup SF: LLMs in Production
LLMs in Production is an AI product meetup hosted by VESSL AI and Pinecone. We bring together local founders, product leaders, and engineers to share best practices and the latest trends in production LLMs.
Join our premier event with SVB, where we'll be joined by Koyeb, Snowflake, and more. We'll have food, drinks, and like-minded folks you can talk to about the roadblocks and solutions you've found along the way.
When & Where
When — Monday, September 9 at 5:30 PM (doors open at 5:00 PM)
Where — SVB Experience Center, San Francisco
Agenda
5:00-5:30 Doors open
5:30-6:00 "Building context-augmented LLMs with RAG & Vector DB"
Roie Schwaber-Cohen, Staff Developer Advocate, Pinecone
6:00-6:30 "Custom LLMs—smarter, faster, and cheaper"
6:30-7:00 "High-Performance LLMs: Serverless Deployment across Accelerators, GPUs, and CPUs"
Yann Leger, Co-founder & CEO, Koyeb
7:00-7:30 "Evaluating LLM Apps"
Anupam Datta, Principal Research Scientist for AI, Snowflake
7:30-8:00 "Multi-agent Systems in Production"
Laurie Voss, VP Developer Relations, LlamaIndex
8:00-9:00 Networking with drinks & small bites
About VESSL AI
VESSL AI is an end-to-end AI development platform that enables the world's leading AI teams to train, deploy, and automate the full spectrum of AI and LLM workloads in minutes. Sign up for free at vessl.ai.
If you're curious about what our previous meetups were like, feel free to check out VESSL's YouTube playlist!
About Pinecone
Pinecone is a fully managed vector database that makes it easy to add vector search to production applications. It combines state-of-the-art vector search libraries, advanced features such as filtering, and distributed infrastructure to provide high performance and reliability at any scale. No more hassles of benchmarking and tuning algorithms or building and maintaining infrastructure for vector search. Pinecone serverless lets you deliver remarkable GenAI applications faster, at up to 50x lower cost. Sign up for free at pinecone.io.
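If the idea of vector search is new to you, here is a minimal, self-contained sketch of the core operation a vector database performs: ranking stored embeddings by similarity to a query embedding. This is an illustrative toy (the document IDs, 3-dimensional vectors, and function names are made up for the example), not Pinecone's API; a real system uses high-dimensional embeddings and approximate-nearest-neighbor indexes rather than a brute-force scan.

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

def search(index, query, top_k=2):
    """Brute-force vector search: score every stored vector
    against the query and return the top_k (id, score) pairs."""
    scored = [(doc_id, cosine(vec, query)) for doc_id, vec in index.items()]
    return sorted(scored, key=lambda pair: pair[1], reverse=True)[:top_k]

# Toy "index" of 3-dimensional embeddings (real embeddings
# typically have hundreds or thousands of dimensions).
index = {
    "doc-a": [0.9, 0.1, 0.0],
    "doc-b": [0.0, 1.0, 0.2],
    "doc-c": [0.8, 0.2, 0.1],
}

print(search(index, [1.0, 0.0, 0.0], top_k=2))
```

Running this ranks `doc-a` and `doc-c` above `doc-b`, since their vectors point in roughly the same direction as the query. A managed service like Pinecone handles the indexing, filtering, and infrastructure that make this fast at production scale.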