
AI infra & open-source models #1
We're hosting our first dstack user group event in Berlin and inviting AI engineers, researchers, and infrastructure experts interested in AI infra and open-source models. Join us for lightning talks, demos, and focused discussions.
This event is designed to bring together people deeply involved in the AI space to exchange ideas, share technical insights, and connect.
Schedule:
6:00pm – Arrival
6:30pm – Lightning talks and demos (Part 1)
7:15pm – Break
7:30pm – Lightning talks and demos (Part 2)
8:15pm – Networking with food & drinks
Each demo will take 5 minutes, with a Q&A session after all the demos.
Speakers:
Tim from RunPod: 'Deploy AI apps using natural language'
Szymon from Aleph Alpha: 'How to keep your GPU happy'
Andrey from dstack: 'dstack: Beyond Kubernetes and Slurm'
Scott from Jina AI: 'Jina AI: Feature-rich enterprise-ready models'
Andrey from Qdrant: 'miniCOIL: a new model for Sparse Neural Retrieval'
Tanja from Rasa: 'Keep CALM and Generate Commands: A New Dialogue Understanding Architecture'
Roman from Nixiesearch: 'Serving a 9B embedding model without losing your mind'
David from Lambda: 'Winning the ARC prize 2024'
Anton from Tokalon.ai: 'Budget-friendly image generation'
Piotr from Aleph Alpha: 'Prompt case vs token-by-token decoding'
Artem from JetBrains: 'Fast track for data synchronization in AI'
Mish from E2B: 'Building an open-source computer use agent'
Vedant from Aleph Alpha: 'Steering vectors'
Mathis from deepset: 'Building open source agents with Haystack'
Tejas from DataStax: 'Build your own local-first, local-only AI workflow'
Jay from TextCortex: 'Saving your sanity and money with pgvector'
About dstack:
dstack is an open-source alternative to Kubernetes and Slurm that simplifies AI development and deployment across clouds and on-prem, with support for NVIDIA, AMD, and TPU hardware.
Thanks to RunPod, Aleph Alpha, and Tensor Ventures for supporting this event.