17 Going

Beyond the Frame: Multimodal Video Recommendations with TwelveLabs + Qdrant

Hosted by Thierry Damiba, Madelyn Duhon & James Le
San Francisco, California
Registration
Approval Required
Your registration is subject to approval by the host.
Welcome! To join the event, please register below.
About Event

Qdrant AI Builders: Video Recommendations with Twelve Labs


What if your app could understand what’s happening in a video — not just the title or transcript, but the emotion in the scene, the objects on screen, and the context of the conversation?

Join Qdrant and Twelve Labs for a live, behind-the-scenes presentation on building smarter video recommendation systems using state-of-the-art vector search and multimodal AI. We’ll walk through a real open-source demo that combines Twelve Labs’ video intelligence API with Qdrant’s vector database to enable rich, semantic recommendations based on what’s actually happening inside the video.

You’ll learn:

  • How Marengo and Pegasus foundation models are transforming video understanding

  • How these models power real-world applications in sports, media, and security

  • How multimodal embeddings work across audio, visual, and textual signals

  • How to store and search them at scale using Qdrant

  • What it takes to build a recommendation engine that feels intelligent

  • Key takeaways from the GitHub project

This event is for developers, ML engineers, product builders, and anyone curious about the next generation of video search and personalization.

Location
Please register to see the exact location of this event.
San Francisco, California