
Using Vector Databases for Multimodal Search and Retrieval Augmented Generation

About Event

The Data Phoenix team invites you to our upcoming webinar, which will take place on July 25 at 10 a.m. PDT.

  • Topic: Using Vector Databases for Multimodal Search and Retrieval Augmented Generation

  • Speaker: Zain Hasan (Developer Relations Engineer at Weaviate)

  • Participation: free (registration required)

Many real-world problems are inherently multimodal, from the spoken language and gestures humans use to communicate to the force, tactile, and visual sensors used in robotics. For machine learning models to address these problems, interact more naturally and holistically with the world around them, and ultimately become more general and powerful reasoning engines, they need to understand data across all of its representations: images, video, text, audio, and touch.

In this talk, Zain Hasan will discuss how open-source multimodal embedding models can be used in conjunction with large generative multimodal models that can see, hear, read, and feel data to perform cross-modal search (searching audio with images, videos with text, and so on) and multimodal retrieval augmented generation (MM-RAG) at the billion-object scale with the help of open-source vector databases. He will also demonstrate, with live code demos, how performing this cross-modal retrieval in real time enables users to apply LLMs that reason over their enterprise multimodal data. The talk will center on how to scale multimodal embedding and generative models in production.
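
As a taste of what the demos may look like, here is a minimal sketch of cross-modal search with the Weaviate Python client (v4). It assumes a locally running Weaviate instance whose collection, hypothetically named "Animals" here, was configured with a multimodal vectorizer such as multi2vec-clip, so that text and images share one embedding space; the collection name and query file are illustrative, not details from the webinar:

```python
# Minimal cross-modal search sketch (assumptions: local Weaviate instance,
# an "Animals" collection vectorized with a multimodal module like multi2vec-clip).
import base64
import weaviate

client = weaviate.connect_to_local()  # assumes Weaviate is running locally
animals = client.collections.get("Animals")

# Text-to-media: retrieve objects whose embeddings are close to a text query.
text_results = animals.query.near_text(query="a dog catching a frisbee", limit=3)

# Image-to-media: search with an image instead of text (base64-encoded query).
with open("query.jpg", "rb") as f:  # hypothetical query image
    img_b64 = base64.b64encode(f.read()).decode()
image_results = animals.query.near_image(near_image=img_b64, limit=3)

for obj in text_results.objects:
    print(obj.properties)

client.close()
```

Because every modality lands in the same vector space, the same nearest-neighbor query answers "find images like this sentence" as easily as "find objects like this picture."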

Key Highlights of the Webinar:

  • Multimodal embedding models

  • Multimodal retrieval with Weaviate

  • Multimodal generative models: how we can fine-tune language models to see

  • Performing multimodal RAG using Weaviate and vision-language models (see the sketch after this list)

  • Multimodal applications for recommender systems
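
As one possible shape of the MM-RAG workflow highlighted above, here is a hedged sketch using Weaviate's generative search: the vector database first retrieves the most relevant multimodally embedded objects, then a generative model is prompted over them in a single call. The collection name, query, and generative module (e.g. generative-openai) are assumptions for illustration, not details from the webinar:

```python
# Multimodal RAG sketch (assumptions: a "Products" collection with a multimodal
# vectorizer plus a generative module such as generative-openai configured).
import weaviate

client = weaviate.connect_to_local()
products = client.collections.get("Products")  # hypothetical collection

response = products.generate.near_text(
    query="durable hiking backpack",  # retrieval step: vector search
    limit=5,
    grouped_task="Summarize these products and recommend one for a week-long trek.",
)

print(response.generated)  # the model's answer, grounded in the retrieved objects
client.close()
```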

Speaker

Zain Hasan is a developer relations engineer at Weaviate. An engineer and data scientist by training, he pursued his undergraduate and graduate work at the University of Toronto, building artificially intelligent assistive technologies, and then founded VinciLabs, a company in the digital health-tech space. More recently, he worked as a senior data science consultant in Toronto. Zain is passionate about machine learning, education, and public speaking.

Please join the Data Phoenix Discord and follow us on LinkedIn and YouTube to stay updated on our community events and the latest AI and data news.

Location
https://events.dataphoenix.info/live