
Sharing our tricks and magic for pushing generative AI applications into production with open source

Hosted by Philip Vollet, Tuana Çelik & Ronny Hoesada
About Event

Join us for a fun evening with snacks, drinks, and lots of knowledge to unlock the true potential of AI!

Integrate your vast internal knowledge base, build production-ready RAG (Retrieval-Augmented Generation) pipelines, and guide your model to produce accurate results.

  • Ever felt like your LLM daydreams its own alternative facts?

  • Ever hit a knowledge wall because your AI's memory just isn't expansive enough or its wisdom doesn't stretch far enough?

  • Tired of digging deep into your pockets just to keep that model finely tuned?

Talks this evening

Customizing LLM Applications with Haystack

Every LLM application comes with a unique set of requirements, use cases, and restrictions. Let's see how we can use open-source tools and frameworks to design around our custom needs.
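For a flavour of what such a custom pipeline can look like, here is a minimal sketch of a retrieval-augmented pipeline built with Haystack 2.x. The document content, question, and model name are placeholders, and an OPENAI_API_KEY environment variable is assumed; the actual talk may use a different setup.

```python
from haystack import Document, Pipeline
from haystack.components.builders import PromptBuilder
from haystack.components.generators import OpenAIGenerator
from haystack.components.retrievers.in_memory import InMemoryBM25Retriever
from haystack.document_stores.in_memory import InMemoryDocumentStore

# Placeholder knowledge base: swap in your own internal documents.
store = InMemoryDocumentStore()
store.write_documents([Document(content="Haystack pipelines are built from connectable components.")])

template = """Answer the question using only the context below.
Context:
{% for doc in documents %}{{ doc.content }}
{% endfor %}
Question: {{ question }}
Answer:"""

pipeline = Pipeline()
pipeline.add_component("retriever", InMemoryBM25Retriever(document_store=store))
pipeline.add_component("prompt_builder", PromptBuilder(template=template))
pipeline.add_component("llm", OpenAIGenerator(model="gpt-4o-mini"))  # model name is a placeholder
pipeline.connect("retriever.documents", "prompt_builder.documents")
pipeline.connect("prompt_builder.prompt", "llm.prompt")

question = "How are Haystack pipelines built?"
result = pipeline.run({"retriever": {"query": question}, "prompt_builder": {"question": question}})
print(result["llm"]["replies"][0])
```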

Build bulletproof generative AI applications with Weaviate and LLMs

Building AI applications for production is challenging: your users don't like to wait, and delivering the right results in milliseconds instead of seconds will win their hearts. We'll show you how to build caching, fact-checking, and RAG (Retrieval-Augmented Generation) pipelines with real-world examples, live demos, and ready-to-run GitHub projects using Weaviate, your favorite open-source vector database.
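To make the retrieval step concrete, here is a minimal sketch of a semantic query against Weaviate using the v3 Python client. The local endpoint, the "Document" class, its "content" property, and the enabled text2vec vectorizer module are all assumptions for illustration, not details from the talk.

```python
import weaviate

# Assumes a Weaviate instance running locally with a text2vec-* vectorizer module
# enabled, and a (hypothetical) "Document" class holding a "content" property.
client = weaviate.Client("http://localhost:8080")

response = (
    client.query
    .get("Document", ["content"])
    .with_near_text({"concepts": ["retrieval augmented generation in production"]})
    .with_limit(3)
    .do()
)

# Print the top matches returned by the vector search.
for doc in response["data"]["Get"]["Document"]:
    print(doc["content"])
```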

Context Matters: Boosting LLM Accuracy with Unstructured.io Metadata

Retrieval-Augmented Generation (RAG) pipelines, limited by plain-text representation and token-size restrictions, often struggle to capture specific, factual information from reliable source documents. Discover how to use metadata and vector search to enhance the ability of LLMs to accurately retrieve specific knowledge and facts from a vast array of documents.
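As a rough illustration of the idea, the sketch below partitions a document with the unstructured library and inspects the per-element metadata (file name, page number, element type) that can later be attached to chunks in a vector store. The file path is a placeholder, and the exact metadata fields depend on the document type.

```python
from unstructured.partition.auto import partition

# Partition a source file into typed elements (Title, NarrativeText, Table, ...).
# "annual_report.pdf" is a placeholder path.
elements = partition(filename="annual_report.pdf")

for element in elements:
    meta = element.metadata.to_dict()  # e.g. filename, page_number
    print(element.category, meta.get("page_number"), element.text[:80])
```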

Your AI future awaits! 🌟