
Should I Use RAG or Fine-Tuning?

About Event

The Data Phoenix team invites you to our upcoming webinar, which will take place on May 2nd at 10 a.m. PT.

  • Topic: Should I Use RAG or Fine-Tuning?

  • Speakers: “Dr. Greg” Loughnane (Co-Founder & CEO of AI Makerspace) & Chris “The Wiz” Alexiuk (Co-Founder & CTO at AI Makerspace)

  • Participation: free (but you’ll be required to register)

One question we get a lot as we teach students around the world to build, ship, and share production-grade LLM applications is “Should I use RAG or fine-tuning?”

The answer is yes. You should use RAG AND fine-tuning, especially if you’re aiming at human-level performance in production.

To best understand exactly how and when to use RAG and Supervised Fine-Tuning (a.k.a. SFT, or simply fine-tuning), there are many nuances we must consider!

In this event, we’ll zoom in on prototyping LLM applications and provide mental models for when to use RAG and when to use fine-tuning. We’ll also dive into how fine-tuned models, including both LLMs and embedding models, are typically leveraged within RAG applications.

Specifically, we will break down Retrieval Augmented Generation into dense vector retrieval plus in-context learning. With this in mind, we’ll detail the primary forms of fine-tuning you need to know: task training, constraining the input-output (I/O) schema, and language training.
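
To make that decomposition concrete, here is a minimal sketch of dense vector retrieval plus in-context learning, assuming the sentence-transformers library; the model name, toy corpus, and prompt template are illustrative and not from the webinar itself.

```python
# Minimal sketch: RAG = dense vector retrieval + in-context learning.
# Assumes sentence-transformers; model, corpus, and prompt are illustrative.
import numpy as np
from sentence_transformers import SentenceTransformer

corpus = [
    "RAG retrieves relevant context before generation.",
    "Fine-tuning adapts model weights to a task or domain.",
    "Embedding models map text to dense vectors.",
]

encoder = SentenceTransformer("all-MiniLM-L6-v2")
corpus_vecs = encoder.encode(corpus, normalize_embeddings=True)

def retrieve(query: str, k: int = 2) -> list[str]:
    """Dense vector retrieval: rank documents by cosine similarity."""
    q_vec = encoder.encode([query], normalize_embeddings=True)
    scores = corpus_vecs @ q_vec.T  # cosine similarity (vectors are normalized)
    top_k = np.argsort(-scores.ravel())[:k]
    return [corpus[i] for i in top_k]

query = "When should I fine-tune instead of retrieving?"
context = "\n".join(retrieve(query))

# In-context learning: retrieved context is injected into the prompt,
# steering the LLM with no weight updates at all.
prompt = f"Answer using only the context below.\n\nContext:\n{context}\n\nQuestion: {query}"
print(prompt)
```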

Finally, we’ll walk through an end-to-end, domain-adapted RAG application that solves a real use case. All code will be demoed live, including everything needed to build our RAG application with LangChain v0.1 and to fine-tune an open-source embedding model from Hugging Face!
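
As a rough preview of what embedding fine-tuning can look like, here is a hedged sketch using the sentence-transformers training API with a contrastive objective; the base model, toy training pairs, and hyperparameters are illustrative assumptions, not the webinar’s exact recipe.

```python
# Hedged sketch of embedding fine-tuning with sentence-transformers.
# The base model, training pairs, and hyperparameters are illustrative.
from torch.utils.data import DataLoader
from sentence_transformers import SentenceTransformer, InputExample, losses

# Hypothetical domain (query, relevant-passage) pairs.
train_examples = [
    InputExample(texts=["What is dense retrieval?",
                        "Dense retrieval ranks documents by embedding similarity."]),
    InputExample(texts=["What does SFT change?",
                        "Supervised fine-tuning updates model weights on labeled data."]),
]

model = SentenceTransformer("sentence-transformers/all-MiniLM-L6-v2")
loader = DataLoader(train_examples, shuffle=True, batch_size=2)

# Contrastive objective: pull each query toward its passage and push it
# away from the other passages in the batch.
loss = losses.MultipleNegativesRankingLoss(model)

model.fit(train_objectives=[(loader, loss)], epochs=1, warmup_steps=10)
model.save("domain-adapted-embedder")
```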

You’ll learn:

  • That RAG and fine-tuning are not alternatives, but two pieces of the same puzzle

  • That RAG and fine-tuning are not specific techniques; they are patterns

  • How to build a RAG application using fine-tuned, domain-adapted embeddings (see the sketch after this list)
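
For a sense of how the pieces connect, here is a small sketch of plugging a fine-tuned embedder into a LangChain v0.1 retriever; the saved model path and toy documents are assumptions carried over from the sketches above.

```python
# Hedged tie-in: a fine-tuned embedder behind a LangChain v0.1 retriever.
# Requires langchain-community and faiss-cpu; documents are illustrative.
from langchain_community.embeddings import HuggingFaceEmbeddings
from langchain_community.vectorstores import FAISS

# Load the domain-adapted embedding model saved in the previous sketch.
embeddings = HuggingFaceEmbeddings(model_name="domain-adapted-embedder")

# Index a toy corpus and expose it as a retriever.
store = FAISS.from_texts(
    ["RAG retrieves context before generation.",
     "Fine-tuning adapts weights to a domain."],
    embedding=embeddings,
)
retriever = store.as_retriever(search_kwargs={"k": 1})

print(retriever.get_relevant_documents("What does fine-tuning change?"))
```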

Who should attend the event?

  • Any GenAI practitioner who has asked themselves “Should I use RAG or fine-tuning?”

  • Aspiring AI Engineers looking to build and fine-tune complex LLM applications

  • AI Engineering leaders who want to understand primary patterns for GenAI prototypes

Speakers

“Dr. Greg” Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021 he has built and led industry-leading Machine Learning education programs. Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher. He loves trail running and is based in Dayton, Ohio.

Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator whose motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.
