
Fine-tuning LLMs: Deep-dive on Preference & Continued Fine-tuning

Hosted by Together AI
About Event

Join us for an exploration of advanced techniques to continuously adapt and improve your large language models. This session features practical demonstrations and actionable insights for implementing preference tuning and continual fine-tuning that evolve with your application needs.

What You'll Learn:

  • Fine-Tuning vs RAG: When to fine-tune your LLM and when Retrieval-Augmented Generation (RAG) is the better choice.

  • Preference Tuning & DPO: Learn how Direct Preference Optimization (DPO) lets you align models with human preferences without expensive reinforcement learning, and see how to use pairwise feedback data effectively to shape model outputs and behavior (a loss sketch follows this list).

  • Continual Fine-Tuning Strategies: Go beyond one-time customization with ongoing checkpoint tuning approaches. Learn how to safely and effectively update your models as your application, data, and requirements evolve.

  • Implementation Deep Dive: Practical code demonstrations for setting up continuous training pipelines that automatically incorporate new data and preferences (a minimal pipeline sketch also appears below).
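
As a taste of the preference-tuning material, here is a minimal sketch of the DPO loss in PyTorch. It assumes you have already computed summed token log-probabilities for each chosen/rejected response pair under both the policy being trained and a frozen reference model; the function and argument names are illustrative, not a specific library API.

```python
import torch
import torch.nn.functional as F

def dpo_loss(policy_chosen_logps: torch.Tensor,
             policy_rejected_logps: torch.Tensor,
             ref_chosen_logps: torch.Tensor,
             ref_rejected_logps: torch.Tensor,
             beta: float = 0.1) -> torch.Tensor:
    # Implicit reward for each response: the beta-scaled log-ratio of
    # the policy to the frozen reference model.
    chosen_rewards = beta * (policy_chosen_logps - ref_chosen_logps)
    rejected_rewards = beta * (policy_rejected_logps - ref_rejected_logps)
    # Logistic loss on the reward margin: pushes the policy to prefer
    # the chosen response over the rejected one in each pair.
    return -F.logsigmoid(chosen_rewards - rejected_rewards).mean()
```

Because the reference model appears only through its log-probabilities, they can be precomputed once per dataset, which is part of why DPO avoids the cost of RLHF-style training.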
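
And as a flavor of the continual fine-tuning topic, here is a sketch of one update step that resumes training from the most recent checkpoint on freshly collected data. It assumes Hugging Face transformers/datasets, a local checkpoints/ directory of date-sortable checkpoints, and JSONL data with a "text" field; all of these are illustrative assumptions, not the pipeline the session will present.

```python
from pathlib import Path
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling,
                          Trainer, TrainingArguments)

CKPT_ROOT = Path("checkpoints")  # hypothetical checkpoint directory

def continual_step(new_data_path: str) -> None:
    # Resume from the most recent checkpoint rather than the base model,
    # so each round of tuning builds on the last (assumes the directory
    # is non-empty and sorts chronologically).
    ckpt = str(sorted(CKPT_ROOT.iterdir())[-1])
    tokenizer = AutoTokenizer.from_pretrained(ckpt)
    if tokenizer.pad_token is None:
        tokenizer.pad_token = tokenizer.eos_token
    model = AutoModelForCausalLM.from_pretrained(ckpt)

    # Newly collected examples: one JSON object with a "text" field per line.
    dataset = load_dataset("json", data_files=new_data_path, split="train")
    dataset = dataset.map(
        lambda batch: tokenizer(batch["text"], truncation=True, max_length=1024),
        batched=True,
        remove_columns=dataset.column_names,
    )

    args = TrainingArguments(
        output_dir=str(CKPT_ROOT / "next"),
        num_train_epochs=1,       # small, frequent updates
        learning_rate=1e-5,       # conservative LR to limit forgetting
        save_strategy="epoch",    # persist a checkpoint for the next round
    )
    Trainer(
        model=model,
        args=args,
        train_dataset=dataset,
        # Causal-LM collator copies input_ids to labels for next-token loss.
        data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
    ).train()
```
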

This session is ideal for ML engineers, AI developers, and organizations looking to maintain state-of-the-art performance through sophisticated, ongoing model adaptation strategies.
