Community Paper Reading: RAFT - Adapting Language Model to Domain Specific RAG
Past Event
About Event
We’re excited to host Sai Kolasani, researcher at UC Berkeley’s RISE Lab, to talk about his work on RAFT: Adapting Language Model to Domain Specific RAG. RAFT is a training recipe that improves an LLM’s ability to answer questions in an “open-book,” in-domain setting. Given a question and a set of retrieved documents, the model is trained to ignore documents that don’t help answer the question (aka distractor documents). This, coupled with RAFT’s chain-of-thought-style responses, helps improve the model’s ability to reason. In domain-specific RAG, RAFT consistently improves the model’s performance across the PubMed, HotpotQA, and Gorilla datasets, presenting a post-training recipe for adapting pre-trained LLMs to in-domain RAG.
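To make the recipe concrete, here is a minimal sketch of how a RAFT-style training example might be assembled. This is an illustrative approximation, not the paper's actual code: the function name `build_raft_example`, the prompt wording, and the default values for `num_distractors` and `p_oracle` (the fraction of examples that retain the oracle document) are all assumptions for this sketch.

```python
import random

def build_raft_example(question, oracle_doc, distractor_pool,
                       cot_answer, num_distractors=3, p_oracle=0.8,
                       rng=None):
    """Assemble one RAFT-style training example (illustrative sketch).

    With probability p_oracle the oracle (answer-bearing) document is
    kept in the context alongside sampled distractors; otherwise the
    context is distractors only, which pushes the model to rely on
    domain knowledge rather than always trusting retrieval.
    """
    rng = rng or random.Random()
    # Sample distractor documents that do not help answer the question.
    context = rng.sample(distractor_pool, num_distractors)
    has_oracle = rng.random() < p_oracle
    if has_oracle:
        context.append(oracle_doc)
    rng.shuffle(context)  # don't leak the oracle's position

    prompt = (
        "Answer the question using only the documents that are relevant.\n\n"
        + "\n\n".join(f"[Doc {i + 1}] {doc}" for i, doc in enumerate(context))
        + f"\n\nQuestion: {question}"
    )
    # The target is a chain-of-thought answer that cites the oracle content.
    return {"prompt": prompt, "completion": cot_answer, "has_oracle": has_oracle}
```

Fine-tuning on a mix of oracle-plus-distractor and distractor-only contexts, with chain-of-thought targets, is the core of what the talk covers.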