Presented by
Arize AI
Generative AI-focused workshops, hackathons, and more. Come build with us!
Hosted By
15 Went

Community Paper Reading: RAFT - Adapting Language Model to Domain Specific RAG

Zoom
Past Event
About Event

We’re excited to host Sai Kolasani, researcher at UC Berkeley’s RISE Lab, to talk about his work on RAFT: Adapting Language Model to Domain Specific RAG. RAFT is a training recipe that improves an LLM’s ability to answer questions in “open-book,” in-domain settings. Given a question and a set of retrieved documents, the model is trained to ignore documents that don’t help answer the question (aka distractor documents). This, coupled with RAFT’s chain-of-thought-style responses, improves the model’s ability to reason. In domain-specific RAG, RAFT consistently improves model performance across the PubMed, HotpotQA, and Gorilla datasets, offering a post-training recipe for adapting pre-trained LLMs to in-domain RAG.
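To make the recipe concrete, here is a minimal sketch of how a single RAFT-style fine-tuning example might be assembled: the prompt mixes the oracle document with distractors (sometimes omitting the oracle so the model must rely on what it has internalized), and the target is a chain-of-thought answer grounded in the relevant document. The helper name, prompt layout, and `p_oracle` probability are illustrative assumptions, not the paper's exact implementation.

```python
import random

def make_raft_example(question, oracle_doc, distractor_docs,
                      cot_answer, p_oracle=0.8):
    """Assemble one RAFT-style training example (hypothetical helper).

    With probability p_oracle the oracle document appears among the
    distractors; otherwise only distractors are shown, which encourages
    the model to learn the domain knowledge rather than always copy
    from retrieval.
    """
    docs = list(distractor_docs)
    if random.random() < p_oracle:
        docs.append(oracle_doc)
    random.shuffle(docs)  # the model must find the relevant document itself
    context = "\n\n".join(f"[Doc {i + 1}] {d}" for i, d in enumerate(docs))
    prompt = f"{context}\n\nQuestion: {question}\nAnswer:"
    # Target: a chain-of-thought answer that cites only the oracle document.
    return {"prompt": prompt, "completion": cot_answer}

example = make_raft_example(
    question="Which enzyme unwinds DNA during replication?",
    oracle_doc="Helicase unwinds the DNA double helix at the replication fork.",
    distractor_docs=["Ligase joins Okazaki fragments.",
                     "Polymerase synthesizes new strands."],
    cot_answer=("##Reason: The context states 'Helicase unwinds the DNA "
                "double helix at the replication fork.' ##Answer: helicase"),
)
print(example["prompt"])
```

A dataset of such examples can then be fed to any standard supervised fine-tuning loop; the distractor-heavy prompts are what teach the model to ignore unhelpful retrieved documents.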
