
LlamaIndex Webinar: Retrieval-Augmented Fine-Tuning (RAFT)

Hosted by Jerry Liu
Zoom
Past Event
About Event

RAFT - Retrieval-Augmented Fine-Tuning 🔥

Retrieval-Augmented Fine-Tuning (RAFT) by Zhang et al. is a new technique for fine-tuning pre-trained LLMs for domain-specific RAG settings.

Conventional RAG is like an open-book exam: documents are retrieved from an index to provide context for answering queries. This makes it more effective than the closed-book setting, where the LLM relies solely on its pre-training and fine-tuning to respond to prompts, but it doesn't let the model learn the domain beforehand. RAFT bridges the two: the model is fine-tuned on the domain's documents so it arrives at the open-book exam having already studied the material.

We're excited to feature Tianjun Zhang and Shishir Patil, the two lead co-authors of RAFT, in a special LlamaIndex Webinar this Thursday at 9am PT. They'll present RAFT and then join a discussion on fine-tuning and RAG - an ever-relevant topic.

RAFT blog: https://gorilla.cs.berkeley.edu/blogs/9_raft.html


**Extra**

Thanks to Ravi, you can now generate a dataset for RAFT using our RAFTDatasetPack: https://llamahub.ai/l/llama-packs/llama-index-packs-raft-dataset?from=

Notebook: https://github.com/run-llama/llama_index/blob/main/llama-index-packs/llama-index-packs-raft-dataset/examples/raft_dataset.ipynb
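As a rough sketch of how the pack is used (the file path below is a placeholder, and the exact constructor arguments and output format may differ slightly between versions, so treat the notebook above as the source of truth):

```python
# Minimal sketch: generating a RAFT training dataset with the RAFTDatasetPack.
# pip install llama-index-packs-raft-dataset
from llama_index.packs.raft_dataset import RAFTDatasetPack

# Point the pack at a domain document (placeholder path). The pack chunks the
# text, generates questions, pairs them with oracle and distractor chunks, and
# produces chain-of-thought answers suitable for fine-tuning.
raft_pack = RAFTDatasetPack(file_path="./data/domain_document.txt")

dataset = raft_pack.run()              # generated RAFT training examples
dataset.to_json("raft_dataset.jsonl")  # export for a fine-tuning job
```

The resulting dataset follows the RAFT recipe: each example contains a question, a mix of relevant and distractor context, and a reasoned answer, so the fine-tuned model learns to answer from retrieved documents rather than memorized knowledge.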
