LlamaIndex Webinar: NUDGE - Lightweight Non-Parametric Fine-Tuning of Embeddings for Retrieval

Hosted by Jerry Liu
Zoom
About Event

Fine-tuning your embedding model is an underrated way of increasing RAG performance - come learn about it!

We're excited to host the authors of NUDGE (Sepanta Zeighami et al.) - a new non-parametric approach to embedding fine-tuning - for a special LlamaIndex webinar.

Existing embedding optimization approaches either fine-tune the base model directly (which has the downside of requiring you to re-index all your data with the new model) or train adaptor models that transform the output of the pre-trained model at inference time.

In contrast, NUDGE directly modifies the data embedding records themselves, within a constrained bound: the embeddings stay close to their pre-trained values while being "nudged" into a space better suited to the use case at hand.
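
To make the idea concrete, here's a minimal NumPy sketch of that bounded-update scheme, written from the description above rather than from the authors' code: each data embedding is moved toward the training queries that should retrieve it, and the change is projected back into a small ball around the original embedding. The function name, the `max_delta` bound, and the single-step update are illustrative assumptions; the paper derives exact closed-form solutions.

```python
import numpy as np

def nudge_embeddings(data_emb, query_emb, labels, max_delta=0.1, lr=1.0):
    """Illustrative sketch of a NUDGE-style bounded embedding update.

    data_emb:  (n_records, d) pre-computed data embeddings.
    query_emb: (n_queries, d) embeddings of training queries.
    labels:    (n_queries,) index of the ground-truth record per query.
    Returns nudged copies of data_emb; the originals are untouched.
    """
    nudged = data_emb.astype(np.float64).copy()

    # Accumulate, per record, the sum of the query embeddings that should
    # retrieve it: this is the gradient of total dot-product similarity.
    grad = np.zeros_like(nudged)
    np.add.at(grad, labels, query_emb)

    # Project each record's change back into a ball of radius max_delta,
    # so embeddings don't drift far from their pre-trained values.
    delta = lr * grad
    norms = np.linalg.norm(delta, axis=1, keepdims=True)
    scale = np.minimum(1.0, max_delta / np.maximum(norms, 1e-12))
    nudged += delta * scale

    # Re-normalize so cosine-similarity search behaves as before.
    nudged /= np.linalg.norm(nudged, axis=1, keepdims=True)
    return nudged
```

Note the key property this sketch shares with NUDGE: it never touches the embedding model itself, only the stored vectors, which is why no re-indexing through the model and no extra inference-time compute are needed.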

Benefits: 

✅ NUDGE runs in minutes over millions of data records, versus hours for fine-tuning an embedding model

✅ You don't need any knowledge of the original embedding model

✅ No added compute at inference time

Try it today in LlamaIndex with a simple code import. Thanks to Zac Wellmer for contributing an embedding fine-tuning integration here: https://github.com/run-llama/llama_index/blob/main/docs/docs/examples/finetuning/embeddings/finetune_corpus_embedding.ipynb

Source paper: https://www.arxiv.org/pdf/2409.02343
