
BerriAI: Improving ChatGPT/LLM App results via chunking strategies

Hosted by Ishaan Jaffer & Krrish Dholakia

Zoom
Past Event
About Event

Join us this Sunday for our virtual workshop - a deep dive into data chunking strategies!

As GPT has a limited context window of around 4-8k tokens, breaking down data into smaller chunks becomes necessary. Our default approach is to divide the data into smaller, fixed-size chunks.
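
For reference, a minimal sketch of that default fixed-size approach might look like the following (the chunk size and overlap values here are illustrative placeholders, not settings from the workshop):

```python
def chunk_text(text: str, chunk_size: int = 500, overlap: int = 50) -> list[str]:
    """Split text into overlapping, fixed-size character chunks."""
    step = chunk_size - overlap
    return [text[i:i + chunk_size] for i in range(0, len(text), step)]

# Usage: `document` is whatever source text you want to index.
document = "..."
chunks = chunk_text(document)
```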

During this workshop, we will focus on improving the performance of your ChatGPT/LLM app using the following data chunking strategies:

  • Chunking data into smaller, fixed-size chunks

  • Creating a summary for each chunk

  • Generating potential questions/answers users might ask (a rough sketch of this and the previous item follows the list)

  • Hypothetical Document Embeddings (HyDE)

  • Writing your own custom chunking strategies and applying them to all chunks

  • Prototyping efficient data loading and pre-processing strategies
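
To make the summary and question-generation ideas above concrete, here is a rough sketch of enriching each chunk before embedding it. The complete() helper is a hypothetical stand-in for whichever LLM client you use, and the prompts are only examples:

```python
def complete(prompt: str) -> str:
    """Hypothetical stand-in: send `prompt` to your LLM and return its reply."""
    raise NotImplementedError("wire this up to your LLM provider of choice")

def enrich_chunk(chunk: str) -> dict:
    """Attach a one-sentence summary and likely user questions to a chunk."""
    summary = complete(f"Summarize this passage in one sentence:\n{chunk}")
    questions = complete(
        f"List three questions a reader might ask that this passage answers:\n{chunk}"
    )
    return {"text": chunk, "summary": summary, "questions": questions}

# Usage, given a list of chunks (e.g. from the fixed-size sketch above):
# enriched = [enrich_chunk(c) for c in chunks]
```

Embedding the summary and generated questions alongside (or instead of) the raw chunk text can help retrieval match the way users actually phrase their queries.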

Come collaborate with other AI engineers, data scientists, and ML engineers to discuss best practices and explore innovative techniques.

By the end of the workshop, you will have a solid understanding of how to implement advanced chunking strategies to improve the performance of your ChatGPT/LLM app.