Building and Analyzing an AI-powered Search System with Arize & LlamaIndex
As more and more companies use LLMs to build their own chatbots or search systems, poor retrieval is a common issue plaguing teams. So how can you efficiently build and analyze the LLM application that powers your search system, and how do you know where to improve it? If the knowledge base has no relevant context to pull in, the prompt won't have enough context to answer the question.
Imagine you've built and deployed an LLM question-answering service that enables users to ask questions and receive answers from a knowledge base. You want to understand what kinds of questions your users are asking and whether you're providing good answers to those questions.
In this hands-on workshop, you will download a pre-indexed knowledge base and run a LlamaIndex application. You will download user query data and knowledge base data, including embeddings computed using the OpenAI API. Using Phoenix, you will investigate clusters of user queries with limited or no corresponding knowledge base entries.
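To give a flavor of the hands-on portion, here is a minimal sketch of loading a persisted LlamaIndex knowledge base and querying it. The persist directory, the sample question, and the API key setup are illustrative assumptions rather than the workshop's exact materials, and the import paths vary across LlamaIndex versions.

```python
import os

# Assumes a recent LlamaIndex release; older versions import from `llama_index` directly.
from llama_index.core import StorageContext, load_index_from_storage

# LlamaIndex uses the OpenAI API by default for embeddings and completions.
os.environ["OPENAI_API_KEY"] = "sk-..."  # replace with your own key

# Load a pre-built index that was persisted to disk (hypothetical path).
storage_context = StorageContext.from_defaults(persist_dir="./index")
index = load_index_from_storage(storage_context)

# Ask a question against the knowledge base and print the answer.
query_engine = index.as_query_engine()
response = query_engine.query("How do I get started with Phoenix?")
print(response)
```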
You’ll learn how to identify whether there is decent overlap between queries and context, locate where there is a density of queries without enough context, and determine the next steps you can take to fine-tune your model.
Learning Objectives:
A hands-on demonstration focused on building and analyzing a context retrieval use case. Workshop participants will have the opportunity to investigate the model in Colab.
Once the application is built with LlamaIndex, use Phoenix to visualize the query and context density of the model (a sketch follows below).
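As a rough preview of that Phoenix step: two dataframes, one of user queries and one of knowledge base documents, each with precomputed OpenAI embeddings, are loaded into Phoenix and visualized together. The dataframes and column names here are hypothetical, and class names can differ across Phoenix versions (e.g. `px.Inferences` was previously called `px.Dataset`), so treat this as a sketch under those assumptions.

```python
import pandas as pd
import phoenix as px

# Hypothetical dataframes: each row holds raw text plus a precomputed
# OpenAI embedding vector (e.g. from text-embedding-ada-002).
query_df = pd.read_parquet("queries.parquet")      # columns: text, text_vector
database_df = pd.read_parquet("database.parquet")  # columns: text, text_vector

# Describe the user queries: the embedding drives the point cloud,
# the raw text is shown when you inspect a point.
query_schema = px.Schema(
    prompt_column_names=px.EmbeddingColumnNames(
        vector_column_name="text_vector",
        raw_data_column_name="text",
    ),
)

# Describe the knowledge base entries the retriever searches over.
database_schema = px.Schema(
    document_column_names=px.EmbeddingColumnNames(
        vector_column_name="text_vector",
        raw_data_column_name="text",
    ),
)

# Launch the Phoenix app with queries as the primary inferences and the
# knowledge base as the retrieval corpus. Clusters of queries far from
# any document point to gaps in the knowledge base.
session = px.launch_app(
    primary=px.Inferences(dataframe=query_df, schema=query_schema, name="queries"),
    corpus=px.Inferences(dataframe=database_df, schema=database_schema, name="database"),
)
print(session.url)
```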
** Please bring your laptop to follow along with this hands-on workshop.
---
Hosts: Arize AI & LlamaIndex
Agenda:
5:30-6pm - attendee check-in
6-7:15pm - demos + hands-on workshop
7:15-8pm - networking