
LLM Observability 101: Common Challenges Seen in Production

 
 
Zoom
Past Event
About Event

In today’s hyper-connected world, the pace of AI development and innovation is staggering, reshaping industries and redefining what’s possible at unprecedented speed. According to a recent survey, over half (53%) of data science and machine learning teams plan to deploy large language model (LLM) applications into production in the next 12 months or “as soon as possible”; however, nearly as many (43%) cite issues such as response accuracy and hallucinations as a primary barrier to implementation.

In this Lunch & Learn, we will discuss how best to address the challenges machine learning and data science teams face when putting LLMs into production.

Learning Objectives:

  • Understand the landscape of AI innovation, including LLMs, and its transformative potential

  • Discover the foundational technologies required to build robust and resilient LLM infrastructure

  • Deep-dive into the world of word embeddings, learning how these vector representations are fundamental to the operation of language models (see the sketch after this list)

  • Understand where issues generally emerge with LLMs in production, their causes, and implications for your LLMOps practice. 

  • Get an introduction to strategies for monitoring, troubleshooting, and fine-tuning LLMs.
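
To make the word-embedding objective concrete, here is a minimal Python sketch that treats embeddings as dense vectors and compares them with cosine similarity. The three-dimensional toy vectors and the word choices are assumptions for illustration only; real embedding models produce vectors with hundreds or thousands of dimensions.

# Minimal sketch: word embeddings as dense vectors compared with cosine similarity.
# The vectors below are toy values chosen for illustration, not output from a real model.
import numpy as np

embeddings = {
    "king":  np.array([0.80, 0.65, 0.10]),
    "queen": np.array([0.78, 0.70, 0.12]),
    "apple": np.array([0.05, 0.10, 0.90]),
}

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    # Cosine of the angle between two vectors; values near 1.0 indicate similar direction.
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

print(cosine_similarity(embeddings["king"], embeddings["queen"]))  # high (~0.99)
print(cosine_similarity(embeddings["king"], embeddings["apple"]))  # low (~0.21)

Language models operate on this kind of representation: tokens are mapped to vectors, and closeness in that vector space is what captures semantic similarity.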