Cover Image for Guardrails
Cover Image for Guardrails
Avatar for Public AIM Events!
Presented by
Public AIM Events!
Hosted By
16 Going

Guardrails

YouTube
Registration
Welcome! To join the event, please register below.
About Event

Everyone talks about putting guardrails on production LLM applications.

But why do we need guardrails, what are they exactly, and where do they live in our production application infrastructure stack?

Why Guardrails

It’s all about the safety and reliability of our applications. This means that our applications are safe and reliable for our customers and our business.

It’s not just about hallucinations and undesirable outputs, but also about prompt injections & jailbreaks as well as data leakage.

The “How” of Guardrails

When we put our applications “on rails,” we think about many different techniques.

These techniques include ones that we might apply during the training of LLMs, prior to input (pre-generation), during inference or orchestration, following output (post-generation), as well as during operation of our application.

Importantly guardrails should be thought of as a layered approach, not a single tactic.

Importantly, the best-practices for first-cut “guardrails” today is to use another LLM on the input and/or on the output of your application.

But what other LLM should we use? We recently looked at Llama Guard. Is this one-size-fits-all model the best place to start?

To investigate this, as well as to gain key insights into the most important layers of guardrails in 2025, during this event we’re going to deep dive the latest “Comprehensive AI Guardrails Benchmark” from Guardrails AI, where 20+ guardrail solutions are evaluated across 6 critical safety domains, including:

  • Jailbreak Prevention: Safeguarding against unauthorized system access and misuse

  • PII Detection: Protecting sensitive personal information from exposure

  • Content Moderation: Ensuring appropriate and compliant content generation

  • Hallucination Detection: Identifying and mitigating inaccurate or misleading outputs

  • Competitor Presence: Preventing unauthorized use of proprietary data and models

  • Restricted Topics: Enforcing content boundaries and avoiding sensitive subjects

To use guardrails, we need to understand the core concept of The Guard that we can use to run the Guardrails AI engine. This guard object is the main interface for guardrails and is responsible for both wrapping LLM calls and orchestrating validation. We will explore the different ways we can use The Guard in our applications.

Of course, we’ll also build, ship, and share a production LLM application use case with guadrails in mind, and offer an end-to-end example of how we might leverage guardrails for a specific application we’re interested in.

📚 You’ll learn:

  • Where to start with putting guardrails on your next production LLM applications

  • The key categories of guardrails that you should think about with every use case in 2025

  • How to leverage the Guardrails AI library to put your LLM applications on rails

  • How to build, ship, and share a best-practice agent application with guadrails

🤓 Who should attend the event:

  • AI Engineers who want to put guardrails on their production agent applications in 2025

  • AI Engineering leaders who want to protect their company’s brand, competitive advantage, investment in AI, and customer/stakeholder privacy as they continue building and generating ROI on production LLM, RAG, and Agent applications

Speaker Bios

  • Dr. Greg” Loughnane is the Co-Founder & CEO of AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. Since 2021, he has built and led industry-leading Machine Learning education programs.  Previously, he worked as an AI product manager, a university professor teaching AI, an AI consultant and startup advisor, and an ML researcher.  He loves trail running and is based in Dayton, Ohio.

  • Chris “The Wiz” Alexiuk is the Co-Founder & CTO at AI Makerspace, where he is an instructor for their AI Engineering Bootcamp. During the day, he is also a Developer Advocate at NVIDIA. Previously, he was a Founding Machine Learning Engineer, Data Scientist, and ML curriculum developer and instructor. He’s a YouTube content creator who’s motto is “Build, build, build!” He loves Dungeons & Dragons and is based in Toronto, Canada.

Follow AI Makerspace on LinkedIn and YouTube to stay updated about workshops, new courses, and corporate training opportunities.

Avatar for Public AIM Events!
Presented by
Public AIM Events!
Hosted By
16 Going