Intro to AI Safety & Red Teaming

Hosted by Outlier AI
Zoom
About Event

An Intro to AI Red Teaming and Risk-Aware Building

AI is moving fast. New models, expanded capabilities, and better tools hit the market weekly. But alongside that speed, there's a growing need to make sure these systems behave in ways we can actually trust.

Join us for a session that breaks down how to think about AI safety without the jargon. We'll explore how red teaming works: why it's used, what makes it hard, and how it helps surface real-world risks before they spiral.

Whether you’re an AI developer, a builder using models in your workflow, or just AI-curious, this talk will give you a grounding in:

  • What “AI safety” actually means in practice

  • How red teaming helps uncover failure modes

  • Real examples of red teaming techniques

  • How individuals can contribute to safer AI systems

If you build with AI, this talk’s for you.

🎙️ About the Speaker

Jeremy Kritz is a Prompt Engineer and Researcher at Scale working on red teaming and synthetic data research.
