
What makes LLM alignment and safety challenging? 18 foundational challenges!

Hosted by BuzzRobot
Zoom
Past Event
About Event

BuzzRobot speaker Usman Anwar from the University of Cambridge will present the work he led with contributions from over 35 co-authors across NLP, ML, AI Safety, and AI Ethics.

In this work, researchers identify 18 foundational challenges in assuring the alignment and safety of large language models (LLMs).

These challenges are organized into three categories:

  • scientific understanding of LLMs

  • development and deployment methods

  • socio-technical challenges

In this talk, we will discuss each of these challenges and reflect on possible solutions.

Join the BuzzRobot community on Slack

Subscribe to our YouTube channel with previous talks

Read the paper
