
Towards Guaranteed Safe AI: A Framework for Ensuring Robust and Reliable AI Systems

Hosted by BuzzRobot
Zoom
Past Event
About Event

Ensuring that AI systems reliably and robustly avoid harmful or dangerous behaviors is a crucial challenge, especially for AI systems with a high degree of autonomy and general intelligence.

This research, co-authored by prominent researchers and scientists including Max Tegmark and Yoshua Bengio, introduces and defines a family of approaches to AI safety called guaranteed safe (GS) AI.

The core feature of these approaches is that they aim to produce AI systems equipped with high-assurance quantitative safety guarantees.

This is achieved by the interplay of three core components:
- a world model, which provides a mathematical description of how the AI system affects the outside world
- a safety specification, which is a mathematical description of what effects are acceptable
- a verifier, which provides an auditable proof certificate that the AI satisfies the safety specification relative to the world model
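To make the interplay concrete, here is a toy sketch (not taken from the paper) of the three components in Python: a hypothetical one-dimensional world model, a safety specification ruling out one unsafe position, and a verifier that exhaustively checks every state reachable under a simple policy and returns an auditable certificate or a counterexample. All names and the grid-world setup are illustrative assumptions.

```python
# Toy illustration of the GS AI components (illustrative only, not the
# paper's construction): a deterministic 1-D world with positions 0..9.

def world_model(state, action):
    """World model: how an action changes the world (deterministic here)."""
    if action == "right":
        return min(state + 1, 9)
    return max(state - 1, 0)

def safety_spec(state):
    """Safety specification: position 9 (a 'cliff') is unacceptable."""
    return state != 9

def policy(state):
    """The AI system under audit: moves right until position 7, then left."""
    return "right" if state < 7 else "left"

def verifier(initial_state):
    """Verifier: exhaustively explores all states reachable under the policy.
    Returns (True, certificate) where the certificate is the checked reachable
    set, or (False, counterexample) if an unsafe state is reachable."""
    frontier, reachable = [initial_state], set()
    while frontier:
        s = frontier.pop()
        if s in reachable:
            continue
        if not safety_spec(s):
            return False, s  # counterexample: a reachable unsafe state
        reachable.add(s)
        frontier.append(world_model(s, policy(s)))
    return True, sorted(reachable)  # every reachable state was checked

ok, certificate = verifier(0)
```

Because the world model is finite and deterministic, exhaustive exploration suffices; the paper's interest is in scaling this kind of guarantee to far richer world models and specifications.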

The researchers outline several approaches for creating each of these three core components, describe the main technical challenges, and suggest a number of potential solutions.

The paper: https://arxiv.org/pdf/2405.06624

Join the BuzzRobot community on Slack: https://join.slack.com/t/buzzrobot/shared_invite/zt-1zsh7k8pd-iMu_M8bUxIK3pOJgqJgCRQ
