


SL5 Task Force: Securing superintelligent AI models against powerful adversaries (AI Safety Talk & Meetup)
Open meetup for people working on or interested in AI Safety (incl. AI alignment, evals, policy, advocacy and other fields related to safe & beneficial AI). People new to the field welcome!
19:30 Doors open, arrivals, catch up
19:45 Talk + Q&A
20:30 Open announcement & pitch round
20:45 Open discussion & networking
Open end (doors close at 22:30)
Talk: SL5 Task Force - Securing future frontier AI models against powerful adversaries
As AI models become more powerful, the companies building them face more powerful adversaries. As AI approaches human-level capability, we expect a range of risks, but it would be particularly bad if malicious actors got their hands on unprotected copies of extremely intelligent models. To prevent that, AI companies will eventually need to be secured against the most capable adversaries, a level of protection the global policy think tank RAND calls Security Level 5 (SL5).
The SL5 Task Force team is developing plans and prototypes for how to achieve this level of security, under the assumption that we don’t have time to wait for financial incentives to align. Berlin-based AI researcher and aisafety.berlin organiser Guy will share some of his work in the Task Force and answer questions.
Networking
After the talk & Q&A, attendees can introduce themselves and, optionally, pitch topics to discuss one-on-one or in small groups. We expect several other attendees working in related fields (technical AI safety, AI governance, etc.) at the event.
New to AI risk?
You're very welcome! This event is a good starting point to meet the Berlin AI safety community and learn more. If you have time, we recommend having a look at the articles or videos on aisafety.berlin/learn; it's helpful, but not required.
Feedback? Suggestions for future events, speakers or topics? --> Reach out! You can also submit anonymous feedback via this form.
We're looking forward to meeting you!
------------
Note that by attending, you consent to being photographed. If this is a problem for you, let us know!
Subscribe to AI Safety Berlin Announcements on Telegram, Signal or Whatsapp and join the Community Chat.
aisafety.berlin