AI Safety Debate: "Is AI an existential threat to humanity?"
2023 was a big year for AI development and for perceptions of AI risk. Prominent open letters have called for AI development to be paused for six months, and for AI risk to be treated as seriously as nuclear weapons and pandemics. However, these long-term risks from AI remain contentious among experts.
So, is AI an existential threat to humanity?
Join us at UCL on Tuesday 20th February for an exciting debate on one of the most important questions of our generation!
Our Speakers:
Arguing for the motion:
Chris Watkins: Professor of Computer Science at Royal Holloway, and a leading expert on reinforcement learning algorithms
Reuben Adams: UCL AI PhD student and host of the Steering AI Podcast
Arguing against the motion:
Jack Stilgoe: Professor of Science and Technology Studies at UCL, and one of the leaders of UK Research and Innovation’s Responsible AI programme
Kenneth Cukier: Deputy Executive Editor at The Economist in London, and co-host of its weekly tech podcast, Babbage
Moderator:
Tom Ough: Freelance writer whose work has appeared in publications including Prospect, BBC Future, and The Telegraph