Aran Nayebi, CMU Machine Learning Dept | Barriers and Pathways to Human-AI Alignment: A Game-Theoretic Approach
Foresight Institute’s Neurotech Group
Abstract: Under what conditions can capable AI systems efficiently align with human preferences, and when is this alignment computationally feasible? Since such generally capable systems do not yet exist, a theoretical analysis is needed to establish when guarantees hold -- and what they even are. We provide the first complexity-theoretic analysis of the alignment problem, introducing a game-theoretic framework that generalizes prior alignment approaches under minimal assumptions and yields both upper and lower bounds on alignment's complexity across M objectives and N agents. We show that even highly capable, cooperative AI agents -- including those enhanced by brain-computer interfaces -- face inherent bottlenecks when the task space or the number of agents grows large. Nevertheless, we identify key conditions under which efficient alignment remains possible, clarifying what makes an AI agent "sufficiently safe" and valuable to humans.
Full paper: https://arxiv.org/abs/2502.05934
Bio: Aran Nayebi is an Assistant Professor in Carnegie Mellon University's Machine Learning Department and a member of the Neuroscience Institute and the Robotics Institute. His lab works at the intersection of neuroscience and AI to reverse-engineer animal intelligence and build the next generation of autonomous agents. Previously, he was a postdoctoral fellow at MIT, and before that, a Ph.D. student at Stanford University with Dan Yamins and Surya Ganguli.
This seminar is part of Foresight's Neurotech Seminar Series. To join future seminars in this program, please apply here.
A group of neuroscience researchers, entrepreneurs, and allies advancing beneficial short-term and long-term neurotechnology applications.
Nominate a seminar presenter/topic
Feel free to reach out to lydia@foresight.org with any questions.