196 Went

Evan Hubinger (ANTHROPIC) - Talk and Q&A

Hosted by Berkeley AI Safety (BASIS) & Aakarsh Bengani
Berkeley, California
Registration
Sold Out
This event is sold out and no longer taking registrations.
About Event

Join BASIS for our first speaker event of the semester, featuring Evan Hubinger from the Anthropic safety team. The event will consist of a talk followed by a Q&A session. We'll also be providing food for attendees.

About Evan: Evan Hubinger is an American AI alignment and safety researcher. He is known for his work on inner alignment and deceptive alignment, particularly the 2019 paper "Risks from Learned Optimization." His research focuses on understanding how AI systems might develop mesa-optimizers and potentially deceptive behavior. Evan works as a Research Scientist at Anthropic and previously was a Research Fellow at the Machine Intelligence Research Institute (MIRI). He received his B.S. in Computer Science from Harvey Mudd College in 2019.

Location
Please register to see the exact location of this event.
Berkeley, California