
FAR.AI Social: Can we verify AI safety claims?

Hosted by FAR.AI, Vael Gates & Aidan O'Gara
Registration
Approval Required
Your registration is subject to approval by the host.
Welcome! To join the event, please register below.
About Event

On Wednesday, 13 August—the first night of the USENIX Security Symposium—FAR.AI will host a social from 7-10 p.m. at The Fog Room (1610 2nd Ave, a 10-minute walk from the venue; bring your ID!). We expect roughly 50-75 security and hardware specialists from USENIX Security and the University of Washington.

Are you a hardware or systems researcher, SWE, or ML engineer interested in exploring technical approaches to enforcing safety standards for advanced AI?

Drop by to meet peers, swap ideas, and chat about: 

  • Secure hardware features: Confidential Computing, remote attestation, secure boot, and tamper-resistance for AI chips

  • Monitoring compliance: Network monitoring, workload attestation, analog sensor telemetry (e.g. power usage), and secure audits

  • Systems security, red-teaming, and vulnerability detection: Identifying hardware or software exploits that could undermine verification regimes

  • Applications to governance and oversight: How can these technical verification mechanisms be translated into policy-relevant tools that underpin safety standards, regulations, or agreements?

...

The last few years have seen increasing debate about safety standards, domestic regulations, and international agreements on AI development. We are now seeing the first binding AI regulations in some jurisdictions, such as the EU AI Act. However, in all these discussions, one big question looms: Can we verify that AI developers are actually following the rules?

Join us to discuss how technical tools—from secure boot to proof-of-learning—might enable meaningful oversight of advanced AI systems. We're interested in both speculative ideas and practical implementation challenges, with an emphasis on verification mechanisms that could support safe and accountable AI development.

At 8 p.m., we'll dive further into the discussion with a brief talk from Aidan O'Gara, a doctoral student in AI at Oxford University and a grantmaker at Longview Philanthropy, where he funds academic research on hardware security relevant to reducing AI risks.

Looking forward to seeing you there!

...

FAR.AI is a research nonprofit dedicated to making advanced AI systems safe and beneficial.

Location
Fog Room
1610 2nd Ave, Seattle, WA 98101, USA