Joep Meindertsma
invites you to join

PauseAI presents: The Google DeepMind Protest

Hosted by Joseph Miller
About Event

PROTEST GUIDE HERE: https://docs.google.com/document/d/1O3mAduuvY239iY7Bk2tp7Cfyjoli-dpxe2AlgnM0Sjs

In 2024, Google made a public commitment at the AI Summit in Seoul. They signed the Frontier AI Safety Commitments, pledging to conduct rigorous testing of their AI models. They said they would consider results from independent third-party evaluators as appropriate and provide full transparency about the process, including government involvement.

Then came March 2025, when Google released Gemini 2.5 Pro - their most advanced AI model yet. When safety experts looked for the promised testing report, they found nothing. No external evaluation. No transparency report. Just silence.

A month later, under pressure, Google published a barebones "model card" with some internal evaluations but no mention of external evaluations. They later added vague references to "external testers" but provided no details. When Fortune asked directly whether governments were involved, Google refused to answer - violating their transparency pledge.

They made similar commitments to the White House in 2023 and signed the Hiroshima Process International Code of Conduct in 2025. With the Gemini 2.5 Pro release, Google appears to have violated these commitments as well, at least in spirit.

While today's AI models aren't dangerous enough to cause mass destruction, AI development is accelerating unpredictably. We need rigorous testing of each generation to avoid being caught off-guard by sudden leaps in capability. More importantly, Google's casual disregard for safety commitments sets a dangerous precedent.

As AI becomes more powerful, competitive pressures will intensify and the stakes will grow higher. If we allow companies to ignore safety commitments now, when the risks are relatively low, what hope do we have of holding them accountable when AI systems could pose existential threats?

The norms we establish today will shape how the most powerful technology in human history is developed.

PauseAI is a growing movement refusing to accept that AI safety should be an afterthought. On June 30th, we're gathering outside Google DeepMind's London headquarters with a simple message: Stick to your commitments.

Our ask is simple - just that Google honour the promises they've already made and publish timely, transparent reports on the results of their pre-deployment safety testing.

Our ultimate goal is a moratorium on frontier AI development until we can ensure advanced systems will be safe. But we need one thing right now: basic accountability from one of the world's most powerful AI companies.

The future of AI will be shaped by the precedents we set today.

We call on Google DeepMind to:

  1. Establish clear definitions of "deployment" that align with common understanding - when a model is publicly accessible, it is deployed.

  2. Publish a specific timeline for when safety evaluation reports will be released for all future models.

  3. Clarify unambiguously, for each model release, which government agencies and independent third parties are involved in testing, and the exact timelines of their testing procedures.

Location
Granary Square
London N1C, UK
Meet at the Santander Bikes next to the canal.