

PauseAI presents: The Google DeepMind Protest
In 2024, Google made a public commitment at the AI Seoul Summit. They signed the Frontier AI Safety Commitments, pledging to rigorously test their AI models with independent third-party evaluators before release and to provide full transparency about the process, including any government involvement.
Then came March 2025, when Google released Gemini 2.5 Pro - their most advanced AI model yet. When safety experts looked for the promised testing report, they found nothing. No external evaluation. No transparency report. Just silence.
A month later, under pressure, Google published a bare-bones "model card" with no mention of external evaluations. They later added vague references to "external testers" but provided no details. When Fortune asked directly whether governments were involved, Google refused to answer, violating their transparency pledge.
This wasn't Google's only broken promise. They had made similar commitments to the White House in 2023 and signed the Hiroshima Process International Code of Conduct in 2025. With the Gemini 2.5 Pro release, Google appears to have violated all three sets of safety commitments.
While today's AI models aren't dangerous enough to cause mass destruction, AI development is accelerating unpredictably. We need rigorous testing of each generation to avoid being caught off-guard by sudden leaps in capability. More importantly, Google's casual disregard for safety commitments sets a dangerous precedent.
As AI becomes more powerful, competitive pressures will intensify and the stakes will grow higher. If we allow companies to ignore safety commitments now, when the risks are relatively low, what hope do we have of holding them accountable when AI systems could pose existential threats?
The norms we establish today will shape how the most powerful technology in human history is developed.
PauseAI is a growing movement refusing to accept that AI safety should be an afterthought. On June 30th, we're gathering outside Google DeepMind's London headquarters with a simple message: Stick to your commitments.
Our ask is simple - just that Google honour the promises they've already made. Conduct proper external testing before releasing AI models and publish transparent reports about the results.
Our ultimate goal is a moratorium on frontier AI development until we can ensure advanced systems will be safe. But we need one thing right now: basic accountability from one of the world's most powerful AI companies.
The future of AI will be shaped by the precedents we set today.