

PauseAI presents: The Google DeepMind Protest
In 2024, Google made a promise. They signed the Frontier AI Safety Commitments at the AI Seoul Summit, a pledge made in front of governments, the tech industry, and the public. They said they would conduct tests "before deploying" their models, with input from "independent third-party evaluators". They also said they would "provide public transparency" into their testing process, including "how, if at all, external actors, such as governments... are involved in the process".
In March this year, Google released their new Gemini Pro model, the most advanced chatbot ever made. And what did their testing report say about it? Nothing. There was no report.
A month later, they finally published a bare-bones model card that made no mention of external evaluations. A few weeks after that, they updated the report with a vague mention of external testers, but gave no detail about who those external parties were. Even when asked directly by Fortune, they still wouldn't say whether governments were involved, directly violating their pledge.
So how many of the Frontier AI Safety Commitments did they break? We don't know, because they violated the transparency commitment! But it was probably quite a few, or else they wouldn't feel the need to be so secretive.
Amazingly, that isn't the only set of commitments they violated with the Gemini release. In 2023, they made a similar promise to the White House to test their models, "subjecting them to external testing… and making the results of those assessments public". And earlier this year, in 2025, they pledged to report compliance with the Hiroshima Process International Code of Conduct, which requires companies to conduct "external testing measures... before deployment" and publish "up-to-date" transparency reports.
But the models we have today aren't really dangerous; they can't be used to cause mass destruction because they're not smart enough. So why does this matter?
It matters for two reasons. First, AI is progressing very fast. The models we have today aren't scary, but no one knows when they will become truly powerful. Many experts have been shocked by the rate of progress over the past five years, and we can't predict with any precision when our AIs will change from useful chatbots into something more powerful. So we need to test each new generation, so that we always know where we stand and don't get caught off guard by a surprising new leap in capability.
The second reason is even more important. Google is demonstrating their expectation that the rules around AI can simply be ignored whenever they become mildly inconvenient for companies racing for a competitive edge. AI will be the most powerful, complex and dangerous technology that humans ever create. Getting the governance right will be incredibly challenging. The incentives to move as fast as possible will only become stronger. Money and power will only become more concentrated in the hands of a few AI developers. And the number of people affected by AI will only grow. We need to establish strong precedents and norms right now to have any hope of successfully navigating future challenges. And Google ignoring their commitments so soon after making them is not helping.
So how can they get away with what they're doing? Companies normally think twice before blatantly ignoring multiple international commitments made to the world's major governments. Well, one reason is that there is not enough public accountability. Not enough people are calling out their behavior. No one is protesting in the street when companies break their agreements. Until now.
I've had enough of companies making a joke of AI safety. I don't think it's ok for the people building the most powerful technology in history to treat safety as an afterthought. And I know I'm not alone. Lots of people are starting to get concerned about the breathtaking recklessness with which AI companies are racing towards the brink of catastrophe, with total disregard for the consequences for human wellbeing.
So on June 30th, I am organizing a protest outside the London headquarters of Google DeepMind. Our message is simple: stick to your commitments! Google must start following the rules it promised the world it would follow. It must conduct external red-teaming of its models before they are released, and publish the results.
Our movement is called PauseAI. We are a group of people who are willing to take a stand for AI safety, and we are not afraid to call out companies when they undermine international efforts to ensure that AI doesn't harm people. As the name suggests, our ultimate goal is a moratorium on all frontier AI development until we can be sure that these systems will always be safe and aligned with human interests.
But for Monday, our goal is a little more modest. Right now we're just asking for the bare minimum from AI companies. We want Google DeepMind simply to do what it has said it would do since 2023, and not violate any more of its commitments on AI safety. So on Monday you'll find us with our placards in King's Cross, London, doing our bit to make the future of humanity a little bit brighter.