Britain Left Out: The AI Race Just Got More Crowded
Today was one of those days in AI where the news isn’t just news. It’s a signal. Here’s what happened, and what it actually means.
The headline is blunt: OpenAI has paused the Stargate UK data centre programme. The project, part of a £31 billion inward investment package, backed by Nvidia and UK developer Nscale, and loudly championed by the government as proof Britain was serious about AI, has been put on hold.
The reason OpenAI gave is simple. UK electricity is too expensive. The regulatory environment is too uncertain. The business case doesn’t work.
That matters for a few reasons beyond the obvious political embarrassment. Data centres are where AI actually lives. They house the computing power that runs the models. Without enough of them, on your soil, you are renting your AI future from someone else’s infrastructure. The government’s sovereign compute ambitions, the idea that Britain should be able to run its own AI workloads without depending on foreign data centres, just took a serious hit.
To be fair, OpenAI was careful. It said it still sees huge potential in the UK. Translation: come back to us when the energy policy changes. But it isn't waiting around.
The uncomfortable truth is that AI infrastructure investment follows cheap energy and predictable regulation. Right now the UK isn’t competitive on either. That’s not a tech problem. It’s a planning and energy policy problem. And it’s not going to be fixed by another minister giving a speech about AI leadership.
The second story is harder to get your head around. Anthropic has quietly announced Claude Mythos, a new frontier model currently restricted to a small group of vetted partners under something called Project Glasswing. The reason they’re not releasing it more broadly: it’s too capable at finding and exploiting software vulnerabilities.
In testing, Mythos apparently identified thousands of serious security flaws across major operating systems and browsers. Anthropic decided that making that available to everyone, including criminals, nation states, and people who would simply like to cause damage, was not a good idea.
This is worth taking seriously. Anthropic is not a company that tends toward hyperbole. If they say a model is too dangerous to release, they’ve probably done the analysis. It’s also a little alarming. We are now at the point where commercial AI labs are building tools with capabilities that rival state level offensive cyber operations, and then having to make judgment calls about who gets access. That’s a genuinely new situation. The question of who oversees those decisions, and what accountability looks like, doesn’t have a clear answer yet.
For UK policymakers who have spent two years arguing for a pro innovation regulatory approach, broadly meaning don’t regulate until you have to, Mythos is an awkward exhibit. Sometimes the capability arrives before anyone is ready for it.
And then there’s Meta. The company launched Muse Spark, the first model from its new Meta Superintelligence Labs unit, led by Alexandr Wang. The model focuses on reasoning in mathematics, science and health, and powers an upgraded Meta AI assistant on web and mobile. The Meta AI app jumped from around 57th to the top five on the US App Store in the days after launch. That’s a lot of people suddenly noticing it exists.
Meta’s play is different from OpenAI’s or Google’s. They’re not trying to win on the frontier model benchmarks. They’re trying to win on distribution, getting an AI assistant into WhatsApp, Instagram, Facebook, Messenger and their smart glasses, used by people who may never open ChatGPT in their lives. For the UK, where Meta’s platforms reach tens of millions of people, Muse Spark is likely to be the AI model most people actually encounter, whether they know it or not.
Three things happened today that point in the same direction. AI capability is advancing faster than infrastructure, governance and public understanding can keep pace with it. Britain is at risk of becoming a consumer rather than a shaper of that technology. And the companies building these systems are increasingly making consequential decisions, about what to release, what to restrict, where to invest, that governments have very little influence over. That's not a grim conclusion. But it is an honest one. Worth paying attention to.
At a glance
The rest of today’s AI news, in brief:
- CoreWeave expands its Meta AI compute deal from $14bn to $21bn. Capacity now contracted through 2032, backed by Nvidia’s next generation Rubin chips. The scale of AI infrastructure spending continues to be staggering. (Bloomberg)
- Amazon says its in-house AI chip business is on a $20bn annual run rate and may start selling chips externally. Trainium and Inferentia accelerators, developed for AWS’s own use, could become a product in their own right. (Bloomberg)
- The Pentagon dropped Anthropic as a supplier and smaller AI defence firms are now fielding a surge of interest. US defence officials designated Anthropic a supply chain risk, opening the door for alternative suppliers in military AI work. (Reuters)
- Florida’s attorney general has opened a probe into OpenAI, alleging ChatGPT may have been used to help plan a mass shooting at Florida State University. One of the most aggressive state level regulatory actions against a major AI lab to date. (Reuters)
- Microsoft is quietly removing Copilot branding from Windows 11. Notepad and Snipping Tool now show a neutral writing tools label. The AI features remain; the branding is being toned down. (Windows Latest)
- Microsoft 365 Copilot is now routing prompts across multiple AI models in parallel. OpenAI and Anthropic models running side by side inside Office, depending on the task. The era of picking a single AI model is quietly ending. (GeekWire)
- Intel and Google have deepened their partnership on AI focused CPUs and infrastructure chips. As AI workloads move from training to large scale deployment, demand for general purpose compute is growing alongside GPUs. (Reuters)
- A group of former Google DeepMind researchers has founded Elorian, a startup building AI models that better understand visual scenes, targeting robotics, automotive and architecture. (Bloomberg)