Britain Left Out: The AI Race Just Got More Crowded

Today was one of those days in AI when the news isn’t just news. It’s a signal. Here’s what happened, and what it actually means.

The headline is blunt: OpenAI has paused the Stargate UK data centre programme. The project, backed by Nvidia and UK developer Nscale as part of a £31 billion inward investment package, and loudly championed by the government as proof that Britain was serious about AI, has been put on hold.

The reason OpenAI gave is simple. UK electricity is too expensive. The regulatory environment is too uncertain. The business case doesn’t work.

That matters for a few reasons beyond the obvious political embarrassment. Data centres are where AI actually lives. They house the computing power that runs the models. Without enough of them, on your soil, you are renting your AI future from someone else’s infrastructure. The government’s sovereign compute ambitions, the idea that Britain should be able to run its own AI workloads without depending on foreign data centres, just took a serious hit.

To be fair, OpenAI was careful. It said it still sees huge potential in the UK. Translation: come back to us when the energy policy changes. But it isn’t waiting around.

The uncomfortable truth is that AI infrastructure investment follows cheap energy and predictable regulation. Right now the UK isn’t competitive on either. That’s not a tech problem. It’s a planning and energy policy problem. And it’s not going to be fixed by another minister giving a speech about AI leadership.

The second story is harder to get your head around. Anthropic has quietly announced Claude Mythos, a new frontier model currently restricted to a small group of vetted partners under something called Project Glasswing. The reason they’re not releasing it more broadly: it’s too capable at finding and exploiting software vulnerabilities.

In testing, Mythos apparently identified thousands of serious security flaws across major operating systems and browsers. Anthropic decided that making that available to everyone, including criminals, nation states, and people who would simply like to cause damage, was not a good idea.

This is worth taking seriously. Anthropic is not a company that tends toward hyperbole. If they say a model is too dangerous to release, they’ve probably done the analysis. It’s also a little alarming. We are now at the point where commercial AI labs are building tools with capabilities that rival state-level offensive cyber operations, and then having to make judgment calls about who gets access. That’s a genuinely new situation. The question of who oversees those decisions, and what accountability looks like, doesn’t have a clear answer yet.

For UK policymakers who have spent two years arguing for a pro-innovation regulatory approach, broadly meaning don’t regulate until you have to, Mythos is an awkward exhibit. Sometimes the capability arrives before anyone is ready for it.

And then there’s Meta. The company launched Muse Spark, the first model from its new Meta Superintelligence Labs unit, led by Alexandr Wang. The model focuses on reasoning in mathematics, science and health, and powers an upgraded Meta AI assistant on web and mobile. The Meta AI app jumped from around 57th to the top five on the US App Store in the days after launch. That’s a lot of people suddenly noticing it exists.

Meta’s play is different from OpenAI’s or Google’s. They’re not trying to win on the frontier model benchmarks. They’re trying to win on distribution, getting an AI assistant into WhatsApp, Instagram, Facebook, Messenger and their smart glasses, used by people who may never open ChatGPT in their lives. For the UK, where Meta’s platforms reach tens of millions of people, Muse Spark is likely to be the AI model most people actually encounter, whether they know it or not.

Three things happened today that point in the same direction. AI capability is advancing faster than infrastructure, governance and public understanding can adapt. Britain is at risk of becoming a consumer rather than a shaper of that technology. And the companies building these systems are increasingly making consequential decisions, about what to release, what to restrict, where to invest, that governments have very little influence over. That’s not a grim conclusion. But it is an honest one. Worth paying attention to.


At a glance

The rest of today’s AI news, in brief: