Daily Briefing — April 10, 2026

OpenAI shelves Stargate UK in blow to Britain’s AI ambitions
The Guardian Tech →
OpenAI has quietly shelved Stargate UK, the flagship AI infrastructure project that was supposed to anchor a £31 billion commitment from US tech companies to Britain's economy. The official reasons are high energy costs and regulatory friction. The unofficial read is that the economics just did not add up, and the UK made it too complicated to say yes. This was not some minor pilot programme either. It was supposed to be a centrepiece of the Labour government's growth strategy, the proof point that Britain could compete as a serious AI destination.
If your work touches AI infrastructure, public sector tech, or UK-based digital investment, the ground just shifted under assumptions your industry was making about where the money and momentum were heading.
Anthropic keeps latest AI tool out of public’s hands for fear of enabling widespread hacking
The Guardian Tech →
Anthropic built an AI model called Claude Mythos that is so good at finding software vulnerabilities that the company decided not to release it publicly. We're talking thousands of zero-day flaws uncovered across commonly used applications, meaning software you probably interact with every day. That's not a minor finding. That's a "we need to make some calls before this gets out" finding.
The company is taking a genuinely unusual stance here. Instead of the standard "release it, iterate, apologise later" playbook that defines most of the AI industry right now, Anthropic is pre-arming cybersecurity specialists and open-source engineers with Mythos so defenders can patch weaknesses before attackers can exploit them. It's essentially triage at scale, handled quietly before the chaos starts.
What this signals is bigger than one unreleased model. We are entering a phase where AI systems can systematically scan codebases and infrastructure faster and more thoroughly than any human team ever could. That changes the entire threat surface calculation for every company running software, which is every company.
If your work touches software, infrastructure, or anything involving user data, the window between "vulnerability exists" and "someone exploits it" is about to get dramatically shorter, and your organisation's security posture needs to be ready for that new pace.
AI companies are tightening token limits
Fast Company Tech →
For years, AI companies bundled tokens into flat-rate subscriptions, hid them behind generous caps, or priced them so low that nobody bothered counting. That era is ending. As the cost of serving frontier models eats into revenue, and as chip shortages, helium supply disruptions from the Middle East conflict, and data centre bottlenecks constrain how much compute can come online, the major model makers are rationing access more aggressively.
This week, Meta took offline its internal "Claudenomics" leaderboard, which tracked employee productivity by how many AI tokens they consumed per month. The numbers were staggering: Meta employees used more than 60 trillion tokens in a single month — equivalent to roughly 80 million copies of War and Peace.
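That equivalence roughly checks out. As a back-of-envelope sketch, assume one copy of War and Peace runs to about 750,000 tokens (around 587,000 words at roughly 1.3 tokens per word; the per-copy figure is our assumption, not from the article):

```python
# Sanity-check the "80 million copies of War and Peace" comparison.
tokens_used = 60e12          # 60 trillion tokens in one month (figure from the article)
tokens_per_copy = 750_000    # assumed token count for one copy of the novel

copies = tokens_used / tokens_per_copy
print(f"about {copies / 1e6:.0f} million copies")  # -> about 80 million
```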
The dynamic is now a game of chicken: who can keep subsidising demand the longest? The last company to blink may get to dominate the market — but the subsidies are not free, and the infrastructure constraints are real.
If you're using AI tools at work or building workflows around them, the pricing you're paying today is almost certainly subsidised. Token limits, usage caps, and price increases are coming. The question is when, not whether.
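One low-effort way to get ahead of that is to start metering your own usage now. Below is a minimal sketch of per-request token accounting in Python; the four-characters-per-token heuristic, the monthly cap, and the price per million tokens are all illustrative assumptions, not any vendor's actual limits or rates.

```python
from dataclasses import dataclass

@dataclass
class TokenBudget:
    monthly_cap: int           # tokens allowed per month (assumed internal policy)
    price_per_million: float   # USD per 1M tokens (hypothetical rate)
    used: int = 0

    def estimate_tokens(self, text: str) -> int:
        # Rough heuristic: ~4 characters per token for English prose.
        return max(1, len(text) // 4)

    def record(self, prompt: str, completion: str) -> None:
        # Count both what you send and what comes back.
        self.used += self.estimate_tokens(prompt) + self.estimate_tokens(completion)

    @property
    def spend(self) -> float:
        return self.used / 1_000_000 * self.price_per_million

    @property
    def remaining(self) -> int:
        return max(0, self.monthly_cap - self.used)

budget = TokenBudget(monthly_cap=5_000_000, price_per_million=3.00)
budget.record("Summarise this incident report for the weekly review.",
              "The outage began when the cache layer stopped responding...")
print(f"used={budget.used} tokens, remaining={budget.remaining}, spend=${budget.spend:.4f}")
```

Even a crude counter like this makes it obvious which workflows would hurt most if caps or price rises arrive, which is the point.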