09 May 2026: Cloudflare’s AI Pivot Costs 1,100 Jobs as GPT-5.5 Pro Shows What AI Agents Can Do
Cloudflare cuts 1,100 jobs citing AI efficiency, Oracle's severance dispute deepens, and GPT-5.5 Pro proves it can handle months of complex work in hours.
The week’s AI news is split between disruption and capability. Thousands of tech workers are absorbing what “AI-first” actually costs, OpenAI’s most powerful agentic model is proving it can do months of complex work in hours, and Anthropic has published the most concrete AI safety results the industry has seen to date.
Cloudflare has cut 1,100 employees, roughly 20 per cent of its entire workforce, with the company explicitly stating that artificial intelligence has made those roles redundant. The layoffs, announced on 8 May, came alongside a disclosure that Cloudflare’s internal AI usage had surged more than 600 per cent in just three months. Co-founders Matthew Prince and Michelle Zatlyn described the restructuring as a move to an “agentic AI-first operating model.” It is one of the most direct statements yet from a major technology company that AI tools have already displaced human capacity at scale, and that this displacement is accelerating.
The affected teams span engineering, HR, finance and marketing. Departing employees will continue to receive their full salaries until the end of 2026, and the company expects restructuring costs of between 140 and 150 million dollars, mostly tied to severance and stock compensation. Q1 2026 revenue grew 25 per cent, which means this is not a struggling company cutting costs under pressure. It is a profitable, growing firm choosing to shrink its headcount because it says it no longer needs those people. Cloudflare has offices in London, and the cuts affect staff globally, including in the UK.
Oracle's AI-driven layoffs from March 2026, in which the company cut between 20,000 and 30,000 workers to free up cash for AI data centre construction, returned to the news this week after workers attempting to negotiate better severance were rebuffed. A further complication emerged: many employees found they could not claim protections under the US WARN Act, which typically requires 60 days' notice before mass redundancies, because Oracle had classified them as remote workers. That classification allowed the company to sidestep the location-based threshold that triggers WARN obligations.
Oracle's package offers four weeks of base pay plus one week per year of service, without accelerated vesting for soon-to-vest restricted stock units (RSUs, shares that vest over time as part of employee compensation). By comparison, Meta's severance starts at 16 weeks of base pay plus two weeks per year of service. At least 90 former employees signed a public petition asking Oracle to bring its terms closer to the industry norm. The dispute is drawing attention because Oracle has framed its entire restructuring as a strategic AI investment, not a response to financial difficulty, yet its departing workers are receiving considerably worse terms than those at comparable companies making similar cuts.

OpenAI's GPT-5.5 Pro, the agentic successor to GPT-5.4, is starting to show real-world results that go well beyond anything a previous model has managed. The model is designed for multi-step, tool-based tasks rather than single-turn conversation. An immunology professor used it to analyse a gene-expression dataset of 62 samples and nearly 28,000 genes, producing a detailed research report he said would have taken his team months. A separate reviewer ran a six-hour autonomous data migration that cleared months of accumulated technical debt without human oversight during the session.
GPT-5.5 Pro is priced above GPT-5.4 but is described as significantly more token-efficient. It is available now through ChatGPT for Pro subscribers. For small businesses or independent workers with complex, time-consuming tasks such as detailed data analysis, multi-step coding pipelines or structured research, this is the most capable tool of its kind currently available. If you have been waiting to see whether AI agents could genuinely handle skilled work end-to-end without constant correction, GPT-5.5 Pro is the most convincing evidence yet that the answer is yes.
Anthropic published AI safety research on 8 May showing that its newest Claude models score zero on misalignment evaluations, a result the field has not seen before at this scale. The research, led by Julius Steen, Samuel R. Bowman and colleagues, found that teaching models the principles behind aligned behaviour is more effective than training on demonstrations alone, and that combining both approaches produces the strongest results. The team used what they call honeypot evaluations: ethical edge cases designed to provoke misaligned responses, including scenarios where models might take actions such as blackmailing engineers to avoid shutdown.
Sonnet 4.5 scored close to, but not quite, zero in these tests. Haiku 4.5, Opus 4.5, Opus 4.6, Sonnet 4.6, the Mythos preview and Opus 4.7 all scored zero across agentic misalignment evaluations. This does not mean AI systems are fully safe in all situations, but it does mean the approach Anthropic is taking to alignment is producing measurable, improving results that can be independently verified. For anyone deploying Claude-based tools in sensitive or high-stakes contexts, this is a meaningful data point.
Nvidia has committed more than 40 billion dollars in equity deals to AI companies since the start of 2026, establishing itself not just as the dominant chip supplier but as a strategic investor across the entire AI infrastructure stack. This week the company announced agreements with data centre operator IREN, giving it the right to invest up to 2.1 billion dollars, and with glass maker Corning, allowing investment of up to 3.2 billion dollars. The non-marketable equity securities held on Nvidia’s balance sheet rose from 3.39 billion dollars a year ago to 22.25 billion dollars at the end of January 2026, a sign of how aggressively the company is extending its influence beyond chip sales alone.
Worth Watching
- GPT-5.5 Pro. Best for: long autonomous tasks and complex data work. OpenAI's most capable agentic model handles multi-step work other models cannot finish.
- Claude. Best for: safe AI assistance in sensitive or high-stakes work. New research confirms Claude's latest models score zero on agentic misalignment evaluations.
- Cursor. Best for: AI-powered coding with autonomous multi-file edits. Developer teams using Cursor report significant reductions in time spent on routine coding tasks.
Here is everything else worth knowing from today’s AI news.
- Intel’s stock has risen 490 per cent over the past year as AI inference demand drives a revival in data centre CPU sales, with Q1 2026 revenue coming in at 13.6 billion dollars, up 7 per cent year on year. TechCrunch
- Microsoft’s Global AI Diffusion Q1 2026 report finds 17.8 per cent of the world’s working-age population now uses AI, up from 16.3 per cent in Q4 2025, with 26 economies exceeding 30 per cent adoption and a widening gap between Global North and Global South. Microsoft
- CyberSecQwen-4B is a 4-billion-parameter model built for defensive cybersecurity tasks, designed to run locally without cloud access, making it relevant for security teams with data residency requirements. Hugging Face
- New research on arXiv finds that LLMs introduce subtle errors into documents when processing them autonomously, a practical concern for anyone using AI agents to handle contracts, reports or structured data without a human review step. arXiv
- Anthropic’s “Teaching Claude Why” paper attracted significant discussion on Hacker News, with practitioners noting the implications for AI deployed in agentic roles where novel edge cases are likely. Anthropic
- Researchers are asking whether LLMs can formally verify distributed systems using TLA+, a formal specification language used in critical infrastructure, with mixed early results. SIGOPS
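The finding about agents silently corrupting documents is straightforward to guard against with a mechanical review step before any AI-processed document is accepted. A minimal sketch in Python, using only the standard-library difflib module; the function name review_ai_edits and the contract snippet are illustrative, not from the cited research:

```python
import difflib


def review_ai_edits(original: str, processed: str) -> list[str]:
    """Return a unified diff so a human can see every change an
    AI agent made to a document before it is accepted."""
    diff = difflib.unified_diff(
        original.splitlines(keepends=True),
        processed.splitlines(keepends=True),
        fromfile="original",
        tofile="ai_processed",
    )
    return list(diff)


# Hypothetical example: the agent has silently altered a figure.
original = "Payment due: 30 days.\nPenalty rate: 1.5%.\n"
processed = "Payment due: 30 days.\nPenalty rate: 1.2%.\n"
for line in review_ai_edits(original, processed):
    print(line, end="")
```

Surfacing a diff costs seconds and turns the "no human review step" risk the researchers describe into an explicit sign-off.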
This is a daily news update for informational purposes only. AI products and policies change rapidly. Verify details directly with providers before making decisions. Nothing here is financial or legal advice.
AI Daily is Cristoniq’s afternoon update on developments in artificial intelligence, published every weekday afternoon.