29 April 2026: GPT-5.5 Arrives, Claude Agents Gain Memory, and a £850m Bet on Superintelligence
OpenAI ships GPT-5.5, Anthropic adds persistent memory to Claude agents, DeepSeek launches V4, and David Silver raises £850m for a superintelligence lab.
A week that ended with more launches than most months can claim. OpenAI shipped GPT-5.5 and unveiled a new enterprise agent platform. Anthropic gave its managed agents persistent memory, changing how businesses can build long-running AI workflows. DeepSeek returned with a new flagship model built for Huawei chips. And the man behind AlphaGo raised the largest seed round in European history.
OpenAI released GPT-5.5 on 23 April, its most capable model to date, rolling out to paid subscribers on Plus, Pro, Business, and Enterprise plans. The model is built for work that takes time rather than answering questions in a single exchange. OpenAI says the gains are especially strong in agentic coding, computer use, and knowledge work. “Agentic” means the model can take a high-level instruction, break it into subtasks, use tools like a browser or code editor, and deliver a finished result without constant prompting. For API developers, pricing starts at $5 per million input tokens and $30 per million output tokens, with a one million token context window.
TechCrunch reports that OpenAI is positioning GPT-5.5 as a step toward a unified AI super app handling most knowledge work end to end. A Pro version for scientific and enterprise workloads is available at $30 per million input tokens. For small businesses, the practical difference is a model that can be set to work on a project and return with an answer, rather than a suggested next step.
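The per-token rates above translate into per-request costs with simple arithmetic. A minimal sketch, using only the $5 input / $30 output rates quoted here (the function name and the example token counts are illustrative, not from OpenAI):

```python
# Cost of a single GPT-5.5 API call at the listed rates:
# $5 per million input tokens, $30 per million output tokens.

def call_cost(input_tokens: int, output_tokens: int,
              input_rate: float = 5.0, output_rate: float = 30.0) -> float:
    """Return the dollar cost of one request at per-million-token rates."""
    return (input_tokens / 1_000_000) * input_rate \
         + (output_tokens / 1_000_000) * output_rate

# A hypothetical long agentic run: 400k tokens in, 50k tokens out.
print(round(call_cost(400_000, 50_000), 2))  # 3.5
```

The asymmetry matters for agentic work: output-heavy runs (long reports, large code diffs) cost six times as much per token as the context they read.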
Anthropic added persistent memory to its Claude Managed Agents platform on 23 April, allowing enterprise AI agents to carry knowledge from one session into the next. Until now, agent memory in Claude reset at the end of each conversation. Memory is now stored as files on a filesystem that developers can inspect, edit, and export via the Claude Console or the Anthropic API. All changes are logged with a full audit trail, giving organisations the ability to roll back or redact agent memory at any point.
Anthropic cited early adopters including Netflix and Rakuten, reporting a 97 per cent reduction in first-pass errors in document verification workflows. The feature is in public beta and available to all Claude Platform users now. SD Times reports that this marks a significant step in Anthropic’s push to make Claude a viable long-running enterprise platform: agents can now accumulate institutional knowledge over time rather than being rebuilt from scratch whenever circumstances change.
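The memory-as-files-plus-audit-trail design described above is easy to picture with a toy sketch. This is NOT the Anthropic API; the directory layout, file naming, and function names are all invented to illustrate the pattern of inspectable, redactable agent memory:

```python
# Toy sketch: agent memory as plain files, every change audit-logged.
# Layout and names are illustrative only, not Anthropic's actual system.
import json
import time
from pathlib import Path

MEMORY_DIR = Path("agent_memory")
AUDIT_LOG = MEMORY_DIR / "audit.jsonl"

def _log(action: str, key: str, actor: str) -> None:
    """Append one audit record so every change is traceable."""
    with AUDIT_LOG.open("a") as log:
        log.write(json.dumps({"ts": time.time(), "actor": actor,
                              "action": action, "key": key}) + "\n")

def write_memory(key: str, content: str, actor: str = "agent") -> None:
    """Persist one memory file that humans can inspect and edit."""
    MEMORY_DIR.mkdir(exist_ok=True)
    (MEMORY_DIR / f"{key}.md").write_text(content)
    _log("write", key, actor)

def redact_memory(key: str, actor: str = "admin") -> None:
    """Delete a memory file; the audit trail records who removed what."""
    (MEMORY_DIR / f"{key}.md").unlink(missing_ok=True)
    _log("redact", key, actor)
```

Because memory lives in ordinary files rather than an opaque vector store, rollback and redaction become file operations an administrator can reason about.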

OpenAI also unveiled Workspace Agents in ChatGPT on 22 April, replacing Custom GPTs with a more capable system for full enterprise workflows. Where Custom GPTs were pre-configured chatbots responding to prompts, Workspace Agents operate autonomously on a schedule or in response to events inside connected apps. The platform integrates directly with Slack, Google Drive, Microsoft 365, Salesforce, Notion, and Atlassian products. You define the workflow, the tools, and the approval steps once, and the agent runs it without waiting to be asked.
Workspace Agents are in research preview for Business, Enterprise, Edu, and Teachers plans, free until 6 May, when credit-based pricing begins. VentureBeat describes the product as a successor to Custom GPTs that can gather context, follow approval chains, and improve over time. For UK teams that found Custom GPTs too limited, the key change is that agents now trigger on events inside connected apps rather than waiting for a message.
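The define-once, run-on-events pattern with an approval gate can be sketched generically. None of this is OpenAI’s actual Workspace Agents API; the class, method names, and events are invented to illustrate the shape of the workflow:

```python
# Generic sketch of an event-triggered agent with an approval gate.
# All names here are illustrative, not an OpenAI interface.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class Agent:
    name: str
    handlers: dict = field(default_factory=dict)

    def on(self, event: str, handler: Callable, needs_approval: bool = False):
        """Register a workflow step for an event type, once, up front."""
        self.handlers[event] = (handler, needs_approval)

    def dispatch(self, event: str, payload, approve: Callable[[str], bool]):
        """Run the registered step; gated steps wait for an approver."""
        handler, needs_approval = self.handlers[event]
        if needs_approval and not approve(event):
            return "blocked"
        return handler(payload)

agent = Agent("invoice-triage")
agent.on("file_uploaded", lambda p: f"summarised {p}")
agent.on("payment_request", lambda p: f"paid {p}", needs_approval=True)
```

The point of the pattern is that sensitive steps (here, payments) cannot execute without a human sign-off, while routine steps run unattended on each incoming event.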
China’s DeepSeek released a preview of V4 on 24 April, its first major model in a year, built to run on Huawei chips rather than Nvidia GPUs. V4 has been reworked from the software stack up to optimise for Huawei’s Ascend chips, a direct response to US export controls restricting China’s access to advanced Nvidia hardware. The V4 Pro series offers a one million token context window and claims benchmark improvements in coding, reasoning, and agentic tasks via a technique the company calls Hybrid Attention Architecture.
Bloomberg reports that DeepSeek slashed its API pricing by 75 per cent for V4 Pro at launch and cut input cache fees across its model family to a tenth of their previous rates. For UK developers working with open-source AI, the pricing move makes DeepSeek increasingly competitive with closed frontier models. The Huawei shift matters beyond pricing: if Chinese AI labs can build frontier-grade models on domestic chips, the assumption that advanced AI depends on Nvidia’s supply chain becomes harder to sustain.
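The article gives relative cuts, not absolute prices, so a quick sketch of what "75 per cent off" and "a tenth of previous rates" mean uses hypothetical placeholder prices only:

```python
# Illustrative only: the old rates below are hypothetical placeholders,
# since the article reports relative cuts rather than absolute prices.

def new_price(old: float, cut: float) -> float:
    """Apply a fractional price cut, e.g. cut=0.75 for a 75% reduction."""
    return old * (1 - cut)

old_api, old_cache = 1.00, 0.20       # hypothetical $/M-token rates
print(new_price(old_api, 0.75))       # 75% cut: 0.25
print(round(new_price(old_cache, 0.90), 4))  # cache fees cut to a tenth
```

Note the two cuts are different sizes: a 75 per cent reduction leaves a quarter of the old price, while "a tenth of previous rates" is a 90 per cent reduction.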
Former Google DeepMind researcher David Silver raised $1.1 billion in seed funding for Ineffable Intelligence, the largest seed round ever closed in Europe. Silver spent more than a decade at DeepMind leading the reinforcement learning team, building AlphaGo, the first AI to beat a professional Go player on equal terms. The round was co-led by Sequoia and Lightspeed, with participation from Nvidia, Google, DST Global, and the UK Sovereign AI Fund. The company emerged from stealth at a $5.1 billion valuation, aiming to build a system that discovers all knowledge from its own experience without human-labelled data.
TechCrunch reports that Ineffable Intelligence is focused on reinforcement learning at scale, the approach that produced AlphaGo. The involvement of the UK Sovereign AI Fund signals that the UK government sees Silver’s lab as strategically significant. Silver holds a professorship at University College London, and UCL researchers now lead the companies behind two of Europe’s three largest-ever seed rounds. For UK readers, this is a domestically rooted venture at the frontier of AI research, backed by both the US venture capital establishment and the British state.
Worth Watching
GPT-5.5 (OpenAI)
Best for: Complex multi-step knowledge work
OpenAI’s most capable model excels at agentic tasks that span multiple tools and long-horizon research.
Claude Managed Agents memory (Anthropic)
Best for: Enterprise AI with persistent context
Memory in public beta means agents retain institutional knowledge across sessions without manual rebuilding.
DeepSeek V4
Best for: Developers building on open-source AI
Competitive frontier performance at 75 per cent lower API costs than previous DeepSeek pricing, with a 1M token context window.
Here is everything else worth knowing from today’s AI news.
- The Reasoning Trap: An ICLR 2026 paper found that training AI models to reason more deeply through reinforcement learning increases tool-call hallucination rates in lockstep with task performance gains. Making a model smarter does not automatically make it a more reliable agent. Source: arXiv 2510.22977.
- Meta Muse Spark: Meta released its first AI model under Superintelligence Labs, led by former Scale AI CEO Alexandr Wang. Muse Spark is competitive with current frontier models on multimodal tasks and is available free on the web and in the Meta AI app. Source: TechCrunch.
- UK FCA AI Live Testing: A second cohort of firms is entering the FCA’s AI Live Testing initiative this month. The programme gives regulated financial services firms a supervised sandbox to test AI tools before deployment. Source: Inside Global Tech.
- Stanford AI Index 2026: Stanford’s annual State of AI report finds AI advancing faster in practical applications than in foundational safety research, tracking benchmarks, investment, and global policy. Source: IEEE Spectrum.
This is a daily news update for informational purposes only. AI products and policies change rapidly. Verify details directly with providers before making decisions. Nothing here is financial or legal advice.
AI Daily is Cristoniq’s daily guide to developments in artificial intelligence — published every morning and evening.