AI Daily

13 May 2026: AI Models That Train Themselves, Amazon Overhauls Its Shopping Assistant, and Anthropic Leads on Business Customers

AI models that fine-tune themselves, Amazon's Alexa for Shopping replaces Rufus, and Anthropic overtakes OpenAI on business customers for the first time.

A startup called Adaption wants AI models to improve themselves automatically. Amazon has replaced its Rufus shopping assistant with a smarter, more personal experience. And for the first time, Anthropic has more business customers than OpenAI. Here is everything that moved in AI today.

A California-based startup called Adaption has launched AutoScientist, a tool designed to let AI models fine-tune themselves for specific tasks without constant human involvement. Conventional fine-tuning, the process of training a pre-built model on specialist data, typically takes weeks of engineering effort and a significant compute budget. AutoScientist replaces much of that manual work with an automated pipeline, allowing models to identify their own weak spots and run targeted training cycles against them. The tool is aimed at developers and businesses needing models capable of handling narrow tasks, such as reviewing contracts, analysing medical records, or managing regulated customer queries, and it could make specialist AI deployment accessible to teams without dedicated machine learning engineering resources.
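For readers curious what "identify weak spots and run targeted training cycles" means in practice, the loop can be sketched in a few lines. This is purely illustrative and not Adaption's actual API: the model here is a toy dictionary of per-task scores, and fine_tune simply nudges a weak task's score upward.

```python
# Illustrative sketch of a self-directed fine-tuning loop of the kind
# AutoScientist is said to automate. All names are hypothetical stand-ins,
# not Adaption's real interface.

def evaluate(model, task):
    # Stand-in for running an evaluation suite on one task.
    return model[task]

def fine_tune(model, task, boost=0.1):
    # Stand-in for a targeted training cycle on one weak task.
    model = dict(model)
    model[task] = min(1.0, model[task] + boost)
    return model

def self_improvement_cycle(model, max_rounds=5, threshold=0.9):
    """Evaluate, find weak spots, and train against them until all pass."""
    for _ in range(max_rounds):
        weak = [t for t in model if evaluate(model, t) < threshold]
        if not weak:
            break  # every task meets the bar; stop training
        for task in weak:
            model = fine_tune(model, task)
    return model

toy = {"contract_review": 0.6, "medical_records": 0.85, "customer_queries": 0.95}
trained = self_improvement_cycle(toy)
```

The point of automating this loop is that the evaluate-train-re-evaluate cycle, normally driven by an engineer, runs unattended until the model clears the bar on every target task.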

Amazon has replaced its Rufus shopping assistant with Alexa for Shopping, a personalised AI search experience embedded directly in the Amazon search bar and powered by Alexa+. Unlike Rufus, which was a standalone chatbot overlay, the new tool integrates into the main search interface and draws on Alexa+ to generate recommendations, compare products, and surface deals matched to a user’s past purchases and preferences. Rather than typing search terms and scrolling through listings, shoppers can ask questions in plain English and navigate to purchase with fewer steps. If you have not yet worked out which tasks you should hand to AI first, shopping research is one of the clearest starting points.

The broader competitive implication is significant. Google Shopping, Bing Shopping, and a range of AI-native commerce startups are all competing for the same user intent. Amazon’s decision to embed Alexa+ into the core search bar signals that it believes the assistant experience, not the product catalogue, is now the primary competitive advantage in e-commerce discovery.


Meta has added an incognito mode to Meta AI conversations in WhatsApp, meaning messages sent in incognito chats are not saved and disappear once the user closes the conversation. The feature is designed for people who want to consult Meta AI without having their queries stored, linked to their account, or used to personalise future interactions. For UK users, this matters particularly in the context of data privacy expectations under UK GDPR, where uncertainty about data retention has made many people cautious about AI assistants inside messaging apps. Incognito mode gives users a concrete control that did not previously exist. Meta has not confirmed whether a similar feature is planned for Instagram or Facebook.

Anthropic now has more verified business customers than OpenAI, according to the latest AI Index published by Ramp, the fintech company that tracks software spending across thousands of US businesses. Ramp’s data, drawn from corporate card and expense management records, makes it a credible proxy for where companies are actively paying for AI tools. Claude’s reputation for following instructions precisely, its strong performance on enterprise tasks such as document analysis and code review, and Anthropic’s positioning around safety and reliability have resonated with procurement teams. If your business has not yet formalised how staff should use AI, our guide on how to create a simple AI policy for a small business is a practical starting point.

By most other measures, including consumer usage and API volume, OpenAI retains its dominance. But losing the lead on business customer count to Anthropic is a meaningful competitive milestone, and one that enterprise sales teams and investors on both sides will be watching closely.

A new US Medicare payment model called ACCESS is creating the first government mechanism to fund AI agents that monitor patients between clinical visits, and health systems outside the United States are paying close attention. Until now, there has been no public health reimbursement route for an AI system that follows up with a patient after discharge, coordinates medication collection, or flags housing instability as a clinical risk factor. The UK connection is direct: NHS England and the Integrated Care Boards face ongoing pressure to find scalable ways to deploy AI in care coordination, and the ACCESS model provides a regulatory and financial template that policymakers at NHS England, NICE, and DHSC will be studying closely. Healthcare AI firms operating across both markets will need to factor this development into their compliance and funding planning.

Worth Watching

Alexa for Shopping

Best for: Consumers who shop regularly on Amazon

AI-powered shopping assistant embedded in Amazon search, replacing Rufus with personalised product discovery.


Poppy

Best for: Anyone juggling calendar, email and messages

Proactive AI assistant connecting your digital life, surfacing reminders and tasks based on what is actually happening.


Needle by Cactus

Best for: Developers needing fast, lightweight tool use

Open-source 26M parameter function-calling model distilled from Gemini, running at 6,000 tokens per second on consumer hardware.


Watch how quickly Adaption’s AutoScientist attracts enterprise validation. The clearest early signal will be whether the major cloud AI providers that offer hosted fine-tuning services respond by cutting prices or accelerating their own automation tooling. A competitive move in the fine-tuning market within the next four to six weeks would confirm that AutoScientist has genuinely shifted the economics of specialist AI deployment.

Here is everything else worth knowing from today’s AI news.

  • Musk and OpenAI: Sam Altman testified that Elon Musk mulled handing OpenAI to his children during negotiations over the for-profit structure, adding significant detail to the ongoing legal dispute. TechCrunch
  • Anthropic share warning: Anthropic has warned investors that any sale of its shares via secondary platforms is void and will not be recognised, as the company moves against unauthorised secondary market activity. TechCrunch
  • Orbital data centres: Google and SpaceX are in talks to build data centres in orbit, positioning space as a potential future home for AI compute. Costs today remain far higher than ground-based infrastructure. TechCrunch
  • Anthropic and small businesses: Anthropic is actively targeting small business owners as a distinct customer segment, separate from its enterprise API and large-organisation channels. TechCrunch
  • Gemini dictation on Gboard: Google has added Gemini-powered dictation to Gboard, its mobile keyboard, which may put pressure on standalone dictation and transcription tools that charge subscription fees. TechCrunch

This is a daily news update for informational purposes only. AI products and policies change rapidly. Verify details directly with providers before making decisions. Nothing here is financial or legal advice.

AI Daily is Cristoniq’s update on developments in artificial intelligence, published every weekday afternoon.