8 May 2026: Google Backs Small Business Ads with AI as Perplexity Opens Mac Agents to All
Google launches an AI-powered creative initiative for small businesses, Perplexity opens its Mac AI agents to everyone, Anthropic’s Mythos uncovers deep vulnerabilities in Firefox, and OpenAI introduces a new safeguard for users who may be at risk of self-harm.
Google is putting AI at the centre of small business advertising, launching a new initiative called The Small Brief that pairs its AI tools with some of the industry’s most celebrated creative directors. Announced on 8 May, the project brings together four advertising veterans, each of whom will champion a local business they care about and use Google’s AI-backed suite to build a professional campaign for it, with Google funding the work.
The purpose is to demonstrate what thoughtful AI-assisted creativity looks like in practice: not automation replacing human judgment, but tools that compress timelines and reduce costs for campaigns that would otherwise price smaller operators out of the game. Google is betting that showing, rather than telling, will do more to persuade small business owners that AI is worth engaging with.
For UK small businesses, the practical implications are worth watching. Google Ads is widely used across Britain’s independent retail, hospitality, and services sectors. If the tools behind The Small Brief become available more broadly through existing Google Workspace and Ads products, they could meaningfully lower the barrier to professional-grade creative work for businesses without an agency relationship.
Anthropic’s Mythos AI system is being used by Mozilla to find security vulnerabilities in Firefox, and the results have been striking. Security researchers at Mozilla have published a detailed account of what Mythos found during a recent hardening exercise, including a 15-year-old bug in Firefox’s handling of the HTML legend element, a race condition over IPC that could allow a compromised content process to escape the browser sandbox, and a vulnerability in which a raw NaN value crossing an IPC boundary could be used to forge a fake JavaScript object pointer.
Mythos is Anthropic’s specialised AI for code security research. Unlike general-purpose large language models, it is designed to trace the implications of code changes across large, complex codebases and identify the kind of deep, context-dependent vulnerabilities that traditional fuzzing tools tend to miss. Mozilla says the agentic harnesses it built around Mythos can now not only find potential bugs but reproduce real ones and dismiss false positives, making the process genuinely useful in a real-world security workflow.
For organisations that run Firefox in regulated or high-security environments, this is a practical signal that AI-assisted code review has moved beyond the demo stage and into production security workflows. It also raises a broader question: if AI-assisted security review can surface 15-year-old bugs in one of the world’s most scrutinised open-source browsers, what might it find in less thoroughly tested software?
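The NaN pointer-forgery bug is easiest to understand through NaN boxing, a technique JavaScript engines (including Firefox’s SpiderMonkey) use to pack type tags and pointers into the payload bits of IEEE-754 NaNs. The sketch below is illustrative only: the tag value and mask are hypothetical, not Firefox’s actual encoding, but it shows why a raw, unvalidated NaN crossing a trust boundary is dangerous.

```python
import math
import struct

# Hypothetical NaN-boxing scheme. Real engines pack a type tag and a
# 48-bit pointer into the payload of a quiet NaN; the constants below
# are made up for illustration and are NOT Firefox's encoding.
TAG_OBJECT = 0xFFF9_0000_0000_0000    # exponent bits all set -> NaN space
PAYLOAD_MASK = 0x0000_FFFF_FFFF_FFFF  # low 48 bits carry the "pointer"

def box_pointer(addr: int) -> float:
    """Pack a 48-bit 'object address' into a NaN's payload bits."""
    bits = TAG_OBJECT | (addr & PAYLOAD_MASK)
    return struct.unpack("<d", struct.pack("<Q", bits))[0]

def unbox_pointer(value: float) -> int:
    """Recover the payload, trusting the raw bits without validation."""
    bits = struct.unpack("<Q", struct.pack("<d", value))[0]
    return bits & PAYLOAD_MASK

fake_addr = 0x7F00_DEAD_BEEF        # attacker-chosen "object address"
smuggled = box_pointer(fake_addr)

print(math.isnan(smuggled))         # True: it travels as an ordinary NaN
print(hex(unbox_pointer(smuggled))) # the forged pointer survives the trip
```

Because every boxed value looks like a plain NaN to code that treats it as an ordinary double, a process that forwards doubles over IPC without normalising NaNs can end up delivering attacker-chosen pointer bits to the unboxing side, which is the shape of the bug Mozilla describes.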

Perplexity has removed the waitlist from its Personal Computer AI agent on Mac, making it available to all users without restriction. Personal Computer is Perplexity’s desktop AI agent: it can open applications, browse the web, and carry out multi-step tasks on a user’s behalf using plain English instructions, without requiring any technical setup or configuration.
The move puts Perplexity in direct competition with Apple Intelligence and Microsoft Copilot, though the company’s pitch is that its agent is not tied to a particular productivity suite and works across the Mac environment more broadly. UK Mac users can access it now through the Perplexity app.
For freelancers, small businesses, and anyone who has found AI productivity tools confusing to set up, this is worth trying. It requires no API keys and no subscription beyond Perplexity’s existing free tier. The practical question is not whether it works, but whether it works reliably enough in everyday conditions to become a fixture in how people actually use their computers.
OpenAI has introduced a safety feature called Trusted Contact that allows ChatGPT users to designate someone who can be notified if their conversations suggest they may be at risk of self-harm. The feature is opt-in: users choose their trusted contact and control what information can be shared. OpenAI says privacy is a core design consideration: conversations remain private unless the system determines a risk threshold has been crossed.
This is a meaningful step for a company that has faced criticism from mental health professionals and digital safety advocates over how AI chatbots handle vulnerable users. The introduction of a concrete, user-controlled mechanism is more substantive than the updated policy language and safety prompts OpenAI has deployed in previous updates.
For UK organisations considering AI in healthcare, education, or social care contexts, Trusted Contact signals that major AI providers are beginning to build the kind of duty-of-care infrastructure that regulators and institutions are starting to expect. It is not a complete answer, but it is a concrete step in the direction the sector has been calling for.
Elon Musk’s legal campaign against OpenAI is producing an unexpected consequence: a detailed public examination of the company’s internal safety record. The lawsuit argues that OpenAI’s transition to a for-profit structure betrays its founding commitment to develop artificial general intelligence for humanity’s benefit rather than shareholders. Legal proceedings are now drawing in internal documents and testimony that would not ordinarily enter the public domain.
Most legal observers expect the case to settle or fail on its merits, but what it is surfacing in discovery is significant regardless. Communications and safety assessments from OpenAI’s rapid growth period are entering the legal record, giving researchers, regulators, and journalists a clearer picture of how decisions were made at one of the most consequential technology companies of the current decade.
For UK policymakers developing AI regulatory frameworks, and for the FCA as it considers transparency requirements for AI-driven financial services, the evidence this case is generating will likely inform the debate about what frontier AI developers should be required to disclose. The American courtroom is doing disclosure work that no regulator has yet managed to do directly.
Worth Watching
Perplexity Personal Computer
Best for: Mac users who want AI to run tasks for them
Now open to all with no waitlist. Handles multi-step tasks in plain English, no technical setup needed.
Anthropic Mythos
Best for: Security teams and browser or software developers
Finds complex code vulnerabilities, now in production at Mozilla. Can reproduce bugs and dismiss false positives.
OpenAI realtime voice models
Best for: Developers building voice-enabled products
New realtime models that can reason, translate, and transcribe speech with more natural responses than before.
Here is everything else worth knowing from today’s AI news.
- OpenAI voice intelligence in the API: New realtime models can reason, translate, and transcribe speech, with applications across customer service, education, and creator platforms. TechCrunch
- GPT-5.5 pricing breakdown: OpenRouter has published a detailed cost analysis of what GPT-5.5 and GPT-5.5-Cyber actually cost developers to run, following last week’s launch. OpenRouter
- AI and the healthcare fax bottleneck: Basata is raising venture funding to automate medical back-office tasks currently dependent on fax machines, targeting the administrative delays that slow specialist referrals. TechCrunch
- Pit AI startup raises $16M seed: The founders of European scooter giant Voi have launched a new AI startup called Pit, backed by Andreessen Horowitz in a $16 million seed round. TechCrunch
- Bumble drops the swipe: The dating app’s CEO has announced a move away from swiping in favour of AI-driven matching, with an AI dating assistant called Bee currently in development. TechCrunch
- Anthropic natural language autoencoders: New research from Anthropic on turning Claude’s internal reasoning processes into readable text, advancing the field of AI interpretability. Anthropic
- Clinical AI on AMD hardware: A new Hugging Face tutorial shows how to fine-tune a medical AI model on AMD ROCm hardware without needing Nvidia’s CUDA ecosystem. Hugging Face
- Agents need control flow: A widely shared developer post argues that multi-step AI agents need structured control logic built in rather than relying on better prompting alone. Hacker News
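The control-flow argument in that last item is easy to make concrete. A hypothetical sketch follows (the model call is stubbed out, and none of these names come from the linked post): the step budget, tool whitelist, and stop condition live in ordinary code, so the agent cannot exceed them regardless of what the model outputs.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Step:
    tool: str
    arg: str

def fake_model(history: list[str]) -> Step:
    # Stand-in for an LLM call: "decide" the next step from history.
    return Step("done", "") if history else Step("search", "firefox nan boxing")

def run_agent(tools: dict[str, Callable[[str], str]], max_steps: int = 5) -> list[str]:
    history: list[str] = []
    for _ in range(max_steps):          # explicit step budget, not a prompt plea
        step = fake_model(history)
        if step.tool == "done":         # stop condition enforced in code
            break
        if step.tool not in tools:      # tool whitelist enforced in code
            history.append(f"error: unknown tool {step.tool}")
            continue
        history.append(tools[step.tool](step.arg))
    return history

results = run_agent({"search": lambda q: f"results for {q!r}"})
print(results)
```

The design point is simply that guarantees you need to hold (budgets, permitted tools, termination) belong in the harness, where they are enforced deterministically, rather than in instructions the model may or may not follow.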
This is a daily news update for informational purposes only. AI products and policies change rapidly. Verify details directly with providers before making decisions. Nothing here is financial or legal advice.
AI Daily is Cristoniq’s update on developments in artificial intelligence, published every weekday afternoon.