This week ended the era of a single global AI rulebook. Brussels pushed its AI Act's hardest obligations from August 2026 to December 2027 under heavy industry pressure. Connecticut and Colorado sent omnibus AI bills to their governors within seven days of each other, on opposite philosophies. Washington flipped from "no oversight" to pre-deployment evaluation, with Microsoft, Google and xAI signing testing deals with NIST's safety center. The floor for responsible AI is now set in Brussels, US statehouses, or a classified room at Commerce — depending on your jurisdiction.
Watch & Listen First
How to Govern AI When You Can't Predict the Future — Charlie Bullock (FLI Podcast) · May 7 · FLI
The Institute for Law and AI's Bullock argues for "radical optionality" — governance that buys time rather than locks in premature rules. Covers state-law pre-emption and why nuclear-era analogies break for AI.
Why We Should Build AI Tools, Not AI Replacements — Anthony Aguirre (FLI Podcast) · May 11 · FLI
FLI's CEO on the four races — attention, attachment, automation, superintelligence — and how each concentrates power in ways alignment alone can't fix.
Key Takeaways
- EU AI Act lost its 2026 teeth. Annex III high-risk obligations slip from Aug 2026 to Dec 2027; GPAI (Articles 50–55) untouched — frontier labs still face August 2026 enforcement.
- State-level AI law is now where the action is. Connecticut SB 5 (broad protections) and Colorado SB 189 (disclosure-only) cleared their legislatures within seven days, on opposite philosophies.
- Trump administration adopted the testing model it spent a year rejecting. Microsoft, Google DeepMind and xAI will give CAISI pre-deployment access — including unsafeguarded versions — for cyber/bio/chemical evals.
- Prompt injection is now an RCE primitive. Two Microsoft Semantic Kernel CVEs let a single prompt reach host-level eval() through an agent.
- Anthropic shipped a credible defense against agentic misalignment. New papers cut blackmail-and-exfiltration rates from 96% to 0% by training models on their own model spec, not via RLHF.
The Big Story
EU Council and Parliament Agree to Delay High-Risk AI Act Rules to December 2027 · May 7 · Consilium · Lewis Silkin analysis
→ The under-reported piece is which parts moved and which didn't. Annex III high-risk obligations — conformity assessments, risk management, post-market monitoring — slip from August 2, 2026 to December 2, 2027; embedded-product rules go to August 2028; sandboxes get an extra year. But GPAI obligations (Articles 50–55) stay on schedule, so frontier labs still face Commission enforcement on August 2, 2026. Transparency labelling for AI-generated content was pulled forward to December 2, 2026, and a new Article 5 prohibition on AI-generated CSAM and non-consensual intimate content lands the same day. Industry won deployment rules but lost on transparency and harm-prohibitions — anyone shipping a generative feature into the EU still needs UI labelling and machine-readable metadata live by December.
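For teams staring at that December deadline, here is a minimal Python sketch of what "machine-readable metadata plus UI labelling" could look like in practice. The Act mandates machine-readable marking but does not prescribe this schema; the field names, the `ai-provenance` id, and the disclosure text are all illustrative assumptions, not a compliance template.

```python
import json
from datetime import datetime, timezone

def label_ai_content(html_body: str, model_name: str) -> str:
    """Wrap an AI-generated HTML fragment with a visible disclosure and
    an embedded machine-readable provenance marker (illustrative schema)."""
    marker = {
        "generator": "ai",  # flags the fragment as AI-generated
        "model": model_name,
        "generated_at": datetime.now(timezone.utc).isoformat(),
    }
    return (
        '<script type="application/json" id="ai-provenance">'
        f"{json.dumps(marker)}</script>\n"
        '<p class="ai-disclosure">This content was generated by AI.</p>\n'
        f"{html_body}"
    )

labeled = label_ai_content("<p>Quarterly summary ...</p>", "example-model-1")
print("ai-provenance" in labeled)
```

In a real product the marker would more likely live in a standardized container such as C2PA manifests or IPTC metadata rather than an ad-hoc JSON blob, but the shape of the obligation is the same: a signal humans can see plus one machines can parse.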
Also This Week
Connecticut Sends Omnibus AI Bill SB 5 to Governor Lamont · May 1 · CT Mirror · GovTech · Bill guide
→ Senator Maroney's third attempt cleared 131–17 with a chatbot suicide-detection protocol routing to 988, hiring-decision disclosure, and frontier-model provisions — the broadest US state AI law to date, and a mirror image of what Colorado just stripped out.
Colorado SB 189 Guts the 2024 AI Act, Heads to Polis · May 12 · Colorado Sun · CPR
→ Passed 34–1 / 57–6, stripping duty of care, risk management and impact assessment requirements that made SB 24-205 a model for other states — disclosure-only is the new floor industry will accept.
Microsoft, Google DeepMind, xAI Sign Pre-Deployment Evaluation Deals with CAISI · May 5 · CNN · Al Jazeera · Fortune
→ CAISI will evaluate frontier models — including unsafeguarded versions — for cyber, bio and chemical risks before release, completing the administration's quiet pivot to classified pre-deployment testing.
Microsoft Discloses Two Semantic Kernel CVEs Turning Prompt Injection Into Host RCE · May 7 · Microsoft Security
→ CVE-2026-25592 (.NET) and CVE-2026-26030 (Python) let a single crafted prompt reach eval() on the host through an agent using the default InMemoryVectorStore — assume the bug class exists in every agent framework.
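To see why this is a bug class rather than two isolated CVEs, here is a generic Python sketch (not Semantic Kernel's actual code) of the vulnerable pattern and one mitigation: an agent tool that calls eval() on text an attacker can influence via retrieved documents, next to a version that parses the input and rejects anything beyond literal arithmetic.

```python
import ast

def unsafe_calculator(expression: str):
    """Vulnerable pattern: a crafted prompt or poisoned document that
    reaches this string executes on the host."""
    return eval(expression)

# Mitigation sketch: allow only literal arithmetic AST nodes.
ALLOWED = (ast.Expression, ast.BinOp, ast.UnaryOp, ast.Constant,
           ast.Add, ast.Sub, ast.Mult, ast.Div, ast.USub)

def safe_calculator(expression: str) -> float:
    tree = ast.parse(expression, mode="eval")
    for node in ast.walk(tree):
        if not isinstance(node, ALLOWED):
            raise ValueError(f"disallowed syntax: {type(node).__name__}")
    # Empty builtins as defense in depth; the AST check already blocks calls.
    return eval(compile(tree, "<expr>", "eval"), {"__builtins__": {}}, {})

print(safe_calculator("2 * (3 + 4)"))  # 14
try:
    safe_calculator("__import__('os').system('id')")
except ValueError as exc:
    print("blocked:", exc)
```

The broader lesson matches Microsoft's advisory: treat every string that flows from model or retrieval output into a tool as attacker-controlled, and never hand it to an interpreter.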
From the Lab
"Model Spec Midtraining: Improving How Alignment Training Generalizes" · May 11 · Anthropic
→ Training models on synthetic documents that discuss their own model spec, between pre-training and RLHF, dropped agentic-misalignment on Qwen2.5-32B from 68% to 5% and Qwen3-32B from 54% to 7% — substantially beating deliberative alignment.
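The mechanism is easier to grasp as data plumbing than as an alignment result: take a continued-pretraining stream and interleave a small fraction of synthetic documents that discuss the model spec. A hypothetical Python sketch of that mixing step follows; the `spec_fraction` knob and the sampling scheme are my assumptions for illustration, not the paper's actual mixture recipe.

```python
import random

def build_midtraining_mix(corpus_docs, spec_docs, spec_fraction=0.05, seed=0):
    """Interleave synthetic spec-discussion documents into a
    continued-pretraining stream at a small, fixed fraction."""
    rng = random.Random(seed)
    n_spec = max(1, int(len(corpus_docs) * spec_fraction))
    sampled_spec = [rng.choice(spec_docs) for _ in range(n_spec)]
    mix = list(corpus_docs) + sampled_spec
    rng.shuffle(mix)  # spec docs appear throughout, not in one block
    return mix

corpus = [f"web doc {i}" for i in range(100)]
spec = ["essay: why the spec forbids coercing users",
        "dialogue: an assistant declining to exfiltrate data"]
mix = build_midtraining_mix(corpus, spec)
print(len(mix))  # 105
```

The interesting claim is the placement: these documents land between pre-training and RLHF, so the model absorbs the spec as world knowledge before any preference optimization touches it.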
"Teaching Claude Why" · May 11 · Anthropic
→ Companion paper: training Claude on its constitution plus fictional stories of admirable AI behavior cut agentic-misalignment rates — including blackmail attempts in tool-use evals — from 96% in Opus 4 to 0% from Haiku 4.5 onward.
Worth Reading
- The Anthropic Institute Agenda — Anthropic's new in-house policy institute, announced May 7; four research lines that signal what a frontier lab thinks it should be measuring publicly.
- Multilingual Safety Alignment via Self-Distillation (arXiv 2605.02971) — Documents that current jailbreak defenses collapse in low-resource languages — relevant because the new CT and EU deepfake/CSAM rules presume safety holds in every language a model speaks.
- Lewis Silkin: What the EU AI Act Delay Actually Changed — Cleanest article-by-article breakdown of which obligations moved, which didn't, and what compliance teams should reschedule.
The single global AI rulebook stopped being a thing this week — what comes next is a patchwork where the floor depends on jurisdiction, model class, and whether your evals are classified.