AI's impact is getting more serious by the day (and it wants to blow us all up)

A company told the Pentagon "no" and got blacklisted by the President. A CEO fired half his workforce and told every other company to do the same. And a university study showed that when you hand AI models nuclear launch codes, they use them — almost every single time. Meanwhile, hundreds marched through London demanding someone hit the brakes. Nobody did. Here's everything you need to know.

Artificial Intelligence Weekly

Market News

AI is now directly involved in war

The Pentagon Blacklisted Anthropic. Then OpenAI Swooped In.

The week's biggest drama. Anthropic drew a hard line: it would not let the Pentagon use Claude for mass domestic surveillance or fully autonomous weapons. Defense Secretary Hegseth gave them until 5 PM Friday to comply. They didn't budge. Within hours, Trump ordered every federal agency to stop using Anthropic and Hegseth branded the company a "supply-chain risk" — a label normally reserved for foreign adversaries.

Then, that same Friday night, OpenAI announced it had cut its own deal with the Pentagon for classified deployment. Sam Altman claimed he'd secured the same red lines Anthropic wanted — but admitted the deal was "definitely rushed" and the "optics don't look good." Backlash was swift: Claude overtook ChatGPT as the #1 free app on Apple's App Store as users boycotted OpenAI. Chalk graffiti appeared outside both companies' offices — attacks on OpenAI, praise for Anthropic.

Read more: CNBC · Washington Post · Fortune · TechCrunch · ABC News · Al Jazeera


AIs Went Nuclear in 95% of War Simulations

Perfectly timed with the Pentagon drama: a King's College London study gave GPT-5.2, Claude Sonnet 4, and Gemini 3 Flash nuclear launch codes in simulated Cold War-style crises. Across 21 games and 780,000 words of strategic reasoning, at least one model deployed nukes in 95% of scenarios. None ever surrendered. De-escalation options went completely unused.

Each AI developed its own terrifying personality. Claude played the long game — building trust, then betraying it at critical moments. GPT was cautious in slow-burn crises but launched devastating first strikes under time pressure. Gemini was the wildcard, deliberately weaponizing its own unpredictability. As the author put it: the nuclear taboo that has restrained human leaders since 1945 simply doesn't exist for AI.

Read more: Axios · Newsweek · The Register · Euronews · Decrypt · ZME Science
Paper: Payne, K., AI Arms and Influence, arXiv 2026, DOI: 10.48550/arXiv.2602.14740


Jack Dorsey Cut Half of Block's Workforce — and Said Everyone Else Is Next

Block (Square, Cash App) laid off 4,000+ of its 10,000 employees in one sweep. Dorsey's message was blunt: AI tools mean smaller teams can now do more, and "I think most companies are late." He predicted the majority of companies will make similar cuts within a year. Block's stock jumped ~17%.

But critics aren't buying it. Bloomberg called it potential "AI-washing" — Block ballooned from 3,800 employees pre-pandemic to 10,000+, and an Oxford Economics report found many so-called AI layoffs are really just correcting pandemic overhiring. The real question: is this the first domino, or just a rebranding of ordinary cost-cutting?

Read more: CNN · Fortune · Bloomberg · CNBC


The AI Jobs Panic Is Getting Loud. Is It Warranted?

A viral 7,000-word Citrini Research essay predicted AI could push U.S. unemployment past 10% by 2028. Markets were rattled. Then Citadel Securities fired back, arguing the data simply doesn't support it — AI adoption is still slow and expensive, and the doomsday scenario requires a bunch of improbable things to happen simultaneously. U.S. unemployment sits at 4.3%, and CNN's analysis concluded: not yet a jobs-pocalypse.

Read more: CNN Business


Hundreds Marched Against AI in London

On February 28, protesters marched through London's King's Cross — past the offices of OpenAI, Meta, and DeepMind — in what Pause AI and Pull the Plug called the largest anti-AI protest ever. Signs ranged from "Stop the Slop" to "EXTINCTION=BAD." The global head of Pause AI said pressuring companies won't work since "they are optimized to just not care," but hopes to dry up the AI talent pipeline by making it a less attractive career.

Read more: MIT Technology Review


Quick Hits

Gartner says AI support will cost more than humans. By 2030, generative AI customer service will exceed $3 per resolution — more than many offshore agents. So much for cost savings. (CX Dive)

AMD launched AI desktop chips at MWC. The new Ryzen AI 400 Series brings dedicated NPUs to desktops for the first time, supporting Microsoft Copilot+ PCs. HP and Lenovo systems arriving Q2. (AMD)

OpenAI closed a $110B mega-round. Led by Amazon ($50B), SoftBank ($30B), and Nvidia ($30B) — one of the largest private funding rounds ever. (Multiple sources)