Things are serious. AI is now squarely a geopolitical issue, and $1 billion seed rounds are no longer outliers.
Anthropic got blacklisted. OpenAI became the villain as users revolted. LeCun walked away from Meta with a billion dollars and a thesis that everyone else is wrong. Oracle said it out loud: we're firing people to build data centres. None of this is theoretical anymore.
Sponsor
If you’re only using AI to rewrite emails, you’re doing it wrong.
Become AI-proficient in 8 weeks. The AI for Business & Finance Certificate Program teaches practical, everyday AI for non-technical professionals — and earns you a certificate from Columbia Business School Exec Ed. Starts March 16.
In the News
Online Age-Verification Tools for Child Safety Are Surveilling Adults
CNBC · Mar 8
Half of U.S. states now force every user through AI-powered identity gates to protect children — creating a mass surveillance infrastructure, and causing site traffic to collapse in the states where it was enforced.
#QuitGPT: 2.5M Users Flee ChatGPT; Claude Hits #1 on App Store
TechCrunch · Mar 1
OpenAI's Pentagon deal triggered the largest consumer AI backlash ever — ChatGPT uninstalls surged 295%, Claude's daily signups quadrupled, and Anthropic's app overtook ChatGPT for the first time in the U.S. App Store.
Anthropic Claims Pentagon Feud Could Cost It Billions
TechCrunch · Mar 9
Anthropic filed two federal lawsuits challenging its "supply chain risk" designation — a label never before used against a domestic company — arguing the Pentagon is retaliating for its refusal to allow unrestricted military AI use.
UK Eyes Sweeping Powers to Regulate Tech
Computing.co.uk · Mar 9
The UK government is seeking broad executive authority to regulate online harms through the Children's Wellbeing Bill — and legal experts warn those same powers could be weaponized by future populist governments.
Yann LeCun's AMI Labs Raises $1.03B to Build 'World Models'
TechCrunch · Mar 10
Turing Award winner LeCun left Meta, called LLMs "complete nonsense" as a path to real intelligence, and raised $1B at a $3.5B valuation for a Paris-based startup building AI that understands physical reality — backed by Bezos, Nvidia, Schmidt, and Cuban.
Fake AI Content About the Iran War Is All Over X
CNN · Mar 10
AI-generated fake war footage is racking up tens of millions of views on X while monetized accounts profit from it — and X's own chatbot Grok made it worse by confirming fabricated content as real.
Oracle Plans 20,000–30,000 Layoffs to Fund AI Data Centres
Yahoo Finance / Bloomberg · Mar 5
Oracle is cutting up to 18% of its workforce to free $8–10B for AI infrastructure — the starkest example yet of a company explicitly firing humans to pay for AI, while U.S. banks pull back from financing the expansion.
Kids Would Be Banned from Using Chatbots in Minnesota AI Bills
CBS News · Mar 10
Minnesota's bipartisan AI bills — banning minors from chatbots, blocking surveillance pricing, and restricting AI in health insurance — signal that states are becoming AI's de facto regulators in the absence of federal action.
OpenAI and Google Workers File Amicus Brief for Anthropic
TechCrunch · Mar 9
Over 30 employees from rival firms — including Google's chief scientist Jeff Dean — told a federal court that if the Pentagon can blacklist a company for setting safety boundaries, no AI developer is safe.
Microsoft: Hackers Abusing AI at Every Stage of Cyberattacks
Microsoft Security Blog · Mar 7
Threat actors are now using generative AI across the entire attack lifecycle — one Russian-speaking hacker breached 600+ firewalls in five weeks using AI, a scale previously requiring a full team.
DOJ Lawyer Resigns Before Judicial Scolding for AI-Filled Brief
Bloomberg Law · Mar 10
A 30-year federal prosecutor resigned after filing a brief with AI-fabricated citations — calling it the worst decision of his career and underscoring that even experienced professionals are falling for AI hallucinations.
Anthropic Seeks to Undo 'Supply Chain Risk' Designation
NPR · Mar 9
The first-ever use of a supply chain risk designation against a domestic U.S. company sets a precedent that could reshape how every AI firm negotiates with the military — all because Anthropic drew two red lines on surveillance and autonomous weapons.