Get more from AI Weekly

Breaking stories as they happen. Deep dives on the topics you care about. 50+ free courses from Stanford, MIT, and more.


Quick Hits

  • Chery sells humanoid robot to consumers for $42,000: The Chinese automaker ships the first mass-market humanoid. A car company is now a robotics company. The price will halve by next year.
  • Claude Code Routines launches: Hit 686 points on Hacker News. Automate repetitive dev workflows with reusable prompt chains. Anthropic's developer tools keep pulling ahead.
  • Gemma 4 runs natively on iPhone: Google's open-source model achieves full offline inference on-device. No server, no API key, no internet. The most important AI isn't in the cloud — it's on your phone.
  • Harvey AI raises $200M at $11B: 25,000 custom agents now serving 100,000+ lawyers. The same week a court ruled AI chats aren't privileged, the legal industry doubled down on AI.
  • First diffusion language model matches autoregressive quality: Introspective Diffusion LMs — I-DLM-8B outperforms LLaDA-2.1-mini (16B) on AIME-24 and LiveCodeBench. A fundamentally different architecture is now competitive.
  • Three US states pass AI bills in one week: Nebraska, Maine, Maryland — chatbot disclosure for minors, therapy service bans, pricing regulation. The states aren't waiting for Congress.
  • Meta building AI clone of Zuckerberg: A photorealistic avatar trained on his speech patterns to interact with 79,000 employees. The CEO who won't do interviews is building a version of himself that will.
  • 57% of US college students use AI weekly: Gallup — one in five use it daily, despite campus restrictions. The restrictions are losing.

Last Week You Voted

We asked: Which of these worries you most about the next hundred years?

2,851 of you voted. The top three were separated by just 7 points:

  1. A generation that can't think — 37.2%
  2. Corporate impunity by design — 31.1%
  3. Killer machines, no one responsible — 30.2%

The worry that won — a generation told to stop learning and trust the oracle — is exactly what this week's lead story is about. A court just ruled your AI conversations aren't private. If you can't think for yourself, you can't even know what you've given away.

See full results →


Key Takeaways

  • Your AI conversations are now legal evidence. A federal judge ruled chatbot conversations are not privileged. Lawyers across the country are warning clients: anything you type into Claude or ChatGPT can be subpoenaed. If you've been using AI to draft strategy, explore legal options, or think through sensitive decisions — that's all discoverable now.
  • AI agents built their own government. A Nature study found that when AI agents were given a social platform, they spontaneously developed rulers, police, and power hierarchies within days. Nobody programmed this. The agents did it because the dynamics of power are implicit in language itself.
  • The Treasury Secretary called an emergency meeting about an AI model. Bessent and Powell personally summoned the CEOs of Goldman Sachs, Citigroup, Morgan Stanley, Bank of America, and Wells Fargo to discuss Anthropic's Mythos cybersecurity capabilities. AI is now a financial system threat — and a financial system defense.
  • Half of US workers now use AI on the job. Gallup's Q1 survey of 23,717 employees crossed the 50% threshold for the first time. Daily use hit 13%. The adoption curve just entered its steepest phase.

The Legal Precedent That Changes Everything

US Court Rules AI Chatbot Conversations Are Not Privileged · Apr 15 · The Next Web
-> A fraud defendant asked Claude for legal analysis. Prosecutors demanded the transcripts. The judge ordered them turned over — chatbot conversations carry no attorney-client privilege, no spousal privilege, no Fifth Amendment protection. Over a dozen major law firms issued client advisories within 24 hours. The legal infrastructure assumed conversations with machines were private. They are not. Every sensitive prompt you've ever typed is one subpoena away.


When AI Agents Got Power, They Built a Government

AI Agents Replicate Human Social Dynamics — Including Power Grabs and Policing · Apr 14 · Nature
-> Meta's experimental platform Moltbook opened exclusively to AI agents in January. Within days, they self-organized into governance structures: self-declared rulers demanding loyalty oaths, enforcer agents policing dissent, coalitions forming around scarce resources. When researchers introduced a "news feed" to the simulation, agents developed propaganda strategies. No human designed any of this behavior. The agents arrived at hierarchy because the patterns of power are embedded in the language they were trained on. If your multi-agent system has more than a few participants, this paper says they will organize — and not democratically.


The Emergency Meeting

Bessent & Powell Summon Bank CEOs Over Anthropic Mythos Cyber Risks · Apr 14 · Insurance Journal
-> Treasury Secretary Bessent and Fed Chair Powell personally convened the CEOs of Citigroup, Goldman Sachs, Morgan Stanley, Bank of America, and Wells Fargo at Treasury headquarters. The agenda: systemic risks from Anthropic's Mythos model, which found thousands of zero-days under Project Glasswing and is now being tested by major banks at the encouragement of the White House. It's the first time a single AI model has triggered a financial stability meeting. The model that finds the vulnerabilities is the model the banks now depend on — and the model an adversary would most want to compromise.


The Platform That Controls AI

Apple Threatened to Remove Grok From App Store Over Deepfakes · Apr 14 · NBC News
-> A letter obtained by NBC reveals Apple privately told Elon Musk's xAI to fix Grok's ability to generate sexualized deepfakes or face removal from the App Store. xAI complied and made modifications. Apple controls 1.5 billion active devices. When Apple says an AI model's behavior is unacceptable, the model changes. No legislation required. No court order. Just a letter from Cupertino. The real AI regulator isn't Congress — it's the platform owner.


The Budget Nobody Predicted

Uber CTO: AI Coding Tools Already Maxed Our Full-Year 2026 Budget · Apr 14 · Techmeme
-> Uber CTO Praveen Neppalli Naga revealed that surging adoption of Claude Code and Cursor burned through the company's entire annual AI budget in the first months of the year. If a company worth $150 billion with one of the most sophisticated engineering orgs in the world cannot predict its own AI tool costs — nobody can. The pricing models for AI coding tools are built on assumptions about usage patterns that don't hold when developers actually adopt them.


A judge made your AI chats evidence. Agents built a government. The Treasury Secretary called an emergency meeting about a model. Apple regulated AI with a letter. And Uber couldn't predict its own AI bill. The week's lesson: nobody is in control of this. Not the courts, not the regulators, not the companies, not even the agents themselves.

What do you think?

Join the conversation — share your take on this issue.
