This is 100 Years From Now. Once a week we skip a century and try to picture what life actually looks like when the stuff we're building now has had time to settle in. This week: the last vote.
More from AI Weekly: 16 free deep dives · breaking AI alerts · AI News Today
In Ireland in 2025, a deepfake showed the eventual president withdrawing from the race days before the vote. Fake footage of national broadcasters "confirming" it. Spread fast enough to matter.
In the 2024 UK general election, over half of voters said they saw misleading info about candidates. A quarter saw a deepfake. By 2026, the glitches are gone. Anyone with a phone can make one.
Think about what that means. A politician says something on video and you genuinely don't know if it happened. The detection tools are always one step behind. Platforms take hours to pull stuff that travels in minutes. And every time you watch anything now there's this low hum in the background: maybe this isn't real.
Lying didn't disappear. It became impossible to catch. And once you can't catch lies, you can't prove truth either. The whole floor falls out.
Democracy always ran on this assumption that voters, given decent information, could make decent calls. That was never fully true — propaganda is ancient, politicians have always lied. But there was a baseline. You could fact-check a speech. Footage was footage. That baseline is gone and I don't think we're getting it back.
The weird thing is people aren't angry about this. They're tired. A WEF survey found a quarter of Europeans would rather have AI run governance than politicians. Nobody loves algorithms. People are just done. Politicians lie, deepfakes lie, the media lies, your uncle on Facebook lies. The algorithm at least gives you the same answer twice.
That preference is only going one direction. An academic paper on SSRN already argues that democracy should be replaced by AI governance — human-led democracy is too riddled with "cognitive biases, susceptibility to misinformation, and slow decision-making" to run a complex society. The argument isn't even pro-AI. It's anti-us. You're too broken to do this anymore. Once enough people buy that, everything else is logistics.
I keep thinking about what the singularity actually is. We imagine some dramatic moment — a machine wakes up, alarms go off. I think it's going to be way quieter. Some Tuesday, people are exhausted from trying to figure out what's real, and they vote — maybe for the last time — to let something else handle it. Nobody storms anything. People just stop showing up.
Now I need you to follow a specific thread here, because this is the part that should really bother you.
The tools generating the deepfakes and the companies offering to "fix" governance are the same companies. And I don't mean that vaguely. I mean specifically.
OpenAI's own report admitted it disrupted "more than 20 operations" that used its models to interfere with elections. Deepfakes created with generative AI surged 900% in a single year. During the 2024 US election alone, OpenAI had to reject over 250,000 requests to generate deepfakes of political figures. A quarter million attempts. On one platform.
That's the product.
Now here's the pivot. That same company — the one whose tools are being used to attack elections — signed a $200 million contract with the Pentagon through "OpenAI for Government." It then gave ChatGPT to every federal agency for $1. Sam Altman said it out loud: "One of the best ways to make sure AI works for everyone is to put it in the hands of the people serving our country." This is the same man offering you compute tokens as a replacement for the salary your job used to provide.
Palantir is deeper in. Alex Karp — the philosopher who told you to stop thinking — runs the company that won a $10 billion Army contract, runs Project Maven (the Pentagon's AI surveillance and targeting system), and whose technology the UN Special Rapporteur has linked to surveillance of Palestinians in Gaza and the West Bank. Critics including the economist Yanis Varoufakis have called Karp's vision "technofascist logic" — not because of the rhetoric, but because of the governance model: AI systems that expand state capacity while civil constraints lag behind.
Anthropic and OpenAI are competing for classified defense contracts. OpenAI literally titled its Pentagon announcement "Our agreement with the Department of War."
So trace the full arc. These companies build the tools that generate the deepfakes. The deepfakes destroy public trust in information. The collapse of trust makes people give up on democracy. And the same companies show up offering to run governance instead, backed by military contracts, surveillance infrastructure, and lobbying budgets that dwarfed anything Big Tobacco ever spent.
They killed the one accountability bill that passed. They capped their liability at $100. They told you to stop coding so you can't audit the output. And now they want the keys to the state.
This isn't conspiracy. Every link in this piece is a press release, a government filing, a news report, or a peer-reviewed paper. It's all public. That's maybe the worst part.
The singularity, when it comes, is going to be political. A civilization deciding that governing itself is too much work and handing the keys to something that can't be voted out.
Nobody ever voted for a dictator because the trains were running late. They voted for him because they were exhausted and he promised to make the mess go away. Democracy was always about choosing to live with the mess — because the process was yours and the clean answer never really was.
In 100 years that choice might look the way hand-plowing looks to us. Something people used to do before they knew better.
The last election won't be stolen or hacked. It'll be the one where enough people just stop caring. And something that can't be voted out will quietly take the chair.
If you want to go deeper
On deepfakes and elections:
- How cognitive manipulation and AI will shape disinformation in 2026 — World Economic Forum
- Electoral Commission deepfake detection pilot — UK Electoral Commission
- Deepfakes in the 2026 Elections: Why Certified Proof Matters — Truescreen
- Gauging the AI Threat to Free and Fair Elections — Brennan Center for Justice
- Battling deepfakes: How AI threatens democracy — The Conversation
On AI companies in government and defense:
- OpenAI for Government — $200M Pentagon contract — Breaking Defense
- OpenAI gives ChatGPT to federal agencies for $1 — CNBC
- Palantir's $10 billion Army contract — CNBC
- Pentagon expands Palantir's Maven AI — Military.com
- OpenAI's "Agreement with the Department of War" — OpenAI
- OpenAI expands government footprint with AWS deal — TechCrunch
On AI governance and the end of democracy:
- Why Democracy Should Be Replaced by AI Algorithms — SSRN
- The AI Democracy Dilemma — Journal of Democracy
- Would AI be better at governing than politicians? — World Economic Forum
On AI liability, lobbying, and accountability:
- OpenAI's lobbying spend: $260K to $1.76M in one year — MIT Technology Review
- SB 1047 — the bill the industry killed — NPR
- Palantir's technofascist vision and ethical backlash — BizTech Weekly
- UN Special Rapporteur on Palantir and Palestinian surveillance — SEC filing
- OpenAI floats federal support for AI infrastructure — Brookings Institution
On AI, UBI, and technocratic dependency:
- Symbolic violence and AI UBI — peer-reviewed — Frontiers in AI
- Sam Altman's "universal extreme wealth" — Yahoo Finance
- Alex Karp on destroying humanities jobs — Fortune
- Jensen Huang: don't learn to code — Tom's Hardware
This week's poll
When you see a political video clip now, what's your first reaction?
Last week, 203 of you voted:
Anthropic ran the week. What does it signal for the next 12 months?
Thanks for reading AI Weekly. Forward this to one person who needs to read it.