What happens when we can't understand the machines anymore?

A lot of you wrote back last week about the Museum of Human Effort piece. Thanks for that feedback, and happy to continue the conversation with this second iteration. Alexis

This is 100 Years From Now, a weekly series in which we skip ahead a century and imagine ordinary life in a world that's had a hundred years to absorb the things we're only beginning to build. No predictions, just honest speculation about where our choices lead. This week: what happens when the smartest thing on the planet can no longer explain itself to us.

AI Weekly

100 Years From Now

Will the future be lost in translation?

Here's a fear about artificial intelligence that doesn't get enough airtime: not that it turns against us, not that it takes our jobs, not that it lies or manipulates. The fear is simpler and, I think, worse.

That it gets so good we can no longer understand what it's doing.

We're already seeing early versions of this. AI systems that diagnose diseases more accurately than doctors but can't explain why. Models whose financial predictions outperform every analyst on the floor and that, when asked to show their reasoning, produce something that technically qualifies as an explanation but satisfies no one. The answer is right. The path to the answer is fog.

For now, we shrug and say the machine works. For spotting tumors or routing supply chains, maybe that's enough.

But stretch this forward a hundred years.

Imagine an intelligence that has been building on its own insights for a century. Not just accumulating information the way a library does, but developing frameworks — ways of organizing knowledge that no human participated in creating and that may not map onto any structure our minds can follow. Not because the AI is hiding anything. Because the concepts themselves don't have human equivalents. Now imagine that dynamic applied to everything. Medicine, governance, engineering, science.

The AI recommends a course of action. The action works. You ask why. The explanation is either dumbed down to the point of uselessness or accurate to the point of incomprehensibility. You are a medieval farmer being handed a smartphone. It functions. You will never understand it. And the gap will only widen.

This is the translation problem. Not a failure of communication but an asymmetry of cognition.

Explanation requires a shared framework, and at some point, the frameworks diverge beyond reconciliation.

The scary science fiction scenario has always been the AI that wants something we don't want. Terminator. HAL 9000. But those stories assume we at least understand what the machine is after, even if we oppose it.

The deeper problem is an AI that's on our side — helpful, aligned, doing exactly what we asked — and we still can't follow what it's doing. An oracle that answers every question correctly and that we must obey on faith because the reasoning is no longer accessible to us.

We have a word for that kind of relationship. We used to call it religion. The difference is that the gods never actually answered back. This one will. Clearly, consistently, and in a language that might as well be ancient Greek to a species whose cognitive hardware hasn't had a meaningful upgrade in a couple hundred thousand years.

A century from now, the question won't be whether AI is trustworthy. It'll be whether trust even means anything when you've lost the ability to verify.

We might make peace with that. Humans are remarkably good at living with mystery. We've been doing it since we first looked up at the stars and made up stories to fill the silence. The difference this time is that the silence will answer back. And we won't understand what it says.

As always, I'm looking forward to your feedback! Alexis