Our problem with predicting the future isn't just about what we know - it's about the words we have to describe it. We're hitting the limits of language itself.
This limitation shapes not just how we talk about the future, but how we think about it. When venture capitalists and founders try to describe transformative AI systems, they fall back on metaphors like "digital brains" or "thinking machines" - metaphors that are probably as inadequate as describing a car as a "horseless carriage." We lack the vocabulary for what's actually emerging.
I was reminded of this recently while helping a friend track down a historical fact. She casually mentioned "asking her AI," the way my generation would have said "looking it up." For her, AI isn't technology - it's a background utility, like electricity or running water. Her mental model, and thus her language, has already evolved beyond mine.
The really interesting changes aren't the ones we can predict, but the ones we can't even properly describe yet. Before the internet, nobody was worrying about "digital privacy" or "online identity" - not because these weren't going to be important issues, but because we lacked the conceptual framework to even imagine them.
Look at how our language around technology has evolved just in the past decade. Terms like "neural network" and "machine learning" have shifted from academic jargon to everyday vocabulary. But these terms are still rooted in our current understanding - they're attempts to describe the future by analogy with the past.
Eventually, we'll develop the vocabulary to describe what's coming. But by then, it won't be prediction anymore - it'll be observation.
The next great leap in AI might not look like anything we're predicting, not because we're imagining the wrong things, but because we're thinking with the wrong words. Like those medieval scholars trying to predict the modern world, we may find that our biggest blind spots are linguistic rather than technological.