August 21, 2025 · 7 minute read · Dimitris

The Algorithm

The recommendation engine doesn't predict what you'll like - it shows you what you need to see to become the person the aggregate data says you already are. Everyone's being steered toward their statistical destiny.


I keep seeing the same reels as my friends. Not similar ones - the exact same videos, in roughly the same order. Yesterday three different people sent me the same video. We all thought we'd discovered it independently.

Why are we "discovering" the same thing on different days?

Let's think about what's actually happening here. You open an app and see content. You assume this content was selected for you based on your preferences. This assumption is wrong in an important way. The content wasn't selected primarily because you'll like it. It was selected because people who behaved like you in the past engaged with it at a rate that exceeds some threshold, factoring in freshness, creator signals, safety scores, and inventory needs. Those are different things.
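To make that concrete, here's a toy version of that kind of composite ranking score. Every weight and field name below is invented for illustration; no platform publishes its real formula, and the real ones are learned, not hand-tuned.

```python
# A toy composite ranking score: cohort engagement blended with freshness,
# creator, safety, and inventory terms. Weights and field names are invented.
def rank_score(candidate: dict) -> float:
    return (
        2.0 * candidate["cohort_engagement_rate"]  # how people "like you" engaged with it
        + 0.5 * candidate["freshness"]             # newer content gets a boost
        + 0.3 * candidate["creator_signal"]        # creator standing / reliability
        + 1.0 * candidate["safety_score"]          # policy and brand-safety gate
        + 0.2 * candidate["inventory_boost"]       # what the platform needs surfaced
    )

candidate = {"cohort_engagement_rate": 0.62, "freshness": 0.9, "creator_signal": 0.7,
             "safety_score": 1.0, "inventory_boost": 0.1}
print(rank_score(candidate))  # served only if this clears the serving threshold
```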

But "behaved like you" doesn't mean "watched the same videos." It means you exhibited the same pattern of micro-behaviors across thousands of dimensions. Did you let the video loop? Did your scroll speed decrease? Did you turn up the volume? Did you immediately search for similar content? Did you screenshot at 0:14? Each of these actions places you in a more specific behavioral cluster. You're not just "someone who likes cooking videos." You're someone who watches about half of cooking videos, screenshots the ingredient list, never watches the serving suggestions, and is likely to search for the recipe within minutes.

When you watch that video, you don't just consume content - you move into a new behavioral cluster. The system knows with high confidence what this cluster watches next. So that's what it shows you.

Here's the crucial part: by showing you that next video, the system isn't predicting your behavior. It's causing it. The map creates the territory. Yes, the system still probes with exploration, testing new content types. But once it finds a profitable path, it exploits it relentlessly, shaping what you'll "choose" next.
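That explore/exploit trade-off is the classic bandit pattern. A stripped-down epsilon-greedy version, with invented scores, shows the shape of it; real systems use far richer strategies, but the bias toward the proven path is the same.

```python
# A toy epsilon-greedy recommender: mostly exploit the best-known path for
# this cluster, occasionally explore. Illustrative only.
import random

def pick_next(video_scores: dict[str, float], epsilon: float = 0.05) -> str:
    """video_scores: estimated engagement per candidate video for this cluster."""
    if random.random() < epsilon:
        return random.choice(list(video_scores))    # explore: probe new content
    return max(video_scores, key=video_scores.get)  # exploit: the proven path

scores = {"dance_clip": 0.61, "recipe_clip": 0.58, "tarkovsky_clip": 0.07}
print(pick_next(scores))  # almost always the highest-scoring video
```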

The algorithm knows things about you that you don't know about yourself. It knows you rewatched the first three seconds of that dance video four times. It knows you scrolled past that political post in 0.3 seconds but lingered on the reply for 2.7 seconds. It knows your thumb hesitated for hundreds of milliseconds over the share button before moving on. It knows you watched most of that cooking video but rewound twice to see the knife technique. It knows you started typing a comment, deleted it, typed again, then scrolled on, a behavioral sequence that correlates with engaging with similar content tomorrow. Every micro-gesture becomes data. You're not just what you watch. You're how you watch it, measured in milliseconds.
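One way to picture it: each of those gestures is an event with a position, a duration, and a count. The schema below is hypothetical, but something shaped like it has to exist for the milliseconds to be usable.

```python
# A sketch of what one micro-gesture might look like as a logged event.
# Field names are hypothetical; the point is the millisecond-level granularity.
from dataclasses import dataclass

@dataclass
class MicroGesture:
    video_id: str
    gesture: str        # "rewatch", "hover_share", "comment_abandoned", ...
    offset_ms: int      # where in the video it happened
    duration_ms: int    # how long it lasted
    repeat_count: int = 1

event = MicroGesture("vid_dance_123", "rewatch", offset_ms=0, duration_ms=3000, repeat_count=4)
```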

Consider video rental stores. I'm thinking of one manager who controlled what movies entered my awareness. When teenage me kept renting The Matrix and Blade Runner, he didn't just hand me more sci-fi action. Instead, he said, "Try Stalker. Tarkovsky. Russian. Three hours, barely anything happens. You'll either fall asleep or it'll change how you think about science fiction."

That's curation versus optimization. He wasn't pattern-matching my behavior against other customers. He was making a judgment about my development as a viewer. An algorithm tracking engagement metrics would never recommend a three-hour Russian film where people stare at water. The completion rate would tank.

But here's what the algorithm would know instead: I spent several seconds looking at the Stalker thumbnail. I scrolled back up to look at it again thirty seconds later. These micro-signals get fed into models trained on billions of similar micro-decisions. The system learns which content tends to convert "almost-finishers" into finishers in my specific cluster.
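Under the hood, that "learning" usually ends up as some flavor of probability model scoring each candidate for conversion. A hand-rolled logistic toy, with invented weights and features, captures the shape of it without claiming to be anyone's real model.

```python
# A toy logistic scorer for "will this clip convert an almost-finisher?"
# Weights and features are invented; real models learn them from billions
# of interactions.
import math

WEIGHTS = {"thumbnail_dwell_s": 0.35, "re_scroll": 0.8, "cluster_affinity": 1.2}
BIAS = -2.0

def finish_probability(signals: dict[str, float]) -> float:
    z = BIAS + sum(WEIGHTS[k] * signals.get(k, 0.0) for k in WEIGHTS)
    return 1 / (1 + math.exp(-z))  # squash to a probability

print(finish_probability({"thumbnail_dwell_s": 4.0, "re_scroll": 1.0, "cluster_affinity": 0.9}))
```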

At scale we get the same education, and we call it personalization.

The algorithm has a different goal than the video store manager. It doesn't want to develop your taste. It wants to maximize engagement. And it's discovered something important: sparse data creates noisy forecasts, while concentrated behavioral patterns generate reliable yield. Funneling users into high-signal clusters with proven engagement patterns isn't about computation. It's about predictable returns.

Think about the mechanics. When users scatter across infinite content paths, prediction becomes gambling. But concentrate most users into the same thousand videos, and you have rich signal on the pathways between them. The system shows you what worked on the millions who walked this path before you.
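The statistics here are mundane. The standard error of an observed engagement rate falls with the square root of the number of impressions, so a path a million people have walked is measured about 140 times more precisely than one fifty people have walked. A two-line check, with an assumed engagement rate:

```python
# Why funneling pays: uncertainty in an engagement estimate shrinks with
# sample size. Numbers are illustrative, not platform data.
import math

def std_error(p: float, n: int) -> float:
    """Standard error of an observed engagement rate p over n impressions."""
    return math.sqrt(p * (1 - p) / n)

p = 0.3  # assumed true engagement rate
print(std_error(p, 50))         # a scattered, rarely-trodden path: ~0.065
print(std_error(p, 1_000_000))  # a funneled path millions have walked: ~0.00046
```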

You can test this. Create two fresh accounts in the same city, follow identical creators for 48 hours without searching or exploring, then screen-record and compare. You won't just see similar content. You'll see many of the same videos, in nearly the same order, offset by hours or days. Global trending and creator copycats produce some baseline convergence; the algorithmic funneling amplifies it far beyond natural virality.
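If you run the experiment, the comparison itself is simple once you've transcribed the video IDs from the two recordings. Something like this (IDs hypothetical) gives you both overlap and order agreement:

```python
# Compare two transcribed feeds: how much overlaps, and how much of the
# overlap arrives in the same relative order. IDs below are hypothetical.
def jaccard(a: list[str], b: list[str]) -> float:
    sa, sb = set(a), set(b)
    return len(sa & sb) / len(sa | sb)

def same_order_fraction(a: list[str], b: list[str]) -> float:
    """Of the shared videos, how many sit at the same rank in both feeds?"""
    sb = set(b)
    shared = [v for v in a if v in sb]
    order_in_b = [v for v in b if v in set(shared)]
    matches = sum(1 for x, y in zip(shared, order_in_b) if x == y)
    return matches / len(shared) if shared else 0.0

feed_1 = ["vid_a", "vid_b", "vid_c", "vid_d", "vid_e"]
feed_2 = ["vid_a", "vid_c", "vid_b", "vid_d", "vid_f"]
print(jaccard(feed_1, feed_2))              # 0.67: heavy overlap
print(same_order_fraction(feed_1, feed_2))  # 0.5: much of it in the same order
```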

These aren't conscious choices. You don't decide to watch most of a video rather than all of it. But that distinction contains information. The system has learned which variations tend to convert partial viewers into complete viewers.

Real human interests are messier than this. They jump around. They contradict themselves. They get obsessed with something random for three weeks, then never think about it again.

The algorithm doesn't capture these inconsistencies. It smooths them out, interprets them as noise in the signal of your "true" preferences. But those inconsistencies aren't noise. They're the actual texture of human curiosity. By filtering them out, the system isn't revealing who you really are - it's replacing you with a simplified model of yourself.

And you start to become that model. You watch what it recommends, which reinforces its predictions, which makes its recommendations more confident, which makes you more likely to watch them. You're training the algorithm, but it's also training you. You're converging toward each other, meeting somewhere between who you were and what the math needs you to be.
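You can simulate that convergence in a few lines. This is a toy dynamical model, not a claim about any real system: the recommender always serves the nearest behavioral template, and watching pulls your taste vector a little closer to it.

```python
# A toy model of the feedback loop: each step the system recommends from the
# nearest cluster, and watching nudges your preferences toward that cluster.
# Purely illustrative dynamics.
import numpy as np

rng = np.random.default_rng(1)
clusters = rng.normal(size=(5, 8))  # 5 behavioral templates, 8 taste dimensions
user = rng.normal(size=8)           # your messy, idiosyncratic starting taste

for step in range(50):
    nearest = clusters[np.argmin(np.linalg.norm(clusters - user, axis=1))]
    user = 0.9 * user + 0.1 * nearest  # watching pulls you toward the template

# Distance to the nearest template shrinks toward zero: you and the model converge.
print(np.min(np.linalg.norm(clusters - user, axis=1)))
```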

Sometimes the algorithm does surface something like Stalker, not because it got curious, but because engagement metrics temporarily spiked when a microtrend made difficult content profitable. These exceptions underscore the objective function: the system only shows you difficulty when difficulty drives engagement.

The video store is gone now. So is the particular kind of intelligence that could look at a teenager renting The Matrix repeatedly and know they needed Stalker, not another action film. That intelligence didn't scale. But scaling, it turns out, requires simplification. To serve billions, you pretend billions of people are variations on a few thousand behavioral templates.

We traded the video store manager who'd seen everything for a system that's seen everyone. The manager could tell you what you specifically needed to watch next. The algorithm can only tell you what someone like you watched next. These aren't the same thing, but we've forgotten the difference.

The real tragedy isn't that we lost human curation. It's that we lost it so gradually we forgot it existed. We think recommendation algorithms are doing what those humans did, just better and faster. But they're doing something fundamentally different. In short-form video, under pure engagement objectives, they're not curating culture. They're collapsing it.

So why do we "discover" the same thing on different days? Because the path was laid first. The map has become the territory, and we're all walking the same paths, thinking we're explorers.