
I Am, Therefore I Agent

A modern living room facing a subtly digital cityscape through a large picture window.

In Ready Player One, humanity inhabits an extension of reality called the OASIS. People go to school there. They work there. They build identity, earn status, and survive economically inside it.

The only question left is who decides its future.

The real tension

In the movie, IOI is a vast conglomerate whose stated objective is to acquire control of the OASIS. The plan is simple: own the platform, define the operating rules, and monetize participation at scale.

Under IOI’s plan, users would still have avatars. They would still have access. The OASIS would continue to function. But control over defaults and incentives would shift to a single centralized operator, with decisions optimized around efficiency, predictability, and revenue.

At the center of the story is a struggle over who will control the OASIS now that its founder is gone.

What Halliday understood

James Halliday, the reclusive architect of the virtual world where much of society now lives, works, and learns, did not leave his creation to the market. Because the OASIS had become essential infrastructure, he understood something fundamental: once people depend on a system at that scale, its direction becomes difficult to change unless its purpose has already been protected.

Before his death, Halliday made a deliberate choice about the future of the OASIS. Rather than appointing a successor or selling control, he embedded a mechanism directly into the system. Control would pass not through inheritance or acquisition, but through a challenge designed to test how deeply someone understood the world he built and the values behind it. The contest was Halliday’s way of shaping the outcome without remaining in the picture.

The contest exists because the window for stewardship is narrow and closes quickly. By the time everyone relies on a system, the rules are already in place. The defaults are already embedded. The remaining question is whether those rules serve the people who use the system or the entities that control it.

Halliday’s test was not about finding the most skilled player or the best gamer. It was about finding someone willing to keep the OASIS aligned with its original intent. Open. Human. Broadly accessible.

The heroes are not opposed to progress. They are opposed to enclosure.

Whether we know it or not, we are having a version of that same conversation today.

A real-world inflection point

In recent interviews, will.i.am has described a view of where AI is heading that closely mirrors Halliday’s concern.

His perspective comes from watching what happens when creators lose control of their digital output. Music, art, likeness. All of it scraped, trained on, and reproduced without clear ownership. He is describing the future of AI agents through the lens of someone who has already experienced what it means to lose control of a digital self.

Not a future defined by generic chatbots, but one shaped by personal AI agents. Digital selves that act on your behalf, manage work, and represent you across systems.

His position is straightforward. If these agents are going to exist, individuals should own both the agents and the data that powers them.

He also makes a related point that is easy to miss. The most consequential effects of technology rarely appear at the beginning. They emerge as systems become persistent, normalized, and embedded in daily life.

His view is that we need to prepare for a future where having a personal AI agent will be as essential as having an email address or a bank account.

The critical question is not whether agents become common. It’s whether people own them, or whether they depend on large shared systems controlled by platforms.

An avatar represents you in the OASIS.
An agent represents you across real systems.

Once that layer exists, opting out becomes impractical, much the same way opting out of email eventually became impractical.

Two plausible futures

There are two broad ways this can unfold.

In one version, agents are owned by individuals. They are portable. Their data is transparent. Expectations are explicit. Moving between platforms does not require abandoning your digital identity.

In the other, agents are owned by platforms. They work on your behalf until incentives shift. Data is opaque. Switching costs accumulate gradually, then suddenly feel absolute.

The difference between these outcomes is not technical. It is about who defines the rules and when those rules are set.

Will.i.am calls this the “wild wild west” phase, a generous way to describe building infrastructure that will govern billions of people’s digital lives without anyone agreeing on who owns what.

Early design decisions have a habit of becoming long-term defaults.

Why the stakes increase over time

Today’s AI tools are visible and bounded. They are systems you engage with deliberately and intermittently.

The more significant shift occurs when agents become persistent, trusted, and capable of acting over time. When they stop being something you consult and start being something that represents you, even when you are not actively involved.

At that point, questions about ownership, data rights, and defaults stop being abstract. They begin shaping everyday behavior.

We have seen this pattern before. Once platforms become foundational, rules introduced later tend to formalize existing structures rather than meaningfully alter them.

If there are going to be clear expectations around AI agents and the data they rely on, those expectations are most effective while participation is still a choice rather than an assumption.

When was the last time you tried downloading all your photos from iCloud?

What product teams are building

Many teams still approach AI as a feature. A button. A panel. A helper you activate when needed.

That framing misses the shift underway.

If users are going to have digital selves, product teams are no longer just designing tools. They are designing environments those digital selves inhabit. Decisions about ownership, portability, and data rights stop being edge cases.

They become the architecture.

Deferring those decisions is not neutral. It simply means they are made implicitly rather than deliberately.

The OASIS moment

The story is less about the existence of the OASIS than about what happens when control of something essential changes hands.

Halliday understood that once direction drifted, it would be difficult to recover. That is why he created the contest. Not as entertainment, but as a safeguard. A way to protect the system’s purpose while the opportunity still existed.

As AI agents mature into persistent, personal digital selves, will they reach a similar moment? The open question is whether those selves primarily serve individuals or the platforms that host them.

If you are building AI products today, you are not only shipping features. You are establishing defaults. And defaults tend to outlast the intentions that created them.

If we are not careful, we may find ourselves on omnidirectional treadmills, running in place just to stay in a game we never agreed to play.

© 2025 Tales of Product. All rights reserved.