
We’ve spent the last decade arguing about
misinformation. Platforms built moderation systems. Brands worried about adjacency. Policymakers debated guardrails. The assumption behind all this was simple: there is a shared reality, and the
problem is that bad information is distorting it.
Now that assumption is breaking.
In a recent conversation with journalist Nicholas Thompson, OpenAI CEO Sam Altman said something that
should have landed as a much bigger deal than it did. Asked whether models could be trained on synthetic data, he didn’t hedge much: “I believe you could get there entirely on synthetic
data.”
That’s not a technical footnote. That’s a shift in how reality enters the system.
If AI no longer requires grounding in human-created data, then it is no
longer anchored in shared human experience. It is learning from its own outputs, from other models, from a recursive loop of generated content. At that point, we are not just distorting reality. We
are beginning to manufacture it.
Altman makes this implication explicit. Pressed on whether a model trained entirely on AI-generated data could outperform one trained on human content, he
answers through a thought experiment: “Could we train a model with no human data that eventually surpassed human mathematical knowledge? I think we’d say yes.”
In other words, reality is no longer required for capability. I’d argue that truth is no longer just a philosophical concept; it is embedded in the infrastructure we use to understand the world. What Altman is describing, whether intentionally or not, is how that infrastructure is changing. It is becoming generative, self-referential, and increasingly independent of the reality it once reflected.
That changes the problem entirely. Misinformation assumes a baseline: that there is a shared set of facts that can be verified, corrected, and debated. That baseline was already under strain in the social media era, but the frame still held. People argued, sometimes aggressively, but they were still arguing within the same reality.
AI breaks that frame. These systems do not simply
retrieve information. They construct it. They assemble answers in real time, shaped by prompts, context, and increasingly by the user themselves.
Two people can ask similar questions and
receive answers that differ in tone, framing, and even implied assumptions. Each answer can be coherent. Each can feel correct. Each can reinforce a different understanding of the world.
This
is not misinformation, but the beginning of parallel realities. And it scales in a way we have never seen before.
Altman is also clear about something else that should make us uneasy. Despite
the speed and power of these systems, our understanding of how they work is incomplete. As he put it, “We still don’t have a great mechanistic understanding” of what is happening
inside these models.
So we are building systems that generate reality, training them on increasingly synthetic inputs, and deploying them at global scale, without a full understanding of how
they arrive at their outputs.
This is a very different risk profile from misinformation. Because once reality becomes something that is generated rather than discovered, the idea of a shared truth starts to erode. Not because people are being misled in the traditional sense, but because they are interacting with different, internally consistent versions of the world.
For media
companies, this shift is already visible. The web is changing. Traffic is moving away from traditional search toward AI-generated answers. Users are no longer navigating through a set of common links.
Increasingly, they are receiving synthesized responses. In many cases, those responses involve no click, no shared article, no common reference point. Instead, there’s a more personalized layer of information: useful, efficient, and increasingly opaque about how it’s constructed.
This has consequences. Media has always depended on some version of shared
attention. Even in a fragmented landscape, there were still common stories, sources, and facts that could be debated. As AI intermediates more of that experience, the shared layer thins. The audience
is still there, but it is no longer looking at the same thing.
Markets depend on shared information as well: pricing signals, forecasts, earnings, expectations. If participants in those
markets are operating from different informational baselines, even subtly different ones, the reliability of those signals begins to degrade.
Democracy does not require agreement, but it does
require a baseline understanding of what is happening in the world. Debate only works if people are arguing within the same frame of reference. If that frame fractures, if citizens are effectively
interacting with different versions of events, then consensus becomes harder to reach.
This is the shift that is easy to miss if we keep using the language of misinformation. We are moving
from a world where facts are contested to a world where realities are constructed.
And those constructed realities are not random. They are tailored, coherent, and persuasive. That makes them
more powerful than traditional distortions, not less.
We are handing over the construction of reality to systems optimized for engagement, efficiency, and scale, not for consensus.
What Altman’s comments make clear is that this is not theoretical. It is already happening.
Because once that shared layer erodes, the problem is not that we disagree. It is that we
are no longer talking about the same world.
It bears repeating: we are building systems that generate reality, training them on increasingly synthetic inputs, and deploying them at global scale, without fully understanding how they arrive at their outputs or how those outputs will evolve over time.
The question is no longer how to correct bad information. It is whether we can preserve enough shared
truth to keep our institutions functioning in a world where reality itself is being manufactured at scale.