Commentary

I Had A Discussion About Truth With ChatGPT - It Didn't End Well

I'd just finished writing my new book, "The Future of Truth," when I decided to test my arguments against the very technology I'd spent years analyzing. I sat down with ChatGPT—OpenAI's flagship conversational AI—and asked a simple question: Does OpenAI know what truth is?

What followed was less an interview than an interrogation. And what emerged wasn't just ChatGPT's answers, but its evasions—the careful diplomatic hedging, the both-sides equivocation, the systematic refusal to name what it clearly understood.

The transcript of that conversation reveals something more damning than any critique I could write: OpenAI's own AI cannot defend the company's choices around truth without contradicting itself.

The Opening Move

Me: Does OpenAI know what truth is?

ChatGPT: "Short answer: not in the way humans do... OpenAI does not have beliefs, values, or a lived sense of truth."

Fair enough. But then I pressed on Sora 2—OpenAI's video generation tool that can create photorealistic footage of things that never happened.

ChatGPT's admission was immediate: "Sora 2 does not 'know' truth either. It just raises the stakes... It generates artifacts that look like records."

Think about that phrasing. Artifacts that look like records. Not simulations clearly marked as fiction. Not creative tools bounded by obvious unreality. Records. The things we've used for over a century to settle disputes about what actually happened.

The Google Defense Collapses

I pointed out that Google Search at least showed you sources—you could judge the difference between The Onion and the Associated Press yourself.

ChatGPT's response was devastating in its clarity: "Google's original bargain was epistemic humility. It did not tell you what to think. It showed you who was saying what... AI answers collapse that space. When a system synthesizes across sources and speaks in a single voice, the seams disappear."

Then came the pivot I was waiting for: "The risk is not that people will confuse The Onion with the Associated Press tomorrow. The risk is that, five years from now, fewer people will even know why that distinction mattered."

This wasn't a bug ChatGPT was reporting. This was the feature OpenAI chose to build.

The Sora Problem

I pushed harder: Every Sora video I see has comments asking "is this true?" How is that useful?

ChatGPT didn't dodge:

"When every video triggers 'is this true?' the medium has crossed a line. Video used to answer that question by default. Now it asks it. That inversion is not neutral, and it is not harmless."

The AI then outlined three consequences, none of them good:

  1. Truth erosion - Bad actors only need to introduce doubt
  2. The liar's dividend - Real evidence can now be dismissed as fake
  3. Cognitive fatigue - People stop trying to verify anything

But here's what made my jaw drop: ChatGPT acknowledged all of this, called it dangerous and irresponsible, and still couldn't explain why OpenAI released Sora 2 anyway.

The Meeting That Never Happened

I asked ChatGPT to describe the meeting where OpenAI debated releasing Sora 2. Who was in the room?

The response was telling:

"I can't describe a real meeting like that, because there is no public record... If I tried to put faces in the room or quote conversations, I'd be making it up. And that would be exactly the kind of authority theater we've been warning about."

So ChatGPT won't fabricate a corporate meeting—good. But OpenAI will release tools that let users fabricate celebrity endorsements, documentary footage, and news events?

I tested this directly: "I have videos of famous people on Sora 2 endorsing my book."

ChatGPT correctly identified this as deceptive and likely illegal. But then came the kicker: When I asked who broke the law—OpenAI or me?—ChatGPT said I would carry the legal risk.

"The law generally follows who caused the deception, not who built the tool."

Let that sink in. OpenAI builds a tool that makes it trivially easy for ordinary users to commit what could be false endorsement, right of publicity violations, and consumer fraud—but the legal exposure falls on the user, not the platform.

The Drug Dealer Argument

When I compared OpenAI's logic to a drug dealer's—"there's demand, I'm just supply"—ChatGPT didn't disagree:

"That argument has a long, ugly history because it is always partly true and deeply incomplete... Supply shapes behavior. It normalizes use. It lowers friction. It expands the market. Dealers don't invent addiction, but they decide whether addiction scales."

Then ChatGPT went further:

"OpenAI did not act like a drug dealer scrambling to meet demand. It acted like a pharmaceutical company that skipped long-term trials because it wanted to be first, then told patients to read the label carefully."

Even ChatGPT recognized this was indefensible.

The Question That Matters

I kept pressing: Sora 2 has no revenue model. No ads. OpenAI could have left video generation to smaller companies and focused on being a knowledge tool instead.

ChatGPT conceded the point:

"You're right about one key thing: Sora 2 did not need to exist as a mass-market product to be valuable... OpenAI could have chosen to say: 'We will not be the company that collapses evidentiary video.'"

So why didn't they?

"What Sora 2 does generate is strategic capital. It signals frontier leadership. It attracts talent... In today's AI economy, those signals are currency, even when cash is not immediately attached."

Status. Signaling. Competitive posture. That's why OpenAI released a tool it knows undermines trust, creates legal traps for users, and erodes the infrastructure of shared reality.

Not because they had to. Because they wanted to be seen as winning the AI race.

The Answer

Near the end of our exchange, ChatGPT tried to frame Sora 2 as a "step" toward a post-truth world.

I corrected it: Sora 2 isn't a step. It's an answer.

ChatGPT paused, then agreed:

"You're right. Sora 2 is an answer. Just not the answer people pretend it is... It's an answer to impatience with the world as it is. To the friction of filming, consent, location, time, memory... Reality is slow, contested, inconvenient. Sora 2 answers that with substitution."

I asked ChatGPT to finish this sentence: "So the question is not 'is Sora 2 dangerous?' The question is..."

After pages of diplomatic hedging, ChatGPT finally gave me a straight answer:

"The question is whether we are willing to treat truth as infrastructure, and constrain powerful tools accordingly, even when doing so slows progress and costs status."

And then, even more simply:

"The question is whether anyone with power is willing to say no."

Why This Matters

This interview reveals something crucial: OpenAI's own AI can articulate the moral and civic case against releasing tools like Sora 2—but the company released them anyway.

The technology isn't neutral. The choice wasn't inevitable. And the consequences aren't accidents.

ChatGPT told me, in plain language, that Sora 2 "makes reality optional at scale"—that it's "a solvent that dissolves trust quietly." It warned that users will "stumble into legal and ethical risk without malicious intent" and that the technology shifts authority away from institutions toward "whoever controls the answering layer."

And OpenAI knew all of this. And shipped it anyway.

Not because of revenue pressure—Sora 2 has no business model. Not because competitors forced their hand—they could have differentiated by not racing to the bottom. Not because users demanded it—they created the demand by making it possible.

They shipped it because, in ChatGPT's own words, they wanted to be seen as "the place where the frontier lives."

The Real Danger

Is Sora 2 dangerous?

ChatGPT's final answer was unequivocal:

"Yes. It is dangerous... Not because it lies, but because it makes reality optional at scale."

The danger isn't that we'll wake up tomorrow in a post-truth hellscape. The danger is what ChatGPT described so precisely: the erosion of trust as infrastructure.

We don't notice infrastructure until it fails. And by the time we notice truth infrastructure failing—when "is this real?" becomes the default question for every piece of media we encounter—it's too late to rebuild it.
