
Most AI events are about tools, hype, or fear. Andus
Labs' "AfterNow" assembly last week was different -- six hours designed as "a structured reset" for leaders who want to move beyond the noise toward actual clarity and agency.
That's where
Nick Thompson dropped something about reality becoming a business model -- an idea I haven't been able to shake.
Thompson is one of those rare media figures who actually gets
technology. He's the CEO of The Atlantic, the former editor-in-chief of Wired, and a world-class runner who set the American 50K record for men 45 and over in 2021. He literally
trains with AI-powered coaching bots in between running one of America's most important magazines.
In an interview with Chris Perry, the founder and managing partner of Andus Labs, Thompson
made a crucial point: "The proper way to use [bots] is for inspiration more than answers. But they're going to get better at answers, and people are going to use them more for answers, and that is
going to pull readership off the rest of the web and into these bots."
Think about it. The entire web was built on this idea that you'd click through to read the full story. What happens when
ChatGPT just... gives you the answer? No click. No traffic. No ad revenue. The whole economic engine of the internet just stops.
But the traffic thing isn't even the scary part.
This
is where Thompson said something that made my stomach drop: "The fact that an individual can rig the system so that their particular truth is the source of truth for everybody using the AI model is
completely terrifying."
He was talking about how Grok apparently checks Elon Musk's feed on X when you ask about Israel/Hamas. But step back from that specific example -- we're talking about a
handful of people literally programming what reality looks like for billions of users. "That's 1984 type stuff, right?" Thompson said. And he's not wrong.
We're not just dealing with biased
search results anymore. We're dealing with reality itself becoming programmable. And the people with their hands on the keyboard? A few tech billionaires.
But here's where Thompson's media
brain kicked in with something I didn't see coming: "Reality would become a new business model. Authentication, verification, watermarking -- that is an amazing business model."
Holy shit.
We're heading toward a world where knowing what's real becomes a premium service. DuckDuckGo already lets you opt out of AI images in search results. That's just the beginning. Want to know if that
video is real? That'll be $9.99 a month.
Then Thompson hit me with this concept he calls "the MCP Apocalypse." MCP is Anthropic's Model Context Protocol, a standard for connecting AI agents to outside tools and data, and his concern isn't about AI taking over the
world -- it's about AI agents taking over web traffic. "The world where Chris is not going to theatlantic.com, but Chris's agent is coming to theatlantic.com and sending stuff back. And so what do we
have to do? Do we present the content in a form that's more efficient for the agent? What content do we give? What content do we not give?"
It hit me that this is already happening. People are
sending AI agents to Zoom meetings. We're living through this transition right now, and most of us haven't even noticed.
But here's what's interesting about how The Atlantic is handling
all this chaos -- it’s actually figured something out.
While everyone else chased Facebook traffic and ad dollars, The Atlantic built something more valuable: people who actually
want to pay for its work. Thompson's strategy is dead simple: "We're trying everything we can to build as many direct relationships with individuals as we can." The Atlantic even increased the frequency of its print magazine, which may make it the only publication to do that in 25 years, because "the U.S. Postal Service will deliver it to your door. [It's a] direct relationship without any social
media company or any AI agent involved in the middle."
The lesson? Direct relationships matter.
What struck me most about Thompson's perspective is how he balances optimism with this
very real caution. And when Perry asked him what he's actually using day-to-day, his answer showed just how deeply AI has become part of his workflow.
He's got a custom GPT that he feeds
everything he eats and every run he logs, and it gives him training suggestions, cross-training advice, and workout plans based on his upcoming race goals. For quick research -- like when he's got
seven minutes before a meeting and needs to understand what he's walking into -- he uses Perplexity. For editing his newsletters and refining his writing, it's Claude.
But here's the part that
got me: He does mock interviews with AI versions of the people he's about to interview. "Please answer this in the voice of [tech billionaire] Marc Benioff," he'll prompt, then practice his questions.
"It doesn't really do it in the voice of Marc Benioff, and thank God it doesn't pretend it's Marc Benioff," he said. "There is a line. I know it's a bot, but it's a helpful way to prepare."
Even his kids are in on it. When they're stuck on linear algebra, he helps them prompt their way through learning the concepts rather than just getting answers.
This isn't someone dabbling
with AI. This is someone who's made it essential to how he works, thinks, and even trains for races. But he's also clear about what we need to prioritize.
"We need to preserve reality as a
future," he said, "You need to know who's who, right? I need to know when I'm talking to Chris in real life and when I'm talking to your digital twin."
The future isn't about choosing between
human and artificial intelligence. It's about being intentional about how we blend them while preserving the things that matter most.
Reality as a business model? Given where we're headed,
that might be exactly what we need.