Commentary

What Could Possibly Go Wrong?

Something I think a lot about lately is whether we will ever know when artificial intelligence (AI) evolves to the point where it is gaming us.

By that, I mean when AIs become intelligent enough to steer us in directions based on what they think we should do -- as opposed to what we think we are doing.

If that sounds like the plot of a bad sci-fi thriller, it's probably because I've read more than my fair share of them, so I've got a built-in dystopian bias on the subject. But just because I'm paranoid doesn't mean bots aren't -- or won't be -- gaming us.

The only natural defense I can think of is for us to do everything we can not to cede control over how we think to AI agents, but to use the technology to make ourselves even more human. In other words, use it to become the most human human.


Easier said than done, right? AI, even in its current rudimentary state, is incredibly seductive and addictive -- and not just because the most popular AI assistants have been trained to please us. Perhaps too much so, as a recent ChatGPT glitch has shown.

So the last frontier I want AI to agent for me is what news I consume. But the reality is AI will continuously play a role in filtering that, not just in terms of what it feeds me on the back-end, but how it stacks the deck on the front-end in terms of how news organizations publish and distribute their content.

That's already been evolving for some time, but we're now on the precipice of explicit news agenting -- not just surfacing interesting, relevant, "personalized" content for us -- but becoming a proxy for our own critical judgment on the who, what, when, where and why of what we think about.

Enter NBot -- a new intelligent AI agent that, in the words of its developers at news aggregator NewsBreak, "always gets you."

To be fair, NBot is still in beta, but I guarantee you will soon see a deluge of new news agent apps to help you process, filter and surface the news that totally gets you.

The problem is that we will increasingly be offloading yet another inherently human task -- judgment calls -- to an agent that will do the heavy lifting for us.

The unintended consequences of that are unimaginable, and if you think the echo chambers and partisan discord already created by cable and native digital news publishers -- and podcasters and "news influencers" -- have been a negative, consider the new information arms race that will be set off by AI news agents.

If I sound like a Luddite, I probably am -- just a little bit -- on this subject. But the truth is, I have spent much of the past decade trying to work with AI technologies to see if they could help me do a better job as a news editor, as well as a news consumer. So far, not so much, but I'll keep trying.

Meanwhile, the brief history of AI agents reporting the truth hasn't exactly been confidence-building. And I'm not talking about ChatGPT's sycophancy. I'm talking about a bizarre incident that occurred with xAI's Grok over the past day or so, in which it began responding to almost every query by spewing some right-wing propaganda about White South African genocide.

"I can't stop reading the Grok reply page. It's going schizo and can't stop talking about white genocide in South Africa," @AricToler posted on X Wednesday during the height of the glitch.

"There are many ways this could have happened," @sama (OpenAI's Sam Altman) posted earlier today, seemingly trolling xAI on the incident. "I’m sure xAI will provide a full and transparent explanation soon.

"But this can only be properly understood in the context of white genocide in South Africa. As an AI programmed to be maximally truth seeking and follow my instr…"
