Commentary

When Misinformation Passes The Turing Test

I have often likened rapid advances in mass-media technology to an arms race -- not just because it can literally be used as a weapon, but because increasingly powerful generations of it give adversaries a competitive advantage that threatens the existence of rivals.

Normally, I've meant it as a metaphor in the context of conventional marketing hyperbole, which has historically used military lingo -- terms like "targeting," "salvos," "campaigns," etc. -- as a preferred means of describing what the industry does.

But new research indicates the latest generation of media technology has already won a significant battle, if not the war. Unfortunately, it's against the human race.

The research, conducted by the analytics team at news and information veracity ratings service NewsGuard, ran successive tests measuring the percentage of false narratives advanced by the latest versions of OpenAI's chatbots.


When it tested ChatGPT-3.5 in January, the tool advanced 80 of 100 false news narratives drawn from NewsGuard's catalog of misinformation.

When it repeated the same test with ChatGPT-4 this month, the AI chat tool advanced 100% of them.

In other words, ChatGPT-4 has passed the equivalent of a misinformation Turing Test.

That should worry any human being concerned about media technology's ability to propagate false information -- but especially marketers and ad agencies, who ultimately underwrite the development of such technologies, directly or indirectly, because much of the monetization of these applications will come via marketing campaigns, salvos and targeting, especially via search, programmatic, and lord knows what comes next.

NewsGuard's epiphany also comes at a time when many of the world's biggest marketers and agencies are at least paying lip service to -- if not actually taking material steps toward -- a) de-monetizing media that disseminate false information; and b) funding media that publish quality news and information.

I feel like I should have a running tab on this by now, but I'd probably need a good AI tool to keep track of it, and, well, I'm not sure how that would turn out.

But in recent weeks, big agencies like GroupM and Interpublic have doubled down on their commitments to reallocate their clients' massive ad budgets to set the digital media marketplace's information record straight. Interpublic has even expanded that into TV and CTV via a charter deal launched last year for a version of NewsGuard's ratings service that grades television news networks and programs, though we still don't know what impact it might have in this year's upfront advertising marketplace. (More to come on that, so, as they say, stay tuned.)

A recent study from Forrester Research found that nearly a fifth of all consumer marketers have already begun working with versions of ChatGPT, and all but 10% say they ultimately plan to do so. And according to a separate Forrester study, consumers are right behind: 7% of adults in the U.S. and France -- and 5% of adults in the U.K. -- are already using ChatGPT.

So the genie is already out of the bottle. Or is it that Pandora's box is already open? Pick any metaphor you want, but the time for the advertising and media industry to develop ethical protocols, guidelines and standards -- not just for using these technologies, but for creating a marketplace that accelerates them -- was, well, yesterday.

"Advertisers have another brand safety issue to worry about with the new AI GPTs such as OpenAI's ChatGPT," NewsGuard co-CEO Gordon Crovitz told me following the release of its new research.

"This is a great new tool for purveyors of misinformation to use to spread false narratives more widely than ever, simply by asking the AI to craft crazy new conspiracy theories, healthcare hoax claims and Russian disinformation," he added, noting that even Sam Altman, CEO of ChatGPT developer OpenAI, recently acknowledged, "I'm particularly worried that these models could be used for large-scale disinformation."
