Commentary

Artificial Truth

At a time when the advertising and media industry seems transfixed by -- and maybe a little torn over -- the rapid acceleration of artificial intelligence, we’re also starting to see some accountability put in place.

On Thursday, news and information reliability rating service NewsGuard announced it has launched an auditing service for AI providers, covering the new generation of chatbots as well as the generative AI services being baked into news, information and marketing tools, from search to publishing content.

The new auditing service comes a little more than a week after NewsGuard released findings of a tracking study analyzing the propensity of OpenAI’s chatbots to output false news and information narratives in response to queries.

The study found that the latest version (ChatGPT-4) advanced 100% of the 100 false narratives benchmarked by NewsGuard -- up from 80% when NewsGuard analyzed an earlier version (ChatGPT-3.5) in January.

While machine learning, AI and the use of increasingly sophisticated algorithms to organize, parse and spread information are not new, their power and sophistication are advancing at rates that may be greater than many humans can detect.

One of the big problems is not just that hostile actors might use the technology for nefarious, disruptive purposes, but that the technology itself doesn’t necessarily have the same safeguards about truth and reality that people do -- including the people who are training it.

Generative AI is great at iterating possible outcomes to queries and questions, but it needs to be vetted -- at least for now -- by people for truth and veracity, because as well trained as the technology currently is, it’s simply synthesizing information that is available to it. And sometimes that information is erroneous, or synthesized in ways that aren’t real.

“The good news about ChatGPT is that it will always give you an answer,” begins a joke AI experts like to tell about it. “The bad news is it will always give you an answer -- even if one doesn’t exist.”

At a time when the advertising and media industry is tackling a wide array of ethical issues about the use of media -- everything from diversity, equity and inclusion to the role media plays in climate change -- it’s probably a good time for the industry to begin adopting standards, protocols and means of evaluating the role AI plays in the information health of the ecosystem.

NewsGuard is a logical player to step forward, so good luck to them. Despite some disinformation -- ironically -- being circulated about them, they are good people and serious media-industry professionals with a strong track record for quality journalism.

Full disclosure: I worked for one of its co-founders -- Steve Brill -- back in his Brill Media and Inside.com days, so I may be a little biased, but I admired his strong ethical pursuit of the truth long before and long after he was my boss.

“For all the extraordinary promise of generative AI, it also presents a great threat to trust in information,” Brill said in a statement announcing the new AI auditing service Thursday, adding: “The early launches of these services often respond to prompts about topics in the news with well-written, persuasive, and entirely false accounts of the news. This could become a force multiplier for those wishing to spread harmful conspiracy theories, healthcare hoaxes, and Russian disinformation at unmatched scale.”

Other organizations -- most notably the Media Rating Council (MRC) -- have been developing media industry standards for evaluating and measuring the validity of AI, though those standards will likely relate specifically to how AI affects audience measurement and ratings services, including whether those services accurately represent audiences, or whether they have machine biases baked in based on the way their AI was trained.

Maybe it’s time for other industry authorities -- yes, I’m alluding to the Association of National Advertisers, the World Federation of Advertisers, or maybe even the Institute for Advertising Ethics -- to step up and set some baseline standards as well.

All the pressing issues confronting the advertising and media industry have one common ingredient necessary to advancing their causes: the truth.

Without that, we’re all just crawling down rabbit holes.

1 comment about "Artificial Truth".
  1. Jonathan May from HorseTV Global, April 1, 2023 at 10:33 a.m.

    While your concerns are lofty and valid, you know that the millennials developing this care nothing for the proper management of proprietary data. "Isn't this cool?" takes precedence over the use and proprietary administration of data and information. Simply look at the violation and monetized distribution of personal data; at one time, personal privacy mattered. In just a few minutes, I discovered your age, where you live, your address, your friends and family members, telephone and email contact information and even a Google Maps picture of your home. All available to anyone, anytime; you have no privacy anymore, and at one time that was passionately guarded information. Please, no umbrage towards me for looking; blame those who made it available without your knowledge or even consent. The same careless and reckless use and monetization of personal data will apply to AI; count on it. The genie is already out of the bottle, and it isn't going back in. How can you regulate the use of something when the people developing it don't care about the propriety of it? AI is already making hiring and firing decisions at many companies; do you really want a computer deciding YOUR future and fate? All of the apocalyptic movies showing a future where machines control the great unwashed are coming true. For every upside of technology development, there are two downsides, but no one cares. I'm honestly glad I am not 20 years old, looking perhaps at another 50-60 years of this kind of uncontrolled data manipulation. We have become a society that now prides itself on manufacturing lies and obfuscating the truth in the name of technology. We pride ourselves on taking something totally false and selling it to the great unwashed as the truth, and they believe it. Sad, very sad.
