
Fun fact: When DeWitt Wallace founded Reader's Digest magazine, it was because he believed there was more news and information being published than humans could possibly process. That was 1920. Before TV. Before the internet. And certainly before AI.
So as I listened to Emily Christner, Chief Product Officer of Reader's Digest publisher Trusted Media Brands, outline how the company is dealing with AI at
MediaPost's Publishing Insider Summit in New Orleans Wednesday morning, I couldn't help wondering what Wallace would think about today.
Christner's presentation -- "Task Forcing AI" --
outlined how the company is organizing its internal teams to assess, inform and prepare for the impact AI will have on the way it publishes, as well as the environment it publishes into. You know, the
world.
It led into a panel discussion in which two other publishers -- IGN's Justin Davis and Gray Jones Media's Richard Jones -- gave reactions and also shared the steps they are taking to
adapt to an AI-powered information world.
During the Q&A, I couldn't resist pointing out the irony of DeWitt Wallace's inspiration for conceiving Reader's Digest, and asking the
panel -- Christner in particular -- whether AI wouldn't exacerbate the published information problem by magnitudes previously unknown, and, perhaps more troubling, whether the kind of synthetic,
non-human-generated information it publishes might not begin to "crowd out meaning."
I'll get to their responses in a minute, but first I'd like to share something else with you, which
explains why I put the chart you see at the top of this column. As I was flying back from New Orleans late last night, I received an early look at an announcement NewsGuard was making this morning
about a new AI Tracking Center to help publishers, advertisers, agencies, and governments -- and, I would assume, other kinds of people too -- understand the scale at which AI is generating
publications, especially the kind whose reliability is questionable.
Utilizing site domains as its denominator, NewsGuard's Tracking Center estimates that as of Wednesday, it had detected 150
"unreliable AI-generated news sites."
I'm not sure what that number means in the abstract, but in terms of progression, it means the number of such sites has roughly tripled from the 49 NewsGuard had detected about a month ago.
And that's when I heard Wallace rolling over in my mind.
It's also when a new question entered my mind: What was the corresponding growth
(or decrease) of human-generated news sites (reliable or otherwise) over that same period of time?
I posed that question to the NewsGuard team this morning, and I'm waiting to see how they
reply, even if it's just anecdotal.
As I wait, something else also came to my mind. It was a mind-blowing stat that one of the world's foremost experts on digital information -- then Microsoft
Bing chief Stefan Weitz -- told Brian Monahan and me when we interviewed him for an issue of MEDIA magazine that Monahan
guest-edited in 2011.
"As we become more reliant on the information that is accessible over the Web, there's this sense that the rest of the information sort of doesn't exist," I put to him.
His response: "There’s this concept of the 'dark web,' which is that we think we only have access to about 5% of all the knowledge actually stored out there on the web. So
much has been firewalled, or it’s on paper, and it’s on company databases — those sorts of things. It’s just a matter of the 'dark web' becoming light."
The first part of his response -- that Microsoft had an estimate of how much of all human information had already been digitally indexed -- shocked me. The second part convinced me that, yes, digitally accessible information was beginning to crowd out the supply of total human information.
A couple of years later, I re-interviewed Weitz and asked him how much human information digital search engines had indexed by
then, and that answer also shocked me, because he said it went down -- not up -- because there was so much new, unstructured micro information content being published online, thanks to the
proliferation of social media, user-generated content, etc.
I have no idea how much has been indexed as of today, or whether it has gone up, down or sideways -- but if any of you know, please
post a comment below or contact joe@mediapost.com.
But my point is that the acceleration of AI-generated information benchmarked by NewsGuard raises so many questions about what indexing information even means. Heck, it raises questions about what information is.
The reality is that unless there are some major changes soon in the way we as a society account for that, more and more synthetic information will begin to crowd out the good old-fashioned kind -- what we think of as "truth," or at least human-generated versions of it.
I don't know how we
even begin to crack that code, but NewsGuard's new center and task forces like the one Trusted Media Brands and other publishers are beginning to put in place are a good start.
What I don't hear a
lot about is what the Big Tech players -- the ones actually engineering all the new generative AI tools: Microsoft, Google, OpenAI, etc. -- are doing to help with that.
Turning back to Wednesday's publishing summit AI panel, I'm not sure whether they actually understood my question or were just dodging it, because their responses were mostly about why AI would never disrupt the human-generated content they publish -- on the theory that readers will always gravitate toward human-created, "journalism with a capital J" content.
I'm not so sanguine.
But as you may already know, I am fairly sarcastic. As I was leaving the summit, I heard a few publishers kibitzing while checking out in the hotel lobby.
"We're thinking a lot about AI," I heard one say.
"The real question is," I jumped in, "is AI thinking about you?"
