In Thursday's edition of "Red, White & Blog," I wrote about the rapid acceleration of AI-generated "news" sites detected by NewsGuard's new AI Tracking Center and posed the question of how that compares to the proliferation of human-generated news sites.
During a follow-up interview with NewsGuard co-CEO Steven Brill, we agreed the best source would be Comscore, but that even Comscore probably doesn't actually know.
As a proxy, I asked Brill how many new news sites NewsGuard has been indexing in its database since it launched.
"Ten to 15 each month," he said.
Taking the high end of that range, I figured AI-generated news sites have been appearing at roughly 10 times the rate of human-generated ones since NewsGuard began tracking them a little more than a month ago, and did some simple math to show how that gap would compound over time (see chart below).
Obviously, we have no idea what the actual rate of new news-site creation will be over time, but assuming there are no changes in technology, regulations, laws and/or publishing industry protocols, I'm going to guess it will not go down -- only up.
By the way, if I carry the math out to 2045 -- the year Ray Kurzweil predicts we will reach the Singularity -- the number of AI-generated news sites added will be 39,600, but it won't matter, because once we fuse with AI we'll all instantly know everything about anything as soon as it happens. Even the stuff that never actually happened.
So no need for news publishers anymore, right?
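For anyone who wants to check my back-of-the-envelope arithmetic, here is a rough sketch of it in Python. The flat rate of roughly 150 new AI-generated sites a month, the 15-a-month human-generated figure, and the mid-2023 starting point are my own assumptions drawn from NewsGuard's numbers -- not anything NewsGuard itself projects:

```python
# Back-of-the-envelope projection (my assumptions, not NewsGuard data):
# NewsGuard's count of AI-generated news sites went from 49 to 150 in a
# little over a month, so assume a flat ~150 new AI-generated sites per
# month, against Brill's high-end estimate of 15 new human-generated
# sites per month, running from mid-2023 through 2045.

AI_SITES_PER_MONTH = 150               # assumed constant rate
HUMAN_SITES_PER_MONTH = 15             # Brill's high-end estimate
MONTHS_TO_2045 = (2045 - 2023) * 12    # 264 months

print("AI-generated sites added by 2045:",
      AI_SITES_PER_MONTH * MONTHS_TO_2045)      # 39,600
print("Human-generated sites added by 2045:",
      HUMAN_SITES_PER_MONTH * MONTHS_TO_2045)   # 3,960
```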
All joking aside, I had a pretty substantive Q&A with Brill as part of my follow-up, so I am publishing it here for anyone who cares.
MediaPost: It’s remarkable what you’ve found -- that in a little over a month, the number of AI-generated, unreliable news sites has tripled from 49 to 150. My follow-up question is, during the same period, what would be the normal number of human-generated news sites created?
Steve Brill: It’s a Comscore question that Comscore probably wouldn’t be able to answer, but we probably add -- over the period of a month -- 10 new news sites, maybe 15. That’s not to say some of them don’t generate zero traffic -- sites that nobody sees. But this is extraordinary.
The most important thing is that it’s really a harbinger of what is yet to come. I can imagine some political consultant -- or dozens of political consultants -- are going to figure out that you can create whole websites in the run-up to the 2024 election. And if I’m a candidate in an XYZ swing congressional district, I can create a website that is directed squarely at my opponent in that campaign, in which one news item is that incumbent congressman so-and-so attended a ribbon-cutting ceremony for a new center supplying food for veterans, and the next news item is that his or her opponent was caught in a cheating scandal when they were in college.
Now it takes no effort to create persuasive news sites that look like real news sites, and you can target them to a specific demographic.
Just like you can ask ChatGPT to “write me an explanation of what network television used to be like for a 15-year-old high school student,” it can create a website for me in Peoria, IL, directed at people there, that has news of Peoria and includes a story about a scandal involving so-and-so.
MediaPost: So this is like Russia’s Internet Research Agency on steroids. That was human-powered, done cheaply with Russian labor.
Brill: That’s exactly right. This is really exponentially weaponizing the internet.
MediaPost: Do you have any idea how this scales, and when we’ll get to the point where people just don’t believe anything anymore?
Brill: Right, and it could be that people don’t even believe the truth anymore. I mean, just imagine if the “Access Hollywood” tape came out now instead of in 2016: Trump would just say it’s fake, and a lot of people would believe it’s fake. Because guess what -- you can fake that tape today.
The issue is that you really need to train these machines to pay more attention to some websites vs. others. Because right now, what the machines do is they just read everything online and they treat The Economist the same way they treat Alex Jones, or Pravda even.
ChatGPT will repeat whatever they publish, whereas Bing Chat -- which is using NewsGuard now -- will say something different.
MediaPost: You’re not talking about willful disinformation, per se, but misinformation from bogus money-making news sites scraping bad information?
Brill: Well, it’s the same thing. You can train your generative AI machine so that even if someone prompts it to, it won’t say the Parkland shooting was staged by crisis actors.
MediaPost: Again, any projection for how this might scale over time?
Brill: It will scale over time and it will make the Internet Research Agency look like amateurs.