Commentary

AI And Content Creation: The Coming Content Avalanche


If you’re like me, the growing drip, drip, drip of the content faucet is pushing you to the edge: posts, pings, notifications, alerts. The sheer volume of content demanding your attention chews at your consciousness with a grinding urgency.

Tech journalist Charles Arthur makes a compelling argument that content creation is at a tipping point. Arthur is the author of the Substack newsletter “Social Warming,” about social networks’ effects on society.

“The approaching tsunami of addictive AI-created content will overwhelm us,” warns Arthur.

The tsunami he points to is what academics call synthetic media: media created entirely by artificial intelligence. I’ve been keeping a close eye on these content-creation robots as they have moved from the labs to publicly accessible test programs.

OpenAI has released DALL-E 2 and GPT-3 to the public. DALL-E 2 creates images from simple text prompts, and GPT-3 writes in full paragraphs. There’s even a demo site that can create a video from simple text.


There are already areas where AI is writing news reports published for public consumption. At the Miami Herald, AI is writing local real estate stories.

It costs an AI virtually nothing to generate a thousand versions of a story from the same basic facts, and then use generative adversarial networks (GANs) to winnow them down to the versions humans are most likely to respond to.

The text of a story can be turned into speech (think of podcasts produced without any humans involved) or into videos, using fully digital “metahumans” that present information or act in digital movies, again without humans.

I want to be clear: This is not science fiction we’re talking about. It is science “fact.” These machine-created content outputs are all here now. 

Here are a few examples of AI-generated text. 

Q: Write a paragraph of what synthetic media is.

A: Synthetic media is a term for the artificial generation of media content and artifacts. It is a rapidly growing field with applications in many areas, including computer vision, natural language processing, and digital media synthesis. In computer vision, synthetic media can be used to generate realistic images or videos from scratch. In natural language processing, it can be used to generate text that sounds like it was written by a human. And in digital media synthesis, it can be used to generate realistic sounds or 3D models.

The thing about AI is that it answers with such confidence and precision that it presents its output as unassailable truth.

For example: 

Q: Tell me about Steven Rosenbaum.

A: Steven Rosenbaum is an American businessman and author. He is the CEO of Magnify.net, a video curation and publishing platform, and the author of Curation Nation.

Rosenbaum was born in New York City and raised on Long Island. He graduated from the University of Pennsylvania in 1983 with a degree in economics. After college, he worked in the advertising industry for several years before starting his own company, Rosenbaum and Associates, in 1989.

Rosenbaum is the author of Curation Nation, a book about the power of curation in the digital age. He is a frequent speaker on the topic of curation and its role in the future of media.

So, here's the thing about the AI answer above: it's not only wrong, it’s dangerously wrong. It mixes old facts, new facts, and outright errors without any indication of the source or the quality of the result.

Here are the correct details:

A: Steven Rosenbaum is an American businessman and author. (TRUE) He is the CEO of Magnify.net, a video curation and publishing platform, (TEN YEARS AGO) and the author of Curation Nation. (TRUE)

Rosenbaum was born in New York City and raised on Long Island. (SORT OF) He graduated from the University of Pennsylvania in 1983 with a degree in economics. (WRONG!) After college, he worked in the advertising industry for several years before starting his own company, Rosenbaum and Associates, in 1989. (WRONG!)

Rosenbaum is the author of Curation Nation, a book about the power of curation in the digital age. (TRUE) He is a frequent speaker on the topic of curation and its role in the future of media. (OLD)

So, imagine a future just around the corner, where AI content is generated, then turned into text, then transferred to voice, then published as a believable human digital character, and arrives in your social feed. 

I asked OpenAI’s GPT-3 a final question:

Q. What is digital overload?

A. Digital overload is a term used to describe the feeling of being overwhelmed by the sheer amount of information that is available online. It can be caused by too much time spent online, trying to consume and process too much information at once.

Here, at least, AI seems to get the answer exactly right.

Still, the main issue with AI-created content is that the distinction between right and wrong is nuanced, and often requires intuition. For example, there are multiple Steven Rosenbaums: a number of lawyers, a music entrepreneur, and an actor. To sort through the data, determine which of the many doppelgangers is me, and publish, is a non-trivial problem.

So, given the coming avalanche of robotic content, the value-add in content that you’ll be willing to pay for may well be… humans! That seems like a reasonable prognosis, at least in the near term.

*Note: The graphic for this article was produced by asking DALL-E 2 to create a photograph of an empty newsroom, and then to create a robot journalist. I superimposed one on the other in Photoshop.

2 comments about "AI And Content Creation: The Coming Content Avalanche".
  1. Steve Lundin from bigfrontier, August 29, 2022 at 1:36 p.m.

    Very interesting piece - however, the inclusion of your own resume gives it an advertorial flavor. You didn't address the impact that this will have on distorted, self-serving "fake" news - which should be profound. And what about AI filters that can sniff out the spam and tame the tsunami? As the content pool has widened exponentially, it has become correspondingly shallow - IMHO.


  2. Steve Rosenbaum from SustainableMedia.Center, August 29, 2022 at 2:39 p.m.

    Good point. I didn't really mean to be self-promotional at all. I was more curious as to how the AI would respond to a question whose facts I knew without any possibility of being wrong. What interests me most is that there is no attribution (i.e., Wikipedia) and no way to correct misinformation. So the AI simply delivers wrong information without any way to check its veracity. But I do see your point.
