It starts with a promise: “Here’s how I went from 0 to 7 million followers in 30 days and you can too.”
It unravels step by step: “Start by using TopicBeastAI to find out what topics are popular. Then use QopyQat to find out which posts on those topics get the most engagement. Then use FormulaiQ to derive the underlying post structures. Then use PostFaker to generate your own version of each post.”
Set aside whether the suggested tools are effective. Set aside whether the process generates the sought-after followers. The entire idea of it is fundamentally, vomitously broken.
I know: It’s not new. But it is enabled and accelerated by generative AI.
It’s bad enough to outsource creativity to ChatGPT. What this type of process does is outsource our lack of creativity.
When we ask our AI tools to find a “proven formula,” by definition, what they come up with is not new. It’s a reconstitution of what everyone else has done. The irony of it all is that generative AI itself functions in this way: by digesting and regurgitating content. And where that gets us, as Azeem Azhar pointed out this week, is “reversion to a bland mean.”
The bland mean is an issue even before we get to the concept of model collapse: “the degenerative process that large language models like ChatGPT can experience when they're trained on AI-generated junk data.”
Model collapse is what you get when AI ingests all the existing data, pulps it, and then spits out so much rehashed content that it begins to feed on itself.
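To make "feeds on itself" concrete, here is a minimal, hypothetical sketch (mine, not from the article, and vastly simpler than a real LLM): fit a toy "model" to some data, then train each new generation only on the previous model's own output, and watch the diversity of the data drift and shrink toward a bland mean.

```python
# Toy illustration of model collapse: each generation is trained only on
# samples produced by the previous generation's model, so estimation error
# compounds and the data's diversity tends to shrink over time.
import numpy as np

rng = np.random.default_rng(0)

# Generation 0: "human" data, wide and varied.
data = rng.normal(loc=0.0, scale=1.0, size=100)

for generation in range(20):
    # Fit the "model" (here, just a mean and standard deviation) to the data.
    mu, sigma = data.mean(), data.std()
    print(f"generation {generation:2d}: mean={mu:+.3f}, std={sigma:.3f}")
    # The next generation sees only the model's own output, not the original data.
    data = rng.normal(loc=mu, scale=sigma, size=100)
```

Run it and the standard deviation tends to wander downward generation after generation; the variety of the original data never comes back, which is the toy-scale version of the degeneration described above.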
We’re not far off. A couple of weeks ago, OpenAI CEO Sam Altman tweeted, “open ai now generates about 100 billion words per day. all people on earth generate about 100 trillion words per day.”
He said that like it’s a good thing.
Last month, Vice reported that, already, “a ‘shocking’ amount of the internet is machine-translated garbage, particularly in languages spoken in Africa and the Global South.”
(All of this, of course, is before we even get into hallucinations or tools designed to “poison” content to protect copyright.)
In the face of this pull toward pre-chewed dreck, rather than striving to be more creative and innovative, we’re mimicking the very reduction in creativity that the AIs themselves produce. Instead of developing our own positions, opinions, and points of view, we’re absorbing existing content and spitting out what we think is expected of us.
We’re turning ourselves into human versions of generative LLMs.
The reason to share on LinkedIn is not that you’ve found a way to digest and regurgitate popular content. The reason to share on LinkedIn is that you have something of value to offer.
If you are crafting what you share to reverse-engineer popularity, you’ve already lost.
If the AI-regurgitated content is generating impressions and ad dollars, they have WON.
A sage post, Kaila. Thank you.
And by the way, regarding JW's post, the content regurgitators may be winners in ad dollars (if it sustains), but the loser is us, the readers, in the long run.
Thanks, John and JW. So true: winning in the short term might destroy all the value in the long run.