When Merriam-Webster announced that its 2025 Word of the Year was "slop," it made a disturbing amount of sense -- at least from the perspective of someone (me) who reports on the ins and outs of
social media.
Throughout the publication’s 30-year history, no MediaPost writer had used the word "slop" to describe AI-generated garbage content in a news story or commentary until 2025, when contributor Kaila Colbin wrote a piece imagining an AI-prevalent future.
The venerable dictionary publisher defines "slop" as “digital content of low quality that is produced usually in quantity by means of artificial intelligence,” or “all that stuff dumped on our screens,” according to Merriam-Webster’s online announcement.
“The flood of slop in 2025 included absurd videos, off-kilter advertising
images, cheesy propaganda, fake news that looks pretty real, junky AI-written books, 'workslop' reports that waste coworkers' time… and lots of talking cats. People found it annoying, and
people ate it up,” Merriam-Webster added.
Despite the widespread cross-platform social media slop problem piling higher every day, the only times I have written about "slop" for MediaPost have been in reference to Pinterest, a popular platform that has become a target of user complaints about unwanted, worthless content.
Over the past year, low-quality AI-generated spam content has plagued Pinterest, ranking in the top results for popular searches across all categories and often linking confused users back to AI-powered content-farming sites, prompting the use of an even more damaging term: "enshittification," the gradual decay of a platform over time.
Unfortunately for Pinterest, its users, and the legitimate human-run brands trying to capture users’ attention, the tools put in place to label and cut AI slop from the app’s feed don't stand up against the platform's need to make money. In November, Pinterest's shares fell 20%.
And it's not just Pinterest. Other major social media platforms and apps are also becoming hubs for low-grade non-human content.
Over the weekend, The Guardian published a report finding that over 20% of the videos YouTube's algorithm shows to new users are AI slop, generating over 63 billion views, 221 million subscribers, and $117 million in yearly revenue.
Here is another fun word: “brainrot.” This is a newly defined category of videos that includes AI slop and other low-quality output specifically designed to monetize user attention.
The report found that one-third of the 500 videos recommended in a new user’s feed fell into this category.
Another report from CNET blames generative AI tools like OpenAI's Sora for making the
social-media user experience even more isolating, detailing the increase in AI-generated slop and ads across all social platforms, including both Instagram and TikTok.
What were once online spaces developed and designed for human users to connect with one another are quickly becoming polluted by content that neither users nor advertisers want to see.
While AI slop may boost revenue for apps and streaming services under their attention-based monetization models, it undermines brands' core goals of appealing to audiences, trusting performance data, and running effective campaigns.
When AI-generated misinformation tricks the algorithm and authentic engagement fades from these online social spaces, brands can easily waste ad spend and jeopardize their brand-safety ratings.
Right now, social media companies have yet to roll out tools and policies to properly cut down or label AI slop.
Part of the reason is that generative-AI tools like OpenAI's Sora are so
powerful and easy to use that slop content can be created and shared quickly and effortlessly by anyone. Furthermore, as yet there is no ecosystem-wide effort to address deceptive AI-generated content
online.
Another part of the reason is that social-media companies are not prioritizing the deletion of AI slop because they are making money from its existence. However, it’s possible --
and perhaps even likely -- that these tech giants will try harder to cut out AI slop once it truly affects their reputation and bottom line.
“In the long term, once 90 percent of the
traffic for the content in your platform becomes A.I., it begs some questions about the quality of the platform and the content,” Alon Yamin, the chief executive of Copyleaks,
an AI content detection company, told The New York Times, adding: “So
maybe longer term, there might be more financial incentives to actually moderate A.I. content. But in the short term, it's not a major priority.”
It doesn't take a report or a commentary like this one to demonstrate social media's current plunge into AI slop.
Just scroll any app for a few minutes and you will see for yourself.
For now, AI slop is not going anywhere.