
In lieu of its internal moderation team, which has been largely cut by CEO Elon Musk, Twitter has expanded “Community Notes” -- the platform’s crowdsourced moderation feature -- to images in posts, with the goal of flagging more “misleading media.”
“From AI-generated images to manipulated videos, it is common to come across misleading media,” wrote Twitter in describing the new feature, “Notes on Media.”
“Notes attached to an image will automatically appear on recent & future matching images,” the company added.
Community Notes contributors with an impact score of 10 -- a measure of how helpful their past notes have been -- will be able to write notes about the specific image included in a tweet, not just the tweet itself. Because a note attaches to the image rather than the tweet, it carries over when other users tweet the same image.
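Twitter has not published how this eligibility gate is implemented; a minimal sketch of the threshold check described above, with the constant and function names as illustrative assumptions, might look like this:

```python
# Minimal sketch of the eligibility gate described in the announcement.
# The threshold value comes from the announcement; the names are assumptions.
MEDIA_NOTE_THRESHOLD = 10

def can_write_media_note(impact_score: float) -> bool:
    """Contributors at or above the impact-score threshold may note media."""
    return impact_score >= MEDIA_NOTE_THRESHOLD
```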
Twitter also said it is working to expand the feature to videos and to tweets with multiple images and videos -- acknowledging that the matching, which is algorithmically “intended to err on the side of precision,” will not catch every image that looks like a match to users.
“We will work to tune this to expand coverage while avoiding erroneous matches,” the company stated.
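Twitter has not disclosed how it decides that two images “match,” but precision-first near-duplicate detection is commonly built on perceptual hashing, where visually similar images produce hashes within a small Hamming distance of each other. The sketch below is a hand-rolled average hash in Python using Pillow; the grid size and distance threshold are illustrative assumptions, not Twitter’s actual parameters.

```python
# Illustrative sketch only: Twitter has not published its matching method.
# Average-hash ("aHash") perceptual hashing; near-duplicate images yield
# hashes that differ in only a few bits, even after resizing or re-encoding.
from PIL import Image

HASH_SIZE = 8          # 8x8 grid -> 64-bit hash (assumed, not Twitter's value)
MATCH_THRESHOLD = 5    # max differing bits to call a match (assumed)

def average_hash(path: str) -> int:
    """Downscale to a grayscale 8x8 grid, then set a bit per pixel above the mean."""
    img = Image.open(path).convert("L").resize((HASH_SIZE, HASH_SIZE), Image.LANCZOS)
    pixels = list(img.getdata())
    mean = sum(pixels) / len(pixels)
    bits = 0
    for pixel in pixels:
        bits = (bits << 1) | (1 if pixel > mean else 0)
    return bits

def is_match(hash_a: int, hash_b: int) -> bool:
    """Hamming distance between the two hashes, compared to the threshold."""
    return bin(hash_a ^ hash_b).count("1") <= MATCH_THRESHOLD
```

In a scheme like this, lowering the threshold is how a system “errs on the side of precision”: fewer false matches, at the cost of missing more heavily edited copies.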
The announcement of Notes on Media comes days after an AI-generated image depicting a fabricated attack on the Pentagon appeared in a tweet. The image spread quickly across the platform, with prominent accounts retweeting it.
Community Notes has become Twitter’s solution to the mass cuts to moderation staff at the company since Elon Musk’s takeover last fall. Without a dedicated team combating disinformation, especially in the budding age of AI-generated content and deepfakes, it’s difficult to know how effective relying on users will be.
Over the weekend, Twitter officially dropped out of the EU’s Code of Practice on Disinformation, which was devised to keep major online platforms from profiting off disinformation and fake news.