“Marketers are right to be concerned when they find their advertising near misleading content as, unchecked, it could harm their reputations and the communities they serve,” said Harrison Boys, director of standards and investment product EMEA at Magna and an author of the report. “The industry, which joined forces against online hate speech and supported online privacy, needs to take a stand against misinformation and disinformation today.”
The report notes, for example, that 85% of U.K. consumers polled by the Trustworthy Accountability Group and the Brand Safety Institute said they would reduce or stop buying from brands that advertise near COVID misinformation.
For instance, only LinkedIn, Pinterest and Twitch explicitly prohibit user-generated misinformation in their policies, and only those platforms plus Snapchat and TikTok prohibit disinformation. The other major platforms, including Facebook, Instagram, YouTube and Twitter, attach conditions that let users circumvent efforts to stop mis- and disinformation.
The authors note that Pinterest has made a “U-turn” on handling misinformation since COVID, now suspending accounts that violate its policy and fact-checking ones with large followings.
They also describe Reddit as being at a “turning point,” with community moderation helping to keep misinformation in check in some subreddits.
Interestingly, all but three of the 10 platforms analyzed do explicitly prohibit misinformation from advertisers. (TikTok, Twitter and YouTube have “conditional” policies for advertisers.)
“While some platforms have policies on disinformation and misinformation, they are often vague or inconsistent, opening the door to bad actors exploiting platforms in a way that causes real-world harm to society and brands,” said Joshua Lowcock, global chief brand safety officer and U.S. chief digital and innovation officer at Mediabrands’ UM Worldwide agency, and an author of the report.
“With many platforms embracing or pivoting to the creator economy, platforms need to hold the organic reach of individual user content to the same standards of accountability as paid reach on the platform.”
The report says that social platforms should not only ban disinformation and report on enforcement — as many have done in regard to hate speech — but also work together to enable more consistent policies and enforcement across the social ecosystem.
It also advocates that brands, in addition to monitoring and pressuring social platforms, use NewsGuard and other tools that identify reliable news sources.