Three of the biggest platforms responsible for the spread of disinformation and divisive content online -- Facebook, YouTube and Twitter -- have taken an important step toward a new self-regulatory process negotiated by the World Federation of Advertisers (WFA).
The platforms have each agreed to adopt a common set of definitions for hate speech and other harmful content, and said they will collaborate on developing industry monitoring efforts to help curtail the distribution of harmful content in the future.
The agreement is the result of 15 months of “intensive talks” among marketers, agencies and the digital platforms, conducted under the auspices of the Global Alliance for Responsible Media (GARM), a cross-industry initiative organized by the WFA and supported by other ad trade organizations, including the Association of National Advertisers and the American Association of Advertising Agencies.
The group noted that at least three other platforms -- social networks TikTok, Pinterest and Snap -- have also given “firm commitments” to provide plans for developing similar controls by year-end.
Meanwhile, the agreement with Facebook, YouTube and Twitter identifies four key areas for action:
Adoption of GARM common definitions for harmful content
Development of GARM reporting standards on harmful content
Commitment to have independent oversight on operations, integrations and reporting
Commitment to develop and deploy tools to better manage advertising adjacency
“As funders of the online ecosystem, advertisers have a critical role to play in driving positive change and we are pleased to have reached agreement with the platforms on an action plan and timeline in order to make the necessary improvements,” WFA CEO Stephan Loerke stated, adding: “A safer social media environment will provide huge benefits not just for advertisers and society but also to the platforms themselves.”
As important as the common definitions of harmful content are, the next step -- developing a “harmonized” approach to reporting and monitoring the distribution of such content -- will be crucial for industry self-regulation.
Importantly, the WFA said “independent oversight” will be essential for compliance, noting in a statement: “With the stakes so high, brands, agencies, and platforms need an independent view on how individual participants are categorizing, eliminating, and reporting harmful content. A third-party verification mechanism is critical to driving trust among all stakeholders.”
The WFA said the goal is to have all major platforms “fully audited or in the process of auditing” by year-end, but it did not disclose which entities would be conducting such audits.
Lastly, the WFA said the reporting and monitoring system will also lead to new “advertising adjacency solutions” that will help advertisers and agencies avoid placing ads next to harmful content in the future.
Let me understand. In the future, I may say something deemed hateful if I mock the religious teachings and followers of Mohammad but will be celebrated if I say Jesus and his followers are murderers? YES, the standards have not been finalized...but WHO is developing these standards? Jon Garth Murray, Madalyn's son? AOC? Bet there will be zero Christian conservatives or any conservative thought there. Have they asked JOE if he will be a reviewer yet?
@Michael Pursel: I have not seen GARM's definitions yet, but speaking for MediaPost's guidelines, I don't think mocking constitutes hate speech (your comment being a case in point).
It is abusive or threatening speech that expresses prejudice against a particular group, especially on the basis of race, religion, or sexual orientation.
In terms of other forms of harmful content, we'll have to see how GARM defines that too, but I would guess it is anything that is clearly false and intended to cause harm by misinforming others.
GARM's "standards" are likely to be selections from the usual menu of Politically Correct fascism that has overtaken the media, and everyone involved in media production should immediately review the GARM standards and take action to expose and challenge the new "standards" if they prove to be more of the same.
It's time to force social media to conform to the Bill of Rights.
Ahh, the delicious irony of proposing to "force" someone or something to "conform" -- in this case, to the US Bill of Rights.