Moral outrage — whether merited or made up — is often the lifeblood of social-media platforms.
It’s a safe bet Donald Trump spent more time on Twitter than reading critical briefs. It was the communication medium of revolt on Jan. 6, when insurrectionists stormed the Capitol and built a scaffold to punish then-Vice President Mike Pence for presiding over a legal election tally.
But what is the corrosive effect of such incessant information over time?
Fake news about COVID-19 or vaccines has prevented people from getting vaccinated. Even former Attorney General Bill Barr said he could not find any evidence of widespread voter fraud — yet the “Big Lie” about a stolen presidential election continues to feed outrage in the social-media sphere — and in new voter-suppression legislation in Georgia and Texas.
Researchers at Yale University used AI to scan the tweets of more than 7,000 people to determine how moral outrage affected behavior.
The new Yale study finds that social-media platforms such as Facebook and Twitter amplify expressions of moral outrage because users who employ such language earn more “likes” and “shares.”
And that feedback loop feeds on itself: the more positive the reinforcement, the more intense future posts become. That dynamic explains how some users, often those embedded in politically moderate networks, may be radicalized over time, according to the researchers.
Molly Crockett, the paper’s senior author and an associate professor at Yale University, was even more pointed: “Amplification of moral outrage is a clear consequence of social media’s business model, which optimizes for user engagement.
“Given that moral outrage plays a crucial role in social and political change, we should be aware that tech companies, through the design of their platforms, have the ability to influence the success or failure of collective movements,” she noted.
The results also underscore how readily these incentives can deepen political polarization.
“Social media’s incentives are changing the tone of our political conversations online,” said William Brady, a postdoctoral researcher in the Yale department of psychology and co-author of the study.
In their initial analysis, the team also found that outrage was expressed more widely among members of politically extreme online networks.
The Yale findings aren’t news to the White House, which has already taken note of social media’s perils.
On July 20, White House communications director Kate Bedingfield said social-media companies should be held accountable for publishing misinformation. That could include amending Section 230 of the Communications Decency Act. President Biden was particularly concerned about disinformation about COVID-19 vaccines shared on these platforms.
Section 230 protects internet service providers and website companies from being held liable for content created by users. Facebook has long argued it is not a publisher and therefore, not responsible for postings.
Many in Congress disagree, but it’s unclear how far they will push to change the law.