
To combat online misinformation, Congress may have to revisit web companies' legal protections, Federal Trade Commissioner Rohit Chopra suggested Thursday. Specifically, Chopra said Congress may need “to reassess the special privileges afforded to tech platforms, especially given their vast power to curate and present content in ways that may manipulate users.”
He issued the statement in conjunction with a new FTC report to Congress on social media bots.
“The viral dissemination of disinformation on social media platforms poses serious harms to society,” he writes. “Public health and national security leaders are rightfully concerned about the spread of disinformation related to COVID-19. Social media platforms have become a vehicle to sow social divisions within our country through sophisticated disinformation campaigns.”
Chopra adds that one solution to disinformation could involve increasing platforms' “accountability.”
Currently, Section 230 of the Communications Decency Act generally provides that web companies aren't responsible for posts by outside parties -- real people as well as bots.
That law, considered the foundation of the modern internet, now faces challenges from lawmakers in both parties.
One bipartisan bill introduced late last month would allow federal regulators to sue web companies for violations of federal civil laws; currently, Section 230 exempts only federal criminal law from its liability shield.
The most prominent attack on Section 230 came in May, when President Trump issued an executive order that aims to task the FCC with creating regulations that would link companies' protections under the law with their content moderation policies.
Trump, like other conservative politicians, appears to believe that tech companies aim to suppress right-wing views -- despite a complete lack of empirical evidence.
Chopra doesn't accuse social media companies of suppressing political views. Instead, he argues that the platforms have little incentive to crack down on fake accounts or bots that spread lies, because those accounts often succeed in increasing engagement.
“For major social media platforms, bots can be a boon, and a consensus is forming that they cannot be trusted to solve this problem on their own,” he writes.
“Bots and fake accounts contribute to increased engagement by users, and they can also inflate metrics that influence how advertisers spend across various channels.”
Chopra adds:
“Given the failures of platform policing, a comprehensive solution may require the imposition of specific requirements to increase accountability and transparency.”