Last week, former Facebook employee Frances Haugen urged Congress to revise Section 230 of the Communications Decency Act, the 25-year-old internet law that shields companies from lawsuits over content created by their users.
Haugen specifically argued that people should be able to sue companies for algorithmically promoting potentially harmful user-generated content -- such as posts linked to eating disorders.
On Thursday, four Democratic lawmakers introduced the “Justice Against Malicious Algorithms Act,” which, as Haugen suggested, takes aim at content-recommendation engines.
The bill -- introduced by Reps. Frank Pallone, Jr. (D-New Jersey), Mike Doyle (D-Pennsylvania), Jan Schakowsky (D-Illinois), and Anna Eshoo (D-California) -- would strip companies of Section 230 protections for content that's been recommended to users based on "personalized algorithms," meaning algorithms fueled by information about individual users.
At first glance, bills regulating algorithms might seem to avoid the free speech concerns raised by laws regulating content -- after all, restricting only a company's use of algorithms doesn't appear to interfere with users' ability to post material to Facebook and other online services.
But in reality, attempts to regulate algorithms might well be unconstitutional. That's because the First Amendment doesn't just protect companies' right to publish any lawful speech, including speech that's racist, sexist or otherwise objectionable. It also protects companies' right to distribute that speech.
Stanford Law School's Daphne Keller, who has extensively studied platform regulation, wrote earlier this year that bills regulating amplification algorithms wouldn't be “automatically unconstitutional,” but would “definitely face major uphill battles.”
Keller also noted the Supreme Court opined on a similar issue more than 20 years ago, when Justice Anthony Kennedy wrote: “The distinction between laws burdening and laws banning speech is but a matter of degree. The Government's content-based burdens must satisfy the same rigorous scrutiny as its content-based bans.”
The industry-funded advocacy group Chamber of Progress raises another problem with the proposed bill: It could worsen the very problems Congress hopes to solve.
“By prohibiting companies from using personal data to recommend relevant content to users, platforms could be forced to rely more heavily on metrics like viral engagement that result in the spread of bad content,” Chamber of Progress CEO Adam Kovacevich said. “That’s a disaster for the newsfeed.”
The watchdog Fight for the Future adds that the bill “would function more like a 230 repeal than reform, because it opens the floodgates for frivolous lawsuits.”
The organization adds that the best way to address “surveillance-driven algorithms” is by passing privacy laws.