House Bill Would Require Platforms To Offer Opt-Outs From Personalized Content

Lawmakers in the House have introduced a bill that would regulate how Facebook and other big tech platforms personalize content for users.

The bipartisan “Filter Bubble Transparency Act” would require large platforms to let users opt out of personalization driven by “opaque” algorithms -- meaning ranking systems fueled by data that people don't explicitly provide in order to use the service.

Examples include “the history of the user’s connected device, including the user’s history of web searches and browsing, geographical locations, physical activity, device interaction, and financial transactions.”

The proposed law wouldn't force companies to allow users to opt out of algorithms based on some data provided directly by users -- including information about their preferred language and the accounts they follow.

Representatives Ken Buck (R-Colorado), David Cicilline (D-Rhode Island), Lori Trahan (D-Massachusetts) and Burgess Owens (R-Utah) are sponsoring the bill.

Its requirements would apply only to companies that have at least 500 employees, take in at least $50 million in annual revenue, and collect data from at least 1 million users a year.

The industry-funded think tank Chamber of Progress opposes the measure, arguing that algorithms help filter out objectionable content.

“Algorithms are what protect people online from spam, scams, and hate speech, so requiring Facebook and Twitter to provide algorithm-free feeds is like demanding that they offer a ride in the Internet’s worst filth,” Chamber of Progress CEO Adam Kovacevich stated this week.

The House bill is a companion bill to one proposed two years ago by Senators John Thune (R-South Dakota), Mark Warner (D-Virginia), Marsha Blackburn (R-Tennessee), Jerry Moran (R-Kansas) and Richard Blumenthal (D-Connecticut).

The proposed law is just one of several bills that attempt to regulate online algorithms.

Among other examples, four Democratic lawmakers recently introduced the “Justice Against Malicious Algorithms Act,” which would strip companies of Section 230 of the Communications Decency Act protections for content that has been recommended to users based on "personalized algorithms," meaning algorithms fueled by information about individual users.

Section 230 of the Communications Decency Act protects websites from liability for most forms of content posted by users, including speech that is unlawful because it is defamatory.

The First Amendment separately protects companies' right to publish all legal speech, including speech that is racist, sexist or otherwise offensive.

It's not clear whether laws restricting the use of algorithms would survive a First Amendment challenge.

Stanford Law School's Daphne Keller, who has extensively studied platform regulation, wrote earlier this year that attempts to regulate amplification algorithms would “definitely face major uphill battles.”
