Google is urging the Supreme Court to rule that Section 230 of the Communications Decency Act, which the company calls “the central building block of the internet,” protects platforms from liability not only for hosting terrorism-related material created by users, but also for recommending that material to others.
A contrary interpretation “could have devastating spillover effects,” Google writes in papers filed with the court on Thursday.
“Recommendation algorithms are what make it possible to find the needles in humanity’s largest haystack,” Google writes.
The company's papers come in a lawsuit brought by the family of Nohemi Gonzalez, who was killed at age 23 in a November 2015 terrorist attack in Paris.
Her family sued Google, alleging that the company not only allowed ISIS to post videos to YouTube, but also helped ISIS spread terrorist propaganda by recommending its videos to other users.
A trial judge dismissed the lawsuit, and the 9th Circuit Court of Appeals refused to revive the family's case.
The 9th Circuit said in its ruling that Section 230, a 1996 law that immunizes web sites from lawsuits over material created by third parties, protected Google.
The Supreme Court agreed in October to take up the matter, along with a related lawsuit over terrorist content on social media services. The court will hear arguments in both cases in February.
Last month, the Gonzalez family urged the Supreme Court to rule that Section 230 doesn't protect web companies when they recommend videos. The family essentially argues that YouTube recommendations came from the company, not the ISIS members who posted the videos, and should therefore be treated as YouTube's speech.
“YouTube provides content to users, not in response to a specific request from the user, but 'based upon' what YouTube thinks the user would be interested in,” lawyers for the Gonzalez family wrote.
But Google counters that recommendation algorithms are essential to the modern internet.
“Travel websites like Expedia use algorithms to recommend cheap flights by examining all possible routes, airlines, prices, and layovers,” Google adds. “Streaming services like Spotify and Netflix use algorithms to recommend songs, movies, and TV shows based on users’ listening or watch histories, their ratings of other content, and similar users’ preferences.”
The battle has drawn interest from a wide range of outside groups and individuals, including some who suggested in friend-of-the-court briefs that Section 230 shouldn't protect websites from liability for distributing content they have reason to know could be harmful.
Google argued that these friend-of-the-court interpretations of Section 230 would “upend the internet” by encouraging websites to either remove anything that could be considered objectionable, or disable any filtering tools in order to “take the see-no-evil approach.”
Three years ago, the Supreme Court refused to hear an appeal in a similar dispute involving Facebook. In that case, attorneys for victims of terrorist attacks in Israel argued that Facebook's alleged use of algorithms to recommend content should strip the company of immunity.
A panel of the 2nd Circuit Court of Appeals voted 2-1 in favor of Facebook.
Circuit Judge Christopher Droney, who authored the majority opinion, wrote that web publishers typically recommend third-party content to users -- such as by prominently featuring it on a homepage, or displaying English-language articles to users who speak English. Basing those recommendations on algorithms didn't make Facebook liable, Droney wrote.
Since then, Supreme Court Justice Clarence Thomas has repeatedly suggested that lower court judges have interpreted Section 230 too broadly.
What the Supreme Court should consider in the Gonzalez case is that Google has no guardrails in place to block a terrorist video before it gets posted. Under that approach, a YouTube video would first be submitted and then determined to be valid before publication. That would be a fair understanding of how Section 230 should work.

Instead, Google takes the position that Section 230 should apply only after the video is posted.

The obvious problem for Google and YouTube is that they don't want to filter out the bad videos, whether by charging uploaders a fee and reviewing the material beforehand or by other means. Google has committed to a fully automated system in which no employee is involved before a video is posted. That is the problem: Google uses Section 230 to save money by not hiring employees. The same happens in Google Search, where spam links could be filtered out by employees once they go up, but only a very weak effort is made to stop the attacks.