The major social-media platforms are urging a federal judge to dismiss a sprawling class-action complaint alleging that Facebook, Instagram and others have injured teens' mental health.
The lawsuit represents “the most recent in a long series of cases asking courts to impose product liability rules on speech and technologies that communicate it, including book publishing, movie and television distribution, video games, and, more recently, online services,” attorneys for Meta, TikTok, Google and others say in a motion filed Monday with U.S. District Court Judge Yvonne Gonzalez Rogers in Oakland, California.
She is presiding over litigation brought by dozens of parents, teens and others who claim Facebook, Instagram, TikTok and other social media platforms have harmed minors. The complaint raises numerous claims, but largely centers on the theory that social media companies designed their services to be addictive, and then served minors with potentially harmful content that other users had posted.
The companies say they plan to argue they are protected by the First Amendment as well as Section 230 of the Communications Decency Act, but are awaiting a decision from the Supreme Court about the extent to which Section 230 applies when companies algorithmically promote content. Section 230 generally immunizes online companies from liability for material posted by users.
For now, the social media platforms argue that the allegations in the complaint, even if true, wouldn't prove the various negligence and product liability claims against the companies.
For instance, the companies say they can't be held liable for negligence based on publishing content, because publishers “have no duty to protect their audience from the effects of content publication or consumption.”
The companies add that there is no legal obligation to prevent “addictive” services.
“No court has created a duty of the type plaintiffs allege here in cases involving video game makers, magazine publishers, or television producers, and there is no basis for one here,” the companies write.
The platforms also say the allegations in the complaint, even if proven true, wouldn't establish that the companies caused teens' injuries.
“Plaintiffs do not include a single allegation attempting to link the alleged injuries to any individual plaintiff, or even to identify which plaintiffs used which features of the various services,” the companies write.
The companies add that the plaintiffs are attempting to “commandeer” discussion about teens' mental health by seeking to apply legal principles governing physical products -- such as safety requirements for cars -- to online speech.
“Time and again ... courts have rejected such attempts to transform the body of law governing dangerous 'products' into a tool to regulate how information is disseminated. For good reason: such claims are unsupported by any precedent anywhere and threaten to chill all manner of protected expression,” the tech companies write.