
A jury in New Mexico on Tuesday found Meta
Platforms liable for violating a state consumer protection law, and ordered the company to pay $375 million in fines.
The verdict, reached after a six-week trial, came in a
lawsuit brought in 2023 by state Attorney General Raúl Torrez. He alleged in a sprawling 228-page complaint that the company "knowingly exposes children to the twin dangers of sexual
exploitation and mental health harm."
Meta spokesperson Andy Stone tweeted Tuesday evening that the company disagrees with the verdict and will appeal.
"We work hard to keep people safe on our platforms and are clear about the challenges of identifying and removing bad actors or harmful content," Stone tweeted. "We will continue to defend ourselves vigorously, and we remain confident in our record of protecting teens online."
Torrez alleged in the original complaint that Facebook and Instagram "are a breeding ground for predators who target children for human trafficking, the distribution of sexual
images, grooming, and solicitation."
He added that Meta allows adults to groom underage users by giving adults "unfettered access" to children.
The complaint also alleged that Meta used design features such as automatically playing videos even though the company "knew that design features fostered addiction, anxiety, depression, self-harm, and suicide among teens and preteens."
He claimed Meta violated the state's Unfair Practices Act on several grounds, including by misrepresenting the safety of its apps.
The verdict against Meta came as a jury in Los Angeles continued to deliberate whether Meta and YouTube are liable for injuries suffered by a 20-year-old woman who alleged that she
became addicted to social media as a child.
Meta, YouTube, TikTok and other platforms are currently facing numerous complaints in federal and state courts over allegations that they addict
young users and then serve them with harmful content.
The tech companies have typically argued that they are protected by Section 230 of the Communications Decency Act --
which provides that web companies aren't responsible for harmful content posted by users -- as well as the First Amendment, which protects companies' ability to publish lawful speech.
Plaintiffs and attorneys general have countered that many of their claims focus on design features such as algorithmic recommendations and automatically playing videos -- not the
content itself.
The Supreme Court hasn't yet weighed in on whether Section 230 protects publishers' choices about recommendations to users and other design features, and lower court judges have reached seemingly contradictory rulings.
For instance, Los Angeles Superior Court Judge Carolyn Kuhl, who is presiding over the ongoing case involving Meta and
YouTube in that city, ruled in 2023 that Section 230 did not immunize tech companies from liability over design features aimed at maximizing the amount of time people spend on social media.
But U.S. District Court Judge Yvonne Gonzalez Rogers, who presides over the federal litigation against social platforms, ruled that Section 230 protected the platforms from some claims
about allegedly addictive features, but not from claims that the platforms failed to warn users about potential harms.