A federal appeals court on Friday upheld a block on key provisions of California's Age-Appropriate Design Code -- a 2022 law that regulates online companies' ability to display content to minors and harness their data.
The ruling partially affirmed a decision issued last year by U.S. District Court Judge Beth Labson Freeman, who struck down the entire law.
Among other provisions, the statute requires companies whose online services are likely to be accessed by users under 18 to evaluate whether the design of those services could expose minors to “potentially harmful” content, and to mitigate that potential harm.
The law also includes privacy provisions, such as a requirement to configure default settings in a privacy-protective way, unless the business can show a “compelling reason that a different setting is in the best interests of children.”
In a unanimous 44-page ruling, a three-judge panel of the 9th Circuit Court of Appeals said the law's mandate to report on and mitigate potential harms to minors' well-being likely violates the First Amendment.
That mandate -- referred to in the opinion as the “report requirement” -- “deputizes covered businesses into serving as censors for the state,” Circuit Judge Milan Smith wrote in an opinion joined by Judges Mark Bennett and Anthony Johnstone.
Smith added that the report requirement reflects an attempt to “indirectly censor the material available to children online, by delegating the controversial question of what content may [harm children] to the companies themselves.”
Smith also wrote that California lawmakers could have addressed concerns about potential online harms to minors without requiring companies to make determinations about content.
“A disclosure regime that requires the forced creation and disclosure of highly subjective opinions about content-related harms to children is unnecessary for fostering a proactive environment in which companies, the state, and the general public work to protect children’s safety online,” he wrote.
“For instance, the state could have developed a disclosure regime that defined data management practices and product designs without reference to whether children would be exposed to harmful or potentially harmful content or proxies for content,” he added.
The ruling comes in a challenge to the law brought by the tech industry group NetChoice, which said the statute violates the First Amendment. The Reporters Committee for Freedom of the Press and 14 news organizations backed NetChoice, arguing in a friend-of-the-court brief that the law would unconstitutionally restrict news organizations' ability to publish lawful material.
“As the Supreme Court has repeatedly reaffirmed, the publication of even hateful, violent, and profane speech is fully protected by the First Amendment,” the groups wrote.
They added that the law's requirement to mitigate potential harms is unconstitutionally vague.
“One regulator may consider 'potentially harmful' editorial content that includes candid and graphic coverage of gun violence or the wars in Ukraine or Gaza,” they wrote. “Another may consider 'potentially harmful' editorial content that describes opioid addiction or teen suicide in a manner the regulator disapproves of.”
Fairplay, the Center for Digital Democracy and Accountable Tech -- youth advocacy groups and critics of the tech industry -- had urged the 9th Circuit to uphold the law, arguing it was a valid regulation of “data capitalism.”
While Freeman blocked the entire law, the appellate panel said more evidence was needed to evaluate some of the privacy restrictions, and sent the matter back to Freeman for further proceedings on those provisions.
For instance, one of those provisions prohibits companies from using “dark patterns” -- broadly meaning manipulative user interfaces -- to encourage minors to provide more personal data than is “reasonably expected” to provide a particular service.
The appellate court essentially said Freeman struck down that provision prematurely.
“Based on the record developed so far in this litigation, it is unclear whether a 'dark pattern' itself constitutes protected speech and whether a ban on using 'dark patterns' should always trigger First Amendment scrutiny, and the district court never grappled with this question,” Smith wrote.