
For nearly three decades, Silicon Valley has relied
on a simple legal defense when harm shows up on its platforms: “We didn’t create the content.”
That argument, grounded in Section 230 of the Communications Decency Act, has
been one of the most effective liability shields in modern corporate history. It allowed platforms to scale at global speed while avoiding responsibility for what users post, share, and amplify.
Lawsuits were routinely dismissed before they reached discovery. Executives rarely had to testify. The system worked.
What changed last week is not the statute. It’s the reality around
it.
In K.G.M. v. Meta Platforms, Inc., et al., a Los Angeles jury found Meta and YouTube negligent in the design and operation of their platforms, concluding that those failures were a
substantial factor in causing harm to a young user. The jury awarded $3 million in damages and assigned 70% responsibility to Meta and 30% to YouTube.
This was not a case about a single
post. It was not a case about whether a platform failed to remove content quickly enough. It was a case about how the system itself works.
And the jury found that system wanting.
“This verdict puts a stake in the ground,” I said in a statement following the decision. “A jury reviewed how these systems are built and concluded that harm was not incidental.
Profiting from misinformation just got a lot harder to do.”
That distinction matters more than the dollar amount. Because it signals a shift that cuts directly through the core defense
platforms have relied on for decades.
For years, the argument has been that platforms are passive, that they simply host what users bring to them. This verdict rejects that. It recognizes that
these systems are engineered environments, and that engineering carries responsibility.
The legal system is beginning to ask a different question. Not what was said, but how the product was
built.
That shift, from speech to product, changes everything.
If a platform is merely hosting content, Section 230 still applies. It remains a powerful shield when the claim is that a
company should be treated as the publisher of user-generated speech. That protection has not disappeared.
But when a platform is designed in a way that predictably drives harmful outcomes,
particularly for minors, the legal analysis starts to look very different. It begins to resemble product liability.
And product liability does not turn on who wrote the content.
It asks whether harm was foreseeable.
It asks whether the company understood the risks.
It asks whether those risks were mitigated or ignored.
In this case, the focus was not on
individual posts but on the architecture of the platforms themselves: recommendation systems that learn what keeps a user engaged and deliver more of it. Design features built to maximize time,
repetition, and emotional response. Feedback loops that can push vulnerable users deeper into harmful content.
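To make that dynamic concrete, here is a minimal, entirely hypothetical sketch in Python of the kind of feedback loop described above. The topic labels, scores, and epsilon-greedy update rule are illustrative assumptions, not any platform's actual code; real recommendation systems are vastly more complex. But the core mechanic is the one at issue: the system learns what holds a user's attention and serves more of it, with no regard for what the content is.

    import random

    random.seed(42)  # deterministic for the illustration

    # Hypothetical per-topic engagement estimates for one user. The topic
    # names and starting values are invented for this sketch; they stand in
    # for whatever signals a real recommender learns at far greater scale.
    engagement = {"sports": 0.20, "comedy": 0.30, "harmful": 0.20}

    LEARNING_RATE = 0.1   # how fast the model chases the engagement signal
    EXPLORE_RATE = 0.1    # occasionally try something other than the current best

    def recommend():
        """Serve the topic the model currently believes holds this user best."""
        if random.random() < EXPLORE_RATE:
            return random.choice(list(engagement))
        return max(engagement, key=engagement.get)

    def observe(topic, watch_fraction):
        """Nudge the topic's score toward the observed watch time."""
        engagement[topic] += LEARNING_RATE * (watch_fraction - engagement[topic])

    # Simulate a vulnerable user: harmful content happens to hold their
    # attention longest, so it produces the strongest engagement signal.
    for _ in range(200):
        topic = recommend()
        watch = 0.9 if topic == "harmful" else random.uniform(0.1, 0.4)
        observe(topic, watch)

    print(engagement)
    # The loop never asks what the content is, only what keeps the user
    # watching, so the harmful topic ends up dominating the feed.

Nothing in that loop distinguishes a comedy binge from a spiral into harmful material. That indifference is precisely what plaintiffs now frame as a design defect.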
“When a product is designed to maximize time, repetition, and emotional
response, you can’t separate the design from the outcome,” I said. “That connection is now on the record in a court of law.”
For years, comparisons between social media
and Big Tobacco have been easy to dismiss as rhetorical. Now they are being tested as legal strategy.
The New York Times, in covering the case, drew a direct line to the tobacco
litigation of the 1990s, when companies were accused of hiding what they knew about harm while continuing to market aggressively, particularly to young users. That litigation reshaped an industry. It also exposed internal documents that fundamentally changed public understanding of what those companies knew, and when they knew it.
Something similar is now beginning to happen in the technology sector.
The most immediate
shift is procedural. Claims that once would have been dismissed under Section 230 are now surviving long enough to reach discovery. That means internal research, product decisions, and executive
communications are no longer theoretical. They are evidence.
It also means that juries, not just judges, are beginning to weigh in.
And juries do not think in terms of safe harbors.
They think in terms of responsibility.
Did the company know?
Did it design for this outcome?
Did it profit from it?
Those questions are now being asked in courtrooms.
None of this means Section 230 has disappeared. It continues to provide broad protection when platforms host third-party content or make moderation decisions. If a user posts something defamatory
or false, the platform is generally not treated as the speaker. That remains a foundational part of internet law.
But that protection was built for a different kind of system.
Section
230 assumed a world in which platforms functioned primarily as conduits. It drew a line between hosting content and creating it. That line becomes harder to maintain when platforms are actively
curating, ranking, and recommending content in ways that shape behavior.
“What young people have been describing for years is now being validated in a different arena,” said Emma
Lembke, director of Gen Z advocacy at the Sustainable Media Center. “This case acknowledges that these platforms don’t just host behavior, they shape it in ways that can have real
consequences.”
Once a case is framed around those design choices rather than the underlying speech, Section 230 becomes less decisive. It may still apply in part, but it no longer ends
the analysis at the outset. Plaintiffs do not need to dismantle the statute. They need to demonstrate that the harm arises from the system itself.
That is the end run now taking shape in the
courts.
Calling Section 230 “dead” is not a literal claim about the statute. It is a recognition that its function has changed. It no longer operates as a universal shield capable
of shutting down entire categories of claims before they begin. It is becoming narrower, more conditional, and more dependent on how a case is framed.
“There’s a shift happening
from awareness to accountability,” Lembke said. “And once that shift happens, it becomes much harder for companies to dismiss harm as anecdotal or unavoidable.”
The broader
significance of the verdict is not limited to a single case. It is expected to shape how thousands of similar claims are evaluated across the country. It signals that the legal system is willing to
examine platform design, not just platform content.
That alone changes the balance.
For years, the technology industry has argued that it is in the business of hosting speech. That
framing carried both legal protection and cultural legitimacy. But as the architecture of these systems becomes better understood, that claim is being tested.
Platforms do not simply transmit
information. They structure attention. They influence behavior. They optimize for engagement at a scale that has no historical precedent.
The law is beginning to catch up.
Section 230
remains on the books. But the world it was designed to govern has changed.
And in that sense, its power, as it was originally understood, is already in decline.