Another week, another set of lawsuits against Meta alleging ad fraud that benefited Meta's bottom line. That bottom line? Meta reported net income of $26.8 billion for Q1 2026, a massive 61%
year-over-year increase. Total revenue was $56.3 billion, up 33% from the same quarter in 2025.
One case comes from the Center for Countering Digital Hate (CCDH), which found that Meta failed to curb
malicious Medicare-related advertisements that earned the platform $14.3 million in ad revenue in 2025. The other, from the Consumer Federation of America, alleges that Meta Platforms is
"knowingly targeting and profiting" from fraudulent ads and misleading Facebook and Instagram users about the risk of scams.
We have always known that social media platforms are highly
sophisticated machines built to optimize for engagement and revenue. And apparently, when the machine sees that scammy, high-urgency Medicare ads are driving clicks, it does what it was designed to
do: It scales them. The CCDH lawsuit alleges that Meta essentially functioned as a "scam-as-a-service" provider.
The frequency of these cases points to a systemic, multibillion-dollar problem: a
platform allegedly profiting from highly coordinated scams and fraudulent schemes.
But the legal landscape for platforms has shifted from theoretical debate to high-stakes accountability. Two
groundbreaking judgments in March 2026—one in New Mexico and one in California—indicate that platform design now carries massive liabilities.
A California jury found Meta (along
with Alphabet's Google) liable in a landmark lawsuit, concluding that the companies designed their platforms to be addictive to teens and failed to warn about the risks, thereby contributing to the youth mental health crisis.
Meanwhile, a New Mexico jury found that Meta (parent company of Facebook, Instagram, and WhatsApp) violated state consumer protection laws by failing to protect children from sexual exploitation on
its platforms.
Both cases signaled a huge shift in legal responsibility. For the first time, courts agreed that the design of the platform itself, not just the content on it, is the problem. These lawsuits
targeted the platforms' technology as the cause of the harm, and that theory makes every platform vulnerable. These court cases will no doubt continue to play out, but the shift is important. The new fraud cases
will undoubtedly draw on the child-protection cases and may explore whether Meta's technology can be identified as harmful in these instances as well.
The Medicare fraud case alleges an earned
benefit to Meta of just over $14 million. What I do not understand is why, if Meta is making almost $27 BILLION -- with a “B”! -- in a quarter, it does not address a fraud issue
worth just 0.05% of quarterly earnings ($14.3 million against $26.8 billion in net income). It can mean only one of two things: (1) Meta doesn't know, or (2) Meta doesn't care.
Both possibilities are concerning for marketers. If Meta does not
know, then every advertiser runs the risk of unwittingly helping to enable fraud. I do not need to explain why answer (2) may be even worse.
When a platform’s business model is
found to be "malicious" or "fraudulent" in court, your media spend is effectively subsidizing a consumer safety risk. If Meta knew its products were addictive but "deliberately designed" them to
entice young users, you have to ask whether your brand values align with that level of risk. If, on top of that, the company has no way of knowing about fraudulent ads, or does not care to know, that risk
runs even higher.