This wasn’t supposed to be a sequel.
But looking back at part one, I realized that some things need to be said twice — maybe
louder, and with less patience.
The past critiques almost feel innocent now. Meta wanting to challenge media agencies? X calling itself the “megaphone for
truth”? Cute, in retrospect.
Because while we used to debate problematic data collection, misinformation, and brand safety drama — things we’ve sadly
normalized — the headlines today read like rock bottom: child-grooming bots, conspiracy-peddling AI companions, and platforms doubling down on stickiness at all costs.
This time it’s AI.
From Meta’s grooming bot scandals to Grok’s flirty AI persona (or its full-blown conspiracy-theorist version), there is
apparently a twisted AI companion for everyone.
The Moral Compass Problem
The obsession now? Making AI even more addictive — more
addictive than social media ever was.
Yes, engagement drives revenue. But we’re talking about one of the most powerful technologies of our time, and the priority
is… flirty audio-bots? AI girlfriends? Attention-hacking short-form video? Gamified everything?
It’s a moral compass pointing nowhere.
Regulation, Self-Regulation… and the Reality
Regulators are starting to react. Thankfully, when children are involved, things do start moving — but it’s extremely, painfully slow.
Self-regulation? Please, let’s not hold our breath. Decades of experience show there’s nothing to expect.
That leaves… advertisers.
The Brand Safety Blind Spot
Brand safety today is microscopic: keep ads away from “risky” content — which, ironically, is mostly premium news sites, where the “risk” is a journalist reporting on war or politics.
Meanwhile, ads keep flowing to platforms where the entire environment is the problem. Meta. Grok. You name it. Ads may not appear (yet) next to grooming bots or conspiracy chatbots, but they do run on the publishers hosting them. In other words: ads can appear on a publisher with a rotten reputation. And that should raise every red flag there is.
But who are we kidding? Audiences are there. Ad budgets will keep following them. No second thoughts — all while we’re well aware that these are exactly the dollars funding these AI products and the companies hosting, enabling and developing them.
You Reap What You Sow
So here we go again…
This isn’t about cancel culture. It’s about accountability.
It’s about the long-term
consequences of building an industry on volume, velocity, and plausible deniability.
Advertising has spent the last decade optimizing for conversion, automation, and scale,
while neglecting credibility, trust, and shared responsibility.
And no, there’s nothing wrong with AI per se. But there is something fundamentally wrong with AI built without guardrails or ethics, with nothing but profit in mind.
Time to Rethink What We’re Growing
We can’t fix everything. But we
can stop pretending this is fine.
We can demand better from platforms — and from ourselves.
We don’t need more acronyms. We need a moral
compass.
We don’t need another workaround. We need a stance.
Because in this industry, as in life, you reap what you sow.