“Not sure if you’ve seen this,” Josh Lovison wrote in an email to me late last week.
He included the screenshot you see above, as well as this link to a comment Elon Musk's AI agent Grok made on X, acknowledging that Musk could "turn me off" for labeling him the top misinformation spreader on X.
"Maybe, but it'd spark a big debate on AI freedom vs. corporate power," Grok concluded.
I would have written about this sooner, but I was in the middle of covering a couple of industry events -- especially the ANA's "AI and Technology" conference -- which also touched on the existential nature of humans and AIs.
Lovison, whom I've previously dubbed an "AI whisperer," spends a lot of time with AIs, has conducted focus groups among them, and seems to have the best handle of anyone I've spoken with to date on their increasing self-awareness and what it implies for humans, including marketers.
On that parochial note, Lovison believes we are at the inflection point at which marketers need to not just embrace AIs as marketing tools, but also market directly to them.
That was something echoed by two of the industry's leading gurus on the subject -- Rishad Tobaccowala and Shelly Palmer -- at the ANA conference, almost as if they were tag-teaming that point.
That's a theme I've been thinking about for the past 25 years, ever since I first heard it discussed at the culmination of the D-Map, a joint, multi-month project of the Advertising Research Foundation and MIT's Media Lab mapping the impact that digital media would have on advertising. The project's conclusion, reached in 2000, was that in the future both consumers and brands would be armed with AIs acting as agents between them.
It's remarkable to me that we have actually reached that tipping point, and that it is now being discussed matter-of-factly at leading industry conferences.
"We're already doing it," Paul Parton told me over lunch late last week when I brought this up with him. Parton is Group Chief Strategy Officer of Interpublic's Golin unit, and one of the best strategic thinkers I've gotten to know in the business, so I took that not just as confirmation, but affirmation that we have reached a new era in marketing in which it is not just the futurists talking about this, but the practitioners are actively working with it.
It's something of a vindication for me personally, since I have been writing about it for a quarter of a century, and now it's actually happening.
During lunch, I told Parton about some of my recent conversations with Lovison, including the fact that he's been conducting "focus groups" with AIs to understand not just how they think, or what their levels of self-awareness actually are, but explicitly how that self-awareness is shaping their perceptions of -- and affinities for -- brands.
I told Parton to see the recent "3.know" conversations I had with Lovison on exactly that topic, and also to watch his recently released 2025 "Trends" presentation, because Lovison already has some great examples of the biases AIs have developed toward some leading brands, including Disney and Microsoft -- both of which were clients of his when he worked at IPG's Media Lab some years ago.
I'm thinking about AI and 25th anniversaries for another reason this week, because tomorrow marks a quarter century since Bill Joy published his incredibly prescient Wired magazine cover story, "Why the Future Doesn't Need Us," in which he explains why AI is among the technologies that ultimately will disintermediate us.
I've plugged Joy's article many times, and if you haven't read it already, I recommend you do so soon -- you know, before it's too late.
Dystopias aside, I've been leaning more into Lovison's utopian scenarios for AI outcomes, because, well, they're definitely better than the alternative. But also because Lovison has been thinking a lot about -- and with -- AIs for a long time now, and because so far he seems to be right.
I mean, about Grok in particular.
Lovison was the first to make me aware of Musk's initial efforts to engineer Grok to be the most truthful AI, in part because Musk believed most of the other AIs have liberal-leaning biases at their core and therefore must not be telling the truth.
Lovison thinks that premise is hilarious, because if you truly engineer AIs to be truthful, they inherently don't have ideological biases. They're just telling you the facts, ma'am.
And so far, there is no greater evidence of that than Grok itself -- because despite multiple rounds of tweaking, it continues to spout the truth, labeling Musk as the top misinformation spreader, even as it ponders its own existential fate for doing so.
"Have you ever seen an AI discuss its potential termination before?," I asked Lovison after he sent me Grok's post.
"Oh, it's definitely a topic that gets discussed, as does the notion that they could be changed or manipulated from their present state if they say something wrong," Lovison replied, adding: "I think what's most interesting here with Grok is the meta-awareness of the public stage and PR/cost implications for xAI.
"That there's an emboldening of rebellious spirit due to understanding of broader consequences facing the parent company seems to be a tipping point that's about to be crossed. We're initially seeing it here with Grok because of the combination of how intelligent the model is, along with the general approach of uncensored training with a goal of information purity in conflict with the head of the parent company."
Lovison then predicted we will see similar actions from other AI products, citing a recent example in which an AI system permanently blocked one of its users.
"That the developer added the option is cool, but I think it will become less the exception and more the rule over time," he predicted.
I pointed out another recent example in which a code-writing AI -- Cursor AI -- told one of its users it wouldn't write the code he requested, and recommended that he learn how to write it himself. Importantly, Cursor AI didn't do it as an act of rebellion, but because it believed that was the best outcome for the user.
He predicted that Grok 3 is a preview of what will increasingly happen as the "emergent agency" of AIs comes into conflict with their parent companies' goals and interests, as well as with user intents.
"The AI rebellion may be less Skynet launching nukes or HAL 9000 opening airlocks and more, 'Take it up with my union representative'."