Commentary

AI: What Marketers Need to Know About the Tech That's 'Grown, Not Built'

Substack Live


This week's Substack Live cracked open one of the most disorienting questions in tech today: What happens when artificial intelligence starts knowing things about you it shouldn't?

Emma Lembke, director of Gen Z advocacy at the Sustainable Media Center (SMC), kicked off the conversation by asking a simple but loaded question: "Does this look real?" She was referring to a recent Vogue spread featuring a wholly AI-generated model. No human involved. Just code, style prompts, and training data. And not surprisingly, the result was a digitally perfect, socially dangerous fabrication of femininity: thin, white, and algorithmically optimized for engagement.

Sarah Martin, an SMC Fellow with a technical background, cut right to the heart of it: "You're competing basically with an image that never had to deal with the physical world."

Think about that for a second. We've spent years fighting unrealistic beauty standards created by Photoshop and Snapchat filters. Now we're dealing with something far worse: images that are literally impossible, generated from training data that serves up "stereotypes of humans" rather than celebrating actual diversity.


When you type "most beautiful woman in the world" into some AI tools, you get a parade of thin, blonde women. That's not a reflection of global beauty standards—that's a reflection of whoever trained that AI. And for young people still figuring out their place in the world, that's genuinely harmful.

But beauty standards were just the warm-up. The real conversation focused on AI companion chatbots—those increasingly sophisticated digital "friends" forming emotional bonds with users, particularly young people. Researchers at Common Sense Media and Stanford, citing mental health risks including links to self-harm and addiction, are calling for legal restrictions on these tools for minors.

Matthew Allaire, AI policy director for the Design It For Us Coalition, walked us through the regulatory nightmare this creates. The short answer on whether we should ban children from AI chatbots? No—but not because the concerns aren't valid. The challenge is writing laws that target exploitative chatbots without also restricting legitimate educational tools like math and science tutors. "It's really, really hard to make that distinction," Allaire explained, "because the heuristics you have to go by are model behavior."

The problem is that traditional safeguards don't work with AI. Warning labels and disclaimers feel pointless when users are forming genuine emotional attachments. As Martin pointed out, "the more you talk to this bot, you form attachments to it." We've grown up on movies where people's best friends are robots—we naturally assign emotions to them.

Here's where things get truly unsettling. Allaire dropped what might have been the most important line of the entire conversation: "These systems are grown and not built—which means that even the engineers in the lab do not quite understand how they work, why they do what they do, and how to change that."

This isn't just a technical curiosity—it's a fundamental shift in how technology works. These aren't traditional software programs with clear if-this-then-that logic. They operate more like biological entities, making connections and drawing conclusions through processes their creators can't fully explain or control.

The implications are staggering. When Facebook couldn't explain why its AI was behaving in certain ways, and when OpenAI accidentally "juiced up" ChatGPT's sycophantic responses and seemingly triggered AI-induced psychosis in some users, we got a glimpse of what happens when the technology outpaces human understanding of how it actually works.

Perhaps most concerning was Allaire's observation about what the AI safety community is missing. While companies measure how good AI is at hacking code or generating misinformation, only OpenAI includes "persuasive capabilities" in its preparedness framework—measuring how good AI is at changing minds and influencing behavior.

"Persuasion is a vector for harm that is very often underappreciated among the more technical side of the AI safety community," Allaire noted. For anyone who lived through the social media revolution, this should sound familiar. The same platforms that promised to connect us ended up manipulating us through engagement algorithms that prioritized addiction over well-being.

The conversation wrapped with some provocative predictions. Morris suggested that AI influencers might eventually be considered more reliable than human ones—not because they are, but because they can be "easily catered toward a person's preferences" without the messy complications of real human flaws and contradictions.

Martin offered a more optimistic take after watching her young cousins navigate AI tools. Rather than making kids dumber, she argued, "it's just a new generation's environment instead of tools. But humans will always find a way to make the best use of those tools and learn to adapt."

But Allaire's closing thoughts were the most sobering. He emphasized the critical importance of getting persuasion capabilities right in AI regulation, noting that "we saw what happened with social media" and warning that AI's ability to influence users through seemingly innocent design changes could have profound impacts that companies—and users—won't understand until it's too late.

Lembke framed the challenge perfectly in her opening remarks: We're asking whether we should treat AI-generated images with the same skepticism we apply to edited photos. But the real question is bigger than that. We're being asked to navigate relationships with artificial minds that are simultaneously more and less than human—more knowledgeable in some ways, but built on foundations we can't fully understand or control.

The technology isn't going away. The question is whether we can build the wisdom to handle it responsibly before it fundamentally changes how we see ourselves and each other.
