OpenAI officially began testing ads in ChatGPT in the U.S. on Feb. 9. Conversational ads, advertisements embedded into AI chat platforms, represent the next media frontier.
But conversational ads are not just another digital media channel. They're placed within an interaction that feels private and personal, making trust central to how consumers will experience brands in a climate already skeptical of data use.
Up to this point, conversations with AI chatbots have felt like one-on-one exchanges. They operate in private chats, making the privacy stakes
higher than in public digital spaces like social media feeds.
A study out of the University of Washington found that 82% of respondents rated chatbot conversations as sensitive or highly
sensitive, significantly higher than email (41%) or social media posts (47%). Deeply personal conversation or not, the dynamics of talking with a chatbot will inevitably change once ads are injected
into the experience.
And generative AI chatbots are built to deepen disclosure from users. They’re designed to allow both parties (i.e., human and computer) to ask each other questions,
propose ideas, and offer suggestions. This design technique, known as mixed-initiative dialogue, improves engagement by encouraging follow-up questions. It’s why an AI tool might end its
response to your vacation planning prompt with “How old are your kids? I can tailor an itinerary to your family.”
Through progressive questioning, AI platforms nudge users to reveal more information than they otherwise might. This dynamic places subtle pressure on users to respond, and it could heighten discomfort if users recognize that their disclosures are shaping targeted ads.
When personalization is powered by inferences from past chat conversations that users view as shielded from the public, targeted ads have the potential to
feel less relevant and more like an invasion of privacy.
All of this is unfolding in a landscape of data privacy concern decades in the making. According to Pew Research Center, 81% of
Americans familiar with AI believe the personal information collected will be used in ways that make people uncomfortable.
“How did ChatGPT know that about me?” might become the
new refrain, reflecting how AI users will question what the platforms remembered, inferred or shared.
Placing conversational ads in spaces perceived as private demands a thoughtful approach
from advertisers.
Ask for guardrails, not just access. Early adopters should use their buying power to shape how conversational ads operate. Push for clarity around data use and
targeting logic. For example, OpenAI says that ads won’t appear next to sensitive topics during the pilot. How is that operationalized, and how can brands ensure borderline contexts
don’t inadvertently trigger ads?
Participants in the OpenAI Ad Pilot Program are reportedly paying a premium for early access; that investment should come with influence over safeguards.
Measure brand alongside performance. More than ever, engagement metrics like views and clicks won’t tell the full story. Advocate for measuring brand KPIs like
sentiment, brand equity and, most pointedly, brand trust. Avoid requesting or expecting insight into chat conversations or user-level data; focus instead on aggregate signals.
Test
deliberately and monitor reactions. Employ small tests before scaling. Dig deeper than engagement to understand how ads are affecting brand perception. And monitor social conversation
closely. A single misplaced ad can quickly be screenshotted and shared. No marketer wants to see an ad go viral for the wrong reasons.
Context matters. Remember that conversational ads
aren’t just placed adjacent to content -- they’re inserted into a dialogue. Marketers who ignore that distinction risk turning an impression into a breach of trust.