Commentary

Synthesizing Human Experience

As someone who has closely covered media and marketing research for nearly half a century, I’ve found one of the most surprising recent developments to be the use of artificial intelligence to synthesize human experiences -- experiences that are becoming the basis for humans (advertisers, planners, buyers, etc.) to decide what, when and where to place ads to engage other humans (consumers).

The first use case I came across was big media and marketing research suppliers starting to use AI to create synthetic respondents -- AI-generated proxies of real people -- to build panels for survey-based research that is faster, more cost-effective and -- this is the important part -- possibly more accurate, with less inherent human bias than panels composed of actual people.
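None of these suppliers publish their methods, so here is only a toy sketch of the core idea: estimate an answer distribution from a small panel of real humans, then draw a much larger synthetic panel from it at essentially zero marginal cost. All of the numbers below are made up for illustration.

```python
import random

random.seed(0)  # reproducible sketch

# Tiny "real" panel: answers from actual humans to one survey question
# on a 1-5 agreement scale. Illustrative numbers only.
real_panel = [4, 5, 3, 4, 2, 4, 5, 3, 4, 4]

# Estimate the answer distribution from the real panel...
counts = {a: real_panel.count(a) / len(real_panel) for a in range(1, 6)}

# ...then draw a much larger synthetic panel from that distribution.
# (Real synthetic-respondent systems condition on demographics, context
# and much more; this only shows the cheap scale-up idea.)
answers, weights = zip(*counts.items())
synthetic_panel = random.choices(answers, weights=weights, k=1000)

real_mean = sum(real_panel) / len(real_panel)
synth_mean = sum(synthetic_panel) / len(synthetic_panel)
print(f"real mean: {real_mean:.2f}, synthetic mean: {synth_mean:.2f}")
```

The synthetic mean tracks the real one because the synthetic panel is, as the column argues, only as good as the human data it was derived from.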

I don’t know how many survey-based consumer research suppliers have actually deployed this, but I’m pretty sure most if not all of them are at least testing it for all of the reasons stated above.

But to me, it’s the last one that is potentially most significant -- and likely disruptive -- to the world of media and marketing research, because if AI-generated respondents can be more accurate predictors of how actual humans will behave, why are we even going through the time and expense of recruiting humans to do it?

The answer is we probably need both, because real human experiences are necessary to train AIs to understand and anticipate how humans will experience other things in the future. In the case of what I’m writing about here today, that’s advertising and media, including all of the various machinations and combinations contributing to reach, frequency, engagement, wear-out, and of course, performance.

So I’m sold that synthetic respondents can be as good as, and in some cases possibly better than, human ones, as long as they’re trained on actual human experiences, because they can overcome the inherent biases that come with human respondents.

Recently, I got a chance to see another method of using AI to synthesize human experiences that may be even more profound for media and marketing research than simply creating synthetic respondent panels.

It was during the Association of National Advertisers’ recent Measurement and Analytics Conference in Chicago, where Realeyes Chief Growth Officer Max Kalehoff made a joint presentation with Nielsen about their “Vision AI” product integration, which combines Realeyes’ massive database of human consumer experiences with Nielsen’s trove of marketing and media outcomes to help advertisers and agencies predict the actual outcomes of their campaigns before they happen.

Full disclosure: Kalehoff previously was a long-time MediaPost columnist and someone I consider a friend, but the truth is I had been scratching my head trying to understand exactly what Realeyes does until I saw this presentation.

Simply put, Realeyes is a research supplier that uses computer-vision technology to observe how real people engage with advertising and media exposures, and then correlates that engagement with the performance outcomes brand marketers aspire to achieve -- usually sales lift.

To graphically illustrate how Realeyes’ computer-vision system works, Kalehoff shared some screens from an analysis he did of some recent Cracker Barrel ads, showing the technology’s heat-mapping of human attention to the spots. I asked him to share the Cracker Barrel visual because the company was in the news recently for updating its logo and then reverting to its old one.

While Cracker Barrel is not one of Realeyes’ clients, Kalehoff said the analysis did reveal something interesting about its commercials: they didn’t resonate very well with any audience, but they tended to resonate better with younger people than older ones.

Needless to say, I’ve seen many versions of similar computer-vision methodologies over the years, but it was when Kalehoff explained the AI part that the cobwebs began to clear for me.

Basically, he came right out at the ANA conference and said Realeyes’ product is synthetic: an extrapolation of a massive database of real human experiences.

Nearly 19 million people, across every conceivable market, culture, language, demographic and psychographic group you might imagine, captured watching ads on screens representing real-world media experiences on mobile, desktop and TV.

The research is synthetic because it isn’t measuring those human experiences in real time; instead, it uses AI to model likely outcomes for brands and agencies testing new ads in real time.

But it is the combination of continuous human testing and AI’s ability to model new predictive outcomes that enables the best of both worlds: real and synthetic.

“Synthetic isn’t a bad word,” Kalehoff quipped during a follow-up interview with me, “but it is a loaded word that is used to indiscriminately describe many different things, including some shady synthetic stuff.

“At the core, this is all human data. And yes, we use AI to deconstruct and reassemble that human data,” he continued, noting: “So is it direct observation? No. But the DNA is high-quality human data that’s been parsed out and reassembled.”

Kalehoff acknowledged that the method isn’t perfect, but noted that its accuracy rate in predicting human outcomes is “approaching 80%,” and that ongoing human-experience training is integral to maintaining that accuracy.

“AI on its own can go rogue and having the human database keeps the AI in check and allows the models to evolve to changing sentiments or patterns,” he explained, concluding: “You know, humans change over time.”
