For marketers, it can improve audience insights, helping us better understand which content will most engage each audience. It can individualize ad campaigns automatically, and at scale.
I’m moderating a panel next week at MediaPost’s Marketing AI conference that includes one company whose AI optimizes copy for inducing audience response and another that ostensibly replaces the function of entire agencies.
But there’s a problem looming: the most advanced computer intelligences cannot explain their thinking. They can’t say why they do what they do.
According to “The Dark Secret at the Heart of AI,” a must-read article from MIT Technology Review, “We’ve never before built machines that operate in ways their creators don’t understand.” The piece goes on to quote Joel Dudley, who heads the team at Mount Sinai Hospital that developed a disease-predicting AI, saying: “We can build these models, but we don’t know how they work.” (Naturally, they named the thing Deep Patient.)
To oversimplify: In traditional programming, everything the software does — every single thing — results from code written by humans. The software may be extraordinarily complex, but people can figure out why it does what it does.
AI software is different. Programmers build models (which include some type of digital incentive for right answers and disincentive for wrong answers), and then they train the software, which learns through extensive repetition. Just like a toddler.
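To make the contrast concrete, here is a deliberately tiny sketch of that training loop: a one-neuron model that learns a simple rule purely through repetition, nudged toward answers that earn the "incentive" and away from ones that earn the "disincentive." This is a toy illustration under my own assumptions, not how Deep Patient or any production system actually works; the names and numbers are invented for the example.

```python
import random

random.seed(0)

# Toy task: label a point (x1, x2) as 1 if x1 + x2 > 1, else 0.
# The programmer never writes that rule into the model -- the model
# has to discover it from repeated feedback.
labeled = [
    ((x1, x2), 1 if x1 + x2 > 1 else 0)
    for x1, x2 in ((random.random(), random.random()) for _ in range(200))
]

w1, w2, b = 0.0, 0.0, 0.0  # the model starts knowing nothing
lr = 0.1                   # how hard each reward/penalty nudges it

for epoch in range(50):                 # "extensive repetition"
    for (x1, x2), label in labeled:
        guess = 1 if w1 * x1 + w2 * x2 + b > 0 else 0
        error = label - guess           # 0 = reward; +1/-1 = penalty
        w1 += lr * error * x1           # nudge the weights toward
        w2 += lr * error * x2           # whatever earned the incentive
        b += lr * error

correct = sum(
    1
    for (x1, x2), label in labeled
    if (1 if w1 * x1 + w2 * x2 + b > 0 else 0) == label
)
accuracy = correct / len(labeled)
print(f"accuracy after training: {accuracy:.0%}")
```

Note what the finished model consists of: three numbers. They classify the points correctly, but nothing in them states the rule "x1 + x2 > 1" in a form a human would recognize. Scale those three numbers up to millions, and you have the explainability problem the article describes.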
When it works, the AI eventually gets incredibly smart. It becomes better than people at recognizing faces, for example, or at language translation. Or, in the case of Deep Patient, it finds patterns in the aggregated data of patient records that it cannot explain to anyone, and that doctors can’t figure out even when shown the results. The MIT article notes that people have developed lots of very good ways to predict disease from those records, but the mysterious Deep Patient “was just way better.”
What fascinates me most is that AI is comparable to human intuition in this respect. In my previous article, “Can A Modern Brand Dance Like No One's Watching?,” Reuben Webb, chief creative officer of Stein IAS, referred to the early days of advertising as a time when “a brand was a much less self-conscious thing, more free to be intuitive.”
Carl Sagan’s Pulitzer-winning book “The Dragons of Eden,” which is all about the evolution of the human brain, points out that Western civilization long ago lost its trust in intuition. We developed rational thought as a way to prove to ourselves the “truth” of our intuitive insights, which in fact are data-rich, but whose development process is hidden from our conscious minds.
To bring this back to marketing, how will brands react when AI brains start pointing them in a certain direction but cannot explain their intuitive decisions? That is really going to mess with the “Age of Counting,” as Michael Nicholas, founder of AI agency Born, calls marketing’s modern epoch. Marketers these days count and measure everything, but the very definition of intuition is knowing something without really knowing how you know it.
The inherent ambiguity in all this once again brought Reuben to mind, because Stein IAS is pursuing an idea the company calls “postmodern marketing” — and AI seems to be evolving into the ultimate expression of postmodernism in technology, given the inscrutability of AIs like Deep Patient.
“This is remarkable stuff,” Reuben told me. “The importance of intuition just went up a gear. Humans can post-rationalize an intuitive decision, but a machine can’t yet. Brands will have to accept this when it comes to AI – which will scare the [expletive deleted] out of them.”
I get why other people’s intuition is not something we easily trust. Sure, our own intuition carries an awesome sense of certainty, because it comes from our own subconscious, which delivers it with a built-in feeling of how right, and good, it is. But someone else’s is another story; to convince us, the other person must back up her “guesses” with facts and logical reasoning. However, might AI ultimately be considered trustworthy without the kind of proof we demand from humans, because we know AI intuition comes from the crunching of vast data sources?
In a conference call prep session for next week’s Marketing AI panel, several of the participating CEOs told me this issue is already coming up with their customers. They promised to talk about how they get marketers over the hump.
Postscriptum: This comes from the part of my brain I call the random associations generator. As I was writing this column, I wondered whether its theme calls into question the premise of the movie “I, Robot.” Will Smith’s character loathes all AI robots because, during a life-or-death emergency, one robot chose to save him instead of an imperiled child after calculating that the child had a lower chance of survival. Seems to me AI robots won’t be such simpletons, after all.