The iconic scene in “The Graduate” when, amid the hubbub of a cocktail party, Benjamin Braddock is given profound career advice summed up in the single emphatic word “Plastics!” was reflected in real life a few weeks ago on stage at the Business Marketing Association’s (BMA’s) Masters of B2B Marketing Conference.
If a career in plastics was a virtual guarantee of success in the 1960s or 1970s, what is the equivalent for the age of artificial intelligence — i.e., 2017 onward? Data scientist? Maybe; maybe not.
The BMA moment was precipitated by an audience question put to Jon Iwata, IBM’s senior vice president, marketing and communications, after his presentation about the imminent impact of AI on marketing, in general, and of Watson, in particular. The query: “Knowing what you now know about data and artificial intelligence, what would you do differently if you were starting your career today?”
Iwata paused long enough to get some sympathy chuckles from the audience, since in that pause you could nearly see the smoke rising, as my grandpa used to say. He took the question seriously and thought hard before coming up with an answer worth sharing, verbatim:
“My mind quickly went not to technology, but to behavior. Like, it’s never the data, it’s the questions you want data to answer. So much of this is about really understanding people, like our conversation just now about emotional states.
I guess I would want to spend more time in behavioral economics or other behavioral sciences, so I could then step into the world of marketing and data and AI and say, we could learn so much about people based on things like tone, face, use of language, their interaction mode. All those things are known, but they’re not in the marketing sciences or computer sciences, they’re in other parts of the university. And I would probably go and spend some time over there.”
One reason Iwata’s answer struck a strong chord with me is that a current project finds me reviewing recent research on the inner workings of the human brain, beginning with Carl Sagan’s Pulitzer-winning “The Dragons of Eden.” Now I’m reading “Thinking, Fast and Slow” by Daniel Kahneman, who won the 2002 Nobel Memorial Prize in Economic Sciences for his pioneering work.
Reading this stuff is hard work. I’m working my way up to the “hardest” science: Antonio Damasio, the only actual neuroscientist in this bunch. The Web record shows he caused a stir with his first (1994) book, “Descartes' Error: Emotion, Reason, and the Human Brain.” He analyzes the best current scientific understanding of consciousness in his most recent (2010) book, “Self Comes to Mind: Constructing the Conscious Brain.”
As I understand it so far from reading about his work, Damasio shows how the brain uses the entire body and all our senses to “think,” and that rationality actually requires emotion. The sound bite I first encountered from the 2010 book was, “Humans are not either thinking machines or feeling machines but rather feeling machines that think.”
Oh boy. Learning how your brain really works, what can influence its workings, and how those influences operate must give us pause. While this doesn’t necessarily call free will into question, it certainly raises new questions about its nature.
For example, “priming” is a real thing. Think of it as a subconscious influence that affects your thinking or actions without you realizing it. Kahneman explains that when you show people five words related to old age (Florida, forgetful, bald, gray, wrinkle) and then ask them to walk down the hall, they walk slower than a control group. That’s called the ideomotor effect.
Ideas can be primed. I saw Stanford University professor Robert Sapolsky, a neuroendocrinologist, explain that if you put people in a room with smelly garbage, their views on social issues become more conservative. Kahneman describes experiments in which people subtly primed by money concepts experience increased individualism: “a reluctance to be involved with others, to depend on others, or to accept demands from others.”
Given the direction of big data analytics and AI, this brings two scary thoughts to mind:
First is the common sci-fi disaster scenario: Hold on, what the hell are we doing building these disembodied brains modeled after what we think we know about how human brains work, when the people who know the most keep saying we don’t know enough? Those stories never end well.
Second is, even if we never get to Ray Kurzweil’s singularity and Skynet never enslaves humankind, we are facing an extraordinary cascade of ethical issues.
I’ve written a prior column, "AI, Big Data And Ethics," about how “in the coming age where big data and machine learning algorithms enable you to learn and infer stuff about individuals that are profoundly invasive, marketers -- and companies in general -- will need to openly explore where their ethical boundaries lie.” But the problem is compounded many times over if the analysis, the subsequent decision-making, and the resulting actions are all automated through AI brains whose inner workings may be inexplicable.
To paraphrase Kurt Vonnegut, I just happen to know what the moral of this column is: Bone up on your behavioral science and neuroscience (if you have the stomach for it). It’ll be good for you.