Unlike the latest wave of technology innovators like Uber and Netflix that disrupted industries and business models, this Fourth Industrial Revolution brings with it a healthy dose of personal disruption. From robots to artificial intelligence, it has the potential to impact everything from how we work to how we live our daily lives. And with that comes some very real concerns about the future -- and our privacy.
Futurists like Stephen Hawking and Marc Andreessen have helped give the media fuel for the fire. In an interview with the BBC, Hawking warned that the development of “full AI could spell the end of the human race.” Andreessen has been quoted as saying that in the future there will be two types of jobs: one for “people who tell computers what to do and [another for] people who are told by computers what to do.”
In fact, the most-funded technologies are those focused on making machines more “human.” According to Venture Scanner, deep learning, natural language processing and image recognition make up the top three funding categories within AI.
Futurist Ray Kurzweil believes that we are only 11 years away from machines passing the “Turing Test,” which determines whether humans can tell the difference between a human and a machine.
Unfortunately, what may get lost in the noise is the great potential of this new generation of technologies. An Atlantic article predicts that autonomous vehicles could save 30,000 lives a year by preventing traffic accidents. According to a CBS News report, robots are being programmed to help give the disabled more independence. AI is advancing the diagnosis and treatment of certain types of cancer. Some, including British researcher Sam Cooper as quoted in The Guardian, believe that AI could lead to "the end of cancer in our lifetime."
Why isn’t the focus on the benefits rather than the problems created by these new technologies? Theodore Levitt, a professor at Harvard Business School in the 1960s, may have the answer. Levitt was a thought leader in sales and marketing, but may be best known for this quote: “People don’t want to buy a quarter-inch drill, they want a quarter-inch hole.” The abridged version, “Sell the hole, not the drill,” has been uttered by sales managers for decades, and is particularly relevant for the latest wave of new technologies.
We’re in the early stages of this revolution, so much of the talk is about the “drill.” Explaining the process of building the “drill” is necessary for audiences like investors or partners. These explanations are also aimed at potential users/customers in hopes they will be able to define the holes to be drilled.
The tricky part for technology marketers is that there are parts of the drill that have real potential to threaten audiences.
This is the tightrope these marketers are going to have to walk for the foreseeable future. In order to develop the apps (the “holes”), they need to find and convert early adopters. The messaging that appeals to that audience may put others on high alert.
This is a classic “crossing the chasm” challenge as described by author and management consultant Geoffrey Moore. According to Moore, early adopters are comfortable with risk. Unfortunately, when things go wrong — like Google DeepMind’s experience with the U.K.’s National Health Service, where its initial work on mobile apps was found to have violated patient privacy laws — the “chasm” grows between early adopters and the early majority.
Here’s the lesson for marketers: One of the four characteristics of visionaries that alienate pragmatists (the early majority) is the overall disruptiveness of the technology. To be successful in building a bridge over the “chasm,” you may need to tone down your “disruptive” messages. Build a roadmap that gently walks the early majority over the bridge step by step, giving them reassurance along the way.
We also know from CEB/Gartner that buyers make purchase decisions based on the personal value they perceive. To market “human-like” technologies to humans, you have to understand their fears, concerns, and behaviors. Just because your technology can do something as well as or better than a human doesn’t mean you need to actually say it.