The Adoption Of AI

Recently, I was talking to a reporter about AI. She was working on a piece about what Apple’s integration of AI into the latest iOS (cleverly named Apple Intelligence) would mean for its adoption by users. Right at the beginning, she asked me, “What previous examples of human adoption of tech products or innovations might be able to tell us about how we will fit (or not fit) AI into our daily lives?”

That’s a big question. An existential question, even. Luckily, she gave me some advance warning, so I had a chance to think about it. Even with the heads-up, my answer was still well short of anything resembling helpfulness. It was, “I don’t think we’ve ever dealt with something quite like this. So, we’ll see.”

Incisive? Brilliant? Erudite? No, no and no.

But honest? I believe so.

When we think in terms of technology adoption, it usually falls into two categories: continuous and discontinuous. Continuous innovation simply builds on something we already understand. It’s adoption that follows a straight line, with little risk involved and little effort required. It’s driving a car with a little more horsepower, or getting a smartphone with more storage.

Discontinuous innovation is a different beast. It’s an innovation that displaces what went before it. In terms of user experience, it’s a blank slate, so it requires effort and a tolerance for risk to adopt it. This is the type of innovation that is adopted on a bell curve, a process first identified by American sociologist Everett Rogers in 1962. The acceptance of these new technologies spreads along a timeline defined by the personalities of the marketplace. Some are the type to try every new gadget, and some hang on to the tried and true for as long as they possibly can. Most of us fall somewhere in between.

As an example, think about going from driving a traditional car to an electric vehicle. The change from one to the other requires some effort. There’s a learning curve involved. There’s also risk. We have no baseline of experience to measure against. Some will be ahead of the curve and adopt early. Some will drive their gas clunker until it falls apart.

Falling into this second category of discontinuous innovation, but different by virtue of both the nature of the new technology and the impact it wields, are a handful of innovations that usher in a completely different paradigm. Think of the introduction of electrical power distribution in the late 19th century, the introduction of computers in the second half of the 20th century, or the spread of the internet in the 21st century.

Each of these was foundational, in that they sparked an explosion of innovation that wouldn’t have been possible without the initial innovation. These innovations not only change all the rules, they change the very game itself. And because of that, they impact society at a fundamental level. When these types of innovations come along, your life will change whether you choose to adopt the technology or not.

It’s these types of technological paradigm shifts that are rife with unintended consequences.

If I were trying to find a parallel for what AI means for us, I would look for it among these examples. And that presents a problem when we pull out our crystal ball and try to peer ahead at what might be. We can’t know. There’s just too much in flux, and too many variables to compute with any accuracy.

Perhaps we can project forward a few months or a year at the most, based on what we know today. But trying to peer any further forward is a fool’s game. Could you have anticipated what we would be doing on the internet in 2024 when the first BBS (Bulletin Board System) was introduced in Chicago in 1978?

AI is like these previous examples, but it’s also different in one fundamental way: all those other innovations had humans at the switch. Someone needed to turn on the electric light, boot up the computer or log on to the internet. At this point, we are still “using” AI, whether as an add-on in software we’re familiar with, like Adobe Photoshop, or as a stand-alone app like ChatGPT. But generative AI’s real potential may only be discovered when it slips from the grasp of human control and starts working on its own, hidden under some algorithmic hood, safe from our meddling human hands.

We’ve never dealt with anything like this before. So, as I said, we’ll see.
