Commentary

AI Isn't Becoming More Intelligent, But More Capable

It was a massive accomplishment. Just a year and a half after the artificially intelligent Go player AlphaGo beat champion Lee Sedol, DeepMind came out with AlphaGo Zero -- which beat the original AlphaGo 100 games to none.

Even more impressive was the way the AI learned the game. To train the original AlphaGo, researchers fed it 30 million moves from 160,000 historical games. But with AlphaGo Zero, they didn’t give it a single game. They gave it the rules and the desired outcome, and let it play itself, starting with random play. Three days later, it had surpassed the version that beat Lee. Forty days later, it was the best Go player in the world.
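To make "it played itself, starting with random play" concrete, here is a deliberately tiny sketch of a self-play learning loop. It is not DeepMind's method -- AlphaZero pairs a deep neural network with Monte Carlo tree search -- but the overall shape is the same: only the rules and the final outcome go in, and a value estimate (here a simple lookup table standing in for the network) improves as the program plays game after game against itself. The toy stick game, the parameters, and all the names below are illustrative assumptions, not anything from DeepMind's papers.

```python
# Minimal self-play sketch (illustrative only, not AlphaZero): a toy game where
# players alternately take 1 or 2 sticks and whoever takes the last stick wins.
# A tabular value estimate stands in for AlphaZero's neural network.
import random
from collections import defaultdict

STICKS = 10                  # toy game: start with 10 sticks
LR, EPSILON = 0.1, 0.2       # learning rate and exploration rate

values = defaultdict(float)  # estimated value of each state for the player to move
values[0] = -1.0             # no sticks left: the player to move has already lost

def legal_moves(sticks):
    return [m for m in (1, 2) if m <= sticks]

def choose(sticks):
    # Explore randomly some of the time; otherwise pick the move that leaves
    # the opponent the worst-looking position under the current value table.
    moves = legal_moves(sticks)
    if random.random() < EPSILON:
        return random.choice(moves)
    return min(moves, key=lambda m: values[sticks - m])

def self_play_game():
    # Play one game against itself and record which player faced which state.
    history, sticks, player = [], STICKS, 0
    while sticks > 0:
        history.append((player, sticks))
        sticks -= choose(sticks)
        player = 1 - player
    return history, 1 - player  # the player who just moved took the last stick

for _ in range(20000):
    history, winner = self_play_game()
    for player, sticks in history:
        # Nudge each visited state's value toward the final outcome for the
        # player who was to move there: only the rules and the result are used.
        outcome = 1.0 if player == winner else -1.0
        values[sticks] += LR * (outcome - values[sticks])

# Positions that are losing for the side to move (multiples of 3 in this game)
# should have drifted negative, learned purely from self-play.
print({s: round(values[s], 2) for s in range(1, STICKS + 1)})
```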

This week, DeepMind turned the technology on chess. Now called AlphaZero (it’s no longer just about Go), the AI used the same method to train itself with no human intervention or historical knowledge -- and after just four hours of training, it beat the best AI chess system in the world.

Amazing. And yet, as AI researcher Joanna Bryson tweeted, “It’s still a discrete task.”

Is this accomplishment not as impressive as it seems at first glance? Or is she just being curmudgeonly?

The accomplishment is indeed impressive -- but Bryson isn’t just being curmudgeonly. Let her explain. “‘A discrete task,’” she says, “means that there are a small number of specific moves you can make, all with certain, known consequences; and a small number of specific places the pieces can be.”

She goes on: “Ordinary life is NOTHING like that.”

What she’s saying is that a computer that wins at chess isn’t necessarily intelligent. Google “intelligence” and you’ll get a series of synonyms including “judgment,” “reason,” “comprehension,” and “astuteness,” none of which describes what AlphaZero is doing.

And that’s because artificial intelligence isn’t really intelligence at all. Capability, sure. Skill, definitely. But not intelligence.

Two weeks ago, in a piece called “The Impossibility of Intelligence Explosion,” Francois Chollet dismissed the idea of an artificial superintelligence posing any kind of existential threat to humanity. This fear, he says, is based on “a profound misunderstanding of both the nature of intelligence and the behavior of recursively self-augmenting systems.”

Chollet argues that intelligence is situational, and offers a thought experiment: “What would happen if we were to put a freshly-created human brain in the body of an octopus, and let it live at the bottom of the ocean? Would it even learn to use its eight-legged body? Would it survive past a few days? …The brain of a human is hyper-specialized in the human condition -- an innate specialization extending possibly as far as social behaviors, language, and common sense -- and the brain of an octopus would likewise be hyper-specialized in octopus behaviors. A human baby brain properly grafted in an octopus body would most likely fail to adequately take control of its unique sensorimotor space, and would quickly die off.”

True enough. But also not the whole story.

We’re looking for artificial superintelligence to resemble human behavior in some way, when really what we’re creating is artificial supercapability. And it is absolutely fair to be concerned about an artificial supercapability.

An AI does not need to have human-like intelligence to do harm, any more than a hurricane needs human-like intelligence to do harm. Playing chess may be a discrete task, and we may be a long way from any kind of general intelligence. But it’s wise to be attentive to the impact of these developments on all of us -- even if all they represent is an advance in capability.
