Commentary

Yep, We Should Be Scared Of AI

They had invited me to talk about the technological singularity: the moment when computer intelligence surpasses human intelligence.

"Is it true?" they asked, brows furrowed. "Are the robots going to kill us all?"

"Very possibly," I replied. "But that's not why we should be scared."

Let me explain.

First of all, we shouldn't be scared of the thing the movies tell us to be scared of.

We shouldn't be scared of the Terminator or I, Robot. We shouldn't be scared of the robots getting mad at us, or wanting revenge. Even if we are awful to them.

We shouldn't be scared of these things because, even when computers become smarter than people, they'll still be computers.

Anger, revenge, boredom, frustration: these aren't computational responses. They're emotional ones. And no matter how intelligent computers get, there's nothing to suggest they'll develop emotions.

But people like Elon Musk, who tweeted in 2014 that AI is potentially more dangerous than nukes, aren't idiots. What they're worried about is the Control Problem: once an AI becomes superintelligent, we may have no reliable way to control how it behaves.

Oxford philosopher Nick Bostrom used the idea of a paper-clip-manufacturing AI to make the point: “The AI will realize quickly that it would be much better if there were no humans because humans might decide to switch it off. Because if humans do so, there would be fewer paper clips. Also, human bodies contain a lot of atoms that could be made into paper clips.”

Right now, you're probably thinking something like, "Well, how come they don't just turn it off?" or "Why don't they just program it to not kill all the people?" or "Can't they just give it morals?" I wondered all those things, too. Long story short: It's not as easy as it sounds. A superintelligent system could see the off switch coming, and writing down human morals precisely enough to program is itself an unsolved problem.

Anyway, even though the Control Problem is legit, it's also not what we should be scared of. Unless you are directly involved with the development of AI, or unless you direct policy or resources that can influence the development of AI, there's very little you can do about it. You might as well be worried about Yellowstone erupting or the earth getting hit by an asteroid.

However, we are experiencing negative effects from AI right now.

First up: technological unemployment. In 2013, Oxford's Carl Benedikt Frey and Michael Osborne published a now-famous paper suggesting that 47% of all U.S. jobs would be under threat from automation within 20 years. Last year, a study from the International Labour Organization suggested 137 million jobs in Southeast Asia would be at risk over the same period.

As MIT's Andrew McAfee put it, "If the current trends continue, the people will rise up before the machines do."

Second: inequality. Even if jobs don't go away, automation can exacerbate inequality. Last month, a German study found that total employment had remained stable only because wages had gone down.

Third: systemic bias. Our artificially intelligent algorithms embed and reinforce our historical biases and prejudices far more effectively, and at far greater scale, than we ever did.

Thanks to automated ad placements, women are less likely than men to be shown ads for high-paying jobs. The COMPAS recidivism algorithm predicts black defendants to be more likely to reoffend than they actually are, and white defendants to be less likely than they actually are.
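That embedding isn't mysterious: a model trained on the record of past human decisions learns whatever patterns those decisions contain, prejudice included. Here's a minimal sketch in Python (using numpy and scikit-learn) of how it happens. The hiring scenario, the feature names, and every number in it are hypothetical, invented purely for illustration; no real system or dataset is being described.

```python
# Minimal sketch: a model trained on biased historical decisions
# reproduces the bias. Everything here is hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 10_000

# Two groups with identical skill distributions.
group = rng.integers(0, 2, n)        # protected attribute: 0 or 1
skill = rng.normal(0.0, 1.0, n)      # same distribution for both groups

# Historical labels: past decision-makers applied a flat penalty to
# group 1, regardless of skill. (The penalty size is an assumption.)
hired = (skill - 1.0 * group + rng.normal(0.0, 0.5, n)) > 0

# Train on that biased history.
X = np.column_stack([skill, group])
model = LogisticRegression().fit(X, hired)

# The model now recommends group-0 candidates far more often, even
# though both groups are equally skilled by construction. Dropping the
# group column wouldn't necessarily help: any feature correlated with
# group membership (a proxy) can leak the same signal back in.
for g in (0, 1):
    rate = model.predict(X[group == g]).mean()
    print(f"group {g}: predicted-positive rate = {rate:.2f}")
```

Note what's missing from that sketch: the model never gets angry and never wants anything. It just optimizes against the history we hand it. That's the whole problem in miniature.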

Unlike the Control Problem, we can all play a part in addressing these issues.

We can look at policy responses like a universal basic income, affirmative action for humans, or shifting taxation to favor labor over capital. We can ask how people can add unique value to our businesses instead of looking for opportunities to eliminate their jobs. We can call for transparency in algorithmic decision-making. We can transform our education system to prepare kids for a lifetime of continual learning and adaptation.

We are co-creating our future right now. We can make conscious choices about it, or we can let it happen to us.

Forget about a Terminator future. Your present society needs you.
