Commentary

Bias: Bug Or Feature?

When I talk about artificial intelligence, I think of a real-time Venn diagram in motion. One circle is the sphere of all human activity. It's huge. The other is the sphere of artificially intelligent activity, and it's growing exponentially. The overlap between the two is expanding just as fast.

It's this intersection between the two spheres that fascinates me. What are the rules that govern the interplay between humans and machines?

Those rules necessarily depend on the nature of the interplay. For the sake of this column, let's focus on the researchers and developers trying to make machines act more like humans. Take Jibo, for example -- "the first social robot for the home." Jibo tells jokes, answers your questions, and recognizes your face. It's just one more example of artificial intelligence intended to be a human companion.

And as we build machines that are more human, we're finding that many of the things we thought were human foibles are actually features that developed for reasons that were once perfectly valid.

Conceptual artist Trevor Paglen is a winner of the MacArthur "Genius" Grant. The goal of his latest project is to answer this question: "What are artificial intelligence systems actually seeing when they see the world?"

What's interesting about this exploration is that when machines see the world, they use machine-like reasoning to make sense of it. For example, Paglen fed hundreds of images of fellow artist Hito Steyerl into a face-analyzing algorithm. In one instance, she was evaluated as “74% female.”

This highlights a fundamental difference between how machines and humans see the world. Machines calculate probabilities. So do we, but that happens behind the scenes, and it's only part of how we understand the world. Operating at a level above that, we use meta-signatures -- categorization, for example -- to quickly compartmentalize and understand the world. We would know immediately that Steyerl is a woman. We wouldn't have to crunch the probabilities.
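
To make the difference concrete, here's a minimal sketch in Python. Everything in it -- the score, the 0.5 threshold, the labels -- is hypothetical, not a description of the actual system Paglen used:

    # A face-analysis model doesn't output "woman"; it outputs a score.
    # The 0.74 echoes the "74% female" evaluation above (hypothetical values).
    p_female = 0.74

    # Human-style categorization collapses that probability into a single
    # category by applying a hard threshold.
    label = "female" if p_female >= 0.5 else "male"

    print(f"machine output: {p_female:.0%} female")  # machine output: 74% female
    print(f"categorical judgment: {label}")          # categorical judgment: female

The machine's native answer is the number; the category is something bolted on afterward.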

By the way, we do the same thing with race. But is this a feature or a bug? Paglen has his opinion: "I would argue that racism, for example, is a feature of machine learning -- it's not a bug," he tells Caitlin Hu for Quartz. "That's what you're trying to do: you're trying to differentiate between people based on metadata signatures, and race is like the biggest metadata signature around. You're not going to get that out of the system."

Whether we like it or not, our inherent racism (as demonstrated by Harvard's implicit bias tests) was a useful feature many thousands of years ago. It made us naturally wary of other tribes competing for the same natural resources. As abhorrent as it is to most of us now, it's still a feature we can't "get out of the system."

Here lies a danger in that overlap between humans and machines. If we want machines to think as we do, we're going to have to equip them with some of our biases. As I've mentioned before, there are some things humans do well -- or, at least, do better than machines. And there are things machines do infinitely better than humans.

Perhaps we shouldn't try to merge the two. If we're trying to get machines to do what humans do, are we prepared to program racism, misogyny, intolerance, bias and greed into the operating system? All these things are part of being human, whether we like to admit it or not.

But there are other areas that are rapidly falling into the overlap zone of my imaginary Venn diagram. Take business strategy, for example.

A recent study from Capgemini found that 79% of organizations implementing AI feel it's generating new insights and better data analysis, 74% think AI makes their organizations more creative, and 71% feel it's helping them make better management decisions.

A friend of mine recently brought this study to my attention, along with what was, for him, an uncharacteristic rant: "I really would've hoped senior executives might've thought creativity and better management decisions were THEIR GODDAMN JOB -- and not be so excited about being able to offload those dreary functions to AIs, which are guaranteed to be imbued with the biases of their creators or, even worse, unintended biases resulting from bad data or any of the untold messy parts of life that can't be cleanly digitized."

My friend hit the proverbial nail on the proverbial head. Those “untold messy parts of life” are what we have evolved to deal with, and the ways we deal with them are not always admirable. But in the adaptive landscape we all came from, our methods were proven to work.

We still carry that baggage with us. But is it right to transfer that baggage to algorithms in order to make them more human? Or should we be aiming for a blank slate?

1 comment about "Bias: Bug Or Feature?".
Henry Blaufox from Dragon360, November 1, 2017, at 1:15 p.m.:

Software developers have for years noted that a feature can be a bug we aren't going to, or can't, fix.
