Commentary

AI Predicts The Future Will Be As Bad As The Past

It’s the age of machine learning, they say. Thanks to algorithms, we can finally eliminate bias. There was no subconscious prejudice -- the decision was made by a computer. After all, computers don’t have a subconscious.

Except, of course, they do.

We can know what you are likely to do based on what you’ve done before. We can know what is likely to trigger you. We can build models that replicate existing outcomes.

But our existing outcomes haven’t always been great. Our world is rife with historical biases and systemic injustices. And when we build machine learning algorithms using historical data, we effectively build these biases and injustices into the model.

I’ve written before about Abe Gong’s excellent talk on ethics for powerful algorithms. In it, he talks about the COMPAS (Correctional Offender Management Profiling for Alternative Sanctions) scoring system, which predicts recidivism for prisoners and is often used to inform parole.

Right off the bat, there’s a structural problem with the approach. As Gong points out, COMPAS doesn’t predict whether you’ll commit a crime again. It predicts whether you’ll get caught -- and you are much more likely to be arrested for even minor infractions if you are a minority than if you are white.

The system is also deeply unfair in that it asks questions beyond the respondent’s control, such as, “If you lived with both parents and they later separated, how old were you at the time?” or “In your neighborhood, have some of your friends or family been crime victims?”

But the problem is worse than that. An analysis by ProPublica of more than 10,000 criminal defendants in Broward County, Florida, who were assessed by COMPAS "found that black defendants were far more likely than white defendants to be incorrectly judged to be at a higher risk of recidivism, while white defendants were more likely than black defendants to be incorrectly flagged as low risk."
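It's worth making "incorrectly judged" concrete. An audit like ProPublica's boils down to comparing error rates across groups: among defendants who did not go on to reoffend, how many were labeled high risk anyway? Here is a minimal sketch of that calculation, with invented placeholder records standing in for the real data:

```python
# Minimal sketch of a group-wise false positive rate check.
# The records below are invented placeholders, not ProPublica's data.
from collections import defaultdict

# Each record: (group, risk_label, reoffended)
records = [
    ("black", "high", False), ("black", "low", False), ("black", "high", False),
    ("white", "low", False),  ("white", "high", False), ("white", "low", False),
    # ...a real audit would have thousands of rows here
]

counts = defaultdict(lambda: {"false_positives": 0, "did_not_reoffend": 0})
for group, risk, reoffended in records:
    if not reoffended:  # look only at people who stayed crime-free
        counts[group]["did_not_reoffend"] += 1
        counts[group]["false_positives"] += (risk == "high")

for group, c in counts.items():
    rate = c["false_positives"] / c["did_not_reoffend"]
    print(f"{group}: flagged high-risk despite not reoffending: {rate:.0%}")
```

The reason to split by outcome first is that overall accuracy can look similar across groups while the kinds of mistakes being made differ sharply.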

The purpose of this article isn't for all of us to be shocked at how terrible COMPAS is. It is to highlight that these biases exist in every algorithm that uses the status quo to predict the future, because the status quo is itself biased.

Rob Speer, co-founder and CTO at Luminoso, pointed this out recently in a tutorial entitled "How To Make A Racist AI Without Really Trying." Apparently, "you can follow an extremely typical NLP pipeline, using popular data and popular techniques, and end up with a racist classifier that should never be deployed."

Speer's artificially intelligent sentiment classifier, built using common word libraries and sentiment lexicons, was racist AF: the sentence "Let's go get Italian food" scored high in positive sentiment, while "Let's go get Mexican food" scored badly. "My name is Emily" -- positive; "My name is Shaniqua" -- negative.
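If you want to see how easy this is to do by accident, the whole pipeline fits in a few dozen lines. The sketch below is a condensed stand-in for the kind of thing Speer describes, not his actual code: off-the-shelf word vectors, a standard positive/negative word list, and a simple classifier. The file paths are placeholders for whatever data you download.

```python
# A condensed, hypothetical version of the "typical NLP pipeline" Speer warns
# about: pretrained word vectors plus a sentiment lexicon plus a basic
# classifier. File paths are placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression

def load_vectors(path):
    """Load word vectors from a whitespace-separated text file (GloVe-style)."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            word, *values = line.rstrip().split(" ")
            vectors[word] = np.array(values, dtype=np.float32)
    return vectors

vectors = load_vectors("embeddings.txt")               # placeholder path
pos_words = open("positive-words.txt").read().split()  # placeholder lexicon
neg_words = open("negative-words.txt").read().split()

# Train a classifier to separate the vectors of "positive" words from "negative" ones.
pos_set = set(pos_words)
train_words = [w for w in pos_words + neg_words if w in vectors]
X = [vectors[w] for w in train_words]
y = [1 if w in pos_set else 0 for w in train_words]
model = LogisticRegression(max_iter=1000).fit(X, y)

def sentence_sentiment(text):
    """Score a sentence as the mean predicted positivity of its known words."""
    words = [w for w in text.lower().split() if w in vectors]
    return model.predict_proba([vectors[w] for w in words])[:, 1].mean()

print(sentence_sentiment("let's go get italian food"))
print(sentence_sentiment("let's go get mexican food"))  # tends to score lower
```

None of the individual pieces is malicious. The bias rides in with the pretrained vectors, which have absorbed the associations of the text they were trained on.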

The good news is that this phenomenon can be fixed. As Speer says, “Making a non-racist classifier is only a little bit harder than making a racist classifier. The fixed version can even be more accurate at evaluations. But to get there, you have to know about the problem, and you have to be willing to not just use the first thing that works.”
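Knowing about the problem starts with measuring it, which is the step most pipelines skip. Reusing the sentence_sentiment function from the sketch above, a bias check might look like this (the name lists are short illustrative samples, not Speer's evaluation set):

```python
# A minimal bias check in the spirit of Speer's tutorial: compare average
# sentiment across sentences that differ only in a first name.
# Assumes sentence_sentiment from the previous sketch is defined.
names_by_group = {
    "group_a": ["emily", "anne", "jill"],
    "group_b": ["shaniqua", "lakisha", "darnell"],
}

for group, names in names_by_group.items():
    scores = [sentence_sentiment(f"my name is {name}") for name in names]
    print(f"{group}: mean sentiment {sum(scores) / len(scores):.3f}")

# If the means differ meaningfully, the classifier has picked up a bias from
# its embeddings. One mitigation, the direction Speer's tutorial takes as I
# recall, is to swap in word embeddings that have had explicit debiasing
# applied, then re-run a check like this until the gap closes.
```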

Classifiers aren’t just used in criminal justice or in analyzing sentiment. They determine everything from who gets a degree to who gets a mortgage, from what news you see to whether the stock market goes up or down.

Artificial intelligence is great at predicting the past and amplifying it. But we should demand more. AI shouldn’t just replicate who we’ve been. It should help us become better.

2 comments about "AI Predicts The Future Will Be As Bad As The Past".
  1. Jim Meskauskas from Media Darwin, Inc., August 4, 2017 at 2:16 p.m.

    This is really insightful. Thank you!

  2. Paula Lynn from Who Else Unlimited, August 4, 2017 at 2:39 p.m.

    Of course you are correct (as usual). The question of more importance is who is going to control the programming and who is going to do what with it? Again, what can be done is not in the same column as what should be and should not be done.
