Commentary

The Biases Of AI: Our Devils Are In The Data

I believe that -- over time -- technology does move us forward. I further believe that, even with all the unintended consequences it brings, technology has made the world a better place in which to live. I would rather step forward with my children and grandchildren (the first of whom has just arrived) into a more advanced world than step backward into the world of my grandparents, or my great-grandparents. We now live longer and better lives, thanks in large part to technology. This, I’m sure, makes me a techno-optimist.

But my optimism is of a pragmatic sort. I’m fully aware that it is not a smooth path forward. There are bumps and potholes aplenty along the way.

Technology, for example, does not play all that fairly. Techno-optimists tend to be white and mostly male. They usually come from rich countries, because technology helps rich countries far more than it helps poor ones. Technology plays by the same rules as trickle-down economics: a rising tide that will eventually lift all boats, just not at the same rate.

Take democracy, for instance. In June 2009, journalist Andrew Sullivan declared “The revolution will be Twittered!” after protests erupted in Iran. Techno-optimists and neo-liberals were quick to declare social media and the Internet the saviors of democracy. But, even then, the optimism was premature -- even misplaced.

In his book “The Net Delusion: The Dark Side of Internet Freedom,” journalist and social commentator Evgeny Morozov details how digital technologies have been just as effectively used by repressive regimes to squash democracy. The book was published in 2011. Just five years later, that same technology would take the U.S. on a path that came perilously close to dismantling democracy. As of right now, we’re still not sure how it will all work out.

As Morozov reminds us, technology, in and of itself, is not an answer. It is a tool. Its impact will be determined by those who built the tool -- and, more importantly, by those who use it.

Also, tools are not built out of the ether. They are necessarily products of the environment that spawned them. And this brings us to the systemic problems of artificial intelligence.

Search is something we all use every day. And we probably don’t think of Google (or other search engines) as biased, or even racist. But a recent study published in the journal Proceedings of the National Academy of Sciences shows that the algorithms behind search are built on top of the biases endemic in our society.

“There is increasing concern that algorithms used by modern AI systems produce discriminatory outputs, presumably because they are trained on data in which societal biases are embedded,” says Madalina Vlasceanu, a postdoctoral fellow in New York University’s psychology department and the paper’s lead author.

In a 2020 opinion piece in the MIT Technology Review, researcher and AI activist Deborah Raji wrote: “I’ve often been told, ‘The data does not lie.’ However, that has never been my experience. For me, the data nearly always lies. Google Image search results for ‘healthy skin’ show only light-skinned women, and a query on ‘Black girls’ still returns pornography. The CelebA face data set has labels of ‘big nose’ and ‘big lips’ that are disproportionately assigned to darker-skinned female faces like mine. ImageNet-trained models label me a ‘bad person,’ a ‘drug addict,’ or a ‘failure.’ Data sets for detecting skin cancer are missing samples of darker skin types.”

The search algorithm may not be inherently biased, but it does reflect the systemic biases of our culture. The more biased the culture, the more it will be reflected in technologies that comb through the data created by that culture. In these cases, the devil is in the data. This is regrettable in something like image search results, but when these same biases show up in the facial recognition software used in the justice system, it can be catastrophic.
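To see how an unbiased-looking algorithm can inherit bias from its data, consider a minimal, purely hypothetical sketch in Python (the image names and click log below are invented for illustration; this is not how any real search engine is implemented). The ranking function contains no explicit bias -- it simply counts clicks -- yet its output mirrors whatever skew the log contains:

    from collections import Counter

    # Hypothetical click log: the data over-represents one group,
    # even though the code below treats every image identically.
    click_log = [
        "light_skin_1", "light_skin_1", "light_skin_1",
        "light_skin_2", "light_skin_2",
        "light_skin_3",
    ]

    def rank_results(candidates, log):
        # Rank images purely by historical click frequency.
        # No bias is written into this logic -- only into the data it consumes.
        counts = Counter(log)
        return sorted(candidates, key=lambda img: counts[img], reverse=True)

    candidates = ["dark_skin_1", "light_skin_1", "light_skin_2", "light_skin_3"]
    print(rank_results(candidates, click_log))
    # -> ['light_skin_1', 'light_skin_2', 'light_skin_3', 'dark_skin_1']

The point of the sketch is that the fix cannot come from the ranking logic alone; it has to come from auditing and correcting the data the algorithm learns from.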

In an article in Penn Law’s Regulatory Review, the authors reported that, “In a 2019 National Institute of Standards and Technology report, researchers studied 189 facial recognition algorithms—‘a majority of the industry.’ They found that most facial recognition algorithms exhibit bias. According to the researchers, facial recognition technologies falsely identified Black and Asian faces 10 to 100 times more often than they did white faces. The technologies also falsely identified women more than they did men—making Black women particularly vulnerable to algorithmic bias. Algorithms using U.S. law enforcement images falsely identified Native Americans more often than people from other demographics.”

Most of these issues lie with how technology is used. But what about those who build the technology? Couldn’t they program the bias out of the system?

There we have a problem. The thing about societal bias is that it is typically recognized by its victims, not by those who propagate it. And the culture of the tech industry is hardly gender-balanced or diverse. According to a report from the McKinsey Institute for Black Economic Mobility, experts in tech believe that, on the current trajectory, it would take 95 years for Black workers to reach an equitable level of private-sector paid employment.

Facebook, for example, barely moved a percentage point in its share of Black tech workers, from 3% in 2014 to 3.8% in 2020, but improved its hiring of women by 8% over those same six years. Only 4.3% of the company’s workforce is Hispanic. This essential whiteness of tech extends to the field of AI as well.

Yes, I’m a techno-optimist, but I realize that optimism must be placed in the people who build and use the technology. And because of that, we must try harder. We must do better. Technology alone isn’t the answer for a better, fairer world. We are.

2 comments about "The Biases Of AI: Our Devils Are In The Data".
  1. Ben B from Retired, July 19, 2022 at 10:06 p.m.

    Interesting article on AI and its biases. I knew there was bias; I didn't know it was this widespread, though. But then again, I'd never thought about it. I hope it will get fixed and do better in the future.

  2. John Grono from GAP Research, July 20, 2022 at 8:46 p.m.

    A great and very thought-provoking post, Gord.
