The Orwellian potential of a facial recognition-powered police state has rocked the tech industry and spurred at least one local government to publicly steer clear of the artificial intelligence
technology. Yet, while unintended bias in faulty facial recognition systems is a very real threat, the integration of poorly-designed AI into the mundane tools of business and government could create
a chasm between the haves and the have-nots of tomorrow’s AI-reliant world.
The backlash against facial recognition tools that can’t tell the difference between two Asian women or image recognition software that can’t distinguish between
black people and primates is growing. The drumbeat got louder when Orlando, Florida’s police
force decided to cease its trial of Amazon’s Rekognition software
amid protests from the ACLU, Amazon shareholders and others
concerned that such systems could result in racial profiling.
As an AI technologist who has taken great pains to ensure the systems I create do not enable unintentional favoritism or
inequities, I want to prevent something perhaps even more insidious and far-reaching than prejudiced policing machines: what I call the “AI Decision Divide.”
We’ve all heard of the Digital Divide, the socio-economic obstacles blocking universal access to the information and opportunities accelerated by the internet. Like faulty
facial recognition systems that fail to spot the difference between two distinct individuals, too many of the artificial intelligence systems developed today are trained on narrow, non-representative datasets that reflect only select segments of society rather than real, diverse and inclusive populations, places and ideas. The result is systems whose algorithms mimic the disparities already present in society, exacerbating the impending AI Decision Divide.
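To make the point concrete, here is a minimal sketch, in Python, of the kind of representation audit a team could run before training. The group labels, population shares and five-point threshold are purely illustrative assumptions rather than anyone’s actual methodology, but even a check this crude will flag a dataset that skews toward one segment of society.

```python
from collections import Counter

# Hypothetical training records; in practice these would come from the
# dataset the model is actually trained on.
training_records = [
    {"group": "group_a"}, {"group": "group_a"}, {"group": "group_a"},
    {"group": "group_b"},
    {"group": "group_c"},
]

# Assumed benchmark: each group's share of the population the deployed
# system will actually serve (illustrative numbers only).
population_share = {"group_a": 0.45, "group_b": 0.35, "group_c": 0.20}

counts = Counter(record["group"] for record in training_records)
total = sum(counts.values())

for group, target in population_share.items():
    observed = counts.get(group, 0) / total
    gap = observed - target
    status = "under-represented" if gap < -0.05 else "ok"
    print(f"{group}: dataset {observed:.0%} vs. population {target:.0%} -> {status}")
```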
The cracks of the AI Decision Divide are already forming. AI-based technologies are making automated decisions that determine whether women see ads for particular jobs or schools, what loan rates are offered to minorities, which job candidates make a shortlist and where businesses should build offices.
I know first-hand how challenging it is to assemble a truly-representative pool of study participants and test subjects to train
algorithmic systems. Years ago, I embarked on a large undertaking to build a behavior matrix classifying human responses, particularly responses to what people read. We couldn’t rely
solely on data from other engineers or highly-educated people with similar backgrounds to train the system. We needed people from different backgrounds with different types of experiences, because the
general population would interpret and react to the text they read in a variety of ways. It took us five years to achieve a representative sample for this behavior matrix.
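Purely as an illustration (and not code from that project), the proportional-quota arithmetic behind that kind of recruitment effort looks roughly like the sketch below; the strata names and target shares are hypothetical placeholders.

```python
# Hypothetical population strata and their shares; a real project would
# derive these from census or survey data, not hard-coded guesses.
strata_share = {
    "students": 0.20,
    "service_workers": 0.30,
    "office_workers": 0.35,
    "retirees": 0.15,
}

def recruitment_quotas(total_participants: int) -> dict[str, int]:
    """Allocate participant slots proportionally to each stratum's share."""
    return {name: round(total_participants * share)
            for name, share in strata_share.items()}

print(recruitment_quotas(500))
# {'students': 100, 'service_workers': 150, 'office_workers': 175, 'retirees': 75}
```

The hard part, of course, is not the arithmetic but actually finding and retaining participants who fill every quota, which is why the effort took years.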
Just think of all
the venture capital chasing after AI technologies, some built with little to no consideration for the social impact of the decisions made using these systems. Are the builders of these technologies
taking care to ensure the data informing them represents various socio-economic groups, or are they rushing to cobble together software that serves only to please the limited criteria of the investor
class?
Many of the people making decisions based on AI technologies fail to question the systems or how they’re built. As government policy makers, law enforcement and even corporate
marketers increase their unquestioning reliance on these systems to inform and automate their decisions, we must ensure that the AI is based on information representative of a broad spectrum of
society.
Humans often refer to the decisions they make day-to-day as intuitive or gut decisions, when they are actually subject to unwitting but inherent bias and ignorance. To develop AI
technologies that will advance and enhance human decision-making, we must acknowledge and understand these cognitive biases and train our systems in such a way that they genuinely improve the way we
naturally think.
Ultimately, algorithmic technologies will truly fulfill their promise of advancing human insight and decision-making only if the people developing them take great care to inform and train these systems with diversity and inclusion in mind.