Google Search's Inherent AI Bias And What's Being Done To Correct It

AI models and their training sets, which fundamentally work in the same way, carry inherent bias. Many of them power search engines.

To make systemic change, tech companies like Amazon, Google, Microsoft, Facebook and others must rethink and retrain models that have absorbed biases that have existed for centuries.

“There’s a lot of similarities on how language works and how visual recognition works,” said Barak Turovsky, head of Google Translate. He joined Think About This hosts Shelly Palmer and Ross Martin to talk about Google AI and AI training models.

While Turovsky didn't address racial bias, he did say that all types of bias are inherent in AI. 

Thinking in a different way and questioning assumptions begin with language. The inherent bias in language surfaces when text is translated from English into another language, especially when the target language assigns gender to the query or its words. Russian and Spanish are examples of languages with gender-assigned words.
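
To make the problem concrete, here is a minimal sketch, not Google's implementation, of how a naive translation model inherits gender bias from its corpus. The words, Spanish forms and frequency counts below are invented for illustration:

```python
# A minimal sketch (not Google's implementation) of how a gender-neutral
# English word forces a gendered choice in Spanish. The frequency counts
# are invented; a real model learns them from its training corpus.
CORPUS_COUNTS = {
    "doctor": {"el doctor": 9200, "la doctora": 1400},
    "nurse": {"el enfermero": 800, "la enfermera": 7600},
}

def naive_translate(english_word: str) -> str:
    """Pick whichever gendered Spanish form was most frequent in the
    training data. This is where corpus bias becomes model bias: the
    majority form in historical text becomes the model's default."""
    forms = CORPUS_COUNTS[english_word]
    return max(forms, key=forms.get)

print(naive_translate("doctor"))  # 'el doctor'   (masculine default)
print(naive_translate("nurse"))   # 'la enfermera' (feminine default)
```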

Language is biased, he said. Societal and cultural patterns embedded in language over thousands of years have been picked up by AI, Turovsky said.

Google crawls the internet for publicly available data and information to train its AI models.

Some of it, like the Bible, dates back many centuries. All of that text forms a giant reservoir to pull from, and all of it is picked up by machines. But as language evolves, those changes have not filtered into the AI models as quickly as they have into everyday life.

Language is local. Google recognizes the daunting task and the huge societal responsibility of correcting racial, gender, sexual orientation and other types of bias. AI training models pick up all of these biases, along with others such as recency bias.

Turovsky pointed to information about COVID-19 as an example.
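
A toy illustration of recency bias, with invented document counts, shows how a crawl that over-samples recent pages lets a single topic dominate what a model sees:

```python
# A toy illustration of recency bias; the documents and sampling ratio
# are invented, but the skew works the same way at real scale.
from collections import Counter

docs_2015 = ["flu season vaccine news", "election results", "sports scores"]
docs_2020 = ["covid-19 cases rise", "covid-19 vaccine trial",
             "covid-19 lockdown news", "election results"]

# A crawl that over-samples recent pages sees mostly 2020 text.
recent_heavy_corpus = docs_2015 + docs_2020 * 5

term_counts = Counter(
    word for doc in recent_heavy_corpus for word in doc.split()
)

# "covid-19" dominates the counts, so the model's sense of words like
# "vaccine" or "news" skews toward the context it saw most recently.
print(term_counts.most_common(3))
```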

Defining the problem is the first step. Figuring out how to correct the bias is the second.

Google's first steps were to let the AI training data do its work, then to develop new gender rules and integrate them into the models, Turovsky said. But that is a slow fix, because language develops very slowly. Another way, rather than trying to change the past, is to give people more choices and control.

Usually, Google translates the query as asked. “Most of the time that’s what people want,” Turovsky said. “If this is a big risk gender query, we will give you two options to choose from.”
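
The two-option behavior Turovsky describes can be sketched in a few lines. The ambiguity set and translation table below are hypothetical stand-ins; the production system detects risky queries with learned models, not lookup tables:

```python
# A minimal sketch of showing both gendered forms for ambiguous queries.
# GENDER_AMBIGUOUS and TRANSLATIONS_ES are invented for illustration.
GENDER_AMBIGUOUS = {"my friend"}  # English phrases with no marked gender

TRANSLATIONS_ES = {
    "my friend": {"masculine": "mi amigo", "feminine": "mi amiga"},
    "the house": {"neutral": "la casa"},
}

def translate_query(query: str) -> list[str]:
    """Return a single translation normally, but both gendered forms
    when the source query is gender-ambiguous in the target language."""
    options = TRANSLATIONS_ES[query]
    if query in GENDER_AMBIGUOUS:
        return list(options.values())  # surface both; let the user choose
    return [next(iter(options.values()))]

print(translate_query("my friend"))  # ['mi amigo', 'mi amiga']
print(translate_query("the house"))  # ['la casa']
```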

The same bias applies to facial recognition. In the last few weeks, IBM, Microsoft and Amazon all admitted that facial recognition unfairly singles out people of color and that AI models aren't exactly perfect. Google stopped selling facial recognition services to law enforcement in 2018, and this year others followed.

Translate is one of Google's largest AI projects. About 20% of the people in the world speak English, but about 50% of the information on the internet is in English, Palmer estimates. “We often don’t understand the bias programmed into models,” he said.
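
Taken together, Palmer's two estimates imply English is over-represented online at roughly 2.5 times its share of speakers, a quick calculation:

```python
# Palmer's estimates: ~20% of people speak English, but ~50% of online
# content is in English. The ratio quantifies the over-representation.
english_speakers_share = 0.20
english_content_share = 0.50

overrepresentation = english_content_share / english_speakers_share
print(f"English over-representation online: {overrepresentation:.1f}x")  # 2.5x
```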
