Last May, the White House warned in its Big Data report that personalized ads and content could reflect discrimination.
The report specifically mentions research by Latanya Sweeney, former chief technologist at the Federal Trade Commission. She reported in 2013 that Google searches for black-identifying names, like “DeShawn” and “Darnell,” were more likely to generate ads containing the word “arrest” than searches for white-identifying names, like “Geoffrey” and “Jill.”
“It’s clear that outcomes like these, by serving up different kinds of information to different groups, have the potential to cause real harm to individuals, whether they are pursuing a job, purchasing a home, or simply searching for information,” the White House says in its report.
Now, a new study by researchers including Carnegie Mellon University's Amit Datta suggests that a user's gender can influence whether they are served ads that could lead to higher-paying opportunities.
For the paper, the researchers set up fresh browser sessions, selected a gender in Google's Ad Settings, and then visited the top 100 employment-related sites as ranked by Alexa. After visiting the jobs sites, the researchers navigated to the Times of India, where they collected the ads that were displayed.
The researchers found that browsers identified as male were served ads for career-coaching services for positions paying more than $200,000 almost six times as often as browsers identified as female.
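The "almost six times" figure is simple arithmetic on the impression counts the study reports for that ad; the counts below (1,852 impressions for the male group versus 318 for the female group) are taken from the published results, reproduced here as an illustration.

```python
# Sanity-check of the "almost six times" claim, using the per-group
# impression counts reported in the study for the career-coaching ad.
male_impressions = 1852    # times the ad was shown to male-identified browsers
female_impressions = 318   # times the ad was shown to female-identified browsers

ratio = male_impressions / female_impressions
print(f"male/female impression ratio: {ratio:.1f}")
```

The ratio works out to roughly 5.8, which is where the "almost six times" characterization comes from.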
A Google spokesperson said in a statement that advertisers “can choose to target the audience they want to reach.”
“We provide transparency to users with 'Why This Ad' notices and Ad Settings, as well as the ability to opt out of interest-based ads,” the spokesperson added.
Datta and his co-authors say they don't fault Google for what they characterize as “discrimination” in ad targeting; they note that Google's policies allow advertisers to target demographically by gender.
“We cannot determine whether Google, the advertiser, or complex interactions among them and others caused the discrimination,” the paper says. “The discrimination might have resulted unintentionally from algorithms optimizing click-through rates or other metrics free of bigotry.”