YouTube Recommendation Algorithm Violates Its Own Policies, Research Finds

YouTube’s video recommendation algorithm often recommends videos that violate the platform’s own content policies, according to a 10-month, crowdsourced investigation released today by Mozilla.

Mozilla’s Senior Manager of Advocacy Brandi Geurkink, who led the research, said misinformation was the dominant category of reported videos, followed by violent and graphic content, hate speech, and deceptive practices. “Specific videos include sexualized parodies of 'Toy Story',” she said. “There were racist and misogynistic videos.”

The in-depth study also found that people in non-English-speaking countries were 60% more likely to encounter videos they considered disturbing. Pandemic-related reports were especially prevalent in non-English languages: 14% of English-language regrettable videos were pandemic-related, compared with 36% of non-English regrettable videos.

When asked why people in non-English-speaking countries are far more likely to encounter videos they consider disturbing, Geurkink said one hypothesis is that the algorithms are better trained in English. “YouTube confirmed in their own statement that once they introduce policy changes to deal with this issue, they roll out changes first in the United States, followed by other English-language countries,” she said. “Then years later, they roll it out in the rest of the countries they operate.”

The study found that 71% of all videos that volunteers reported as regrettable (labeled "Regrets") were actively recommended by YouTube’s own algorithm. Recommended videos were 40% more likely to be regretted than videos that volunteers searched for themselves.

Around 9% of recommended "Regrets" have since been removed from YouTube. These videos had a collective 160 million views before they were removed.

In 43.6% of cases where Mozilla had data about the videos a volunteer watched before a "Regret," the recommendation was completely unrelated to those previous videos.

Mozilla is calling on YouTube to be more transparent about the recommendation algorithm.

Mozilla’s suggestions include that platforms publish frequent and thorough transparency reports that include information about their recommendation algorithms, give people the option to opt out of personalized recommendations, and create risk management systems devoted to recommendation AI.

Mozilla also calls on policymakers to enact laws that mandate AI system transparency and protect independent researchers.

A YouTube spokesperson said the platform’s recommendation system is intended to connect viewers with content they love. On any given day, more than 200 million videos are recommended on the home page.

“Over 80 billion pieces of information is used to help inform our systems, including survey responses from viewers on what they want to watch,” the spokesperson said. “We constantly work to improve the experience on YouTube, and over the past year alone, we’ve launched over 30 different changes to reduce recommendations of harmful content.”

The spokesperson also said these changes have reduced consumption of borderline content coming from YouTube’s recommendation engine, which now falls significantly below 1%.

YouTube earlier this year disclosed its violative view rate (VVR), the percentage of views on YouTube that come from content violating the platform’s policies.

The most recent VVR is 0.16%-0.18%, meaning that out of every 10,000 views on YouTube, 16 to 18 come from violative content. That figure is down by more than 70% compared with the same quarter of 2017, a drop YouTube attributes largely to its investments in machine learning.
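For readers who want to check the arithmetic, the sketch below simply converts the reported VVR range into views per 10,000; the figures are the ones quoted above, and the helper function name is illustrative only, not anything YouTube publishes.

```python
# Illustrative arithmetic only: converts a violative view rate (VVR) expressed as a
# percentage into "violative views per 10,000 total views", using the figures above.

def violative_views_per_10k(vvr_percent: float) -> float:
    """Convert a VVR percentage into violative views per 10,000 views."""
    return vvr_percent / 100 * 10_000

for vvr in (0.16, 0.18):  # the most recently reported range
    print(f"VVR {vvr}% -> {violative_views_per_10k(vvr):.0f} violative views per 10,000")
# Prints 16 and 18, matching the "16 to 18 out of every 10,000 views" figure above.
```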
