Commentary

In Light of Buffalo Shooting, What Does YouTube's MRC Accreditation Really Mean?

Last week, YouTube was awarded content-level brand safety accreditation by the Media Rating Council (MRC) for the second year in a row. With over 2 billion global users, the popular video-sharing site is the first digital platform to receive this level of accreditation.

The MRC is sending a message to brands that it is safe and effective to advertise on YouTube, and that ads have a high probability of reaching preferred audiences without appearing beside harmful or offensive content.

In other words, YouTube is a trustworthy platform, with suitable general policies and a fair level of transparency embedded into its ad targeting process.

“Over the past two years, we’ve worked directly with advertisers and agencies to better understand their needs and develop a set of best practices,” YouTube said in a recent statement.

In many ways, this is a huge comeback for a company that experienced a massive backlash from advertisers in 2017. Walmart, PepsiCo, Starbucks, GM and other major brands pulled their advertising from the site amid an onslaught of offensive video content.

Videos espousing racist, homophobic, and anti-Semitic views were running ads from some of the biggest brand names, including Coca-Cola, Amazon, and Microsoft.

Thankfully, over the past five years, YouTube has improved its placement controls and added third-party verification options to strengthen brand safety.

Pivot to Saturday’s horrific, racially targeted mass shooting at a grocery store in Buffalo, New York.

The terrorist who shot 13 people (11 of whom were Black), killing 10, broadcast this heinous act of violence in real time via a Twitch livestream. The video was taken down within two minutes of the first gunshots, but copies of it were shared across the internet.

According to The Washington Post, one copy of the video was viewed 3 million times on Streamable, and a link to it amassed over 500 comments and 46,000 shares on Facebook before being removed roughly 10 hours later.

Because of incidents like this one, which have become a grim reality in the U.S. and in many places across the globe, major social platforms like Meta and YouTube have taken steps to counteract the spread of violent content.

New Zealand's Christchurch massacre in 2019, when a terrorist murdered 51 Muslims at two mosques while livestreaming on Facebook, spurred the Global Internet Forum to Counter Terrorism (GIFCT), a group founded by major tech companies in 2017, to develop protocols for responding efficiently to similar attacks.

Hours after the shooting in Buffalo, the GIFCT activated its highest-level alert, enabling its four founding platforms (Facebook, Microsoft, Twitter and YouTube), as well as Airbnb, Discord and Amazon sites, to block copies of the video.

But even with timely, systemic responses, it seems impossible to prevent harmful, violent, and offensive content from slipping through the cracks.

Bloomberg reported that portions of the video of Saturday's horrific attack were also uploaded to YouTube before being taken down.

Even though these were nonviolent sections of the terrorist's original Twitch livestream, footage of him driving to the grocery store, the incident raises a disturbing question.

Is even a major platform like YouTube, one that has hired thousands of moderators and is the only social platform to be awarded two consecutive years of MRC safety accreditation, unable to ensure that this type of disturbing content won't reach its users?
