Google already flags and removes videos endorsing terrorism on YouTube, but on Sunday Kent Walker, the company's general counsel, published a post, which also ran as an op-ed, outlining four steps Google is taking to fight violent extremism online.
As the Cannes Lions Festival of Creativity gets underway, with VidCon following later this week in Anaheim, California, Google has pledged better detection of extremist content, faster review times, and expanded counter-radicalization work.
The company plans to block such content through a combination of technology and human review, increasing the number of "flaggers" in its program who identify inappropriate content.
"We are working with government, law enforcement and civil society groups to tackle the problem of violent extremism online," Walker wrote in a blog post. "There should be no place for terrorist content on our services."
Walker also explained that Google's engineers have developed image-matching technology to prevent re-uploads of known terrorist content, and that the company has invested in systems that use content-based signals to identify new videos for removal.
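To illustrate the re-upload-blocking idea, here is a minimal sketch. It is a hypothetical stand-in, not Google's actual system: the real service reportedly uses robust perceptual matching that tolerates edits to a video, while this sketch fingerprints content with an exact cryptographic hash purely to show the flag-then-block flow. The `ReuploadFilter` class and its method names are invented for this example.

```python
import hashlib


class ReuploadFilter:
    """Hypothetical sketch of re-upload blocking.

    Google's real system uses perceptual image matching; this
    stand-in uses exact SHA-256 fingerprints only to illustrate
    the idea of blocking previously flagged content.
    """

    def __init__(self) -> None:
        self.known_hashes: set[str] = set()

    def flag(self, content: bytes) -> None:
        # Record a fingerprint of content that reviewers removed.
        self.known_hashes.add(hashlib.sha256(content).hexdigest())

    def allow_upload(self, content: bytes) -> bool:
        # Reject any upload whose fingerprint matches flagged content.
        return hashlib.sha256(content).hexdigest() not in self.known_hashes
```

In practice an exact hash is trivially defeated by re-encoding the file, which is why production systems match on content-derived signals rather than raw bytes; the sketch only captures the shared-blocklist mechanic.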
The news comes days after Facebook announced a similar effort, saying it would block extremist content on its network by strengthening its automated and human flagging and removal processes. Facebook also plans to develop data-sharing technology.
That technology will keep images and videos that have previously been flagged from being uploaded again.
Both companies have recently been criticized for not doing enough. And both will have a major presence at VidCon, the industry's biggest online-video convention, when it gets underway later this week.
In a separate post, Facebook said it would share its own thinking on challenging issues, including the removal of controversial content, how to use data without undermining consumer trust, and difficult questions such as what happens to a person's online identity when they die.