Facebook, Twitter, Microsoft, and YouTube agreed to follow European regulations announced Tuesday that require them to review hateful online content within 24 hours of being notified and to remove it, as part of a new code of conduct aimed at combating hate speech and terrorist propaganda across the European Union.
Calling terrorist propaganda "illegal hate speech," the EU has been pushing Web companies to step in and counter groups like ISIS. Some companies have developed their own rules for doing so, but the code of conduct announced today marks the first effort to unify policy on online hate speech across the EU.
In February, Google began showing anti-radicalization links to search users in Britain who queried extremist-related content, under a pilot program run through AdWords grants for nonprofits. At the time, Google had received more than 100,000 reports from the public flagging inappropriate content related to terrorist propaganda.
The AdWords grants, which let nonprofits target particular keywords, allowed eligible organizations to run counterterrorism campaigns through search engine advertising, triggered by searches for terms linked to religious extremism.
"The recent terror attacks have reminded us of the urgent need to address illegal online hate speech," said Věra Jourová, EU Commissioner for Justice, Consumers and Gender Equality, in a prepared statement. "Social media is unfortunately one of the tools that terrorist groups use to radicalise young people and racists use to spread violence and hatred," Jourová noted. "This agreement is an important step forward to ensure that the internet remains a place of free and democratic expression, where European values and laws are respected."