Amazon plans to hire a small group of people to help monitor and identify future online threats.
The new hires will work within the Amazon Web Services (AWS) division, collaborating with outside researchers to develop expertise in this type of work, Reuters reported, citing sources familiar with the matter.
The plan aims to “take a proactive approach to determine what types of content violate its cloud service policies, such as rules against promoting violence, and enforce its removal,” according to Reuters.
Experts told Reuters this would turn Amazon into a powerful arbiter of content.
“Reuters’ reporting is wrong,” an AWS spokesperson wrote in an email to Search & Performance Marketing Daily. “AWS Trust & Safety has no plans to change its policies or processes, and the team has always existed.”
Last week, The Washington Post reported that the company had pulled the plug on a website hosted on Amazon Web Services that featured Islamic State propaganda celebrating the suicide bombing that killed an estimated 170 Afghans and 13 U.S. troops in Kabul.
Nida-e-Haqq, an Islamic State media group that distributes Islamist content in the Urdu language, had been using Amazon’s cloud-computing division to host its content, despite Amazon’s policies against working with terrorist groups.
Rita Katz, executive director of SITE Intelligence Group, which monitors online extremism, discovered the link with Amazon Web Services.
“It’s just mind-blowing that even after all these years, ISIS could still find a way to exploit a hosting company like Amazon,” Katz told The Seattle Times. “Of course, we should presume that ISIS will always be searching for ways to bypass security protocols, but this app isn’t even trying to stay low-key.”
The latest move is likely to renew debate about how much power tech companies should have to restrict free speech, and how to define the concept.
Amazon’s Acceptable Use Policy bars customers from, among other practices, using the cloud-computing service “to threaten, incite, promote, or actively encourage violence, terrorism, or other serious harm.”
Approaches to content issues -- such as determining when misinformation on a company's website reaches a certain scale -- are also being reviewed.
The new AWS team does not plan to search through the content that companies host on the cloud, but will look for future threats such as emerging extremist groups whose content could make it onto the AWS cloud, Reuters reports, citing an unspecified source.