
As part of a mammoth effort to police a community of nearly 2 billion
people, Facebook further outlined its “counterterrorism” strategy on Thursday.
“There’s no place on Facebook for terrorism,” Monika Bickert and Brian Fishman --
director of global policy management and counterterrorism policy manager at Facebook -- note in a new blog post.
Of course, the social giant reviews reports of potentially terrorist posts and
alerts authorities whenever a threat is deemed credible. To monitor its network more effectively, however, Facebook is increasingly relying on artificial intelligence to identify threats
the moment they appear.
“Although our use of AI against terrorism is fairly recent, it’s already changing the ways we keep potential terrorist propaganda and accounts off
Facebook,” according to Bickert and Fishman.
Bickert and Fishman say they are applying their most cutting-edge techniques to terrorist content connected to ISIS, Al Qaeda and
their affiliates.
Specific AI-aided techniques include “image matching,” which can identify terrorist-related photos and videos the instant they are posted.
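Facebook has not published how its image matching works, but the general idea resembles perceptual hashing: hash each known banned image once, then compare new uploads by how many hash bits differ. The toy "average hash" below, with invented tiny grayscale images, is only a sketch of that idea, not Facebook's actual system.

```python
# Minimal perceptual-hashing sketch: a near-duplicate re-upload of a banned
# image produces a hash close (in Hamming distance) to the stored one.
# Images here are hypothetical 2x2 grayscale matrices.

def average_hash(pixels):
    """Return a bit string: '1' where a pixel is brighter than the mean."""
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    return ''.join('1' if p > mean else '0' for p in flat)

def hamming(a, b):
    """Count differing bits between two equal-length hash strings."""
    return sum(x != y for x, y in zip(a, b))

known_banned = [[10, 200], [30, 220]]   # hashed once, stored in a match database
upload       = [[12, 198], [28, 225]]   # slightly altered re-upload
unrelated    = [[255, 255], [0, 0]]     # benign image

banned_hash = average_hash(known_banned)
print(hamming(banned_hash, average_hash(upload)))     # 0: flagged as a match
print(hamming(banned_hash, average_hash(unrelated)))  # 2: no match
```

The design point is that hashing tolerates small edits (recompression, brightness shifts) that would defeat an exact byte-for-byte comparison.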
The tech titan has
also begun to experiment with using AI to understand text that might be advocating for terrorism. Because terrorists often operate in clusters, Facebook is also employing algorithms to identify
related material connected to Pages, groups, posts or profiles that have already been linked to terror activity.
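The "clusters" approach amounts to graph traversal: starting from Pages or profiles already linked to terror activity, walk their connections a few hops out to surface related material for review. The sketch below uses an invented toy graph and node names; it illustrates the general technique, not Facebook's implementation.

```python
from collections import deque

# Hypothetical connection graph: edges might represent shared admins,
# group membership, or cross-posting between entities.
graph = {
    "known_bad_page": ["group_a", "profile_x"],
    "group_a": ["profile_y"],
    "profile_x": [],
    "profile_y": ["group_b"],
    "group_b": [],
    "unrelated_page": [],
}

def flag_related(seeds, max_hops=2):
    """Return entities within max_hops of any already-flagged seed."""
    flagged, seen = set(), set(seeds)
    queue = deque((s, 0) for s in seeds)
    while queue:
        node, hops = queue.popleft()
        if hops == max_hops:
            continue  # stop expanding past the hop limit
        for nbr in graph.get(node, []):
            if nbr not in seen:
                seen.add(nbr)
                flagged.add(nbr)
                queue.append((nbr, hops + 1))
    return flagged

print(sorted(flag_related({"known_bad_page"})))
# ['group_a', 'profile_x', 'profile_y'] -- group_b is three hops out
```

In practice such flags would feed human review rather than automatic removal, since proximity alone is a weak signal.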
Facebook also claims to have improved its ability to detect new fake accounts
created by repeat offenders. This has helped the network reduce how long recidivist accounts remain online.
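Detecting a banned user's fresh account typically means fingerprinting signals from the original account and scoring new signups against them. The signal names and threshold below are invented for illustration; Facebook has not disclosed which signals it uses.

```python
# Hypothetical recidivism check: compare a new signup's signals against
# fingerprints stored from previously banned accounts.

banned_fingerprints = [
    {"device_id": "dev-42", "ip_prefix": "203.0.113", "email_domain": "mail.example"},
]

def recidivism_score(signup, fingerprints):
    """Max fraction of known-bad signals the signup shares with any fingerprint."""
    best = 0.0
    for fp in fingerprints:
        shared = sum(signup.get(k) == v for k, v in fp.items())
        best = max(best, shared / len(fp))
    return best

new_signup = {"device_id": "dev-42", "ip_prefix": "203.0.113",
              "email_domain": "other.example"}
score = recidivism_score(new_signup, banned_fingerprints)
print(score)  # about 0.67 -- high enough to queue for review
```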
Facebook is also experimenting with systems to catch bad
actors across its various properties, including WhatsApp and Instagram. “This work is never finished because it is adversarial, and the terrorists are continuously evolving
their methods, too,” Bickert and Fishman concede.
“We’re constantly identifying new ways that terrorist actors try to circumvent our systems … and we update our
tactics accordingly.”
The new outline follows a nearly 6,000-word manifesto published earlier this year by Facebook founder Mark Zuckerberg, in which he addressed everything from “promoting peace and
understanding” to “ending terrorism, fighting climate change, and preventing pandemics.”
Facing growing criticism for its failure to stop the spread of offensive and
otherwise inappropriate video, Facebook promised in May to hire another 3,000 human content monitors.
As of May, Facebook said its community operations team already consisted of about 4,500
monitors.
That same month, the Guardian learned Facebook’s content guidelines were spread across more than 100 training manuals, spreadsheets
and flowcharts. Responding to these revelations about its content policy, Facebook admitted
that watching over its massive network is difficult.