Facebook has removed 7 million posts that shared misinformation related to COVID-19, but the social network still blames the virus for impeding its ability to do a better job of policing all types of content and removing more of it.
Overall, the social network took action on 22.5 million pieces of content in the second quarter of 2020 -- up from 9.6 million in the first quarter, according to the company’s Community Standards Enforcement Report.
“Due to the COVID-19 pandemic, we sent our content reviewers home in March to protect their health and safety and relied more heavily on our technology to help us review content,” Guy Rosen, Facebook vice president of integrity, wrote in a blog post on Tuesday. “We’ve since brought many reviewers back online from home and, where it is safe, a smaller number into the office.”
On Tuesday, the social network published the sixth edition of its Community Standards Enforcement Report, the quarterly update providing metrics on how Facebook enforced its policies from April 2020 through June 2020.
This report includes metrics across 12 policies on Facebook and 10 policies on Instagram.
Despite apologizing for not meeting internal standards, Facebook said improvements in technology helped the company find and remove much of the bad content. In the first quarter, the company expanded its automation technology to languages such as Spanish, Arabic and Indonesian and improved its English detection technology; in the second quarter, further improvements helped it take action on more content in English, Spanish and Burmese.
Between April and June 2020, Facebook took action on 8.7 million pieces of content related to terrorism and 4 million pieces related to organized hate.
The proactive detection rate for hate speech on Instagram rose by 39 points, from 45% to 84%, and the amount of content the company took action on increased from 808,900 in the first quarter of 2020 to 3.3 million in the second quarter.