Facebook said on Tuesday it removed 7 million posts in the second quarter for sharing false information about the novel coronavirus, including content that promoted fake preventative measures and exaggerated cures.
It released the data as part of its sixth Community Standards Enforcement Report, which it introduced in 2018 alongside more stringent content rules, responding to a backlash over its lax approach to policing material on its platforms.
The world’s biggest social network said it would invite proposals from experts this week to audit the metrics used in the report, beginning in 2021. It committed to the audit during a July ad boycott over hate speech practices.
The company removed about 22.5 million posts with hate speech on its flagship app in the second quarter, a dramatic increase from 9.6 million in the first quarter. It attributed the jump to improvements in detection technology.
It also deleted 8.7 million posts connected to “terrorist” organisations, up from 6.3 million in the prior period. It took down less material from “organised hate” groups: 4 million pieces of content, compared with 4.7 million in the first quarter.
The company does not disclose changes in the prevalence of hateful content on its platforms, which civil rights groups say makes reports on its removal less meaningful.
Facebook said it relied more heavily on automation to review content starting in April, as the COVID-19 pandemic left it with fewer human reviewers in its offices.
That resulted in less action against content related to self-harm and child sexual exploitation, executives said on a conference call.
“It’s graphic content that honestly at home it’s very hard for people to moderate, with people around them,” said Guy Rosen, Facebook’s vice president for integrity.
Facebook said it was expanding its hate speech policy to include “content depicting blackface, or stereotypes about Jewish people controlling the world.”