More than 12m pieces of Covid-19 misinformation removed by Facebook

According to the latest figures from the social network, Facebook removed more than 12 million pieces of Covid-19 misinformation between March and October this year.

The company’s new Community Standards Enforcement Report showed that the millions of posts were removed for making misleading claims, such as promoting false preventive measures and exaggerated cures, that could lead to imminent physical harm.

During the same period, Facebook said it had placed warning labels on around 167 million pieces of Covid-19-related content, linking to articles by third-party fact-checkers that debunked the claims made.

And while Facebook said the pandemic continued to disrupt its content review workforce, some enforcement metrics have returned to pre-coronavirus levels.

The company attributed this to improvements in the artificial intelligence used to detect potentially harmful posts and to the expansion of its detection technologies to cover more languages.

For the period between July and September, Facebook said it had taken action on 19.2 million pieces of violent and graphic content, an increase of more than four million on the previous quarter.

In addition, the site took action on 12.4 million pieces of content related to child nudity and sexual exploitation, an increase of around three million over the previous reporting period.

During that time, 3.5 million pieces of bullying or harassment content were also removed, up from 2.4 million. On Instagram, action was taken against more than four million pieces of violent and graphic content, one million pieces of child nudity and sexual exploitation content, and 2.6 million posts related to bullying and harassment, an increase in each area.

The report added that Instagram took action against 1.3 million pieces of content related to suicide and self-harm, up from 277,400 in the previous quarter.

It also showed that Facebook took action against 22.1 million pieces of content classified as hate speech, 95% of which were proactively identified by Facebook and its technologies.

Guy Rosen, Facebook’s vice president of integrity, said: “As the Covid-19 pandemic continues to disrupt our content review workforce, we are seeing some enforcement metrics return to pre-pandemic levels.

“Our proactive detection rates for violating content are up from the second quarter across most policies, as our AI improved and our detection technologies expanded to more languages. Even with a reduced review capacity, we still prioritise the most sensitive content for review, which includes areas like suicide and self-harm and child nudity.”

Facebook and other social media companies have come under sustained scrutiny over their monitoring and removal of misinformation and harmful content, particularly this year during the pandemic and in the run-up to the US presidential election.

In the UK, online safety groups, campaigners and politicians are urging the government to bring its Online Harms legislation before parliament; its introduction has been postponed until next year.

The proposed bill would introduce stricter regulation of social media platforms, with large financial penalties and possibly even criminal liability for executives if sites fail to protect users from harmful content.

Facebook has previously said it would welcome more regulation of the sector.

Mr Rosen said Facebook would “continue improving our technology and enforcement efforts to remove harmful content from our platform and keep people safe while using our apps”.
