Facebook has a clear problem with hate speech and terrorism-related content, and this week's revelations just underscored how prevalent this kind of content is on the social media platform. Facebook has already shut down around 583 million accounts in the first three months of this year alone, but there's a lot left for Mark Zuckerberg's company to weed out.
For the first time ever, Facebook has come out with a quarterly Community Standards Enforcement Report, a summary of how the social media platform has handled violations of its Community Standards. In terms of sheer numbers, the amount of objectionable content floating around on Facebook, at least the portion it has caught, is simply staggering.
According to Facebook, the vast majority of its enforcement actions targeted spam and fake accounts. In the first three months of 2018 alone, Facebook had to remove around 837 million spam posts as well as 583 million fake accounts.
It may be a fraction of the rubbish that Facebook has taken out these past few months, but terrorism-related content on the platform still numbered in the millions, according to the report. Facebook detected and deleted 1.9 million posts linked to terrorist propaganda, on top of 3.4 million pieces of graphic violence and 2.5 million hate speech posts, as tallied up by the Guardian.
Facebook has also claimed to have developed algorithms that supposedly catch 99.5 percent of terrorism-related content before it even gets reported by users. Even so, the supposedly tiny fraction that escapes Facebook's filters, a mere 0.5 percent, is surprisingly easy to find online, according to a report by an internet safety organization.
Digital Citizens Alliance, a nonprofit dedicated to online safety and to combating counterfeit goods and illicit drug sales on the Internet, has just published a report called "Fool me Once... How Terrorists (Like) And Rely Upon the 'See No Evil, Hear No Evil' Business Model of Google, Facebook and Instagram."