Facebook took down 21 million pieces of adult nudity in three months


Facebook also took down 837 million pieces of spam in the first three months of the year, "nearly 100% of which we found and flagged before anyone reported it," said Guy Rosen, Facebook's vice president of product management.

The world's largest social network published enforcement numbers for the first time on Wednesday, revealing millions of standards violations in the six months to March.

In 85.6 percent of the cases, Facebook detected the images before being alerted to them by users, said the report, which was issued the day after the company said about 200 apps had been suspended on its platform as part of an investigation into misuse of private user data. "It's partly that technology like artificial intelligence, while promising, is still years away from being effective for most bad content because context is so important," the report said. "While not always ideal, this combination helps us find and flag potentially violating content at scale before many people see or report it." But only 38 percent of hate speech was flagged by Facebook's systems before users reported it.
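Both figures describe the share of violating content that Facebook found and flagged on its own before any user report, out of all the content it took action on. The short sketch below only illustrates that arithmetic; the counts and the function name are invented for the example and are not figures or code from the report.

```python
# Illustrative only: how a "found before anyone reported it" share is computed.
# The counts below are invented; they are not figures from Facebook's report.

def proactive_share(flagged_by_facebook_first, total_actioned):
    """Percentage of actioned content flagged before any user report."""
    return 100 * flagged_by_facebook_first / total_actioned

# Example: if 856 of every 1,000 actioned posts were caught first by
# Facebook's own systems, the share is 85.6 percent.
print(f"{proactive_share(856, 1_000):.1f}%")  # -> 85.6%
```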

Hate speech is harder to police using automated methods, however, because racist or homophobic slurs are often quoted in posts by their targets or by activists. The Menlo Park, California-based company is continually taking down fake accounts; overall, it estimated that around 3 to 4 percent of active Facebook accounts during the first quarter were still fake.

Rosen said the company had substantially increased its efforts over the past 18 months to flag and remove inappropriate content. "These are the same metrics we're using internally to guide the work of the teams."

Luckily for Facebook, almost all of that content was scrubbed from the social network by its technology. "It's why we're investing heavily in more people and better technology to make Facebook safer for everyone," Rosen said. Recently, Facebook also released for the first time the internal rules about what stays up and what comes down, partly in response to questions about why certain people are banned even when they believe they did nothing wrong.


However, the enforcement data from Facebook isn't complete.

The posts that keep Facebook's reviewers busiest are those showing adult nudity or sexual activity, quite apart from child pornography, which is not covered by the report.

Facebook says AI has played an increasing role in flagging this content, but a photo with nudity may be porn or it may be art, and human eyes can usually tell the difference. Public summits on the company's content standards are expected later in the year in India, Singapore and the US.

Facebook uses computer algorithms and content moderators to catch problematic posts before they can attract views.

"In other words, of every 10,000 content views, an estimate of 22 to 27 contained graphic violence", the report said. The estimate is taken from a global sampling of all content in the first quarter, weighted by popularity of that content. Facebook doesn't yet have a metric for prevalence of other types of content.

"We aim to reduce violations to the point that our community doesn't regularly experience them," Rosen and vice president of data analytics Alex Schultz wrote in the report.
