As Facebook continues to grapple with spam, hate speech, and other undesirable content, the company is shedding more light on just how much content it is taking down or flagging each day.
The company today published its first-ever Community Standards Enforcement Report, detailing what kind of action it took on content displaying graphic violence, adult nudity and sexual activity, terrorist propaganda, hate speech, and spam. One of the most noteworthy numbers: Facebook said that it took down 583 million fake accounts in the three months spanning Q1 2018, down from 694 million in Q4 2017. That doesn’t include what Facebook says are “millions” of fake accounts that the company catches before they can finish registering with Facebook.
The report comes just a few weeks after Facebook published for the first time detailed internal guidelines for how it enforces content takedowns.
The numbers give users a better sense of the sheer volume of fake accounts Facebook is dealing with. In recent months, the company has pledged to use facial recognition technology — the same tool it uses to suggest which Facebook friends to tag in photos — to catch fake accounts that might be using another person’s photo as their profile picture. But a recent report from the Washington Post found that Facebook’s facial recognition technology may not yet be very effective at catching fake accounts, as the tool mostly searches within a user’s friends network, or their friends of friends.
Facebook also gave a breakdown of how much other undesirable content it removed during Q1 2018, as well as how much of it was flagged by its systems versus reported by users:
- 21 million pieces of content depicting inappropriate adult nudity and sexual activity were taken down, 96 percent of which were first flagged by Facebook’s tools.
- 3.5 million pieces of violent content were taken down or slapped with a warning label, 86 percent of which were flagged by Facebook’s tools.
- 2.5 million pieces of hate speech were taken down, 38 percent of which were flagged by Facebook’s tools.
The numbers show that Facebook is still predominantly relying on other people to catch hate speech — something CEO Mark Zuckerberg has spoken about before, saying that it’s much harder to build an AI system that can detect hate speech than to build one that can detect a nipple, for instance. Facebook defines hate speech as a “direct attack on people based on protected characteristics — race, ethnicity, national origin, religious affiliation, sexual orientation, sex, gender, gender identity, and serious disability or disease.”
The problem is that, as Facebook’s VP of product management Guy Rosen wrote in the blog post announcing today’s report, AI systems are still years away from being reliable enough to catch most bad content on their own.
But hate speech is a problem for Facebook today, as the company’s struggle to stem the flow of fake news and content meant to encourage violence against Muslims in Myanmar has shown. And Facebook’s failure to properly catch hate speech could push users off the platform before the company develops an AI solution.
via VentureBeat https://venturebeat.com
May 15, 2018 at 06:28PM