And despite the social network’s pledge to do a better job eliminating hate speech from its platforms, the evidence suggests that the Silicon Valley giant still has plenty of work ahead.
ProPublica, a leading nonprofit investigative journalism outlet, has made holding Facebook accountable one of its signature causes over the past several months. The news service’s reporters and readers have been aggressively flagging social media posts that they say violate the company’s terms and conditions – including community standards that ban violent threats against people based on their religious beliefs.
The problem, however, is that Facebook’s content reviewers enforce the company’s rules in a pattern that can charitably be described as wildly inconsistent.
For example, one reporter involved with ProPublica’s ongoing “algorithm injustice” investigation discussed a post, uploaded with a graphic image, describing “the only good Muslim is a (expletive deleted) dead one.” A reader had flagged the post as hate speech via Facebook’s reporting feature.
But according to ProPublica, Facebook’s automated response was “We looked over the photo, and though it doesn’t go against one of our specific Community Standards, we understand that it may still be offensive to you and others.”
Not exactly a step forward for artificial intelligence, is it? Nor does it speak well of Facebook’s quest to hire thousands of employees who are tasked with identifying and removing offensive and incendiary posts. Yet another post, which had no image and simply spelled out, “Death to the Muslims,” was removed relatively quickly.
ProPublica’s crowdsourced review of hate speech concluded last week that Facebook’s haphazard enforcement mechanism has become the norm. “Even when they do follow the rules, racist or sexist language may survive scrutiny because it is not sufficiently derogatory or violent to meet Facebook’s definition of hate speech,” wrote Julia Angwin, Ariana Tobin, and Madeleine Varner.
ProPublica contacted Facebook almost 50 times, asking the company to explain the logic of letting certain hate speech linger on the social network. In less than half the cases, Facebook admitted there was an error. But 19 times, the company defended its decision to let posts stand. In a few other cases, Facebook said content was flagged “incorrectly,” had already been deleted, or that there was not enough information to provide any response.
A more detailed listing of Facebook’s struggles containing hate speech shows that the company’s decision-making process is all over the map. One anti-Islam post, for example, was allowed to stand because “attacking the members of a religion is not acceptable, but attacking the religion itself is acceptable.”
Facebook also provided similar logic when defending decisions made to leave anti-Semitic posts on its platform.
Do not expect this problem to be solved anytime soon. After all, the sword cuts both ways: users who post controversial images on Facebook, in a push to fight back against sexism, racism or homophobia, have also seen their posts removed or have even been booted from the site. And Facebook’s byzantine reporting policies still accomplish little except generate confusion.
“For users who want to contest Facebook’s rulings, the company offers little recourse,” concluded ProPublica. “Users can provide feedback on decisions they don’t like, but there is no formal appeals process.”
Image credit: John S. Quarterman/Flickr