
What Solutions Exist for a ‘Toxic’ Workplace for Content Moderators at Facebook?

Words by Sarah Hutcherson
New Activism

Facebook relies on humans, aided by artificial intelligence, to enforce its community standards and shield its 2.3 billion users from disturbing content. The roughly 15,000 contractors who watch videos of animal torture, beatings and other of humanity’s most horrific acts on the world’s largest digital community – so users don’t have to – are now bringing attention to the dark realities of their jobs.

Over the past year, these content moderators have taken a range of actions – including filing lawsuits and breaking their non-disclosure agreements to share their stories with the press – to highlight their grueling working conditions as they make the case for better pay and treatment.

In a recent feature published in The Washington Post, a group of content moderators working at an Accenture site in Austin posted a letter on an internal Facebook forum. The letter described poor morale in their workplace, driven by a sense that they could be easily replaced, the jobs’ low pay and strict production quotas.

How Facebook is reinventing this toxic work environment

Facebook claims that it currently provides pre-training for all content moderators so they know how to access wellness support and ongoing coaching. According to the company’s blog, it also has a team of clinical psychologists who design and evaluate resiliency programs for all content moderation centers.

The social network also says it plans to raise contractors’ wages globally – including increases from $15 to $18 per hour in Austin and from $20 to $22 per hour in Northern California’s Bay Area by the summer of 2020. Meanwhile, the company insists it is expanding on-site counseling to cover all hours of operation and developing a tool that lets content moderators blur disturbing images before reviewing the material.

“We have to run a very large-scale platform. We have to take care of the community. And that means we have to get a whole lot of work done,” Arun Chandra, the company’s vice president of scaled support, told The Verge. “But that is not at the expense of [contractors’] well-being.”

Moderation is critical to Facebook’s brand reputation

To secure its position as a leader in the technology space, Facebook must figure out how to foster a safe workplace for its content moderators so the company can deliver on its brand’s mission to build community. Because Facebook’s moderators put the brand’s values of globalization and connectivity into action, they are critical to its positioning as a company that brings communities from around the globe together.

“The fundamental reason for content moderation – its root reason for existing – goes quite simply to the issue of brand protection and liability mitigation for the platform,” explained Sarah T. Roberts, an assistant professor at UCLA who studies commercial content moderation. 

“It is our job at Facebook to help people make the greatest positive impact while mitigating areas where technology and social media can contribute to divisiveness and isolation,” wrote CEO Mark Zuckerberg in a 2017 blog post.

How can Facebook mitigate the harsh realities of humanity to preserve its legacy and uphold democracy without harming thousands of contractors’ mental health?

What are viable ways to reinvent a toxic work environment?

One way Facebook could reinvent the toxic moderation environment is by hiring these content moderators as full-time employees rather than contractors. Employees would have more affordable healthcare – and, perhaps most importantly, access to mental health professionals – and could take time off for any reason without losing pay. A counter-argument, however, is that scaling such a solution would require huge sums of cash that would otherwise be used to retain talent in other areas of Facebook’s business.

Another option the company could consider is continuing paid counseling sessions and other mental health services for a set period after moderators leave Facebook. Since some content moderators have reported PTSD symptoms and drug use, such long-term services would allow for a safer transition from their tenure at Facebook to their next jobs. In the end, workers would have access to longer-term, more well-rounded treatment.

Both solutions could come across as band-aid approaches: neither would stop thousands of contractors from watching traumatic videos day in and day out to create a safer Facebook community. Is it possible for Facebook to reinvent its strategy for fostering a safe global community in a way that eliminates the need for content moderation altogether, without closing its doors? What do you think?

Image credit: Glen Carrie/Unsplash

Sarah Hutcherson

As a recent Bard MBA Sustainability graduate, Sarah is excited to be a contributing writer to TriplePundit, demonstrating how environmentally and socially responsible business is synonymous with stronger returns and a more sustainable world. She is most intrigued by how to foster regenerative food systems, develop inclusive and democratic workplaces, and inspire responsible consumption.
