Can Facebook’s Release of Once-Secret Community Standards Outweigh the Data Breach Fallout?

By: Hayley Jennings

April 25, 2018

Facebook released its long-hidden Community Standards this week in another move to curry favor with the public after the Cambridge Analytica fiasco. In an April 24 post to the company’s newsroom, Monika Bickert, Facebook’s VP of global product management, explains how the platform’s content policies are developed and enforced. The post also introduces a new appeals process for posts that may have been flagged and removed by mistake.

Facebook had never before made its Community Standards public, even after The Guardian obtained and published details from leaked internal moderation documents in 2017. But Bickert asserts that the standards are being released now to “help people understand where we draw the line on nuanced issues,” and to “[make] it easier for everyone, including experts in different fields, to give us feedback so that we can improve the guidelines—and the decisions we make—over time.”

In the post, Bickert stresses that Facebook now has more than 7,500 content reviewers—a 40% increase over this time last year—and points to the company’s use of artificial intelligence to identify content that violates its standards. But with over 2 billion users on the platform and years of complaints about its handling of hateful and abusive content, the company still has considerable work to do to reassure users, stakeholders and government entities that the problem is under control. Even CEO Mark Zuckerberg has admitted, “we won’t prevent all mistakes or abuse, but we currently make too many errors enforcing our policies and preventing misuse of our tools.”

The Community Standards themselves are rooted in three principles, Facebook says: safety, voice and equity. Still, some may take issue with the wording of certain guidelines. For instance, the company explains that “at times we will allow content that might otherwise violate our standards if we feel that it is newsworthy, significant or important to the public interest.” Though Facebook makes clear that it will weigh the possibility of real-world harm before allowing such content, decisions about what is “newsworthy” or “significant” are arguably subjective and may leave disturbing content on the platform.

Though the standards are clearly written, Facebook may face some unintended consequences from their release. Andrew S. Ricci, principal of Riccon Strategic Communications, told PR News that “by being very clear about what’s explicitly allowed, it might create some opportunities for provocateurs and others to get right up to the line and stretch the standards as far as they can go…[Facebook is] going to have to figure out ways to enforce both the letter and the intent of the policies pretty well, which could involve some tough calls that it’ll get blowback for.”

Still, Ricci says, the release is “a good step forward for their efforts to demonstrate transparency,” but the social giant will need to keep building on it. He concludes that “in the near term, [Facebook has] got to figure out how to address the claims that its users and users’ data are products that can be bought and sold. If it doesn’t figure out how to get that storyline under control and get back to the basics, more troubles might be on the horizon.”

Follow Hayley: @that_hayley
