Is Regulation Anti-Social? Key Takeaways From Zuckerberg’s ‘New Yorker’ Profile

By: Justin Joffe

September 12, 2018

It’s no secret that Facebook has had one crisis after another this year. Between reports of fake news, privacy concerns surrounding the Cambridge Analytica scandal, declining revenues and increasing hate speech, the ubiquitous social media platform still has a lot to answer for.

Two separate congressional hearings with Facebook later, many PR pros are absolutely on board with the platform being subject to some regulation. The sentiment holds that regulation will inevitably repair trust, thus bringing end users (read: customers) back to the platform, which is good for Facebook and social marketers alike.

This week, The New Yorker published a 14,000-word profile of Facebook founder/CEO Mark Zuckerberg on its website ahead of its publication in the next print issue. The gargantuan read challenges the widely held assumption that Zuckerberg is an unfeeling, emotionless automaton while explaining his belief in the distinction between feeling an emotion and acting on it through one's business.

While regulation continues to be proposed as the umbrella solution to several of Facebook’s problems this year, here are a few takeaways from the profile on what Zuckerberg thinks about external regulation, and what efforts the platform is taking to regulate itself:

Zuckerberg is opposed to government regulation, and government regulators don’t see it happening any time soon. While it’s no surprise that Zuckerberg is opposed to any and all talk of regulating Facebook, including breaking the platform up into smaller companies, he says that such a decision on the part of policymakers would be a huge business gaffe. He argues the market is extremely competitive and vulnerable to “ceding the field” to a country like China.

“I think that anything that we’re doing to constrain them will, first, have an impact on how successful we can be in other places,” says Zuckerberg in the profile. “I wouldn’t worry in the near term about Chinese companies or anyone else winning in the U.S., for the most part. But there are all these places where there are day-to-day more competitive situations—in Southeast Asia, across Europe, Latin America…”

The profile goes on to say that, while the F.T.C. may fine Facebook for future violations and make moves to block it from buying up competitors, there remains a general consensus in Washington that no formal regulation is on the horizon (despite the detailed efforts of some senators). A former F.T.C. commissioner told New Yorker profile writer Evan Osnos, “[I]n the United States you’re allowed to have a monopoly position, as long as you achieve it and maintain it without doing illegal things.”

Facebook is willing to self-regulate, but isn’t getting there fast enough. As evidenced in Facebook COO Sheryl Sandberg’s hearing with the Senate Intelligence Committee last week, Facebook is working diligently ahead of this year’s midterms to root out the hostile foreign powers that have weaponized its ad platform by micro-targeting users for malevolent purposes, most recently ahead of the 2016 presidential election.

The profile also points out Facebook’s role in enabling a less publicized but hugely tragic crisis: the ethnic cleansing of Myanmar’s Rohingya Muslim minority, beginning in late 2016, which was largely enabled and encouraged through hate speech on Facebook and subsequently deemed a genocide by the UN. Facebook hired several additional Burmese-language content reviewers after the news broke, but the damage was already done.

“I hate that we’re in this position where we are not moving as quickly as we would like,” said Zuckerberg when pressed about the Myanmar genocide. “Across the board, the solution to this is we need to move from what is fundamentally a reactive model to a model where we are using technical systems to flag things to a much larger number of people who speak all the native languages around the world and who can just capture much more of the content.”

This past April, in a call with investors, Zuckerberg said that it’s “easier to build an A.I. system to detect a nipple than what is hate speech.”

Zuckerberg, Sandberg and co. have offered a similar answer to the question of what is being done to stop Russians from interfering in our midterms. Facebook has already removed over 600 nefarious accounts, while also acknowledging that hostile foreign actors have become more nuanced and aggressive in their attacks. These accounts are no longer paying for ads in Russian rubles, for instance, and their IP addresses are becoming harder to trace.

Nevertheless, Zuckerberg’s words about moving away from a reactive model echo an evergreen PR adage when dealing with crises: “Be proactive, not reactive.” That doesn’t just mean hiring the right people, as Zuckerberg suggests in this story, but also making sure the company agrees on a threshold for escalation and removal alongside its guidelines.

Ben Spangler, head of SEO on the Performics Practices team at Spark Foundry, recently told us that Facebook would do well to emulate Google’s approach toward ranking and authority. While Facebook is building out preventative measures now, it still has a ways to go before those changes are baked into its algorithm.

How quickly that will happen is unclear. On the one hand, Zuckerberg says in the New Yorker story that he believes the fake news epidemic is greatly overblown. “I find the notion that people would only vote some way because they were tricked to be almost viscerally offensive,” he said. “Because it goes against the whole notion…that individuals are smart…and can make their own assessments about what direction they want their community to go in.”

On the other hand, the story notes that “after years of lobbying against requirements to disclose the sources of funding for political ads, [Facebook] announced that users would now be able to look up who paid for a political ad, whom the ad targeted, and which other ads the funders had run.”

The platform’s old model could use a bit more friction. “Zuckerberg used to rave about the virtues of ‘frictionless sharing,'” writes Osnos, “but these days Facebook is working on ‘imposing friction’ to slow the spread of disinformation.”

Facebook’s efforts in this arena include hiring Nathaniel Gleicher, the former director for cybersecurity policy on President Obama’s National Security Council, to blunt “information operations,” along with the removal of over 30 accounts running disinformation campaigns that were traced to Russia. This was quickly followed by the removal of more than 650 accounts, groups and Pages with links to Russia or Iran.

The 14,000-word story fails to address a glaring question that still lingers following Sandberg’s testimony before Congress last week. When pressed by Senator Ron Wyden of Oregon to comment on what Facebook is doing to safeguard its proprietary ad tool—specifically that tool’s ability to micro-target ads to specific user demographics—Sandberg had no answer prepared.

This moment was concerning to an already distrustful public, as the micro-targeting tool is precisely why Russia chose Facebook as its means of spreading false propaganda. Facebook’s advertising product is the platform’s key source of revenue—and admittedly a useful one for communications and marketing professionals.

What the platform will do to modify this process in a way that preserves profits and maintains its usefulness for marketers—while protecting end users—remains alarmingly unclear. And until Facebook has an answer for it, calls for government regulation will likely continue.

Follow Justin: @Joffaloff
