Facebook’s Mark Zuckerberg has handed responsibility for ultimate rulings on content moderation — including on posts by Donald Trump — to a body of his own making © FT Montage/Getty Images

For years, Facebook has grappled with the thorny question of who should be the ultimate “arbiter of truth” when it comes to moderating content on its near-ubiquitous social media platform.

Mark Zuckerberg, chief executive and ultimate decision maker, has agreed that it should not be him — semi-outsourcing the problem to a group of his own making, called the Facebook Oversight Board.

Comprising 20 journalists, academics and politicians, this board is tasked with issuing judgments on the most high-profile content moderation cases. Facebook casts it as an independent, almost quasi-judicial body, or “supreme court” for speech. 

But the effort is laden with complexities, as the board’s recent decision to uphold a ban on former US president Donald Trump using Facebook has shown.

Some of the main points of contention have been whether the board is independent enough, and whether its scope is broad enough.

It is currently funded — through a $130m trust — by Facebook. And it has binding authority over a very narrow type of case: whether a removed piece of content should be reinstated or an offensive post should come down, and whether users should remain banned. It only hears a handful of these a year.

“It’s still looking at that very narrow slice of what content moderation is, namely how Facebook treats individual posts,” says Evelyn Douek, a lecturer at Harvard Law School. “It doesn’t look at things like groups, pages, down-ranking decisions, how the news feed works in terms of prioritisation, how Facebook treats entire accounts.”

Facebook’s fiercest critics believe this renders the board a toothless “sham”, and that the company is merely trying to prove a self-regulation model works in order to stave off any legislation that could actually hurt its business.

“It’s a made-up corporate PR tool,” says Jesse Lehrich, co-founder of the campaign group Accountable Tech. “I’m so frustrated by the notion that this thing that was created by Facebook — for Facebook to shirk their own responsibility — is being treated like a ‘supreme court’.”

Some of these fears were borne out in the board’s ruling this month on whether Trump should remain banned from the platform for violence-inciting speech that helped fuel the Capitol riots in January.

Trump supporters protest outside the US Capitol on January 6 © Alex Edelman/AFP via Getty Images

The board upheld the suspension but pushed the decision on Trump’s ultimate fate — whether the ban should be permanent — straight back to Facebook.

In an admission of the limitations of its own clout — and, perhaps, a snipe at Facebook — the board also noted that the company had refused to answer seven of the board’s requests for more information.

These included, for example, how Facebook’s news feed had affected how many people saw Trump’s posts. The board said Facebook cited numerous reasons for its refusals, but that these were vague and light on detail.

“If this was a court, you would have filings to that effect,” says Nate Persily, a professor at Stanford Law School. He notes that a legal case would be more transparent: the board would have to publish exactly what it was requesting from Facebook, which, in turn, would have to lay out precisely why it was not handing over the information.

Whether the board’s mission is ideologically biased, and whether its principles are clear, are other questions raised by critics.

In its charter, Facebook outlines international human rights law as one of its guiding principles, and several experts in the field sit on the board. However, some experts argue that this jars: international human rights law is designed to govern states, not private companies. 

To some, this approach lends weight to the argument that Facebook is, in effect, a public utility, or a quasi-world government with ultimate power over free speech. But if a government were to ban nudity, for example, as Facebook does in its community guidelines, that would be at odds with human rights law.

“The problem is it buys into the idea that Facebook is a government and the news feed is a public square,” says Persily. “International human rights are not what the Facebook community standards are about. You have to think about what kind of rules can be applied at scale in very rapid decision-making contexts.”

In any case, under the First Amendment, platforms have almost complete discretion to decide what speech they will host, and an individual does not have a right to use their services, according to Douek.

This leaves the board breaking new ground. But should a Facebook-backed entity really be the body to do this? Could a central regulatory body be the better alternative?

Emily Bell, director of the Tow Center for Digital Journalism at Columbia University, advocates for an agency that can look across the tech platforms and exercise real powers — for example, to audit the companies and their secretive algorithms. “That has to be a separate regulator with its power coming from legislation, not Zuckerberg’s cheque book,” she says. “I don’t want the Federal Aviation Administration overseen by Boeing.”

But this poses the question: should each nation have such a body, leaving Facebook with fragmented rules from country to country? And what if a country takes a repressive, censoring stance on speech? Facebook has already been forced to accede to requests from authorities in India, Thailand and Vietnam to take down comments posted by dissidents.

A supranational body would conceivably work, but this too would have to decide whether to come down more on the side of free speech or human rights. 

Many experts argue that the board is the least-worst option — a stance that Facebook itself appears to have taken.  

“It is not a perfect answer, but it’s the best answer that we can come up with in an imperfect world where there isn’t clear government or public regulation [about] how we should deal with issues like this,” said Nick Clegg, Facebook’s vice-president of global affairs and communications, at the FT Global Boardroom conference this month.

He called it the “germ of an idea” that might later be “taken up in statutory regulation”, or expanded, with other social media companies coming on board. 

Casey Mattox, senior fellow at the Charles Koch Institute, argues that a self-regulatory body such as the board protects Facebook from politicians “trying to impose their own partisan will on the platforms”. It also guards against Facebook becoming too US-centric in its approach to moderation given its global operations. 

There are alternative experiments in the offing, though, that could yet bear fruit. Among the most thought-provoking are proposals from Twitter chief executive Jack Dorsey to solve the free speech dilemma with more technology.  

Dorsey has suggested offering users more “algorithmic choice”: do I want my feed to run in reverse chronological order, show the most popular posts, or automatically filter out certain types of content?

This approach was flagged by Dr Stephen Wolfram, founder and chief executive officer of Wolfram Research, who has suggested that Big Tech could “open up the pipeline, and create a market in which users can make choices for themselves” — essentially letting you, the user, pick your moderation algorithm from a brand you trust. 

Dorsey has now set up a project called Bluesky that is looking into the prospect of making content moderation fully decentralised — an open-source model where users have potentially endless choices. “There are opportunities for more user choice to drive less of a need for [an] external board like this,” says Mattox. 
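To make the idea concrete, here is a minimal sketch of what user-selectable feed algorithms could look like. It is purely illustrative and assumes nothing about Twitter’s or Bluesky’s actual systems: the Post fields, the ranking functions and the FEED_ALGORITHMS registry are hypothetical stand-ins for the kind of pluggable ranking and label-based filtering described above.

```python
# Illustrative sketch only -- hypothetical data model and ranking functions,
# not Twitter's or Bluesky's actual implementation.
from dataclasses import dataclass
from datetime import datetime
from typing import Callable, List


@dataclass
class Post:
    author: str
    text: str
    posted_at: datetime
    likes: int
    labels: List[str]  # e.g. tags applied by a moderation provider


# Each "algorithm" is just a function from a list of posts to an ordered list.
def reverse_chronological(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p.posted_at, reverse=True)


def most_popular(posts: List[Post]) -> List[Post]:
    return sorted(posts, key=lambda p: p.likes, reverse=True)


def filter_labels(blocked: List[str]) -> Callable[[List[Post]], List[Post]]:
    # Returns a feed algorithm that drops posts carrying any blocked label,
    # then falls back to reverse-chronological ordering.
    def algo(posts: List[Post]) -> List[Post]:
        kept = [p for p in posts if not set(p.labels) & set(blocked)]
        return reverse_chronological(kept)
    return algo


# A user picks from a "market" of feed algorithms rather than accepting one default.
FEED_ALGORITHMS = {
    "latest": reverse_chronological,
    "popular": most_popular,
    "family_friendly": filter_labels(["graphic_violence", "nudity"]),
}


def build_feed(posts: List[Post], choice: str) -> List[Post]:
    return FEED_ALGORITHMS[choice](posts)
```

In a fully decentralised model of the sort Bluesky is exploring, the registry of algorithms would not be fixed by the platform at all; third parties could publish their own ranking and labelling services for users to subscribe to.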

But these ideas are nascent. And they still do not quite solve the issue of where the line should be drawn for the most egregious content and who exactly should decide on that.

“Algorithmic answers can create difficult edge cases and you’ll always need to resolve things on the basis of context and circumstances,” says Bell. At the end of the day, the buck still stops with a human.
