Content moderation dilemma tests Big Tech’s reach
When Facebook said it would create a court-style board to rule on site content, the social media group’s expansive influence on society was again drawn into the spotlight.
The announcement in September, which came shortly after the company’s controversial plan to launch a global virtual currency, promises to let Facebook users challenge its decisions.
The move was widely seen as an attempt to appease critics and fend off charges of political bias and censorship.
Despite employing 15,000 content reviewers, Facebook has been accused of being too slow to remove illegal content, such as live footage from terrorist shootings, and of inconsistent decisions about acceptable content for its site.
“Lawmakers are clearly gunning for Facebook,” says Phillip Souta, a technology specialist at law firm Clifford Chance. “Facebook is saying: ‘We get it, we’re doing what we can about contentious content. Give us some time.’ ”
Like other large social media companies, Facebook is spending more time and money monitoring and censoring extreme messages, photos and videos published by its 2.45bn users.
Facebook’s “oversight board” is due to start deliberating next year, and is being presented as the most ambitious step yet by Big Tech to create an impartial system for content arbitration. The board — whose 40 members will be paid through a trust funded by the company but will not include Facebook employees — has been given the task of balancing freedom of speech with protection from unlawful content and hate speech.
Moderating content is problematic for all big social media platforms, says Sarah Roberts, assistant professor of information studies at University of California, Los Angeles. “They wish they could be out [of content moderation]. But that’s not going to happen and if anything, it’s going to increase.”
When a Facebook user disagrees with how the company enforces its rules, they will be able to appeal the decision to the oversight board.
Facebook says the board will be independent, and that its content decisions will be binding in most cases — the exception being potential violations of local law — even if Facebook disagrees. The board is likely to consider serious cases that have a wider “real-world impact”, Facebook says.
Potential cases could include disputes over Facebook’s decisions to remove content judged to be hate speech or bullying, to take down a page promoting terrorism, or to allow nudity on the site because the content is deemed newsworthy. “We make millions of content decisions every week,” wrote Mark Zuckerberg, Facebook founder and chief executive. “I don’t believe private companies like ours should be making so many important decisions about speech on our own.”
Experts have mixed views on the board. Dia Kayyali of Witness, a group that helps people use video and technology to protect human rights, says the board is a “serious attempt” to make decisions about Facebook content.
Tommaso Valletti, professor of economics at Imperial College Business School and former chief economist of the European Commission, says the board is a “PR stunt” that at best will “take the blame on Facebook’s behalf”.
Facebook should be overseen by governments and regulators, not by a structure that the company itself has chosen and whose functioning depends on Facebook’s continued existence, Mr Valletti says.
Wolfie Christl, a technology researcher and digital rights activist, is also unimpressed by the board. “Facebook may use it as a cheap tool to outsource responsibility and shield its business practices from real democratic oversight,” he says.
Alexander Brown, a reader in political and legal theory at the University of East Anglia who is writing a report about online hate speech for the Council of Europe, says the move is a positive one. Facebook’s commitment to abide by the board’s rulings is unprecedented and will create accountability and transparency in content moderation, he says.
However, Facebook’s decision that the board will not consider cases of potentially illegal content could limit its scope and usefulness, he says.
This distinction is a “missed opportunity” and “perplexing”, Mr Brown says.
Some experts say Facebook’s oversight board may be an attempt to pre-empt government regulation of content on social networks. In Europe, for example, Facebook is under pressure to remove illegal content faster or be fined.
Bernie Hogan, senior research fellow at the Oxford Internet Institute, says there could be a conflict of interest if a board decision is against Facebook’s commercial interests.
For example, if it ruled that Facebook was correct to publish news and images about anti-Beijing protests in Hong Kong, there could be a backlash against Facebook from Chinese authorities.
Facebook says a law firm is screening potential board members for “actual or perceived conflicts of interest” that could compromise their decisions.
However, some experts are reserving their judgment. “I’ll be convinced by Facebook’s [oversight] board when it makes a decision on content that slows Facebook’s growth,” Mr Hogan says.
As technology is becoming ever more intertwined with society, questions are being raised over its growing influence. In this report, we look at the impact of Facebook creating an oversight board; how tech is blurring the lines for healthcare providers; the rise of e-government; and religion’s unexpected embrace of apps