Time and Again, Social Media Giants Get Content Moderation Wrong: Silencing Speech about Al-Aqsa Mosque is Just the Latest Example
Since May 7, Al-Aqsa, one of the holiest sites for Muslims, and the neighborhood of Sheikh Jarrah in Jerusalem have been the sites of violent attacks against Palestinians, many of whom had come to the mosque to worship during Ramadan. During this violence, people took to Facebook and Instagram to post about what was happening to Palestinians, using hashtags that referenced the mosque (#AlAqsa). But Instagram, owned by Facebook, removed many of the posts through its content-filtering system because it inaccurately identified the hashtag as referencing a terrorist organization. While this may have been an error, though a demeaning and culturally ignorant one, its impact on users' expression and the flow of information cannot be ignored. Palestinians and their supporters were silenced by one of the most powerful communications platforms in human history at a critical moment.
This wasn't an isolated incident. In today's world, a small handful of private corporations make some of the most important decisions regarding our online speech. Companies like Facebook, Twitter, and YouTube set rules for what kinds of content they'll allow people to publish, defining what constitutes "hate speech," "praise of terrorism," and "fake news." And they rely on algorithms to automatically flag certain words or images that appear to cross the lines. Once posts are removed or accounts are suspended, users, particularly those who don't make headlines and cannot access backchannels, have too little recourse to appeal or get reinstated.
The major social media companies often get content moderation wrong, both because their rules are vague and sweeping, and because they make mistakes when applying those rules, often through blunt automated detection systems. Perfect content moderation may be impossible, but the major platforms can do better. They should give users more control and respond to user experiences and reports rather than relying so heavily on automated detection. They should also provide transparency, clarity, and appeals processes to regular users, not just those who make headlines or have a personal connection at the company.
Here's a rundown of some recent examples of content moderation gone wrong.
What Qualifies as Praise of "Terrorist" Groups?
Instagram's ban on #AlAqsa is not the first time that the social media giants have misapplied their rules regarding praise or support of "terrorist" content. Over the summer, dozens of Tunisian, Syrian, and Palestinian activists and journalists covering human rights abuses and civilian airstrikes reported that Facebook had removed their posts and accounts pursuant to its policy against "praise, support, or representation of" terrorist groups. Facebook deleted at least 52 accounts belonging to Palestinian journalists and activists in a single day in May, and more than 60 belonging to Tunisian journalists and activists that same month.
Relatedly, in October 2020, Zoom, Facebook, and YouTube all refused to host San Francisco State University's roundtable on Palestinian rights, "Whose Narratives? Gender, Justice and Resistance," featuring Leila Khaled, a member of the Popular Front for the Liberation of Palestine. The companies decided to censor the roundtable after it became a target of a coordinated campaign by pro-Israel groups that disagree with Khaled's political views. In this instance, the companies pointed to anti-terrorism laws, rather than their own policies, as justification.
But these decisions, too, highlight platforms' role in curtailing speech in an increasingly online world, not to mention problems with the underlying material support laws. One fundamental problem is that neither governments nor the social media giants have an agreed-upon, transparent definition for terms like "terrorism," "violent extremism," or "extremism," let alone "support" for them. As a result, rules regulating such content can be highly subjective and open the door to biased enforcement.
"Hate" Against Whom?
For years, Facebook moderators have treated the speech of women and people of color, including specifically when they describe their experiences with sexual harassment and racism, differently than that of men and white people, pursuant to the company's community standards regarding hateful speech.
In 2017, when Black women and white people posted the exact same content, Facebook removed the Black women's posts while leaving the identical posts from white users untouched.
Notwithstanding tweaks to its community standards and algorithms, the problem persists. This year, for example, Facebook removed posts in groups created by users as spaces to vent about sexism and racism, citing its rules against "hate speech." Meanwhile, "posts disparaging or even threatening women" stayed up. The company banned phrases like "men are trash," "men are scum," and even "I dislike men," but posts like "women are trash" and "women are scum" were not removed, even when users reported them.
Similarly, in June 2020, at the height of protests against the police murder of George Floyd, Facebook's automated system removed derogatory terms aimed at white people more often than slurs against Jewish, Black, and transgender individuals. Even after Facebook attempted to tweak its algorithmic enforcement to address this, Instagram still removed a post calling on others to "thank a Black woman for saving our country," pursuant to the company's "hate speech" guidelines.
As news outlets have reported, these policies and their enforcement have forced users to avoid certain words altogether, writing "m3n" instead of "men" and "wipipo" instead of "white."
What is "Sexually Suggestive" and What is Socially Acceptable?
Similar problems have arisen with enforcement of the companies' policies regarding "sexually suggestive" content and posts involving "sexual solicitation."
In 2019, users protested after Instagram repeatedly took down a topless photo of Nyome Nicholas-Williams, a fat Black woman, in which her arms covered her breasts. These users noted that nude images of skinny white women were not subjected to the same treatment and were less likely to be considered inherently sexual.
More recently, Facebook rejected an ad from Unsung Lilly, an independent band whose members are a same-sex couple, as "sexually explicit" because it pictured the two women with their foreheads touching. As an experiment, Unsung Lilly uploaded the same ad with two different photos, one a "nonromantic" photo of themselves and the other of a heterosexual couple touching foreheads. Both were approved by Facebook.
Along with queer users, sex workers, certified sex educators, and sexual wellness brands are also routinely censored under the companies' rules regarding "sexual solicitation." These users report that posts including "flagged words, like 'sex' and 'clitoris,' have been removed from Instagram's search function."
Censorship Decisions are Arbitrary
Social media giants silence speech in arbitrary ways. For example, Facebook banned a recent college graduate for three days after he posted content deeming America the "land of ignorance and greed." The company also drew seemingly random lines regarding absentee voting, poll-watching, and COVID-19 during the 2020 presidential election, removing posts such as "If you vote by mail, you will get Covid!" and "My friends and I will be doing our own monitoring of the polls to make sure only the right people vote," while permitting "If you vote by mail, be careful, you might catch Covid-19!" and "I heard people are disrupting going to the polls today. I'm not going to the polling station."
A Path Forward
These examples highlight ongoing problems with the social media giants' content moderation policies and practices: a lack of clarity in community standards, policies that cover too much speech, a disconnect between automated systems and actual user reports, and insufficient access to appeals. These problems have significant impacts in the offline world, altering discourse and limiting access to potentially critical information.
For these reasons, if they are to err, the major social media companies must err on the side of enabling expression. They should establish content policies that are viewpoint neutral and that favor user control over censorship. And they should ensure that any expansion of content policies comes with a commensurate expansion of due process protections (clear rules, transparent processes, and the option of appeals) for all users. Because of the scale at which these platforms operate, errors are inevitable, but there can be fewer of them, and those that still occur need not be permanent.