Selective Rules Enforcement Stirs Facebook Transparency Debate
Last week two major stories broke about the ways in which Facebook silently bends or waives its rules to accommodate influential voices. In India, the company waived its hate speech rules to avoid upsetting the government, while here in the U.S. it has avoided enforcing its fact-checking policies against select conservative outlets to avoid political blowback. Silicon Valley’s selective rules enforcement reminds us just how critical it is to have more data about social media companies’ operations.
At the dawn of the web’s globalization, many in the academic and policy communities saw the nascent Internet as a tool of democratization. The web’s decentralized nature promoted anything-goes free speech and made censorship theoretically futile, leading to predictions that it would circumvent the centralized control of countries such as China. In reality, governments throughout the world have instead harnessed the web to create ultimate censorship and surveillance states. Far from undermining the power of government, the web entrenched it.
In India, the Wall Street Journal details how some powerful politicians’ posts calling for violence against minorities were deemed so dangerous that, under Facebook’s rules, they should have been banned from the platform. Instead, the company’s content moderators were quietly overruled by its public policy team, which didn’t want to risk being restricted in the heavily populated South Asian country.
Similarly, in the U.S. the company appears to have bent the rules of its much-touted fact-checking program, overruling the findings of its fact-checking partners by not taking punitive action against repeated or egregious violators. While its official written rules require suspensions, demonetization and algorithmic deprioritization for repeat offenders, Facebook appears to have consistently waived those rules for select conservative pages. In holding those pages to a different standard, the company cited everything from the amount of advertising dollars spent by PragerU to concerns that livestream video bloggers Diamond and Silk are “extremely sensitive and ha[ve] not hesitated going public about their concerns around alleged conservative bias on Facebook.”
In 2017 the company’s head of global policy intervened to prevent a news feed algorithmic change that would have reduced the visibility of many conservative pages.
With Facebook so willing to avoid enforcing its rules against conservatives while they hold the White House and Senate, what might happen if Joe Biden wins in November? With little need to avoid conservative ire, would the company begin a massive purge of conservatives from its platform for violating its acceptable speech and misinformation policies? Would GOP politicians suddenly find themselves removed en masse, placing them at a severe disadvantage in communicating with voters? Would Democrats now find themselves quietly exempted from the rules?
The Washington State Democratic Party learned last month just how devastating even a brief Facebook suspension can be. When the company claimed the state party had violated its ad disclosure rules, the resulting month-long suspension prevented the party from weighing in on the news, issuing rapid responses, fundraising and letting voters and supporters know about events.
Moreover, what is the point of Facebook’s fact-checking program if the company selectively enforces its misinformation rules, penalizing some transgressors while giving others a pass? The company defended its policy interventions to BuzzFeed, offering that while it “defer[s] to third-party fact-checkers on the rating that a piece of content receives,” it alone is “responsible for how we manage our internal system for repeat offenders” in deciding when to penalize users for repeated violations.
Interestingly, BuzzFeed notes that the company outsources some of its policy decisions in other countries to its fact-checking partners, suggesting their ratings may play a far greater role than previously known.
When Facebook and Twitter earlier this month forced President Trump to remove a tweet about COVID-19’s impact on children, both companies cited his statement as a violation of their health misinformation rules. Yet, when Elon Musk tweeted the same statement, Twitter defended it as allowed under its rules.
With Facebook’s Mark Zuckerberg discussing the setting of rules around state use-of-force announcements in the aftermath of George Floyd’s death, and Twitter’s Jack Dorsey embracing the company’s responsibility to “[not] hesitate” to “interven[e]” on posts, including presidential statements, that it believes are harmful to the nation’s future, it is clear that the companies now see themselves as shaping governments. Only a better understanding of their influence might slow them.