Social Media's Role in Democracy: More Harmful Than Helpful?

Last week something extraordinary happened: Twitter briefly suspended the official account of the president of the United States, preventing him from posting until he deleted a tweet it said violated its rules. From merely hiding the president’s tweets, as it had done before, the company briefly stopped him from tweeting altogether.

Then, three days later, Yelp announced it would start formally flagging businesses accused of racism based solely on media reports.

Those two developments crystallized once again a key question that increasingly shadows our age: How can the growing power of social media companies coexist with the foundations of democracy? A democratic society rests upon an informed citizenry free to openly debate their shared future. The First Amendment guarantees this, enshrining both the right of the press to cover the unvarnished reality of daily events and the right of the public to consider all ideas, even those a majority of society might deem harmful. Pundits who laud social-media censorship would do well to remember that calls for the rights we hold dear today, including universal suffrage and civil rights, were once deemed the same kind of “harmful” speech that would likely be banned by social media today.

Social platforms were once viewed as a way to promote democracy to the world, granting unfettered freedom of expression and unfiltered access to information. Today they enforce ever-changing, opaque rules of “acceptable speech” and define what counts as “truth.” Even more troubling, the journalism world is increasingly embracing Silicon Valley’s new role as Ministry of Truth rather than condemning it.

Emboldened by the media’s support for muzzling a president many news outlets despise, Silicon Valley companies have ramped up their censorship of elected officials. It was just five months ago that Twitter first visibly flagged an official statement of the U.S. government as “misleading.” With such censoring becoming almost routine now, it becomes front page news only when a social platform doesn’t censor the president.

Yet Twitter’s suspension of President Trump’s account last week crossed a new line. What would have happened if a national emergency such as an earthquake, coordinated terrorist attack or cyberattack had struck during this period, with the president’s ability to communicate with the American public compromised? Such a disaster could have impaired Twitter’s capacity to quickly restore his access, and it is unclear whether the company would have done so even then.

The courts have ruled that “Twitter is not just an official channel of communication for the president; it is his most important channel of communication.” How is it, then, that a private company has the right to disable an official government communications channel from posting and Facebook has the right to delete an official government announcement? Unsurprisingly, neither company responded when this question was posed to them.

How do social media companies reconcile this censorship with the traditional norms of democratic societies? In 2018, a Facebook spokesperson offered only that “they’re definitely important questions, but I don’t have anything else to share right now.” Asked again in light of their increasing action against the president, neither Twitter nor Facebook responded. Nor did either company respond when asked what would stop them from banning users or politicians calling for them to be broken up as monopolies.

Not content merely to rule the digital world, social platforms have increasingly stretched their reach over the physical domain. This past April, Facebook banned the use of its platform to organize protests that did not require social distancing. It subsequently quietly relaxed this ban for the George Floyd protests and has remained silent when asked whether it still enforces those rules regarding other such demonstrations.

Yelp continued this trend last week with its announcement that it would begin appending a “Business Accused of Racist Behavior Alert” warning label to reviews. Rather than rely on the due process of police reports, forensic media analysis and court rulings, the company’s sole verification source will be news reporting. Given that media coverage itself can be misled by viral social campaigns, it is unclear how, precisely, the company will ensure its new effort is not manipulated. And given the #MeToo movement’s split over the sexual assault allegation against Joe Biden, it is further unclear how Yelp will adjudicate the inevitable double standards that will emerge and evolve.

Yelp’s reliance on news reports for “verification” points to the larger problem confronting social platforms today: How to arbitrate truth? Take the example of conflicting guidance from public health authorities regarding spread of the coronavirus. Asked whether a post recommending masks would have been removed back in February for violating then-current CDC guidelines, a Facebook spokesperson acknowledged the difficulty of determining “truth” amidst the fast-changing scientific understanding of COVID-19 and suggested that government should step in rather than having private companies decide what to delete and what to permit.

Beyond their more overt actions of banning users, deleting posts, and setting “acceptable speech” rules, there lurks an even more powerful force impacting American democracy: the algorithms that increasingly customize what we see online.

The media once served as a bulwark against the narrowing of our national understanding of key issues. While the coastal elites of legacy news outlets were always given outsized influence on the news cycle and national conversation, local journalists would spotlight the events and concerns of their own communities, ensuring their voices could be heard in the national debate. But with the collapse of small-town journalism, the increasingly dominant coastal media often dismiss those concerns as the uneducated ramblings of “flyover country.” Once-sacrosanct media ideals like “both sides” reporting are facing calls for elimination in order to stop promoting “nonsense” and “conspiracy” theories and Republicans’ lies.

In their heyday, broadcast and print journalism exposed us to a cross-section of the day’s events, broadening our horizons with the sometimes-serendipitous discovery of news and ideas we would not otherwise have encountered. In contrast, the algorithms that underlie our social platforms are designed to channel us towards content that provokes the emotional extremes most likely to engage us. Facebook’s own internal research concluded in 2018 that “our algorithms exploit the human brain’s attraction to divisiveness” and will feed users “more and more divisive content in an effort to gain user attention & increase time on the platform.”

This can lead to almost parallel worlds of information awareness. In 2014, for example, Facebook users famously enjoyed lighthearted videos of friends and celebrities dumping buckets of ice over their heads for the ALS Ice Bucket Challenge, perhaps blissfully unaware there was anything amiss in America. Twitter users, meanwhile, saw endless livestreams of social turmoil as police and protesters clashed in Ferguson, Mo. Invisible algorithms steered their respective user communities towards two starkly different views of our nation.

As news is increasingly consumed through these digital platforms, the media landscape has begun to drift back toward the narrow parallel views of America that haunted the party-paper model. Viewers of CNN and MSNBC could be forgiven for believing that Portland, Ore., has been at peace the last four months and that Seattle’s CHOP zone enjoyed a “summer of love.” Fox viewers saw video of violent looters rampaging nightly in the streets, while the news channel’s peers praised “peaceful demonstrations.” Their only overlap was a fixation on imagery of law enforcement.

How can a democracy function when half the nation turns on the television, opens a newspaper or reads social media and sees an entirely different America than the other half? How can we reach consensus on issues ranging from policing to pandemic response when we’re exposed to such different views of our nation?

In these partisan times, it can be all too easy to embrace Silicon Valley’s censorship as a necessary evil to curb the flow of hateful speech and misinformation. The problem is that, by definition, a democracy represents the collective will of an informed people, not the arbitrary decisions of unaccountable corporations to determine what is allowed and disallowed.

To see where this path inevitably takes us, ask your helpful Amazon Alexa device, “Is Amazon a monopoly?” -- and try running an ad campaign on Facebook questioning its answer. 

RealClear Media Fellow Kalev Leetaru is a senior fellow at the George Washington University Center for Cyber & Homeland Security. His past roles include fellow in residence at Georgetown University’s Edmund A. Walsh School of Foreign Service and member of the World Economic Forum’s Global Agenda Council on the Future of Government.
