Privacy Rules Won't Fix Real Problem in Facebook Scandal

COMMENTARY
The Facebook-Cambridge Analytica revelations have renewed focus on privacy rules, although investors don't seem too concerned, judging by the nearly $50 billion increase in Facebook's market value after the company announced its first-quarter earnings. What should concern us all, however, is not Wall Street's view of privacy rules but the fact that Congress and so many others believe privacy is the key policy area to focus on in response to the incident.

By its own admission, Facebook did not show sufficient interest in policing the privacy standards it promised its users. But the policy responses generally being considered, chiefly new privacy rules modeled on the European Union's stricter standards, are not directly relevant to the issue at hand: they would have done little, if anything, to prevent the damage the Cambridge Analytica incident helped foster.

None of the 87 million users whose data Cambridge Analytica obtained appears to have been harmed directly. That shouldn't be surprising, since Facebook does not generally collect sensitive information such as health and financial data.

But that does not mean the incident was harmless. 

The real harm came from the ability of bad actors — the Russian government in particular — to use social media to promote misinformation in an effort to suppress voter turnout and change votes, as Robert Mueller so carefully laid out in his indictment of three organizations and 13 Russian nationals. For example, Mueller notes that in July 2016, “[d]efendants and their co-conspirators purchased advertisements on Facebook to promote the ‘March for Trump’ and ‘Down with Hillary’ rallies.”

Thus, while the users whose data was taken were not directly harmed, anyone whose voting behavior changed because of misinformation targeted with the help of that data was harmed. These private harms aggregate into a larger social harm if the misinformation affected the outcome of any election, and that includes not just the presidential election but state and local elections as well.

Regardless of whether one believes European-style privacy rules would be a net benefit, they are not a response to the problem at hand. After all, strict privacy rules did not prevent similar election interference in Europe. 

To its credit, Facebook has announced its intention to require more transparency about the identity of buyers of political ads, much like political ads in traditional media include a line saying, "I am politician so-and-so, and I approve this message." But this change, beneficial though it may be, may be difficult to enforce, especially if political messages are disguised as news or other supposedly non-political posts. We may also see pushback against this rule from U.S. politicians themselves when they find themselves unable to instantly post campaign ads in the next election cycle.

A famous cliché says that it takes a theory to beat a theory. And I have no good suggestions for what the right policy solutions are. Still, it is useful to reframe the debate so that it focuses on ways to address the issue rather than on ways to implement a separate agenda that is only tangentially related. 

We will probably never know whether the misinformation campaigns affected the outcomes of any elections. But we want to make such campaigns more difficult to carry out in the future. Economic regulation was never intended as a tool to protect our social choice mechanisms from well-financed, targeted attacks, and we should not allow the Facebook-Cambridge Analytica incident to obscure the reluctance of the Trump administration and Congress to respond properly to attacks on election integrity.

Let Facebook eat crow. Let's have a robust debate on privacy based on empirical evidence about how much people truly value their privacy, in word and deed. That conversation needs to include the costs and benefits of different policy approaches to regulating the data-driven economy. But that is a separate debate.

We must remember that what the Facebook-Cambridge Analytica incident reveals is how easy it was for the Russian government and others to rapidly spread misinformation through advertising channels in an attempt to affect an election's outcome. This problem is larger than the ad network of a single platform, but Facebook should be responsible for the capabilities and dangers of its own technology, and the administration and Congress should not feign ignorance of election interference in the information age.

Scott Wallsten is president and senior fellow at the Technology Policy Institute.
