Fact-Checking: A Business or a Public Service?
One of the most striking aspects of the fact-checking landscape is just how little is known about the inner workings of the sites that increasingly act as the online world’s ultimate arbiters of truth. While many sites self-report basic information, few offer substantive detail on their methodology and workflow, especially the instruction manuals and review processes at the heart of their work.
While RealClearPolitics’ Fact Check Review makes its entire codebook and methodology available, few fact-checking sites do so. Moreover, specific details can be hard to come by even when asking the sites directly. When I interviewed the founder of well-known fact-checking site Snopes a year and a half ago, questions relating to precise details of its inner workings were answered only in the broadest terms.
Last month I reached out to PolitiFact Editor Angie Holan to ask her many of the same questions. The goal is to better understand how the Pulitzer Prize-winning site works internally and to learn more about the policies behind the trends we see in the Fact Check Review.
One of the most basic and important decisions fact-checking sites make is story selection. How do they decide which of the myriad questionable and misleading claims published each day are worthy of investigation? On its website, PolitiFact outlines the basic tenets it uses to decide which claims to fact check. In a reflection of just how subjective this process can be, the site emphasizes that “in the world of speechmaking and political rhetoric, there is license for hyperbole” and that “we avoid minor ‘gotchas’ on claims that are obviously a slip of the tongue.”
This leaves open the question of what counts as acceptable hyperbole or a slip of the tongue not worthy of fact-checking, and what should be subjected to a full fact check.
Asked where it draws the line between dismissible errors and those grievous enough to warrant fact-checking, Holan offered that PolitiFact has a detailed set of training materials and internal guidelines that ensure consistency across its fact checkers. She noted there are also clear guidelines governing how to decide whether a fact check is determined to have veered too far into opinion checking and should be published as a news item rather than a fact check.
As to where these materials are published on its site, she said that PolitiFact considers its training and guideline materials to be trade secrets and part of its “competitive advantage” over other fact-checking sites. It therefore does not make them available to anyone outside its staff.
When asked whether PolitiFact at least makes its materials available to other fact-checking sites to help them learn from its own experiences and improve their processes, Holan emphasized that fact-checking sites are, at the end of the day, businesses. Although some are nonprofit and others for-profit, they nonetheless compete with each other for funding, staff and resources. In PolitiFact’s view, the proprietary materials it has developed over the years give it a significant advantage over its peers; sharing them with other fact-checking sites would enable them to improve their processes and more effectively compete with PolitiFact.
In the public narrative, fact-checking sites tend to be portrayed as a collaborative fraternity of partners working closely together in pursuit of the public good. Such a narrative might suggest absolute transparency with fact checkers openly publishing all of their training and instruction materials, offering detailed documentation of their workflows and working closely with their peers to improve the overall quality of the fact-checking landscape.
By contrast, framing fact-checking sites as competitive businesses with valuable trade secrets changes our expectations. In this model, it makes sense for sites to provide only the kind of rudimentary detail most currently provide on their websites.
Fact-checking isn’t free. As with all journalism, there are human and operational costs to running a fact-checking outlet. From a purely economic standpoint, it makes perfect sense that fact checkers may view their work through the lens of how to pay the bills and that this would take priority over transparency.
On the other hand, the immense influence fact-checking sites wield over our modern online world, including the ability to effectively banish stories and entire outlets from Facebook with their ratings, raises the question of at what point they transcend being mere businesses and become a public good requiring greater visibility into how they function.
Facebook has frequently made the argument to me that, as a commercial business, it has no obligation to be transparent and that sharing more information about its inner workings would benefit its competitors. It, too, views itself as a business rather than a quasi-public utility, with none of the transparency obligations inherent in the latter.
The problem with this lack of transparency is that, as Facebook learned, it’s hard to disprove allegations of bias when the data necessary to properly evaluate such claims is not made publicly available. By being transparent, fact-checking sites could enable a more informed conversation about their design, dispel common myths and help uncover the kinds of hidden biases that persist in even the best-designed systems.
PolitiFact’s founder Bill Adair offered in 2016 that “we’re human. We’re making subjective decisions. Lord knows the decision about a Truth-O-Meter rating is entirely subjective.” Angie Holan echoed this theme. “We make no claims that we are a random sample or a representative sample, though we don’t believe there is bias in our decisions,” she told me, adding that “it’s important to recognize that this is not a scientific instrument, it’s not peer reviewed. We are journalists not social scientists.”
This was a common theme throughout our conversation: fact checkers are journalists who should be held to journalistic standards of transparency, rather than academic researchers who would be expected to publish their internal manuals and workflows.
One question I asked was whether PolitiFact would be willing to share the full list of URLs of potentially false or misleading information forwarded to it each day through its various channels. Given that the site is only able to fact-check a small fraction of those URLs, having the master list would afford greater visibility into PolitiFact’s selection process. Similar to Facebook’s release of its Trending Topics list, having this list would allow external assessment of any implicit bias in PolitiFact’s selection process.
Holan initially argued that compiling such a list would take too much time, but when I noted the process could be automated, she offered that it would be irresponsible for the site to republish questionable content, since that would grant it greater visibility. When asked whether PolitiFact would at the very least make its materials available to academic researchers and external experts on misinformation, she declined. She noted that due to the site’s limited resources, “we have to balance the good of transparency with the good of publishing quickly.” Summarizing the demands of rapid-turnaround fact-checking, she noted that “speed outweighs transparency.”
In essence, fact-checking sites today operate as black boxes with no ability for outside experts to audit their methodologies, no sharing of training manuals to help hone the next generation of fact checkers, and no availability of instructional documents to help the general public improve its own information literacy. Instead of helping the public learn how to be more informed consumers of information, fact-checking sites simply give them their version of the answers.
At PolitiFact, the editors believe that this is what its readership wants. According to Holan, few readers are apparently interested in wading through pages of complex investigative research and perusing all of the underlying notes and assessments. Instead, users just want to look up a claim, see its rating, and move on. In short, typical consumers of fact checks aren’t using them as a starting point and deep diving into their references and conducting their own research; they just want to be told whether the claim is true or false.
This extends even to contextual details, such as whether there was internal disagreement over the rating assigned to a given claim. PolitiFact uses a voting process in which the fact checker suggests a rating and a panel of three editors decides, with the majority winning. This raises the question of how often ratings are unanimous and why the site does not flag fact checks where there was a split decision among the editors. Holan’s answer was again that PolitiFact’s experience is that readers simply don’t care about having that kind of information – they just want to be told whether the claim is true or false and move on with their lives.
Should fact-checking sites take the extra time to provide details the majority of their readers aren’t interested in? Perhaps if sites began to include such detail, readers might start to consume it over time, helping to nudge the public towards greater information literacy.
I also asked whether, in the interests of transparency, PolitiFact would be willing to publish a list of all of its fact checks that have been disputed, along with an indication of whether the rating was eventually changed. Intriguingly, she noted that many of the disagreements about fact checks are actually never sent to PolitiFact’s editors. Instead, they are published across the web at large, from social media posts to personal blogs to news articles. In essence, rather than filing a formal complaint with PolitiFact disputing a fact check, typical users might instead take to their personal blog or social media to air their grievance.
This suggests one of two possibilities. Perhaps those in the fact-checking community need to better educate users about the options available to them to notify fact checkers directly about their concerns. Or maybe, users simply don’t believe fact checkers will be receptive to criticism of their work. Perhaps a useful service might be a site that compiles all fact-checking disputes into a single centralized database that fact-checking sites can review and platforms like Facebook can use to weigh whether a fact check itself is disputed. This also points to the value of fact checkers providing greater transparency into their internal workflow for a given fact check, offering indicators of how confident the fact checker is in their assessment and whether there was a disagreement among the editors as to what rating should be assigned.
PolitiFact is also one of the sites that Facebook relies upon to flag “fake news.” Facebook’s initiative has been criticized by the fact-checking community for its own lack of transparency and the conditions it places on its fact-checking partners not to release substantive details of their collaboration or the information they receive from the company.
Facebook provides its fact-checking partners with an internal portal that lists all of the posts its algorithms have flagged as potentially false or misleading. Partners are not permitted to share this list with anyone else, making it impossible to conduct an external review of potential ideological or topical bias in Facebook’s “fake news” efforts.
When I asked Holan whether PolitiFact had considered pressuring Facebook to allow its lists to be shared with external researchers, she emphasized that it is not PolitiFact’s role to lobby for researchers or the public to have access to it. Returning to the theme of fact checkers as businesses, not public goods, she noted that lobbying for greater transparency and access to such data is beyond the scope of what fact checkers do.
On more than one occasion during my conversations with Facebook, the company has used that exact argument, emphasizing that as a business it is unfair to expect it to be transparent and disclose details about its inner workings. To Facebook’s point, a commercial enterprise has no obligation or expectation of transparency. Yet, for a company that exerts exceptional influence over society, at what point does it become a “public good” that must offer more information on how it works?
Putting this all together, as a public good, fact checkers would be expected to be completely transparent, publishing all of their instruction manuals, workflows and other details for public scrutiny. Fact-checking sites would be expected to collaborate closely, sharing all of their materials and helping improve lagging sites to ensure consistency and quality across the fact-checking landscape. Yet, as businesses competing with each other, fact checkers would understandably lack transparency, tightly restricting all information about their workings as trade secrets. Similarly, if fact-checking sites believe their readers are interested only in ratings, not in the inner processes that led to a rating or whether it was subject to internal disagreement, then it makes sense for sites to focus their efforts on arbitrating truth rather than improving information literacy. In the end, as fact-checking weaves itself ever more tightly into the fabric of our online experience, we must decide whether fact checkers are businesses or public goods. If the latter, they must open up to a new era of transparency.