Seeding a Fact-Checking Conversation: Claims & Ratings
One of the goals of RealClearPolitics’ Fact Check Review is to seed informed conversation about the fact-checking landscape. Toward that end, it was gratifying recently to hear from the Washington Post’s Fact Checker offering that publication’s take on two of the key focal points we draw out in the Review: factually true statements labeled as false on grounds of being “misleading,” and the common practice of assigning a single rating to a collection of independent claims.
As we were first building the Fact Check Review, one of the practices that stood out to us was how fact checkers occasionally verify the factual elements of a claim as true, but assign a rating of false due to their conclusion that the claim was “misleading.” In such cases, the fact checker typically argues that additional external context might sway how a reader interprets and understands the statement in question.
Given the intense subjectivity involved in moving beyond the factual elements of a claim towards the myriad ways a reader might interpret it, we explicitly flag such claims in our Review. Even the most basic statement of “fact” is seen through the lens of our own experiences, beliefs and views, leaving it open to differences of opinion.
In our announcement of the Review, I cited the example of a Washington Post fact check that examined several statements made by President Trump regarding sanctuary cities. Following the Post’s standard formatting, each of the claims being investigated is set off from the rest of the text by being indented, bolded and italicized. Following each claim are several paragraphs of narrative and evidence supporting or refuting it, and at the end of the fact check is a single rating.
One of the president’s assertions I referenced was this one: “Last week, the mayor of Oakland warned criminal aliens of a coming ICE enforcement action -- giving them time to scatter and hide from authorities. The mayor’s conduct directly threatened the safety of federal immigration officers and the law-abiding Americans in her community.”
At face value, the president’s statement makes three distinct claims: (1) the mayor issued a warning; (2) the warning permitted individuals to avoid arrest; and (3) having criminals on the loose poses a potential danger to the community.
Instead of evaluating the statement by itself, the Post looked to additional remarks made by the president, Attorney General Jeff Sessions and ICE Director Thomas Homan regarding the mayor’s actions. Taking statements by these three men together, the Post interpreted the president’s comment to mean that 864 criminal aliens avoided arrest because of the mayor’s warning, rather than sticking to just the original bolded claim, which didn’t go that far.
This is an intriguing approach that suggests fact checkers should adjust the structure of their fact checks to highlight their specific interpretation of a claim, rather than the specific quote from which it comes. In this case, instead of highlighting the statement above and restating it in the narrative as a narrower claim, a more concise approach might have been to state the claim up front as “Did 800 undocumented immigrants elude authorities due to the Oakland mayor’s warning, and were these individuals a danger to the community?” and then provide all of the relevant statements the Post believes support this interpretation as context in the subsequent narrative.
To be fair, the Post’s fact check was explicit that its interpretation of the original statement was that “Trump, Sessions and ICE Director Thomas Homan all suggested that 800 or so undocumented immigrants had eluded authorities because of [Mayor] Schaaf’s warning.” However, other readers might interpret things differently, in line with Homan’s far more subdued argument that “I have to believe that some of them were able to elude us thanks to the mayor’s irresponsible decision.”
Therein lies the challenge when fact checks move beyond evaluating the factual elements of claims and begin interpreting how readers might understand those claims. Similarly, the stream of conflicting remarks that often emerges from an administration makes it difficult to divine the deeper meaning of a statement through what others are saying about it. To the Post, Trump’s statement implied that more than 800 immigrants eluded arrest as a direct result of the mayor’s warning. To other readers, that same statement might merely indicate a belief that “some” could possibly have eluded arrest.
The differences of opinion that can arise when moving beyond explicit factual verification and toward reinterpreting claims are the reason we flag such fact checks in our Review.
Separately, the Post also offered an insight into how it views the assignment of its ratings. Trump’s statement about the Oakland mayor’s ICE warning was among several claims evaluated in a fact check with the rating “Four Pinocchios.” To the Post, a rating is a cumulative score that summarizes the combined veracity of all claims evaluated in that fact check. In this case, the Post clarified to us, Four Pinocchios referred to those five claims taken together and cannot be extrapolated to any of the five claims individually.
Yet, a casual reader looking for a quick answer as to whether one or more of the president’s statements was true would likely assume the Four Pinocchios rating applied to all five claims, especially in the absence of an individual rating beside each claim.
In an unrelated interview, PolitiFact Editor Angie Holan told RCP that she believes typical fact check readers aren’t interested in the details surrounding a rating; they just want to know quickly whether a claim is true or false and then move on.
This possible disconnect between the Post’s view of its ratings as collective scores and the public’s potential to see those scores as applying equally to each claim within them lends weight to the argument that fact checkers should limit each fact check to a single claim. This would ensure a one-to-one ratio of scores to claims, eliminating such ambiguity and ensuring that readers in a hurry can properly understand the ratings.
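The labeling ambiguity described above can be made concrete with a small sketch. The data below is entirely hypothetical, invented for illustration; the claim texts and per-claim ratings are assumptions, not the Post’s actual findings. It shows how a single bundled rating forces a naive labeler to assign the same score to every claim, while a one-to-one mapping lets each claim carry a label that actually describes it.

```python
# Hypothetical illustration of bundled vs. per-claim ratings.
# All claim texts and ratings below are invented for the example.

# A fact check bundling several claims under one cumulative rating.
bundled_check = {
    "rating": "Four Pinocchios",
    "claims": [
        "The mayor issued a warning about an ICE operation.",
        "The warning let some individuals avoid arrest.",
    ],
}

# Naive label propagation: every claim inherits the bundle's rating,
# even a claim whose factual elements may have checked out as true.
naive_examples = [
    (claim, bundled_check["rating"]) for claim in bundled_check["claims"]
]

# One-to-one mapping: each claim carries its own (hypothetical) rating,
# so each training label describes exactly the text it is paired with.
itemized_checks = [
    {"claim": "The mayor issued a warning about an ICE operation.",
     "rating": "True"},
    {"claim": "The warning let some individuals avoid arrest.",
     "rating": "Unproven"},
]
per_claim_examples = [(c["claim"], c["rating"]) for c in itemized_checks]
```

Under the naive scheme, both claims end up labeled “Four Pinocchios,” while the itemized scheme preserves the distinction between a verified fact and an unproven inference; a model trained on the former learns the wrong thing about the first claim.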
Moreover, as Silicon Valley increasingly looks to automate “fake news” detection by training machine-learning models on fact checks, a one-to-one mapping of ratings to claims is crucial to training those algorithms properly.

Putting this together, we’re pleased that the new Fact Check Review has sparked a conversation about two key fact-checking practices, and we look forward to many such exchanges in the future.