More on Models: Answering Critics on Election Predicting

By Sean Trende - November 22, 2011


With that in mind, let’s turn to Masket’s (and to a lesser extent, Nyhan’s) arguments. There’s a lot to cover, so I’ll just hit the highlights. Masket claims that to accept Abramowitz’s model, you don’t really have to accept that elections can be reduced to two or three variables. This is inconsistent with Abramowitz’s claim (and the similar claim of almost every model) that these three linear factors explain 89 percent of the variance in elections. These models may allow for other factors, but those factors are necessarily pretty insubstantial in model-world.

Masket tries to shoehorn these "campaign effects" into the models' error terms. But the statistical expectation is that the error terms are random noise, not a catch-all for whatever the model missed. If the error term isn’t random -- that is, if it represents actual, statistically significant factors that are missing from the model -- then the model is not “unbiased,” and the risk of false positives and false negatives is greatly increased (as we learned in 1982, 1992, 1994, 2000, 2002, 2004 and 2010).
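To illustrate the point with a toy simulation (the variable names and coefficients here are entirely made up, not election data): if a real factor is omitted from a regression, it doesn’t vanish -- it hides in the residuals, which then stop behaving like random noise.

```python
import numpy as np

rng = np.random.default_rng(0)
n = 200

economy = rng.normal(size=n)    # the factor the model includes
campaign = rng.normal(size=n)   # the factor the model omits
vote = 3.0 * economy + 2.0 * campaign + rng.normal(scale=0.5, size=n)

# Fit vote ~ economy alone, the way a two-variable model would
slope, intercept = np.polyfit(economy, vote, 1)
residuals = vote - (slope * economy + intercept)

# The "error term" is no longer noise: it carries the omitted effect
r = np.corrcoef(residuals, campaign)[0, 1]
print(f"corr(residuals, omitted factor) = {r:.2f}")
```

In a correctly specified model, the residuals would be uncorrelated with everything; here the omitted factor shows up almost perfectly in the supposed “error term.”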

In fact, you can see this bias in a chart Nyhan supplies: In four of the last five elections, the models have almost all come out on the same side of the actual value. Note also that Nyhan’s “ensemble” model has predicted a higher vote share for the incumbent party than actually occurred in seven of the nine elections at issue. So why have these models shown bias? It could be bad luck (problem: the odds of getting seven “heads” in nine trials are about 7 percent). Or it could be because they proceed on the assumption that Bill Clinton’s re-election in 1996 was largely driven by the same two or three factors that drove Dwight Eisenhower’s re-election in 1956.
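The coin-flip arithmetic is straightforward to check: under a fair-coin null, the probability of exactly seven “heads” in nine trials is C(9,7) divided by 2^9.

```python
from math import comb

# Chance of exactly seven "heads" in nine fair-coin flips: C(9,7) / 2^9
p = comb(9, 7) / 2**9
print(f"{p:.4f}")  # 0.0703 -- about 7 percent
```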

To be sure, Masket concedes that there may be some non-linearity when you get far out on the “tails.” But that leads to a big problem. Linear regression does not deal with outliers well, especially with an “n” of 16 (the number of data points in Abramowitz’s model). It still tries to fit a line through the elections that occur out on the “tails.” This attempt to squeeze non-linear results into a linear equation will pull the line, and skew results, decreasing the overall reliability of the model.
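A toy example (again with invented numbers, not election data) shows the leverage problem: add one far-out observation to 15 well-behaved points, and the fitted slope for the whole sample moves substantially.

```python
import numpy as np

rng = np.random.default_rng(1)

# 15 "ordinary" elections: a modest linear relationship plus noise
x = rng.normal(size=15)
y = 0.5 * x + rng.normal(scale=0.3, size=15)

slope_before, _ = np.polyfit(x, y, 1)

# Add one far-out "tail" observation (a 1964- or 1972-style blowout
# that sits well above what the line would predict)
x16 = np.append(x, 4.0)
y16 = np.append(y, 6.0)
slope_after, _ = np.polyfit(x16, y16, 1)

print(f"slope without the tail point: {slope_before:.2f}")
print(f"slope with the tail point:    {slope_after:.2f}")
```

With only 16 observations, a single high-leverage point can move the fitted line for every other election along with it.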

This problem is avoided if we’ve never been out on a tail, but this strikes me as unlikely. If you look at charts of the relationship between economic variables and presidential elections, 1956, 1964, 1972 and 1984 all stand out as years where the president’s party did substantially better than the best-fit line would suggest, implying that the “in party” was enjoying non-linear effects. And 1952 is likewise substantially off the lines; this is probably not due to non-linear effects from the economy so much as the sui generis candidacy of Dwight Eisenhower. But again, the model will tend to treat the Eisenhower elections in terms of economic growth and not in terms of, well, Ike. Or, as political scientist Jay Greene once put it, “[t]he problem is that modelers try to fit their models as closely as possible to previous elections, thereby masking what is regular in elections with what was previously exceptional.”

Masket then dismisses the problems arising from a small number of variables on the grounds that “it's pretty amazing we get such robust results from so few cases.” I’m not particularly amazed. There’s no doubt that the models are onto something. The economy is clearly a major factor in elections, and a regression will properly pick that up. With tens of thousands of variables to choose from and enough time in the stats lab, you’ll find more than a few that correlate well with 16 data points.
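This is easy to demonstrate: screen enough pure-noise “variables” against 16 pure-noise “election results” and some will correlate impressively. The simulation below is illustrative, not a claim about any particular model.

```python
import numpy as np

rng = np.random.default_rng(2)

n_elections = 16
n_candidates = 10_000   # variables screened in the "stats lab"

results = rng.normal(size=n_elections)                     # pure noise
predictors = rng.normal(size=(n_candidates, n_elections))  # pure noise too

# Pearson correlation of each candidate variable with the "results"
centered = predictors - predictors.mean(axis=1, keepdims=True)
rc = results - results.mean()
corrs = centered @ rc / (
    np.linalg.norm(centered, axis=1) * np.linalg.norm(rc)
)

best = np.abs(corrs).max()
print(f"best |correlation| among {n_candidates} noise variables: {best:.2f}")
```

With 16 data points, the best of the noise variables typically correlates strongly with the outcome despite having no relationship to it at all.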

More importantly, there really isn’t much variance in presidential election results to explain in the first place.

Most of the variance is supplied by only a few years: 1956, 1984, 1964 and 1972. Which means that these models are really expending most of their energy trying to explain these years, giving them an effective “n” of four. As it happens, these also are years when (a) second-quarter growth has been robust and (b) a party is seeking only its second term. I’m not sure we can say with all that much confidence that this isn’t a coincidence.
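You can check this against the conventional incumbent-party shares of the two-party vote for 1948-2008 -- the 16 elections in Abramowitz’s data set (the figures below are the standard rounded percentages):

```python
# Incumbent-party share of the two-party presidential vote, 1948-2008
shares = {
    1948: 52.4, 1952: 44.6, 1956: 57.8, 1960: 49.9,
    1964: 61.3, 1968: 49.6, 1972: 61.8, 1976: 48.9,
    1980: 44.7, 1984: 59.2, 1988: 53.9, 1992: 46.5,
    1996: 54.7, 2000: 50.3, 2004: 51.2, 2008: 46.3,
}

mean = sum(shares.values()) / len(shares)
sq_dev = {yr: (v - mean) ** 2 for yr, v in shares.items()}
total = sum(sq_dev.values())

big_four = [1956, 1964, 1972, 1984]
share_of_variance = sum(sq_dev[yr] for yr in big_four) / total
print(f"1956/1964/1972/1984 supply {share_of_variance:.0%} of the variance")
```

By this tally, those four elections alone supply more than half of the total variance, with much of the remainder coming from 1952 and 1980.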

Especially surprising is Masket’s argument that “[i]t’s very rare that an economy that's humming along at three percent growth for a year or more will suddenly plunge into recession the quarter prior to an election. That certainly can happen (and it kind of did in 2008), but it's a very rare event.” It is surprising because, only a paragraph earlier, he cites his 2008 model’s accuracy as proof of how well this election can be explained purely on second-quarter economic data.

First, please understand that the second-quarter numbers are used simply because they’re the ones that work -- not because there’s an especially compelling theoretical justification for them, nor merely to get the predictions out in time. If you re-estimate Abramowitz’s model using third-quarter numbers, the r-squared is more like .46, with neither job approval nor incumbency significant at the .05 level. The model then predicts a Jimmy Carter win in a squeaker, and an Adlai Stevenson win in 1952. Oh, and Dewey defeats Truman.

This gets to the nut of our disagreement. The models say that the 2008 election was largely about second-quarter growth -- which was incidentally the only positive quarter in 2008, and was positive only because Congress artificially juiced growth with a stimulus. Any model that got 2008 “right” based on second-quarter factors is assigning minimal importance to the fact that the economy was contracting at a 9 percent annualized rate on Election Day.

I suppose there is an argument for this, much like I suppose that there is an argument that 1948 was really about the unusually strong second-quarter growth, even though the areas that had swung the most heavily against Tom Dewey vis-à-vis 1944 were actually in the midst of a serious recession on Election Day. But I think it’s an extraordinarily weak argument.

Instead, I tend to give credence to the polls taken both before and after the 2008 conventions showing a tight race that could have gone either way, and think that the fact that the economy contracted by 3.9 percent in the third quarter and was contracting at a 9 percent annualized rate in the fourth quarter were what caused Obama’s substantial victory in an otherwise close race. In other words, I’m pretty convinced that the models fell backwards into being correct in 2008. Who knows, they may fall backwards into being correct in 2012 as well. Should we expect it? Probably not. 


Sean Trende is senior elections analyst for RealClearPolitics. He is a co-author of the 2014 Almanac of American Politics and author of The Lost Majority. Follow him on Twitter @SeanTrende.
