
The Fuzzy Math, and Logic, of Election Prediction Models

By Sean Trende - November 11, 2011


This week, the Center for Politics' "Crystal Ball" released the latest iteration of Professor Alan Abramowitz's "Time for Change" model, based on the past 16 presidential elections (dating to 1948). According to Abramowitz, presidential elections can be reduced to a simple equation: The incumbent party's share of the two-party vote equals 51.7 percent, plus .11 multiplied by the incumbent president’s net approval rating in June of the election year, plus .54 multiplied by GDP growth in the second quarter of the election year. An additional caveat is then added, stating that if a party is seeking to hold the White House for a third consecutive term or more, it loses 4.4 percent off of that tally. Since President Obama is only seeking a second consecutive term for the Democrats, he will not receive this penalty. Therefore, unless things take a serious turn for the worse, he will have an excellent chance of winning the election.
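To make the arithmetic concrete, here is the equation as a short Python sketch, using the coefficients quoted above. The inputs in the example are hypothetical illustrations, not actual 2012 figures:

```python
# A minimal sketch of the "Time for Change" equation as described above.

def time_for_change(net_approval, q2_gdp_growth, third_term_or_more):
    """Predicted incumbent-party share of the two-party vote, in percent."""
    share = 51.7 + 0.11 * net_approval + 0.54 * q2_gdp_growth
    if third_term_or_more:
        share -= 4.4  # penalty for seeking a third-plus consecutive term
    return share

# Hypothetical example: net approval of -10, 2 percent Q2 growth,
# incumbent seeking only a second consecutive term for his party.
print(time_for_change(-10, 2.0, False))  # 51.7 - 1.1 + 1.08 = 51.68
```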

Models such as this one are becoming increasingly popular these days, as political scientists use the Internet as a means of reaching a mass audience. This is unfortunate, as there are many, many reasons to be skeptical of these models. Most of the better objections involve high-level statistical jargon that isn’t appropriate here. So instead, just understand that to accept the validity of Abramowitz’s model -- and to be honest, most predictive models of elections -- you have to accept the following 10 things:

First, at a basic level, you have to accept that something as complex as voting can be reduced to a simple, three-variable equation. And you have to accept that this equation is linear. In other words, you have to accept that a second-quarter economy that slows from 6 percent growth to 2 percent growth costs the president exactly as much expected vote share as one that tips from 2 percent growth into 2 percent contraction.
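Holding approval fixed for simplicity, a quick sketch shows what that linearity implies:

```python
# Under a linear model, the marginal effect of growth is constant:
# a four-point swing in Q2 growth always moves the predicted share
# by 0.54 * 4 = 2.16 points, regardless of the starting point.

def share(q2_growth, net_approval=0.0):
    return 51.7 + 0.11 * net_approval + 0.54 * q2_growth

print(round(share(6.0) - share(2.0), 2))   # 2.16: boom cooling to slow growth
print(round(share(2.0) - share(-2.0), 2))  # 2.16: slow growth tipping into recession
```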

Second, you have to accept that there’s a reason for the nation to want to deny parties three consecutive terms (fairly easy to accept), and that this reason would not also be reflected heavily in the incumbent’s job approval rating (somewhat more difficult to accept).

Third, you have to accept that there is no problem predicting the president’s vote share from only 16 data points. Such a small number of observations typically opens us up to a real risk of “false positives.” That is, there is a decent chance that we are finding a correlation when none, in fact, exists. As we’ll see, there’s good reason to suspect that this is the case.
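One way to see the danger is to simulate it. The sketch below (using numpy, with entirely made-up data) regresses 16 random "vote shares" on three random "predictors" thousands of times; pure noise still produces respectable-looking fits at that sample size:

```python
# Pure-noise regressions with n=16 and three predictors: on average,
# noise alone "explains" about k/(n-1) = 3/15 = 20% of the variance.
import numpy as np

rng = np.random.default_rng(0)
trials, n, k = 10_000, 16, 3
r2s = []
for _ in range(trials):
    X = np.column_stack([np.ones(n), rng.normal(size=(n, k))])
    y = rng.normal(size=n)  # outcomes are pure noise
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    r2s.append(1 - resid @ resid / ((y - y.mean()) @ (y - y.mean())))
print(f"mean R^2 of pure noise: {np.mean(r2s):.2f}")
print(f"share with R^2 > 0.4: {np.mean(np.array(r2s) > 0.4):.1%}")
```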

Fourth, you have to accept that presidential elections haven’t changed at all over the past 64 years. You have to accept that a presidential election held in 1948 -- a time when the solidly Democratic South was just beginning to break up, when roughly a third of the workforce was still unionized, and when African-Americans by and large could not vote -- was driven by the exact same factors as an election held today. As a corollary, you have to accept that the enfranchisement of African-Americans and poor whites in the South, as well as the enfranchisement of 18-to-21-year-olds nationally, had no effect on the outcome of the later races. A casual glance at the results of the 2008 elections would seem to suggest otherwise.

Fifth, you have to accept that this model is preferable to any number of other models that make nearly identical claims to accuracy using (sometimes entirely) different variables. You have, for example, Douglas Hibbs’ “Bread and Peace” model, which makes similar claims using weighted real disposable income per capita and the number of casualties suffered in war. You have Allan Lichtman’s “Keys to the Presidency,” which examines 13 variables (and concludes that President Obama will win easily). You have Ray Fair’s latest version of FAIRMODEL, which is based on a cornucopia of economic variables, measuring incumbency and wartime stress, among other things, and which presently predicts a close race if we have a reasonably good economic outcome.

You also have a model of sorts that purports to predict presidential elections based on whether the Washington Redskins win their last home football game prior to the election. If the Redskins win, the incumbent party stays in power. If they lose, the incumbent party is tossed out. This actually predicted the outcome of every race from 1936 through 2000. It missed in 2004, but predicted correctly in 2008.

Now, obviously this is merely a huge coincidence. But the bigger question is: How many of these other models with more plausible claims to validity are also simply measuring coincidences? The answer is that almost all of them have to be, but we have no way of determining before the fact which ones are valid and which ones are just picking up random noise (something, again, that is very easy to do when you have only 16 observations).
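A short simulation makes the multiple-comparisons point. Flip coins as stand-in "predictors" over a 17-election record (1936 through 2000 spans 17 races), and with enough candidates, a handful will match perfectly by luck alone:

```python
# With ~500,000 random candidate "predictors," about 500000 / 2**17, or
# roughly 4, should call every one of 17 elections by pure chance.
import random

random.seed(1)
outcomes = [random.randrange(2) for _ in range(17)]  # stand-in results
candidates = 500_000
perfect = sum(
    all(random.randrange(2) == result for result in outcomes)
    for _ in range(candidates)
)
print(f"{perfect} of {candidates:,} random predictors called all 17 races")
```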


Sean Trende is senior elections analyst for RealClearPolitics. He is a co-author of the 2014 Almanac of American Politics and author of The Lost Majority. He can be reached at strende@realclearpolitics.com. Follow him on Twitter @SeanTrende.
