More on Models: Answering Critics on Election Predicting

By Sean Trende - November 22, 2011

Seth Masket and Brendan Nyhan have both pushed back on my column earlier this month regarding statistical models for predicting elections, part of a larger back-and-forth that Jaime Fuller has aptly dubbed a "nerd fight." Before getting into the details of their comments, there are a few broad points worth making.

Masket, Nyhan and I start out from something of the same perspective. I started writing precisely because political journalism is frequently far too reliant on conventional wisdom and shibboleths ("It's the Six-Year Itch!!!"). There's no doubt that more political journalists could use a couple of semesters of statistics. In that sense we're on the same team. We're also on the same page to the extent that political science is simply trying to emphasize that the economy matters quite a bit in elections.

But this doesn’t really add much to our understanding of elections. Quite frankly, the idea that there’s a horde of political journalists out there who believe that the economy is wholly secondary to campaign effects is as much of a straw man as the notion that political scientists believe that it is all about the economy. Given enough time, it would be relatively easy to dig up literally tens of thousands of examples from the last election alone emphasizing economic forces.

In fact, political journalists and historians have held this understanding for a long time -- and without the benefit of data-dredged equations. As just one example of many, here is James Ford Rhodes writing in 1920 about the Republican debacle in the election of 1874, as a part of his epic eight-volume history of the United States (page 133): “The depression, following the financial panic of 1873, and the number of men consequently out of employment weighed in the scale against the party in power.” Rhodes provides a laundry list of other “campaign effects,” but this is perfectly consistent with research indicating that economic variables probably account for only about a quarter to a third of election results, leaving plenty of room for other factors.

The truth is, however, that these models go a step further than simply claiming that there's some sort of significant relationship between the economy and elections. Here's Professor Alan Abramowitz, whose "Time for Change" prediction model formed the basis of my earlier piece: "[b]ased on the president's average net approval rating in recent national polls, between -10% and -5%, and the estimated growth rate of the U.S. economy during the second quarter of 2011, 2.5%, Obama would be expected to win approximately 52% of the national popular vote, enough to almost certainly guarantee him a majority in the Electoral College. But the president clearly has little margin for error. A double-dip recession and/or a substantial decline in his approval rating could easily put him below 50%."

And again:

“The three predictors are highly statistically significant and the model explains 89% of the variation in the incumbent party's share of the major party vote. The weight assigned to the CHANGE factor in the model indicates that after controlling for both the growth rate of the economy and the incumbent president's approval rating, a first-term incumbent like Barack Obama can expect to receive an additional 4.4% of the major party vote compared with a candidate seeking to extend his party's hold on the White House beyond eight years. That explains why first-term incumbents rarely lose.”

This implies a degree of precision that is simply unsupportable. And it isn’t just Abramowitz, as this is usually how these models are framed when they are presented to the public.
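To see what that claimed precision amounts to in practice, the quoted prediction can be sketched as a simple three-predictor linear model. This is a minimal Python sketch, not Abramowitz's actual equation: the intercept and the approval and growth weights below are hypothetical values chosen so that the inputs quoted above (net approval near the -7.5 midpoint, 2.5% Q2 growth, a first-term incumbent) land near the cited ~52% figure; only the 4.4-point first-term bonus comes from the quote itself.

```python
# Illustrative sketch of a "Time for Change"-style linear forecasting model.
# The intercept and weights are HYPOTHETICAL, chosen so the article's quoted
# inputs reproduce roughly the 52% prediction; they are not the published values.

def predicted_vote_share(net_approval: float, q2_gdp_growth: float,
                         first_term_incumbent: bool) -> float:
    """Predicted incumbent-party share of the major-party vote, in percent."""
    intercept = 47.26          # hypothetical baseline share
    approval_weight = 0.108    # hypothetical points per point of net approval
    growth_weight = 0.543      # hypothetical points per point of Q2 GDP growth
    change_bonus = 4.4         # first-term incumbency bonus quoted in the article
    return (intercept
            + approval_weight * net_approval
            + growth_weight * q2_gdp_growth
            + (change_bonus if first_term_incumbent else 0.0))

# Midpoint of the quoted approval range (-10% to -5%), 2.5% growth, first-termer:
print(round(predicted_vote_share(-7.5, 2.5, True), 1))  # → 52.2
```

The point of the sketch is how small the moving parts are: three inputs and a constant produce a point estimate quoted to the nearest percent, which is exactly the degree of precision at issue.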

As a final reminder: The instrumentalist "these models work well" argument is weak, even setting aside broader arguments about instrumentalism. In fact, these models crash and burn all of the time; the journal PS is filled with postmortems lamenting the failure of models almost as frequently as extolling their virtues. In 1982, the models predicted that Republicans would lose between 30 and 50 House seats. In 1992 they generally thought George H.W. Bush would win, and actually fared worse in their predictions than a control group largely composed of non-political science graduate students. A few models predicted Democratic gains in 1994. In 2000, they generally thought Al Gore would win handily; 2002 and 2004 were a mixed bag for the models, at best. In 2010, the structural models predicted Republican pickups of between 25 and 45 House seats.

The modelers’ frequent rejoinder is that the models have since been re-estimated, and now (usually) explain those elections nicely. But that’s precisely the problem, from an analyst’s perspective. We have no way of knowing whether 2012 is a year when the model will work nicely, or when it will have to be tossed out and re-estimated. And I think there are plenty of reasons to expect that, in any given year, it will be the latter.


Sean Trende is senior elections analyst for RealClearPolitics. He is a co-author of the 2014 Almanac of American Politics and author of The Lost Majority. Follow him on Twitter @SeanTrende.
