Introduction

During section this week, we discussed the partisan effects of shocks and, more generally, the importance of political polarization in elections. In the first half of this blog post, I’ll explore the predictive power of different types of two-sided models over time. In the second half, I’ll return to the issue of uncertainty in forecasts, this time using a type of hierarchical model to simulate election outcomes.

Two-Sided Models

For most of this blog, I’ve used a two-sided model based on incumbency. The reasoning behind this choice is to try to control for all of the advantages that an incumbent president (or incumbent party) enjoys1. However, as briefly discussed last week, I have not accounted for the role of particular parties in most of the models I have looked at.

This week, I decided to compare several two-sided models to see which are the most effective at forecasting on a state-by-state basis. The baseline will be the two-sided model that I have been using for the past several weeks: a binomial model that uses a polling average2, second-quarter national GDP, second-quarter state-level real disposable income3, and a state fixed effects indicator. I split the data based on whether the party was the incumbent party, to control for the impacts of incumbency.
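For concreteness, here is a minimal sketch of how such a model could be fit in Python with statsmodels. Everything in it is a hypothetical stand-in: the file name and the column names (votes, eligible, poll_avg, gdp_q2, rdi_q2, state, incumbent_party) are illustrative, not the actual variables I used.

```python
# A minimal sketch of the baseline two-sided model, NOT the exact code I
# ran: the file name and every column name here are hypothetical.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

df = pd.read_csv("state_data.csv")               # hypothetical file
df["vote_share"] = df["votes"] / df["eligible"]  # share of eligible voters

models = {}
for side, subset in df.groupby("incumbent_party"):  # split by incumbency
    # Binomial GLM on vote shares, weighted by the number of eligible
    # voters, with state fixed effects via C(state).
    models[side] = smf.glm(
        "vote_share ~ poll_avg + gdp_q2 + rdi_q2 + C(state)",
        data=subset,
        family=sm.families.Binomial(),
        var_weights=subset["eligible"],
    ).fit()
    # GLMResults exposes both deviances reported in the table below.
    print(side, models[side].null_deviance, models[side].deviance)
```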

Now, does this make sense? Think in the very short term: does it make sense to treat Donald Trump as a completely different candidate in two elections four years apart? The answer is not entirely clear. To explore the problem, I built a two-sided model that is identical to the baseline model I have been using, but split by party instead of incumbency status.

As an addition to these baseline models, I also added a control for the opposite factor: in the incumbent/challenger model, an indicator for party, and in the party model, an indicator for incumbency status.

In- and Out-of-Sample Fits

We can first look at the regression results to assess in-sample fit. The full outputs for all four regressions are detailed in the appendix. In the incumbency-based models, one thing to note is that the coefficient on party is highly significant, suggesting that it is a worthwhile addition to the model. In the party-based model, the coefficient on incumbency status is significant as well. We can also look at the deviances to get an overall sense of fit.

| Model | Null Deviance | Residual Deviance |
| --- | --- | --- |
| Incumbent | 27,672,541 | 3,800,967 |
| Challenger | 29,524,520 | 4,203,355 |
| Democrat | 31,804,157 | 4,091,837 |
| Republican | 25,631,135 | 3,569,384 |

The deviances are still quite high. For both the incumbent and challenger models, the null deviance is an order of magnitude larger than the residual deviance, which is a good sign, but the residual deviance itself is still extremely large. The same trend holds for the party-based models, whose residual deviances are also on the order of ten to the sixth. One thing to note is that the comparison between the two splits is mixed: the Democratic model has the highest null deviance of the four, while the Republican model has the lowest null and residual deviances. Another thing to note is that the standard errors on all of the coefficients are quite small, which gives a fair amount of certainty to the model. While all of this gives us a general idea of in-sample fit, the more interesting question is that of out-of-sample prediction.

To assess out-of-sample prediction, I conducted leave-one-out validation for every state and year combination going back to 1980, the earliest year for which we have reliable state polling data. I then recorded the accuracy of each model in each year, where accuracy is defined as the number of states correctly predicted in a year divided by the number of states we make predictions about. We can look at a plot of accuracy over time.
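Sketched in the same hypothetical terms as above, the validation loop holds out both rows (incumbent-party and challenger-party) for each state-year, refits the side-specific models, and checks whether the predicted winner matches the actual one:

```python
# A rough sketch of the leave-one-out loop; `df` is the hypothetical data
# frame (with `vote_share` added) from the previous sketch.
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

formula = "vote_share ~ poll_avg + gdp_q2 + rdi_q2 + C(state)"
rows = []

for (state, year), held_out in df.groupby(["state", "year"]):
    train = df[(df["state"] != state) | (df["year"] != year)]
    preds = {}
    for side, subset in train.groupby("incumbent_party"):
        fit = smf.glm(formula, data=subset,
                      family=sm.families.Binomial(),
                      var_weights=subset["eligible"]).fit()
        test = held_out[held_out["incumbent_party"] == side]
        preds[side] = fit.predict(test).iloc[0]
    predicted = max(preds, key=preds.get)  # side with the higher share
    actual = held_out.loc[held_out["vote_share"].idxmax(), "incumbent_party"]
    rows.append({"year": year, "correct": predicted == actual})

# Accuracy per year: states predicted correctly / states predicted.
accuracy = pd.DataFrame(rows).groupby("year")["correct"].mean()
print(accuracy)
```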

There’s no clear trend of any kind, suggesting that both models are susceptible to impacts from other factors. One thing to note is that, on the whole, prediction accuracy is quite high. The minimum is in 1980, when the party model correctly predicts 32 of the 37 states with data. We can also look at which states the models get wrong, to see if there are any patterns there.

| Year | States Incorrectly Predicted by Incumbency Model | States Incorrectly Predicted by Party Model |
| --- | --- | --- |
| 1980 | Maine, Minnesota, North Carolina, Tennessee | Maine, Minnesota, North Carolina, South Carolina, Tennessee |
| 1988 | Connecticut, Maryland, Washington | Connecticut, Maryland |
| 1992 | Kansas, Louisiana, North Carolina, South Dakota | Georgia, Louisiana, North Carolina, Texas, Montana |
| 1996 | Colorado, Georgia, Virginia | Colorado, Georgia, Virginia |
| 2000 | Florida, Iowa, Oregon, Wisconsin, New Mexico, Arkansas | Florida, Oregon |
| 2004 | New Mexico | New Hampshire, Wisconsin, New Mexico |
| 2008 | Missouri | North Carolina, Indiana |
| 2012 | Florida, New Hampshire, Virginia | Florida, New Hampshire, Virginia |
| 2016 | Florida, Michigan, North Carolina, Pennsylvania, Wisconsin | Florida, Michigan, Pennsylvania, Wisconsin |

The models tend to miss similar states. In recent years, they seem to have had trouble with what we would usually call swing states, like Florida, Pennsylvania, and Wisconsin. These also tend to be states that are close in the polls, demonstrating the models’ heavy reliance on polling data. There are two possible solutions: shifting to a probabilistic model, or adding more covariates. Additional covariates could include demographics, or some sort of uniform swing4. However, the risk of increasing the number of covariates is overfitting, so it could be that we just need different predictors, which we could choose via some sort of regularization or more careful model selection.

Uncertainty: Voting Distributions

In the past two weeks, I have made attempts to add uncertainty to my predictions, with relatively little success. This week, I’ll add two dimensions of uncertainty: one familiar, one new. To start, our base model will be the two-sided incumbency model with a control for party, which we just took a look at.

Because the model is based on a binomial, we are predicting the probability that an eligible voter will vote for a particular candidate. As discussed previously, a reasonable assumption to make is that not everyone votes in the same way, and instead that voting behavior comes from a beta distribution. Unlike last week, where I used empirical Bayes to build a hierarchical model, this week I’m going to do something more straightforward. When a prediction is made with any type of linear model5, the predicted output can be thought of as the mean of a distribution. That prediction also has a variance, which can be calculated from the data. The trick I am using this week is what’s called the method of moments: because the mean and variance of a beta distribution are functions of its parameters, I can set up a system of equations and solve for the parameters that match a given mean and variance.
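That system has a simple closed-form solution. Here is a small sketch of the step; the mean and variance passed in at the bottom are made-up values, purely for illustration:

```python
# Method-of-moments solution for beta parameters. For Beta(a, b):
#   mean     mu     = a / (a + b)
#   variance sigma2 = a * b / ((a + b)**2 * (a + b + 1))
# Letting nu = a + b, the variance simplifies to mu*(1 - mu)/(nu + 1),
# so nu = mu*(1 - mu)/sigma2 - 1, a = mu*nu, and b = (1 - mu)*nu.
import numpy as np

def beta_params(mu, sigma2):
    """Return (a, b) of the beta distribution with this mean and variance.

    Valid whenever 0 < sigma2 < mu * (1 - mu).
    """
    nu = mu * (1 - mu) / sigma2 - 1
    return mu * nu, (1 - mu) * nu

# Illustrative (made-up) prediction: mean 0.27, variance 1e-6.
a, b = beta_params(0.27, 1e-6)
draws = np.random.default_rng(0).beta(a, b, size=10_000)
print(draws.mean(), draws.var())  # close to 0.27 and 1e-6
```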

There’s just one problem with this approach, and it’s specific to modeling the 2020 election: the variances on the predicted voting probabilities for 2020 are tiny. While most of the predictions are on the order of 10 to the -1 or -2 (as they are values between 0 and 1), many of the variances are on the order of 10 to the -6. This means that if the voting probability is the only thing that has a variance, we still have an essentially deterministic model.

Uncertainty: Turnout

One of the big stories of the 2020 election is COVID-19, both on the campaign trail and for forecasting the election. Thinking about how to model COVID is hard, because there are so many dimensions it might impact, some of which are harder to account for than others. Theoretically, I already account for people changing their vote because of COVID by including polling data.

One key area that I have not addressed is voter turnout, something that is greatly influenced by COVID-19 (and other factors). Interestingly enough, turnout in primaries held after COVID became a national issue increased by 50%. In addition, because of expanded voting by mail, turnout may greatly increase; at the same time, social distancing measures and fear of the pandemic may depress in-person turnout. FiveThirtyEight’s Nate Silver predicts a roughly 50% increase in error when predicting turnout. Other sophisticated models, like that of The Economist, also make efforts to account for variability in turnout, but they are less specific about how they handle the uncertainty.

A Model

In this model, I’ll add a very basic form of variability in voter turnout. Instead of the voting-eligible population being fixed in each state, it will be drawn from a normal distribution, with mean equal to the state’s voting-eligible population and variance based on the historical voting-eligible population. For each of the 10,000 simulations of the election, the model proceeds as follows (a code sketch of the full loop appears after the list):

  1. The voting population is drawn from a normal distribution, with mean equal to the state’s voting-eligible population in 2016 and standard deviation equal to 1.5 times the historical standard deviation of the voting-eligible population, to account for increased variability in turnout.

  2. The probability of voting for each candidate is drawn from a beta distribution with mean and variance derived from the two-sided logistic regression model.

  3. The voting process is simulated as a binomial process, based on values from the first two steps.
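A condensed sketch of the whole loop is below. The `states` list and all of its fields are hypothetical stand-ins for the per-state quantities described in the three steps:

```python
# A condensed sketch of the full simulation; every field of the records in
# `states` is a hypothetical stand-in for the quantities described above,
# e.g. states = [{"vep_2016": 4_500_000, "vep_sd": 120_000,
#                 "mu_dem": 0.30, "var_dem": 1e-6,
#                 "mu_rep": 0.27, "var_rep": 1e-6, "ev": 10}, ...]
import numpy as np

rng = np.random.default_rng(0)
N_SIMS = 10_000

def beta_params(mu, sigma2):
    # Method-of-moments parameters, as derived earlier.
    nu = mu * (1 - mu) / sigma2 - 1
    return mu * nu, (1 - mu) * nu

biden_ev = np.zeros(N_SIMS)
trump_ev = np.zeros(N_SIMS)

for s in states:
    # 1. Draw the voting-eligible population, inflating the historical
    #    standard deviation by 1.5 for extra turnout uncertainty.
    vep = rng.normal(s["vep_2016"], 1.5 * s["vep_sd"], size=N_SIMS)
    vep = np.maximum(vep, 0).round().astype(int)
    # 2. Draw each candidate's voting probability from the
    #    method-of-moments beta distribution.
    p_dem = rng.beta(*beta_params(s["mu_dem"], s["var_dem"]), size=N_SIMS)
    p_rep = rng.beta(*beta_params(s["mu_rep"], s["var_rep"]), size=N_SIMS)
    # 3. Simulate the vote itself as a binomial process.
    dem_votes = rng.binomial(vep, p_dem)
    rep_votes = rng.binomial(vep, p_rep)
    # Award the state's electoral votes to the simulated winner.
    biden_ev += np.where(dem_votes > rep_votes, s["ev"], 0)
    trump_ev += np.where(rep_votes > dem_votes, s["ev"], 0)

print("Biden wins", int((biden_ev > trump_ev).sum()), "of", N_SIMS, "runs")
print("Average EV:", biden_ev.mean(), "Biden vs.", trump_ev.mean(), "Trump")
```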

For the current election, Biden clearly has a commanding lead, and there is very, very little overlap between the two candidates’ distributions. In all, Joe Biden wins 9,998 of the 10,000 simulations, while Trump wins 2. Biden wins an average of 370.49 electoral votes, while Trump wins an average of 167.36. This blowout prediction comes from the model’s heavy reliance on the polling averages, which, with their small variances, give Biden a massive lead. In terms of the popular vote, Biden wins an average of 55.71 percent of the two-party vote to Trump’s 44.29 percent, consistent with a landslide victory for Biden.

Reflections Before the Final Prediction

All in all, it seems like Joe Biden has a very, very good chance of winning the Electoral College. With such a large and stable lead in the polls, Trump would need a large polling error, and likely some extraordinary events in the next 10 days, to pull off a victory.

In terms of the model, there are a few things I would like to add for the final prediction. First, demographic data, for two reasons: one, demographic data tends to be a good substitute for party, as particular groups have tended to vote for particular parties over the last 50 years or so; two, it helps add correlation between state outcomes. This is key: if Biden happens to win Pennsylvania, it tells us he is more likely to win Wisconsin, because the people in the two states are relatively similar. The second thing I would like to add is correlation between states in terms of voter turnout, for similar reasons: if lots of people turn out to vote in Wisconsin, it is likely that lots of people have turned out to vote in Michigan as well. This might mean drawing turnout from a multinomial distribution, based on demographic or COVID-based correlation6. Finally, adding some sort of term to account for uniform swing would be nice, as it gives yet another way for results between states to be correlated with one another.


  1. See this post for more details.

  2. This polling average takes the mean of state level polls over the past two weeks. Data for 2020 comes from FiveThirtyEight’s polling average.

  3. This data comes from the Bureau of Economic Analysis.

  4. Uniform swing would be using lagged election results of some kind. The idea is that the best predictor of an election somewhere is the last election that happened there. This could take the form of vote share, which party won, or how many times in a row a state has voted for a particular party.

  5. The binomial/logistic model I have been using is a generalized linear model, so the same ideas apply.

  6. States that are similar in terms of COVID outcomes or demographics will have similar turnout.