Election handicapper Nate Silver's prediction model is in the spotlight thanks to an election forecast that seems bullish on former President Donald Trump’s chances of victory, raising questions about how such models work.
Silver’s forecast has drawn significant criticism for giving Trump a better chance of winning than other forecasters. Last week, for instance, his forecast gave Trump a 64% chance of winning the Electoral College while giving Vice President Kamala Harris just a 35% chance of victory, even while the same forecast saw Harris as more likely to win the popular vote and his polling averages had her leading in enough swing states to take the election.
Silver’s model also gives Trump a better chance of winning when compared to peer forecasts. For instance, FiveThirtyEight, the handicapping outlet Silver founded in 2008 and left earlier this year, seems to see a more heated contest unfolding, giving Harris a 56% chance of winning and Trump a 43% chance. Likewise, Decision Desk HQ’s current model gives Harris a 54% chance of winning the presidency.
Silver’s forecast has drawn criticism from Democratic operatives for the types of polls included in his forecast and how certain Republican-leaning pollsters are weighted in his polling averages. Social media users have criticized his employment at Polymarket, a political betting site that has received significant investment from conservative billionaire Peter Thiel, who has personal and professional connections to the Republican vice presidential nominee, J.D. Vance. He’s also received praise from Trump himself, which probably hasn’t helped the perception that his forecast is biased toward Republicans, despite Silver recently telling the “Risky Business” podcast that he plans to vote for Harris.
While there are valid critiques of Silver’s model and criticisms of Polymarket, which is potentially pushing the boundaries of what is legal in terms of how derivatives markets are allowed to operate in the United States, Trump’s statement also touches on a core misunderstanding of what an election forecast like Silver’s is and how its creators normally present it. Trump himself said that Silver had him leading and “up by a lot,” which isn’t true if a reader looks at Silver’s polling average. Other publications have even characterized Silver as a pollster, when what he does is probably better described as handicapping, akin to predicting which team will win a football match or which hand might win in a game of poker.
Although a forecast like Silver’s or FiveThirtyEight’s is informed by data, it’s not necessarily free from editorial decision-making. Forecasters still choose what will be included in their model, how different factors will be weighted, and what assumptions are made about an election and how they think it is likely to unfold. Silver himself has suggested that his model’s forecast will change as the election draws closer, even if there is no shift in the polls, meaning that there is likely some sort of shift in polling priced in.
According to Scott Tranter, director of data science at Decision Desk HQ, forecasting is far from an exact science, even if modelers are normally trying their best to provide sober, statistically informed analysis.
“Models are like cooking, there are many ways to cook a burger, many of those are edible, but at the end of the day people have a preference,” Tranter said in an interview.
Decision Desk HQ’s model works like most models, Silver’s and FiveThirtyEight’s included, in that it uses historical and present-day demographic data on the electorate — think race, income and party registration — and combines it with past election results and polling data to produce a likelihood of a given electoral outcome.
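At its core, that process can be boiled down to a simple idea: average the polls (with weights reflecting quality or other editorial judgments), then treat the resulting margin as uncertain to get a win probability. The sketch below is a toy illustration of that idea only, not any outlet’s actual model; the polls, weights and the 4-point standard deviation are all invented for the example.

```python
import math

def win_probability(poll_margins, weights, margin_sd=4.0):
    """Toy poll-based forecast: take a weighted average of poll margins
    (candidate A minus candidate B, in points), then treat the true
    margin as normally distributed around that average and return the
    probability that candidate A's margin is positive."""
    avg = sum(m * w for m, w in zip(poll_margins, weights)) / sum(weights)
    # P(true margin > 0) under Normal(avg, margin_sd), via the error function
    z = avg / margin_sd
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Hypothetical polls showing +2, -1 and +3 for candidate A,
# down-weighted by (invented) pollster-quality scores
prob = win_probability([2.0, -1.0, 3.0], [1.0, 0.8, 0.6])
print(round(prob, 2))  # a modest polling lead yields roughly a 60/40 race
```

Note how a lead of barely more than a point translates into a probability well short of certainty; that gap between "ahead in the average" and "likely but far from guaranteed to win" is exactly what forecast percentages are trying to express.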
In creating a model, Tranter said, forecasters still have to make plenty of decisions about how to weigh certain factors and whether to even include some.
“There’s many different polls forecasters use, weighting schemes, things like that. Some forecasters don’t like to use Trafalgar,” Tranter said, referring to a pollster that frequently has more favorable numbers for Trump than others. “It’s not a good or a bad choice, it’s just a choice.”
The way forecasters often choose what to include in their model is backtesting a given indicator against historical election data. If a factor has been predictive in past elections, that’s a solid argument for including it in a model for future elections. However, making these decisions is far from an exact science and there’s no way to guarantee that an indicator in past elections will hold true in future elections.
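In its simplest form, backtesting an indicator just means checking how often it would have called past races correctly. The snippet below shows that idea with entirely invented data; a real backtest would use actual historical election results and a far more careful evaluation.

```python
# Toy backtest of a hypothetical indicator against past races.
# Every record here is invented for illustration: each pair is
# (winner the indicator predicted, winner of the actual election).
past_elections = [
    ("A", "A"), ("B", "B"), ("A", "B"), ("B", "B"),
    ("A", "A"), ("B", "A"), ("A", "A"), ("B", "B"),
]

hits = sum(pred == actual for pred, actual in past_elections)
hit_rate = hits / len(past_elections)
print(f"{hits}/{len(past_elections)} correct ({hit_rate:.0%})")  # 6/8 correct (75%)
```

A 75% hit rate on eight races sounds decent, but with a sample that small the result could easily be noise, which is precisely the small-sample problem discussed below.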
This runs up against one of the core issues with using historical data about presidential elections, and with election modeling in general. There have only been 59 presidential elections, meaning the sample size to backtest indicators on is quite small. And most forecasters, like Tranter, would argue that “there’s a good chunk of those that aren’t good to backtest on.” The electoral indicators of the 1800s, for example, are unlikely to predict elections in the 21st century.
Logan Phillips, the founder of the handicapping site Race to the WH, explained that picking up on potential future predictors is where the differences between a lot of forecasts are established.
Phillips said in an interview that he started incorporating the partisan drift of states into his forecast this year, meaning that his model assumes that a “state that has been rapidly accelerating towards one party is probably going to keep moving in that direction.” An example of a state like this is Florida, which has shifted towards Republicans in recent elections.
Another indicator that Phillips incorporated into his model in 2022 was special elections. He credits his incorporation of special election data with helping him avoid overestimating Republicans that year. Importantly, however, factors like special elections might not have been particularly good historical indicators, even if they were a good indicator of electoral performance in 2022.
Another decision that forecasters make is whether to make their model into what Tranter calls either a “forecast” or a “nowcast.” The distinction here is that a “forecast” might have movement in one direction or another priced into its final analysis whereas a “nowcast” produces a prediction as if the election were to happen today. This is another editorial decision that forecasters make, and one that can leave the average reader scratching their head as to why the polls might say one thing and the forecast predicts another.
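One common way to "price in" future movement is to widen the uncertainty around today's polling average based on how far out the election is, so that the same lead counts for less months in advance than it does on election eve. The sketch below illustrates that mechanic with invented parameters; it is not how any particular forecaster actually handles drift.

```python
import math

def win_prob(margin, margin_sd):
    """P(candidate's true margin > 0) under a normal assumption."""
    return 0.5 * (1 + math.erf(margin / (margin_sd * math.sqrt(2))))

def forecast(margin, days_out, base_sd=3.0, drift_per_day=0.3):
    """A 'forecast' adds extra variance for possible polling movement
    between now and Election Day; a 'nowcast' is the days_out=0 case.
    base_sd and drift_per_day are invented illustrative parameters."""
    sd = math.sqrt(base_sd**2 + drift_per_day**2 * days_out)
    return win_prob(margin, sd)

# The same +2-point polling lead, read two ways:
nowcast = forecast(2.0, days_out=0)      # as if the election were today
with_drift = forecast(2.0, days_out=60)  # possible movement priced in
```

Under these toy numbers the nowcast is noticeably more confident than the 60-days-out forecast, even though both start from the identical polling average. That is why a forecast and the polls feeding it can appear to disagree.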
It’s also the sort of decision that makes quantitative models like the ones at FiveThirtyEight not so dissimilar from qualitative models, like the sorts of ratings issued by the handicappers at Sabato’s Crystal Ball or the Cook Political Report.
Miles Coleman, the associate editor at Sabato’s Crystal Ball, described the process of creating ratings in an interview, and in some ways the decisions that quantitative and qualitative handicappers make are comparable.
“We try to keep a good balance of what polls say versus where are parties spending money versus what are the historical trends in these states and what are our contacts telling us,” Coleman said.
This year, Coleman identified the lack of ticket-splitting as a potentially important indicator in electoral predictions. In recent elections, voters have become less likely to vote for candidates belonging to different parties on the same ballot. Coleman said that this is factoring into the ratings he’s working on in states like Nevada, North Carolina and Montana.
Another factor they’re tracking this far out from Election Day is the vote share a candidate is polling at as opposed to a candidate’s margin in a given survey. While it’s possible to win an election with less than half the vote, it’s impossible to lose an election in a given state with the majority of the vote. Coleman said that this is a useful metric, especially if you’re expecting presidential and down-ballot polling to converge between now and Election Day.
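The share-versus-margin point reduces to simple two-candidate arithmetic: a candidate above 50% cannot be overtaken in that state even if every remaining voter breaks the other way, while a candidate leading 48% to 44% still can be. A minimal sketch of that logic (the function name and numbers are illustrative, not anyone's actual metric):

```python
def can_lose_state(vote_share):
    """Toy two-candidate arithmetic: can a candidate polling at
    `vote_share` percent still be overtaken if the opponent wins
    every remaining (undecided) voter?"""
    opponent_ceiling = 100.0 - vote_share  # opponent's current share plus all undecideds
    return opponent_ceiling > vote_share

print(can_lose_state(52.0))  # False: majority support cannot be overtaken
print(can_lose_state(48.0))  # True: a 48%-44% lead can still flip
```

This is why a candidate's absolute vote share can be a steadier signal than the margin alone, particularly while a chunk of the electorate remains undecided.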
One thing most handicappers — either quantitative or qualitative — encourage readers to do is to think about their forecasts probabilistically. While a 33% chance of winning might seem like bad odds in the context of an election, an event with a sample size of one, it’s also about the same chance an NBA player has to make a three-point shot. Forecasters also often encourage readers to look at multiple forecasts and compare and contrast them.
In other words, even though Silver’s forecast might give Trump a 60% chance to win and another forecast might give Harris a 55% chance to win, those forecasts are essentially in agreement in terms of the bigger picture — both candidates have a good chance of winning.
In Tranter’s assessment, this often gets lost as forecasts are circulated on social media and can become misleading, especially if a forecast is presented like a poll, where a 60% to 40% split would indicate a near insurmountable lead.
The “bottom line,” as Tranter puts it, is that, even though some forecasters might agree or disagree with what goes into a given forecast, “we’re all saying roughly the same thing.”