Topline
Polling experts worry that polls may be “herding” toward a similar result in the final days of the election, one more theory for why 2024’s polling could prove accurate or badly off.
Key Facts
Polling for the 2024 presidential election shows candidates in a dead heat, but experts say polling accuracy is complicated.
In 2020, presidential polling was the “most inaccurate in 40 years,” overstating President Joe Biden’s eventual margin of victory by more than three percentage points; some believe the pandemic depressed survey engagement and kept polls from capturing Trump’s strengthening support, The Wall Street Journal reported, citing a panel of polling experts.
In 2016, Hillary Clinton was widely projected to win by sweeping key “blue wall” states like Michigan, Pennsylvania and Wisconsin, but Trump edged her out in all three and won the election.
But polling in the 2018 and 2022 midterm elections was relatively accurate, further complicating pollsters’ understanding of the factors at play.
Now, worries about polling have returned as the race remains nearly tied, with concerns on both sides that even a “small systematic polling error” could leave the outcome looking very different from what the polls suggest, prominent statistician Nate Silver writes in the New York Times.
Silver says, if anything, the polls show a “surprise” is equally likely for either side, even if his “gut feeling” points to a Trump victory.
Key Background
In political polling, dozens of firms survey thousands of people in many different ways, including mail, phone and online surveys; often, pollsters combine outreach methods. They then use several techniques to try to make the data more accurate, such as weighting it to better reflect the electorate or adjusting interviewing tactics to elicit more trustworthy answers.
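As a rough illustration of what weighting does, here is a minimal sketch; the demographic groups, sample counts and population shares are hypothetical, and real pollsters use far more elaborate adjustments across many variables at once.

```python
# Minimal illustration of demographic weighting (all numbers hypothetical).
# Each respondent's group gets a weight = population share / sample share,
# so groups underrepresented in the raw sample count for more in the estimate.

sample = {            # group: (respondents in sample, of whom backing Candidate A)
    "college":     (600, 330),
    "non_college": (400, 180),
}
population_share = {"college": 0.40, "non_college": 0.60}  # assumed electorate makeup

total_n = sum(n for n, _ in sample.values())
unweighted = sum(a for _, a in sample.values()) / total_n

weighted = 0.0
for group, (n, a) in sample.items():
    weight = population_share[group] / (n / total_n)  # e.g. 0.60 / 0.40 = 1.5 for non-college
    weighted += (a / total_n) * weight

print(f"Unweighted support for Candidate A: {unweighted:.1%}")
print(f"Weighted support for Candidate A:   {weighted:.1%}")
```

In this made-up sample, college-educated respondents are overrepresented, so weighting shifts the topline from 51% to 49%.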
Are Polls “Herding” Or Matching Each Other?
“Herding” describes pollsters skewing toward a widely expected result, rather than publishing an outlier that might be badly wrong, in order to protect their reputations. The effect is most common in the final days of an election. On Oct. 29, Silver raised concerns about herding on X, formerly known as Twitter, saying “there are too many polls in the swing states that show the race exactly” as Harris leading by one point, a tie or Trump leading by one point. There “should be more variance than that,” Silver writes; few polls, in other words, are publishing outliers. Silver often points to the Times/Siena College poll as an exception to herding. Its predictions aren’t perfect (in 2020, it predicted a nine-point win for President Joe Biden when he won by only 4.5 points), but it does publish outliers. Silver pointed to another outlier: a Des Moines Register/Mediacom poll showing Harris with a three-point lead in Iowa, a state projected to go red throughout the election cycle and one most other pollsters show Trump claiming handily.
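As a back-of-the-envelope illustration of Silver’s variance point, here is a small simulation sketch; the sample size, number of polls and the assumption of a perfectly tied race are all hypothetical. Even if every pollster independently sampled the same dead-even race, ordinary sampling noise alone should push most reported margins outside the narrow Harris-plus-one to Trump-plus-one band.

```python
import random

# Hypothetical setup: the race is a true 50-50 tie and each of many independent
# polls samples 800 voters. How often should sampling noise alone push the
# reported margin outside the Harris +1 / tie / Trump +1 band?

random.seed(0)
N_POLLS, SAMPLE_SIZE, TRUE_SUPPORT = 10_000, 800, 0.50

outside_band = 0
for _ in range(N_POLLS):
    harris_share = sum(random.random() < TRUE_SUPPORT for _ in range(SAMPLE_SIZE)) / SAMPLE_SIZE
    margin = (harris_share - (1 - harris_share)) * 100  # margin in percentage points
    if abs(margin) > 1:
        outside_band += 1

print(f"Share of polls landing outside the +/-1-point band: {outside_band / N_POLLS:.0%}")
# With n=800, the margin's standard deviation is roughly 3.5 points, so a large
# majority of honest, independent polls should fall outside that narrow band.
```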
Are Polls Timely?
Some polls are taken during the final days of an election, but almost no poll captures Election Day itself. The Times/Siena poll, for example, publishes its last national poll toward the end of October, representing data collected no later than Oct. 23. And by the time Nov. 5 rolls around, states have been collecting early votes for days or weeks; California, for example, opens early voting 29 days before the election. FiveThirtyEight calls this a “temporal (or time-dependent) error.” If something influential happens in the days leading up to the election, the polls can’t factor it in. That said, eleventh-hour surprises seldom swing an election.
What Margin Of Error Do Polls Have?
Data isn’t perfect. FiveThirtyEight has explained on its website that even if an “infinite” number of citizens were polled immediately on Election Day, “the average poll would still miss the final margin in the race by about 2 points.” Many polls followed by Forbes report a margin of error between two and five points. That’s especially pertinent in a tied race like 2024’s, in which a few percentage points could decide the outcome.
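For context, the margin of error quoted with a poll comes from a standard sampling formula. The sketch below, using illustrative sample sizes, shows why typical samples of a few hundred to a couple thousand respondents yield the two-to-five-point margins cited above, and why the uncertainty on the gap between two candidates is roughly twice the per-candidate figure.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95% margin of error, in percentage points, on a single candidate's share."""
    return z * math.sqrt(p * (1 - p) / n) * 100

for n in (400, 800, 1500):  # illustrative sample sizes
    moe = margin_of_error(n)
    print(f"n={n:>5}: +/-{moe:.1f} pts per candidate, ~+/-{2 * moe:.1f} pts on the margin")
```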
What Type Of Bias Do Pollsters Worry Can Influence Poll Responses?
Recency bias is the inclination to think the person who won most recently will win again; voters might predict Trump will be the next president because he won in 2016. Nonresponse bias arises when certain groups are less likely to take polls at all. In past elections, Trump supporters have tended to be less responsive to surveys and polling outreach, suggesting polls end up underestimating them. But this year, some Democrats wonder whether (and hope that) polls are similarly undercounting their traditional key demographics, including Black voters and young voters.
Pollsters Are “Weighting” Their Polls More—Here’s Why Some Are Concerned That Might Be Skewing The Polls.
Weighting gives extra emphasis to responses from groups, defined by traits such as a respondent’s sex or age, that pollsters believe are underrepresented in their sample. Ahead of the 2018 midterms, pollsters weighted by education to trim the state-level polling errors of 2016, when presidential polls did not adequately reach voters with less formal education, according to the New York Times. After 2020, pollsters increasingly started weighting by “recalled vote,” or whom the respondent remembers voting for previously, to make sure they were capturing enough Republicans. But some experts are concerned that emphasizing the “recalled vote” has flaws. Respondents might not remember their vote correctly and falsely report that they voted for the winning candidate, according to the New York Times. For some polling operations, the “recalled vote” could inaccurately shape predictions by anchoring them to opinions about a previous election. Silver considers weighting on “recalled vote” another example of “herding” because it aligns the polls with past data, he said in an X post on Nov. 4. Even so, two in three polls used the technique in September, the New York Times reports.
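To make the mechanism concrete, here is a minimal sketch of recalled-vote weighting; the sample counts and support rates are hypothetical, and only the 2020 two-party vote shares are real. The point is simply that the adjustment pulls the topline toward the last election’s result, and that it inherits any error in what respondents say they remember.

```python
# Hypothetical sketch of "recalled vote" weighting: respondents are weighted so the
# share who SAY they voted for each 2020 candidate matches that candidate's actual
# share of the 2020 two-party vote. All sample numbers below are made up.

sample = {   # recalled 2020 vote: (respondents, of whom backing Harris now)
    "recalled_biden": (540, 486),
    "recalled_trump": (460, 46),
}
two_party_2020 = {"recalled_biden": 0.523, "recalled_trump": 0.477}  # actual 2020 result

total_n = sum(n for n, _ in sample.values())
unweighted = sum(h for _, h in sample.values()) / total_n

weighted = 0.0
for group, (n, h) in sample.items():
    weight = two_party_2020[group] / (n / total_n)  # down-weights the over-recalled group
    weighted += (h / total_n) * weight

print(f"Unweighted Harris support:     {unweighted:.1%}")
print(f"After recalled-vote weighting: {weighted:.1%}")
# If respondents misremember (or misreport) their 2020 vote, the adjustment
# anchors today's estimate to flawed data about a past election.
```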
What Is The “Shy Voter Theory”?
The “Shy Voter Theory” suggests Trump supporters are not always truthful about their voting plans, perhaps because they don’t want to admit to a pollster that they back Trump, meaning polls might be underestimating his support in the 2024 election. Some say this theory explains the inaccurate polling in 2016 and 2020. But Silver argues in the New York Times that it might not affect polling today: many Trump voters are ardent, public fans with no reservations about declaring their support.
What Is The “Bradley Effect”?
The “Bradley Effect” theory, if real, would be a boon for Trump. The theory is named after Tom Bradley, the Black Democratic mayor of Los Angeles who lost the 1982 California gubernatorial race to Republican state attorney general George Deukmejian despite leading in late polls. It holds that white voters who identify as “undecided” sometimes just don’t want to admit that they won’t vote for a Black candidate. Barack Obama’s campaigns pushed back against this belief, and his relatively comfortable wins in 2008 and 2012 undercut it. But some think the theory might apply to 2024 polls and to Harris, the first Black woman to top a major party’s presidential ticket. A similar, gender-based effect was blamed in 2016, when undecided voters ultimately tended not to break for Clinton, Silver writes.
What Is The “Unified Theory”?
The “Unified Theory” explains, based on Trump voters’ inclination to ignore surveys, why presidential election polling is less accurate than midterm polling. According to this theory, polling was more accurate in 2018 and 2022 because the midterms drew more highly engaged voters, the ones most likely to take part in polls. In contrast, a larger pool of potentially unresponsive, less engaged voters tends to turn out only in presidential elections. That means the 2016 and 2020 pattern of polls undercounting Trump voters, who are typically less engaged, could continue in 2024.
What Is The “Patchwork Theory”?
The “Patchwork Theory” says the polling errors of 2016 and 2020 are entirely unrelated to each other and to midterm elections; each election faced its own set of unique factors that threw the polls off. Some argue 2016 polls tilted toward Clinton because they overrepresented college-educated individuals and undecided voters. Others point out that in 2020, the pandemic’s widespread influence on people’s lifestyles and decisions drastically affected the polls.
Could Polls Be Overreacting To The “Trump Effect”?
As noted throughout this story, the polling inaccuracies of 2016 and 2020 are largely the reason for concern this election cycle, mostly because of Trump’s fervent but typically less engaged voter base, which has historically been underrepresented in polls. As journalist Nate Cohn points out in his latest piece in The New York Times, those concerns may now be leading pollsters to underestimate Harris or overestimate support for Trump. The dead heat the polls show might reflect adjustments over-tuned to account for the Trump supporters polls missed in past elections.