
How to understand the surveys you’re seeing right now

This article is part of The DC Brief, TIME’s politics newsletter. Sign up here to get stories like this delivered to your inbox.

As we finally make it to the last weekend before Election Day, many of our friends have suddenly become poll experts. Whether it’s a political circle, a YA book club, or even the line at the grocery store, everyone seems to have just one topic on their minds. Did you see the gender gap in The New York Times’ latest poll last week? How about Thursday’s Gallup poll suggesting turnout among Democrats will be greater than at any point in the last 24 years? But didn’t I hear that Trump is polling better than any Republican in the last two decades, including when he won in 2016? It can be a lot.

For those so inclined, digging into the polls can be a choose-your-own-adventure story of self-reassurance, self-torture, and deep confusion. And to be honest, any of those paths is totally valid.

Sure, there are plenty of indicators for gauging the health and potential of the two presidential campaigns: campaign finance data, advertising strategy, where the candidates are planting themselves in the final days. Oh, and don’t even get me started on the imprecise modeling behind early-voting numbers.

But really, polls are the easiest way to get a sense of the race. In August, I published a primer on how to read polls like a pro. But in the final days of an election cycle like no other, many are wondering if the pollsters are getting the presidential race completely wrong…again. Here’s a rundown of why the 2024 polls are different from any other year’s, and why they create more confusion about who’s doing well.

Don’t all polls show it’s basically a coin toss?

Yes and no.

Apologies to readers looking for an easy answer; one is not forthcoming. As Republican pollster Kristen Soltis Anderson notes, the numbers are remarkably consistent across surveys, even if the surveys follow different sets of assumptions to get there. The Times poll showing a race tied at 48% and the CNN poll showing one tied at 47% may each be accurate, but there are big differences in how they reach similar conclusions.

To put it in legal terms: jurors A and B can both find someone guilty of a crime yet reach that verdict by prioritizing different sets of facts. That doesn’t make the verdict wrong; each juror’s reasoning can be as true as it is divergent.

Part of this multi-track path to the same destination comes down to different polling firms pursuing different theories of the case. Is Harris changing the electorate in unprecedented ways, with dramatic, and still unrealized, success among women and college-educated voters? Is she re-forming the old Obama coalition from 2008? Is Trump reviving his 2016 base, or is he relying on another coalition that has become more tolerant of his defiance of norms? And should 2020 voting patterns be ignored given that we were in the middle of a pandemic? Any of these scenarios may be true, but to what extent? Different pollsters consider some of these questions more relevant than others in deciding who comes out on top.

So yes, the polls are close. No one in either camp is sleeping comfortably these days, if at all. The candidates are busy for a reason: this could be decided by fewer than 100,000 people in three (as yet unknown) states. And nobody knows who they are.

So these polls don’t all use a common base?

No. Not even close, if pollsters are being honest. Each polling operation must use its best understanding of who will actually turn up. Typically, as Election Day approaches, polls shift from a broader universe of registered voters to likely voters, and that’s where a mix of statistical modeling, historical trends, and more than a little gut instinct comes in.

Josh Clinton, co-director of Vanderbilt University’s robust polling operation, published an incredibly useful illustration of this challenge. Using a raw data set from a national poll taken in early October, he found Harris ahead by about 6 percentage points. That finding reflects who the pollsters were able to reach, which may not accurately reflect who ultimately turns out to vote. From there, each pollster makes different decisions about how to adjust the raw data. When Clinton adjusts the data to match the 2022 turnout universe, Harris is actually up 8.8 percentage points. Match it to 2020 turnout, and it’s a 9-point race for Harris. And if you use 2016 numbers, Harris still wins by 7.3 percentage points.

But here is where things get interesting. If you overlay a model of how many voters identify as Democrats, Republicans, or neither, you can get very different views of the race. If you believe the Pew Research Center’s data on the national electorate, Harris’ lead shrinks to 3.9 percentage points if turnout looks anything like 2020. Switch to Gallup’s snapshot of the electorate, and that lead drops to 0.9 percentage points. So you can see how modeling alone, using the same raw numbers, can swing this race by 8 points. And that’s just the most basic example of how a tweak here (to a single input question) and a bump there (to dozens of other factors) can throw off the entire system.
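To make the mechanics concrete, here is a toy sketch of that reweighting step. Every number below is made up for illustration (none comes from Clinton’s analysis, Pew, or Gallup); the point is only to show how the same raw responses yield different toplines when weighted to two different assumptions about party identification.

```python
# Toy illustration of poll reweighting. All figures are hypothetical.
# Raw sample: party ID -> (share of raw respondents, share backing candidate A).
raw = {
    "Dem":   (0.36, 0.95),
    "Rep":   (0.30, 0.04),
    "Indep": (0.34, 0.50),
}

def raw_margin():
    """Candidate A's margin (in points) using the sample as collected."""
    return sum(share * (2 * split - 1) for share, split in raw.values()) * 100

def weighted_margin(targets):
    """Reweight each party group to a target share of the electorate,
    then recompute candidate A's margin over candidate B."""
    return sum(targets[p] * (2 * split - 1) for p, (_, split) in raw.items()) * 100

# Two hypothetical "theories of the electorate" a pollster might weight to:
theory_one = {"Dem": 0.33, "Rep": 0.32, "Indep": 0.35}
theory_two = {"Dem": 0.30, "Rep": 0.33, "Indep": 0.37}

print(f"Unweighted margin:      {raw_margin():+.1f} pts")
print(f"Margin under theory 1:  {weighted_margin(theory_one):+.1f} pts")
print(f"Margin under theory 2:  {weighted_margin(theory_two):+.1f} pts")
```

In this toy case, shifting the assumed Democratic share of the electorate by just three points flips a modest lead for one candidate into a deficit, using the exact same interviews. Real pollsters weight on many variables at once, but the principle is the same.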

This happens at every polling outfit in the political universe, and each set of data geeks looks at the data through a different lens. That’s why the same set of voters can say the same thing to pollsters and see it reflected as an entirely different race. There’s a reason we had to show our work in math class; the process matters as much as the answer.

So we shouldn’t compare, say, the CNN polls with the New York Times polls?

Absolutely not. Best practice is to compare apples to apples.

This year includes the added twist of Democrats swapping Joe Biden for Kamala Harris as the nominee in July. Basically, most comparisons between polls taken before and after Biden’s exit are of limited utility. The same is true of comparisons across pollsters, as they all make different assumptions about the electorate.

There is also little value in comparing polls of registered voters with polls of likely voters. They are completely different universes.

Wait. Has no one fixed the political polls after 2016?

The 2016 polls became both punchline and gut punch after their mismatch with reality became apparent on Election Day. After all, Hillary Clinton was thought to be headed for a clear defeat of Trump. But with the benefit of hindsight, it was pretty clear that pollsters assumed too many college graduates would turn out, to name just one of the more glaring errors. Pollsters did their best to fix it four years later, but polls again suggested Biden would do better than he ultimately did.

Part of that is the Trump effect, which again has pollsters second-guessing themselves, and in particular which factors matter most in modeling swing-voter behavior. A research team at Tufts University surveyed, well, the surveys, and found that some of the biggest changes in back-end modeling since 2016 have come in placing much more importance on education, voting history, and where voters actually live. They also documented a shift away from giving respondents’ income and marital status too much influence. Most pollsters also adjusted the weight they give to age, race, and gender.

So yes, pollsters have taken steps to iron out the wrinkles that were so evident in 2016. But this is a science of public opinion that has to build in certain assumptions. And they are just that: assumptions, the best educated guesses about the universe in play.

(Just to be contrarian: a credible argument holds that the 2016 polls were not that far off; it’s just that the national polls didn’t match the state-by-state results that mattered most. Clinton’s allies would rather blame the pollsters for inflating her voters’ confidence into complacency, but the reality is much more nuanced.)

So you’re saying we should chill with polls?

Absolutely. Polls are informative, not predictive. By the time you read them, they are already out of date. Each of them makes educated guesses about who will bother to vote. Almost every crosstab in the latest release of a poll includes a judgment call, and no one gets them all right.

But let’s be honest: we won’t chill. It’s just not what armchair pundits know how to do. After two, if not four, years of waiting for this final push, the catnip of these numbers is too strong. It might be a waste of time, but in the end it might actually have virtue in the unlikeliest of ways.

Closer polls can be a shot in the arm that gets more people to vote if they believe they can really determine the outcome. So these tight polls could be good for the exercise of democracy even as they are rubbish for the discussion of it. Either way, it’s all we’ll be talking about for the next few days, and maybe beyond if the expectations they create prove too far off the mark. I’m just as likely to be guilty of this as anyone else. And, no, I probably won’t apologize.

Understand what matters in Washington. Sign up for the DC Brief newsletter.