UPDATE — 1:30 p.m.: On Tuesday, Gallup released its first polling results on the presidential race among respondents it considers most likely to vote. Among likely voters interviewed over the last week, Gallup shows Republican nominee Mitt Romney with a 2-point advantage over President Barack Obama (49 percent to 47 percent). Its results for all registered voters continue to give Obama a 3-percentage-point advantage (49 percent to 46 percent), although that margin narrowed by 2 points compared to the previous day.
Separately, however, Gallup noted that interviews conducted Monday and Tuesday nights suggested that Romney’s debate performance “may not have a lasting impact.”
On Tuesday, just one day after reporting two different results on the presidential race among registered voters, the Gallup organization announced that it would begin reporting results among those it deems most likely to vote.
The shift will “wipe out” the 5-point advantage for President Barack Obama among registered voters, according to USA Today Washington Bureau Chief Susan Page, because “Republicans are more energized and more likely to actually go and vote.”
The results of the new Pew Research Center survey, also released on Tuesday, showed a similar result. Obama and Republican nominee Mitt Romney were tied among all registered voters (with 46 percent each), but Romney led by 4 percentage points (49 percent to 45 percent) among likely voters.
Pollsters have sound reasons for wanting to narrow their samples to the most likely voters. Neither all adults nor all registered voters actually vote. Four years ago, just over 131 million Americans cast a vote for president, a number that amounted to roughly 62 percent of eligible adults or roughly 90 percent of those who told the U.S. Census that they were registered to vote.
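The turnout arithmetic above can be checked with a quick back-of-the-envelope calculation. The eligible-adult and registered totals below are rough figures back-calculated from the percentages in this article, not official Census numbers:

```python
# Rough 2008 turnout arithmetic from the figures cited above.
votes_cast = 131_000_000              # per the article
eligible_adults = 212_000_000         # assumed: back-calculated, not a Census total
reported_registered = 146_000_000     # assumed: back-calculated, not a Census total

pct_of_eligible = votes_cast / eligible_adults * 100
pct_of_registered = votes_cast / reported_registered * 100

print(f"{pct_of_eligible:.0f}% of eligible adults voted")          # ~62%
print(f"{pct_of_registered:.0f}% of self-reported registrants voted")  # ~90%
```

The gap between those two percentages is the whole problem: a sample of all registered (or self-described registered) respondents contains many people who will not actually vote.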
Homing in on those most likely to vote has provided more accurate forecasts of an election’s outcome. However, the task is difficult and involves at least as much art as science. Part of the reason, as political scientist Joel Bloom put it a decade ago, is that the actual electorate is a “population that technically does not yet exist,” but rather is “in the process of becoming one.” In other words, some voters will not actually decide whether or not to vote until the last moment.
But a bigger challenge, as Slate’s Sasha Issenberg succinctly put it, is that “likely voters lie.” Many routinely claim they have voted or are likely to vote when they have not or will not. So pollsters use various indirect measures to identify the most likely voters.
Gallup’s approach is the best known and has evolved only slightly since first adopted by George Gallup and his colleagues over 50 years ago. They ask seven questions, known to correlate generally with turnout, about voting intentions, past voting, interest in the campaign and knowledge of voting procedures. They then combine those questions into an index that they use to designate some predetermined fraction of their adult sample as most likely to vote.
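As a rough illustration of how an “index and cutoff” model of this kind works, here is a minimal sketch. The question names, scoring and cutoff logic are assumptions for illustration only, not Gallup’s actual items or weights:

```python
# Hypothetical "index and cutoff" likely voter model: score each respondent
# on turnout-related questions, then keep only the top-scoring share of the
# sample matching the expected turnout fraction.

QUESTIONS = [  # illustrative stand-ins for the seven turnout-correlated items
    "plans_to_vote", "voted_in_last_election", "voted_in_precinct_before",
    "interest_in_politics", "thought_given_to_election",
    "knows_polling_place", "likelihood_scale",
]

def likely_voter_cutoff(respondents, turnout_fraction):
    """Sort respondents by their 0-7 turnout index and keep the fraction
    of the sample expected to vote."""
    scored = sorted(
        respondents,
        key=lambda r: sum(r.get(q, 0) for q in QUESTIONS),
        reverse=True,
    )
    cutoff = round(len(scored) * turnout_fraction)
    return scored[:cutoff]

sample = [
    {"plans_to_vote": 1, "voted_in_last_election": 1, "interest_in_politics": 1},
    {"plans_to_vote": 1},
    {},  # gives no turnout signals at all
    {"plans_to_vote": 1, "voted_in_last_election": 1},
]
# With an expected 50 percent turnout, only the two highest scorers survive.
print(len(likely_voter_cutoff(sample, 0.5)))  # 2
```

Note that the model never asks “will this person vote?” directly; it ranks everyone and draws a line at a predetermined turnout level, which is why the assumed turnout fraction matters as much as the questions themselves.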
Gallup’s biennial switch to likely voters inevitably raises interest in this poorly understood aspect of polling methodology. Here are six things worth knowing about how pollsters choose likely voters:
1) No two pollsters do it exactly the same way. The Gallup “index and cutoff” approach is among the best known and most complex, and is a model followed by many other national media polls, including the Pew Research Center, Washington Post/ABC News and Ipsos. But many pollsters take the very different approach of asking two or three simple screening questions, such as whether respondents are registered and say they are very likely to vote. (This review provides a guide to the specific approaches used by various pollsters eight years ago.)
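The simpler screening-question approach can be sketched in a few lines. The field names and the "very likely" threshold are assumptions for illustration, not any particular pollster's wording:

```python
# Hypothetical two-question screen: keep only respondents who say they are
# registered AND very likely to vote. Unlike the index-and-cutoff approach,
# this makes no attempt to hit a predetermined turnout level.

def screen(respondents):
    return [
        r for r in respondents
        if r.get("registered") and r.get("vote_likelihood") == "very likely"
    ]

poll = [
    {"registered": True,  "vote_likelihood": "very likely"},
    {"registered": True,  "vote_likelihood": "somewhat likely"},
    {"registered": False, "vote_likelihood": "very likely"},
]
print(len(screen(poll)))  # 1
```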
2) Pollsters often withhold details about their approach. Some, like Gallup, share complete details of their model or, alternatively, reveal the straightforward screening questions at the beginning of their surveys. Many others, however, are more circumspect, describing their process in more general terms or, more often, saying only that “likely voters” have been selected.
3) The “science” of likely voter models involves retrospective tinkering to get a more accurate estimate. It is typically not an effort to identify with precision those who actually vote. That distinction is subtle but important.
One of the most rigorous recent assessments in the public domain of the classic Gallup model is this 1999 vote validation study conducted by the Pew Research Center, a study that produced a somewhat surprising finding: The Gallup-style model misclassified the turnout behavior of roughly 1 in 4 voters (27 percent), yet it resulted in a slightly more accurate prediction of the vote than other measures that classified the true electorate with greater precision.
The conclusion? “Correctly classifying respondents does not lead to better horserace predictions.” This finding reinforced the retrospective tinkering approach that pollsters have used for decades. They look back and ask, “How could I have changed my likely voter model to predict the outcome more accurately?” They use their conclusions to refine their model for the next election.
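That retrospective question can be made concrete with a toy example: after the election, scan alternative turnout cutoffs and ask which would have come closest to the actual margin. Every number and name below is invented for illustration:

```python
# Toy "retrospective tinkering": given respondents already sorted by
# turnout score (highest first), find the cutoff fraction whose implied
# electorate best matches the actual election margin.

def best_cutoff(scored_respondents, actual_margin, cutoffs):
    """scored_respondents: (turnout_score, candidate) tuples, sorted
    descending by score. actual_margin: D minus R, in points."""
    best, best_err = None, float("inf")
    for frac in cutoffs:
        n = max(1, round(len(scored_respondents) * frac))
        electorate = scored_respondents[:n]
        dem = sum(1 for _, candidate in electorate if candidate == "D")
        margin = (2 * dem - n) / n * 100  # D minus R among those kept
        err = abs(margin - actual_margin)
        if err < best_err:
            best, best_err = frac, err
    return best

# Six invented respondents; high turnout scores happen to lean "R" here.
scored = [(7, "R"), (6, "R"), (5, "D"), (4, "D"), (3, "D"), (2, "D")]
# If the actual result was D+5, the middle cutoff fits best in hindsight.
print(best_cutoff(scored, 5, [1/3, 2/3, 1.0]))
```

The pollster then carries the winning cutoff (or question weights) forward to the next cycle, which is exactly the "accuracy over classification" logic the Pew validation study endorsed.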
4) For some, the tinkering can involve a decision about which particular model to apply, and those decisions are often not fully disclosed. ABC News explains, for example, that they “develop a range of ‘likely voter’ models” and “evaluate the level of voter turnout produced by these models and diagnose differences across models when they occur.” In other words, they reserve the right to apply different variants of their models as circumstances warrant.
5) “Internal” campaign pollsters are increasingly taking a very different approach. Frustrated by inaccurate self-reports, campaign pollsters are increasingly turning to samples drawn from official lists of registered voters and using the records of actual turnout by individual voters to model the likely electorate. Unfortunately, few of these polls are publicly released, and those that are disclose little or nothing about their methods and typically show a bias — campaign pollsters tend to release only good news for their clients.
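A minimal sketch of the voter-file idea, assuming hypothetical record fields and an invented threshold, might score each registrant by actual past participation rather than by self-report:

```python
# Hypothetical voter-file scoring: rate each registrant by the share of
# recent elections they actually voted in, per official turnout records.
# Field names, election labels and the 0.5 threshold are all invented.

def vote_history_score(record, n_elections=4):
    past = record.get("voted_in", [])  # elections with a recorded vote
    return len(past) / n_elections

voter_file = [
    {"name": "A", "voted_in": ["2004_general", "2006_general",
                               "2008_general", "2010_general"]},
    {"name": "B", "voted_in": ["2008_general"]},  # presidential-only voter
]
likely = [r for r in voter_file if vote_history_score(r) >= 0.5]
print([r["name"] for r in likely])  # ['A']
```

The appeal is obvious: a recorded vote cannot lie the way a self-report can, though the approach depends on matching respondents to the file and still requires judgment about new registrants with no history.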
6) When applied weeks or months before an election, Gallup-style likely voter models can add considerable volatility to polling results. After the 2000 elections, political scientists Robert Erikson, Costas Panagopoulos and Christopher Wlezien found that wide swings, shown in the daily tracking poll that Gallup conducted for CNN and USA Today during the fall, had been the result of changes in the composition of the likely electorate rather than a consequence of shifts in voter preference. “The danger,” they wrote, is mistaking “shifts in the excitement level of the two candidates’ core supporters for real, lasting changes in preferences.”
Erikson and his colleagues studied only the Gallup data, but some argue that the trends seen in recent weeks, first toward Obama following the party conventions and now toward Romney after the first debate, have been magnified among “likely voters” by similar shifts in enthusiasm.
The volatility seen in 2000 may be one reason why Gallup has waited until four weeks before the election to make the switch to reporting results among likely voters. Erikson and his colleagues argued for waiting longer, until election eve, to report on likely voters, but so far no polling group has taken their advice. The result is that we may see more shifts in very close races as partisan enthusiasm ebbs and flows between now and Nov. 6.
Article Courtesy of The Huffington Post