Posted: Jan. 7, 2016 | Updated: Dec. 23, 2016

For all the challenges facing the election polling industry, one of the most basic remains figuring out whether people actually vote. Pollsters hoping to call elections correctly need to include Americans who will show up at the ballot box and screen out voters who won't.

Those who probably won't vote tend to favor different candidates than those who are more likely to cast a ballot. A new report from Pew Research shows that how pollsters determine which camp someone falls into can result in completely different predictions about who will win.

Recent elections show the hazards of erring in either direction. Pollsters including Gallup miscalled the 2012 election in large part because they didn't expect Obama's coalition of young and minority voters to turn out as strongly as it did. In the 2014 midterms, some pollsters included too many Democrats and missed a Republican wave.

That 2014 miss prompted Pew Research to dig into its own polling data to figure out what caused its polls to underestimate Republican support. The organization found that one of the biggest factors was determining who the likely voters were. How pollsters made that call could change the estimate by up to 8 percentage points.

But there's not much consensus on how best to identify likely voters. Pew's newly released report details how the same polling data on 2014 House races can shift from a 2-point Democratic lead to a 6-point Republican lead, depending on how likely voters are chosen. The data itself wasn't biased toward either party; the differences are all in the likely voter calculation.
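To make that concrete, here is a minimal, hypothetical Python sketch of how an identical set of responses can show either party ahead depending on which respondents are counted as likely voters. The data, the "engagement" score and the cutoff are all invented for illustration; this is not Pew's actual method or its numbers.

```python
# A minimal, hypothetical sketch of how the same survey responses can yield
# different margins depending on the likely-voter screen. The data and cutoff
# are invented for illustration; this is not Pew's methodology.

def lead(respondents):
    """Format the horse-race margin as a lead for one party, in points."""
    dem = sum(r["preference"] == "D" for r in respondents)
    rep = sum(r["preference"] == "R" for r in respondents)
    pts = 100.0 * (dem - rep) / len(respondents)
    return f"D +{pts:.0f}" if pts >= 0 else f"R +{-pts:.0f}"

# Toy sample: marginal voters lean Democratic, highly engaged voters lean Republican.
# "engagement" stands in for a score built from interest and intention questions.
sample = (
    [{"preference": "D", "engagement": 2}] * 30
    + [{"preference": "R", "engagement": 2}] * 20
    + [{"preference": "D", "engagement": 6}] * 22
    + [{"preference": "R", "engagement": 6}] * 28
)

all_registered = sample                                      # no screen at all
likely_voters = [r for r in sample if r["engagement"] >= 5]  # one possible cutoff

print("All registered voters:", lead(all_registered))  # D +4
print("Likely voters only:   ", lead(likely_voters))   # R +12
```

The raw responses never change in this toy example; only the screen does, which is the kind of swing Pew describes.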

In testing 14 commonly used ways to identify likely voters, Pew found that any attempt to screen for likely voters is an improvement over including all registered voters, or even all those who say that they intend to vote, "both of which include far too many people who ultimately will not cast a ballot," according to the report.

Not all methods were equally accurate, though. One major difference is between reaching interviewees through a method called "random digit dialing" -- which, as the name suggests, involves calling a list of randomly generated numbers -- and registration-based sampling, which pulls respondents from a database of registered voters and includes information about their voting history. Methods that incorporated voter file data tended to produce the most accurate results.

Pew notes that while the lines have blurred in recent years, the public polls released by media and university pollsters tend to use random digit dialing, then narrow down to prospective voters by asking people a series of questions that gauge "interest in the election, past voting behavior and intention to vote." Campaign pollsters, who generally work on behalf of private clients, tend to use registration-based methods, looking at the databases' records of their interviewees' past voting behavior rather than relying on respondents to tell them about their voting history.
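As a rough illustration of that distinction, the hypothetical sketch below contrasts the two approaches: a question-based screen that scores what respondents say about themselves, and a registration-based screen that checks a voter-file record instead. The field names, scoring rule and cutoff are all invented for the example, not taken from any pollster's actual model.

```python
# Hypothetical illustration (not any pollster's actual code) of the two screening
# styles described above: a question-based index built from self-reported answers,
# versus a flag taken directly from a voter-file record of past turnout.

from dataclasses import dataclass

@dataclass
class Respondent:
    interest: int          # self-reported interest in the election, 0-3
    says_voted_2012: bool  # self-reported past voting
    intends_to_vote: bool  # self-reported intention to vote
    file_voted_2012: bool  # actual turnout from a voter-file match, if available

def likely_by_questions(r: Respondent, cutoff: int = 4) -> bool:
    """Question-based screen: score the self-reports and keep those above a cutoff."""
    score = r.interest + (2 if r.says_voted_2012 else 0) + (1 if r.intends_to_vote else 0)
    return score >= cutoff

def likely_by_voter_file(r: Respondent) -> bool:
    """Registration-based screen: trust the recorded history over the self-report."""
    return r.file_voted_2012

r = Respondent(interest=3, says_voted_2012=True, intends_to_vote=True, file_voted_2012=False)
print(likely_by_questions(r))   # True  -- sounds like a voter
print(likely_by_voter_file(r))  # False -- but the record says they stayed home
```

The gap between the two answers for the same respondent is the overstatement problem the report points to: people routinely report voting when the file shows they did not.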


The bad news is that there's no guarantee the likely voter decisions that worked best in 2014 will work best in 2016 or in future elections. Pew's conclusion is that choosing likely voters will continue to vex pollsters, especially when no official record of past voting is available as an input to the models.