Posted: September 21, 2016

Focusing On Error In Polls Isn't Sexy, But It's Necessary

We don't actually know how much error is in any given poll. It's time we talked more about that.

It’s that point in the election cycle when, if you don’t like the results of one poll, you can wait five minutes and another one will come out. People are inundated with numbers, which they might not know how to interpret.

Polls contain “error,” meaning the numbers are estimates of what people think rather than precise data points. Pollsters and journalists will usually provide a “margin of error” that approximates how much the numbers might change due to the random chance of who is selected to participate in the poll.
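For readers who want the arithmetic: the article doesn't show the formula, but for a simple random sample the margin of error comes from a standard textbook calculation. Here is a minimal sketch in Python, where the 95 percent confidence level and the worst-case 50/50 split are conventional assumptions rather than figures from any particular poll:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """95 percent margin of error for a proportion from a simple random
    sample of size n; p = 0.5 is the conventional worst case."""
    return z * math.sqrt(p * (1 - p) / n)

# A poll of 1,000 respondents: about +/- 3.1 percentage points.
print(round(100 * margin_of_error(1000), 1))  # 3.1
```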

But analysts know there’s more error in a poll than this margin indicates. Some say that’s why we shouldn’t report margins of error at all. And we often don’t do a good job of explaining any type of error to audiences.

Nate Cohn provided an excellent demonstration of polling error not accounted for in a margin of error by having four different sets of pollsters re-analyze data from The New York Times Upshot/Siena Florida poll. Using the exact same data but creating their own likely voter calculations, the pollsters produced results ranging from Democratic presidential candidate Hillary Clinton ahead by 4 points to her GOP opponent Donald Trump ahead by 1 point. This is an important lesson in non-sampling sources of error: The choices an analyst makes about how to define likely voters can substantially affect the outcome of the poll.
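To make that mechanic concrete, here is a toy sketch of the same analytic choice. Everything below, the respondents, the two screens, and the resulting margins, is invented for illustration; it is not the Upshot/Siena data, just a miniature version of the decision the four pollster teams each made differently:

```python
# Toy illustration of likely-voter screens applied to one shared dataset.
respondents = [
    # (preference, self-reported chance of voting 0-100, voted in 2012?)
    ("Clinton", 95, True), ("Trump", 90, True), ("Clinton", 60, False),
    ("Trump", 85, True), ("Clinton", 50, False), ("Trump", 40, False),
    ("Clinton", 80, True), ("Clinton", 30, False), ("Trump", 75, True),
    ("Trump", 45, False),
]

def margin(screen):
    """Clinton minus Trump, in points, among respondents passing the screen."""
    likely = [pref for pref, chance, voted_2012 in respondents
              if screen(chance, voted_2012)]
    share = lambda name: likely.count(name) / len(likely)
    return round(100 * (share("Clinton") - share("Trump")))

# Screen A: anyone at least 50/50 to vote -> Clinton +14.
print(margin(lambda chance, voted_2012: chance >= 50))
# Screen B: only near-certain past voters -> Trump +20 (prints -20).
print(margin(lambda chance, voted_2012: voted_2012 and chance >= 75))
```

Same ten answers, two defensible screens, and the race flips. That is the non-sampling error the margin of error never sees.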

But Cohn takes the lesson a step further and argues that the existence of these other sources of error means the margin of error wasn’t worth reporting in the original poll: “Clearly, the reported margin of error due to sampling, even when including a design effect (which purports to capture the added uncertainty of weighting), doesn’t even come close to capturing total survey error,” Cohn states.
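For those curious about the parenthetical: one common way to quantify a design effect is Kish's approximation, which inflates the margin of error according to how unequal the survey weights are. Whether Cohn's calculation used this exact formula is an assumption on my part, and the weights below are invented:

```python
import math

def kish_design_effect(weights):
    """Kish's approximation: deff = 1 + CV^2 of the weights,
    equivalently n * sum(w^2) / (sum(w))^2."""
    n = len(weights)
    return n * sum(w * w for w in weights) / sum(weights) ** 2

# Invented weights: upweighting 200 hard-to-reach respondents threefold.
weights = [1.0] * 800 + [3.0] * 200

deff = kish_design_effect(weights)           # ~1.33
effective_n = len(weights) / deff            # ~754 of the 1,000 interviews
moe = 1.96 * math.sqrt(0.25 / len(weights))  # nominal: ~3.1 points
adjusted_moe = moe * math.sqrt(deff)         # after weighting: ~3.6 points

print(round(deff, 2), round(effective_n), round(100 * adjusted_moe, 1))
```

Even that adjustment, as Cohn notes, captures only the uncertainty added by weighting, not the other sources of error discussed below.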

It isn’t actually unusual for polls not to report a margin of error. Some surveys don’t report margins of error or even mention the concept, particularly online surveys, which don’t use the random sampling that is the basis for a margin of error calculation. But the NYT Upshot/Siena poll was conducted by telephone, and a margin of error typically accompanies these types of polls.

Not reporting a margin of error brings up a big concern for those of us trying to communicate with readers about polling: If there’s not a prominently listed margin of error, do they understand that there’s error? Do they have any idea how much error there might be? 

Unlike analysts, who know there’s error and can quickly estimate a margin of error by looking at a poll’s sample size, most readers aren’t thinking about the uncertainty in polling estimates. It’s up to the experts to explain it.
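The quick estimate analysts make from a sample size is a rule of thumb: at 95 percent confidence with an even split, the 1.96 times 0.5 factor in the formula above is roughly 1, so the margin of error is about one over the square root of the sample size. A one-line sketch:

```python
import math

# Back-of-envelope: 95 percent margin of error ~ 1/sqrt(n), in points.
for n in (400, 600, 1000, 1500):
    print(n, round(100 / math.sqrt(n), 1))  # 5.0, 4.1, 3.2, 2.6
```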

The question, then, is how best to explain survey error. Cohn chose to mention it, but not to give an exact figure, in the poll release story: “All polls, of course, are subject to a margin of error. But the margin of error does not include many other potential sources of error, like the choices of the many undecided voters, or decisions made by pollsters about how to adjust the poll.”

This is a good start. However, it still leaves readers to wonder just how much error there is in the poll. How much error do surveys usually have? Most non-experts probably don’t have the answer to that question, so they could assume the survey has only half a percentage point of error. Or they might assume it has 8 points of error.

It also still doesn’t address error based on people who couldn’t be reached because of how the poll was conducted (“coverage error,” such as those who rely solely on mobile phones in a landline telephone survey), people who didn’t respond to the survey (“nonresponse error”) and potential bias in how the questions were asked and answers recorded (“measurement error”).

We should be able to address error better in the polling field. Pollsters release hundreds of polls in an election cycle, often with great care given to how the poll is conducted and how the sample is designed. It’s time to put more care into estimating and describing the error in those polls. Doing so is important both to the general public trying to understand polls and to pollsters themselves: If we’re providing realistic expectations of error, polling “misses” that are actually reasonable based on expected survey error might be better understood.

But that means putting aside the horserace for a while and focusing on the errors. Unfortunately, the horserace is what drives attention and traffic, and both pollsters and journalists need that to survive in their industries. Technical discussions of error (like this one) aren’t going to get many clicks.

However, Nate Cohn and the four sets of pollsters he worked with took the time to dive into the errors with very useful results. It’s a lesson many experts will likely use well beyond this year’s election. So maybe the risk is worthwhile.
