Posted: Feb. 3, 2015 | Updated: Feb. 3, 2015

The 'Margin Of Error' Is More Controversial Than You Think


If you read polls in the news, you're probably familiar with the term "margin of error." What you may not know is that pollsters disagree fiercely about when it should be used.

In an actual debate last week, sponsored by the do-it-yourself sampling firm Peanut Labs, polling experts got together to argue whether a margin of error should ever be reported for surveys conducted online -- which is how more and more surveys are conducted.

Most industry standards and guidelines say that surveys drawn from nonrandom samples -- typically the case with online polling -- should not provide a margin of error when their results are generalized to the wider population. Yet, as debate moderator Annie Petit noted, many readers expect to see the margin of error, regardless of how the poll was done. So she threw out a provocative question: "Is it really so terrible to use a statistic that everyone understands so well?"

At HuffPost Pollster, which regularly conducts online surveys with YouGov, we don't have a perfect answer to that. But we've decided it's time for even greater transparency with our readers.

Let's start here: What is a margin of error?

All surveys are subject to several known forms of error. One of those is relatively easy to predict and quantify, and that's the error produced by interviewing a random sample rather than the entire population whose opinion you're seeking.

Using the sample size (and a few other factors), the pollster can calculate the margin of sampling error. It describes how close the sample's results likely come -- within plus or minus a few percentage points -- to the results that would, in theory, have been obtained by interviewing everyone in the population.
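To make that concrete, here's a minimal sketch (using the textbook formula for a simple random sample, not any particular pollster's method) of how the familiar plus-or-minus figure falls out of the sample size:

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate 95% margin of sampling error for a simple random sample.

    n: sample size
    p: observed proportion (0.5 is the worst case, so it's the convention)
    z: critical value (1.96 for 95% confidence)
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll of 1,000 respondents:
print(round(margin_of_error(1000) * 100, 1))  # -> 3.1 (percentage points)
```

Note that the sample size sits under a square root: quadrupling the sample only halves the margin of error, which is why most national polls settle for roughly 1,000 interviews and a margin around 3 points.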

Over the years, the margin of sampling error has typically been provided to give readers a sense of a poll's overall accuracy. That simple idea requires some critical assumptions, however: It presumes that the sample was chosen completely at random, that the entire population was available for sampling and that everyone sampled chose to participate in the survey. It also assumes that respondents understood the questions and that they answered in the desired way. For pre-election surveys, it assumes that pollsters have accurately defined and selected the population of likely voters.

In the real world, these assumptions are never fully satisfied. If some part of a population is not sufficiently covered or does not respond, for example, and that missing portion is different on some characteristic or attitude of interest, the survey results could be off in ways not reflected in the margin of sampling error. Such "coverage" and "non-response" errors can be harder to detect, predict or numerically quantify, since we don't know how the people we don't interview will answer our questions -- that's the point of doing a survey in the first place.

In the early days of modern survey research, however, response and coverage rates were generally high. (Thirty or forty years ago, Americans were more likely to talk to pollsters, for one thing.) So it was more reasonable to assume that these other sources of error didn't matter much. The term "margin of sampling error" was regularly shortened to "margin of error" and used as a catch-all for any potential inaccuracy in the survey.

According to one academic study, even scientists and research professionals often wrongly interpret the margin of error to be an estimate of all possible error in polling data. It's not surprising the general public makes the same mistake.

All of this brings us back to the often contentious debate among pollsters about whether it is appropriate to report a margin of error for Internet-based surveys that do not begin with a random sample.

The idea of a random sample is that everyone in the larger population -- the group whose opinions the pollster wants to determine -- has a known probability of being chosen for the random sample. Hence, it's also called a probability sample. The opposite of a random sample is sometimes labeled a convenience sample, in which those conducting the survey gather the views of everybody who conveniently stops to answer questions.

Online surveys typically start out with the convenient: They use nonrandom methods to recruit potential respondents for "opt-in" panels and then select polling samples from these panels. But professional Internet pollsters don't stop there. To make the nonrandom sample look like the population, these pollsters use weighting and modeling techniques that are similar to, albeit more statistically complex than, the methods used with random-sample polls conducted by phone.
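As a toy illustration of the simplest such adjustment -- cell weighting against hypothetical population targets, far cruder than what YouGov or any professional pollster actually does -- consider:

```python
# A minimal sketch of cell weighting (post-stratification) on a single
# demographic variable. Real online-panel weighting is far more elaborate:
# raking over many variables, propensity models, and so on.
population_share = {"18-44": 0.46, "45+": 0.54}  # hypothetical census targets
sample_share     = {"18-44": 0.60, "45+": 0.40}  # hypothetical opt-in panel skew

weights = {group: population_share[group] / sample_share[group]
           for group in population_share}

# Each respondent's answers count in proportion to their group's weight,
# so overrepresented groups count less and underrepresented groups more.
print(weights)  # roughly {'18-44': 0.77, '45+': 1.35}
```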

The argument against reporting a margin of error for opt-in panel surveys is that without random sampling, the "theoretical basis" necessary to generalize the views of a sample to those of the larger population is absent.

"A probability sample is a hallmark of good data," Gary Langer, then ABC News director of polling, wrote in 2009 . "By claiming sampling error, samples taken outside that framework try to nose their way out of the yard and into the house. They dont belong there. I have yet to hear any reasonable theoretical justification for the calculation of sampling error with a convenience sample."

The argument for using the margin of error with Internet surveys is that even nonrandom-sample surveys have sampling variability that is largely random and predictable. Moreover, the assumptions that online pollsters make in seeking to remove a range of errors through weighting and models are similar to the assumptions made by those who begin with random samples.

"Traditional surveys nowadays can have response rates in the 10% range," Columbia political scientist and statistician Andrew Gelman argued last year. "There's no 'grounding in theory' that allows you to make statements about those missing 90% of respondents. Or, to put it another way, the 'grounding in theory' that allows you to make claims about the nonrespondents in a traditional survey, also allows you to make claims about the people not reached in an internet survey."

In addition to low response rates, most phone surveys considered to be probability samples these days actually rely on two different sample sources: residential landlines and mobile phones. The cell phone samples are necessary to reach the growing number of Americans without landlines at home. Back when polls could rely solely on landline phones, most households had just one phone number, so a random sample of landline phone numbers would generate a random sample of households. Now there are often multiple cell phone numbers per household, and sometimes a landline as well, but we don't know when or how often that is the case. That makes it much harder to determine whether the probability of reaching any one household is the same as the probability of reaching any other household.

In recent years, the American Association for Public Opinion Research has waded into this controversy, recommending against the reporting of a margin of error for opt-in surveys and adding to its Code of Ethics a provision describing the "reporting of a margin of sampling error based on an opt-in or self-selected volunteer sample" as "misleading." That 2010 provision built on longstanding ethical guidance: "We shall not knowingly imply that interpretations should be accorded greater confidence than the data actually warrant." (AAPOR currently has a revision of its code in the works, motivated in part by the margin-of-error debate.)

This ongoing debate creates a dilemma for The Huffington Post's reporting of results from the opt-in online panel surveys we conduct in partnership with YouGov. Although YouGov calculates a "model-based margin of error" for each survey, we have not been reporting it when we discuss the survey results in HuffPost.

The problem: If we cite YouGov's margin of error, we violate AAPOR's Code of Ethics. If we leave out the margin of error, however, we fail to offer readers guidance on the random variation that's present with this type of survey, which we believe is also an ethical lapse. As members and proponents of AAPOR, we consider neither situation satisfactory. And the margin of error does offer valuable information when you're comparing two results from a survey or surveys: it tells you how large differences have to be in order to mean something.
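For instance, here's a rough back-of-the-envelope check (assuming two independent samples and the same textbook formula as above) of whether a hypothetical 4-point shift between two 1,000-person polls means anything:

```python
import math

def moe_of_difference(p1, n1, p2, n2, z=1.96):
    """Approximate 95% margin of error for the difference between two
    independent survey estimates p1 and p2 (as proportions)."""
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    return z * se

# Hypothetical example: a candidate at 48% in one 1,000-person poll and
# 52% in a later one. Is the 4-point gap bigger than the combined error?
gap = 0.52 - 0.48
print(gap > moe_of_difference(0.48, 1000, 0.52, 1000))  # False: ~4.4 pts needed
```

The point isn't the exact threshold, which rests on assumptions about the samples; it's that a difference smaller than the combined margin of error could plausibly be noise, and readers deserve some yardstick for that.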

So we've come up with this solution: We'll add the following text to the methodological details we note when we report on HuffPost/YouGov surveys and link to the additional information prepared by YouGov:

"Most surveys report a margin of error that represents some, but not all, potential survey errors. YouGov's reports include a model-based margin of error, which rests on a specific set of statistical assumptions about the selected sample, rather than the standard methodology for random probability sampling. If these assumptions are wrong, the model-based margin of error may also be inaccurate. Click here for a more detailed explanation of the model-based margin of error."

Last week's debate on the margin of error didn't produce any real fireworks, partly because most of the participants conduct online panel surveys themselves. They were unified in hoping for better direction from industry standards.
