Polls should provide three basic pieces of information:

1. Were the participants randomly selected? Random selection reduces bias. If I surveyed my best friends or the people in my neighborhood, it is unlikely their views would represent the views of all the people in my state.

2. How many people participated? Did they talk to just 100 people or 1,000 people? Related to this is the response rate: how many of the people contacted actually participated. When only a small percentage of those asked complete the survey, it becomes a "volunteer" sample, and you should ask whether there might be some kind of bias in who chooses to participate. In a workplace survey, for example, if only 10 percent of the employees participate, you might worry that only the most unhappy people answered; that would give a more negative view of management than if everyone had answered the survey.

3. How accurate are the results? Because researchers want to make inferences about what people in general believe about global warming, they rely on the magic of statistics to calculate something called the margin of error (sometimes called sampling error). Assuming the participants were randomly selected, they can calculate the margin of error based in part on the number of people who participated. In polls, researchers present this information as plus or minus 3 or 5 percentage points.
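The connection between sample size and margin of error can be sketched with the standard textbook formula for a proportion from a simple random sample. This is a simplification: a real polling firm such as TNS may also adjust for weighting and survey design, but the basic arithmetic looks like this.

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    # z = 1.96 corresponds to the usual 95 percent confidence level;
    # p = 0.5 is the worst case (widest interval) for a proportion.
    return z * math.sqrt(p * (1 - p) / n)

# A random sample of about 1,000 yields roughly the familiar +/- 3 points:
print(round(margin_of_error(1001) * 100, 1))  # 3.1
```

Notice that quadrupling the sample size only halves the margin of error, which is why polls rarely go far beyond 1,000 to 1,500 respondents.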

**What was reported in the stories?**

I was surprised that none of the three sources I read (the Washington Post, Democracy Now! and the Christian Science Monitor) contained this basic information. Maybe they were all short on space that day.

Linking to the actual survey, the Washington Post gives the needed information:

*This Washington Post-ABC News poll was conducted by telephone Nov. 12-15, 2009, among a random national sample of 1,001 adults including users of both conventional and cellular phones. The results from the full survey have a margin of sampling error of plus or minus three percentage points. Sampling, data collection and tabulation by TNS of Horsham, Pa.*

OK. So they used standard procedures: random selection to obtain a sample of about 1,000, with an estimated margin of sampling error of plus or minus 3 percentage points.

What does the **margin of sampling error** mean in practical terms? Basically, it means that if they had surveyed all adults in the United States, they are 95 percent certain (this is the *confidence level*, which is typically set at 95 percent) that between 69 percent (72 percent minus 3 points) and 75 percent (72 percent plus 3 points) would report that they believe global warming is happening.
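The interval above is just the poll result plus and minus the margin of error. A minimal sketch of that arithmetic, using the figures from this poll:

```python
p, moe = 0.72, 0.03  # poll result (72 percent) and margin of error (3 points)
low, high = p - moe, p + moe
print(f"95% confident the true figure is between {low:.0%} and {high:.0%}")
# 95% confident the true figure is between 69% and 75%
```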

This becomes important when comparing data. If in the prior poll 75 percent said they believed global warming was happening, the margins of error of the two polls overlap. The prior poll's range would run from 72 percent to 78 percent (75 plus or minus 3), which overlaps the current poll's range of 69 to 75 percent. In other words, there is no statistical difference between the two polls, because the difference could be explained by the margin of error in the polls.

When the ranges of the margin of error do not overlap, the result is interpreted as statistically significant: that is, there is a real difference in the percentage reporting they believe global warming is happening, and that difference is not likely due to the error inherent in working with random sample data.
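The overlap check described above can be sketched as a simple interval comparison. This is a rough rule of thumb, not a formal test; statisticians would usually run a two-proportion significance test, and non-overlapping intervals is actually a conservative criterion. Using the figures from the two polls:

```python
def interval(p, moe=0.03):
    # confidence interval as a (low, high) pair
    return (p - moe, p + moe)

def overlaps(a, b):
    # two intervals overlap when neither lies entirely above the other
    return a[0] <= b[1] and b[0] <= a[1]

prior = interval(0.75)    # 72% to 78%
current = interval(0.72)  # 69% to 75%
print(overlaps(prior, current))  # True -> no statistically significant change
```

Only if the prior figure had been, say, 80 percent (a range of 77 to 83) would the intervals separate and the change count as significant under this rule.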