Dollars: “Real” versus “Nominal”

When analysts look at costs, revenues, or expenditures over time, they need to decide how to handle the comparison. Dollars do not have the same buying power over time, so direct comparisons are not as meaningful as they might appear.

Let’s take the Gross Domestic Product (GDP)—the single statistic that summarizes the general economic activity for each country.


For GDP to be compared over time, the nominal dollar amount is converted to the real dollar amount.

Nominal dollar: counting a $100 bill as being worth 100 dollars, because that is what it says on the bill.

Real dollar: what the $100 bill is really worth once inflation is taken into account.

For example, what is $100 in 1980 worth in 2005 dollars? It would be $209 using the GDP deflator. [1] Or, looking at it differently, what would $100 in 2005 be worth in 1980 dollars? In terms of buying power, it would be worth $42. [2]
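The arithmetic behind such conversions is just a ratio of price-index values. Here is a minimal Python sketch; the deflator values are hypothetical, scaled so the 1980-to-2005 conversion reproduces the article's $209 figure:

```python
# Converting nominal dollars between years with a price index (e.g., the GDP
# deflator). These index values are hypothetical, chosen so that the
# 1980 -> 2005 conversion reproduces the article's $100 -> $209 example.
deflator = {1980: 47.8, 2005: 100.0}  # illustrative index, 2005 = 100

def to_real(amount, from_year, to_year, index=deflator):
    """Re-express a dollar amount in another year's dollars."""
    return amount * index[to_year] / index[from_year]

print(round(to_real(100, 1980, 2005)))  # 209
```

Note that with a single index the reverse conversion is simply the reciprocal (about $48 with these numbers); the article's $42 figure presumably comes from a different index, such as the CPI.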


Another Poll Showing Americans Agree: “Americans Want Limits on Corporate Cash in Elections”

Once again we see a great deal of agreement about a public policy issue, challenging the image of a fractured society based on political party or conservative-liberal ideology.  The support for limiting corporate spending was strong across the board, no matter how the data was compared.

A February SurveyUSA poll (www.surveyusa.com) showed that Americans across all political views favor limiting corporate spending in elections. 78% stated that corporations should be limited, and 70% believe that corporations have had too much influence in elections. A whopping 87% believe that corporations rescued financially by the federal government should be limited in how much they spend to support or oppose a candidate for public office, and 82% believe the same of corporations doing business with the federal government.

While the agreement on these questions was often within the margin of error when comparing political party identification and conservative-liberal views (although on a few questions, a higher percentage of Independents supported limits), the thing that struck me was the differences by region. While 70% overall believed that corporations had too much influence, people in the West weighed in at 84%. While 61% overall believed that Congress has done too little to regulate how much influence corporations have over elections, people in the West weighed in at 77%. Similarly, while 78% overall said corporations should be limited in how much they spend, 91% of Westerners favored limitations.


PEW: What’s Your Political News IQ?

Take the Quiz

To test your knowledge of prominent people and major events in the news, we invite you to take our short 12-question quiz. Then see how you did in comparison with 1,003 randomly sampled adults asked the same questions in a January 14-17, 2010 national survey conducted by the Pew Research Center for the People & the Press.

The Pew Research Center updates the News IQ quiz every few months by conducting a nationwide survey of Americans reached by both landline and cell phones. Each version of the quiz asks a wide range of questions about current events and issues as well as background facts and concepts that are relevant to the news. For an analysis of the findings from the most recent national News IQ survey, read the full summary of findings. (No peeking! If you are going to take the quiz, do it first before reading the analysis.) The exact same quiz administered in the telephone survey is replicated here on the website.

When you finish, you will be able to compare your News IQ with the average American, as well as with the scores of men and women; with college graduates as well as those who didn't attend college; and with people who are your age as well as younger and older Americans. Are you more news-savvy than the average American? Here's your chance to find out.

The full reports from earlier versions of the quiz are also available (see October 2009, April 2009, December 2008, February 2008, September 2007 and April 2007). The April 2007 report also includes an analysis of how knowledge levels vary according to people's news sources.

weblink: http://pewresearch.org/politicalquiz/

Who Says the American Public Can’t Agree?

January 25, 2010

A Washington Post-ABC News poll, released Thursday, found that 73 percent of Americans would support “a special tax on bonuses over $1 million.” Support crosses party lines.

That same poll found that 79 percent of the American public believe that banks are to blame for the nation's economic troubles (58% say "greatly to blame").

By a 72-to-19 percent margin, according to a new CBS poll, Americans now feel that the federal bailout has benefited “mostly just a few big investors and people who work on Wall Street.” Most Americans think this is true regardless of party affiliation or income level.

Happy Employees Despite Economic Downturn?

“Workplace Glass Half-Full”

The results of a Washington State employee survey were on the front page of the January 25th Olympian: "Despite downturn, survey finds more satisfaction than in 2007." The story noted, "Workers in general were slightly more satisfied working for the state last year than in 2007, a year when some workers got double digit pay increases and government was adding thousands of jobs in the midst of an economic expansion." The context today is different. While the federal government did send some stimulus money as a temporary life raft, Washington faces a $2.8 billion deficit for 2010, and it is hard to see how the state will make cuts without cutting state employees.

The statewide average score for the 2009 survey was 3.84. On a 1-5 scale, this is a good score. And it was indeed higher than the 3.8 average in 2007, albeit a very slight improvement.

 But there is something that does not seem to make sense as this story is framed. In the face of job insecurity, why would scores be higher?  So we need to take a look at the details of what was measured and how this survey was conducted.

Read More: Washington Employee 2009 Survey

How to Understand a Trillion-Dollar Deficit

How to Understand a Trillion-Dollar Deficit, by Barbara Kiviat, January 11, 2009. She reports that David Schwartz looks at time:

A million seconds = eleven and a half days
A billion seconds = 32 years
A trillion seconds = 32,000 years

Another way to look at the trillion-dollar deficit is to see how much each person in the U.S. owes: one trillion divided by 300 million = $3,333.

Find this article at: http://www.time.com/time/business/article/0,8599,1870699,00.html
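These comparisons are easy to verify. A quick Python check of the arithmetic:

```python
# Checking the "how long is a trillion seconds?" comparisons directly.
SECONDS_PER_DAY = 60 * 60 * 24
SECONDS_PER_YEAR = SECONDS_PER_DAY * 365

print(1e6 / SECONDS_PER_DAY)    # a million seconds: about 11.6 days
print(1e9 / SECONDS_PER_YEAR)   # a billion seconds: about 32 years
print(1e12 / SECONDS_PER_YEAR)  # a trillion seconds: about 32,000 years

# Per-person share of a $1 trillion deficit across ~300 million people
print(round(1e12 / 300e6))      # 3333
```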

Anecdotes vs clinical trials in USA Today

As I had a cup of tea at my local hangout on December 23, 2009, I was surprised to see a discussion of research methods–and randomized clinical trials– in USA Today.

It was prompted by a story that raised the question: "Are celebrities crossing the line on medical advice?" It appears that celebrities can share their experiences and views about medical treatments with a wide audience. That can be a good thing. The article presented evidence that ordinary people change their behavior based on those stories: HIV screenings increased after Magic Johnson revealed in 1991 that he tested positive, and colorectal cancer screening rose after TV anchor Katie Couric, who lost her husband to colon cancer, had an on-air screening in 2001.

 But some feel that it is also possible for celebrities to champion medical treatments that lack a scientific basis for effectiveness and, as a result, cause harm.

This brings us to asking tough questions about research. Randomized clinical trials are said to be the "gold standard" in medical research. These are the classic experimental designs in which people are randomly assigned to a group that receives the treatment or to a group that does not and serves as the comparison group. The comparison group (also called the control group) is often given another type of treatment, and the participants do not know which group they are in. In the most rigorous studies, called double-blind studies, neither the researchers nor the participants know who is in which group.

 The reason for this approach is to control for the “placebo” effect.  This is a phenomenon where some people appear to get well or have reduced symptoms because they believe they are being treated even though they only received a sugar pill.  It speaks to the power of the mind–that some people will get better because they expect to get better. If the researchers do not know who is in what group, it prevents them from unconsciously sending signals about expected results, thereby influencing the results.

Anecdotes (stories about an individual's experience) do not constitute evidence of treatment effectiveness, however much we want to believe them. They are useful, however, in raising concerns about problems with treatments, as well as in generating hypotheses about treatments that might work and that can be tested to determine effectiveness.

While not using the term "research perspective," Liz Szabo, in "You shouldn't believe that medical advice when…", offers good advice on how to approach these stories. She suggests that warning sirens should sound when:

1.  A treatment is touted as a miracle or a cure to a chronic disease

2.  The report does not mention any side effects; even treatments that work have some possible side effects

3.  The story relies solely on anecdotes and personal testimonies. While the person may have truly been cured or experienced a reduction in symptoms, it may have been due to other factors–including a passionate belief that the treatment would work.

4. The story does not provide results from clinical trials that have been published. She provides links to published study results: http://www.clinicaltrials.gov/ and http://www.pubmed.gov/

 The article basically advises a degree of skepticism about medical advice. It is this detached research perspective that I think is necessary whenever you are using research results to make a decision. You want to look at the research itself and determine its credibility. What exactly did they measure and how did they measure it? Did they follow the best practices in conducting the research? If they are talking about cause-and-effect or impact–did they use an experimental design? If not, the conclusions are far from a proven fact.

 It is important to remember that it takes many studies before a theory is accepted as true. It also helps to remember that while it is tempting to believe people who passionately speak their truth, there is a danger of premature certainty–that is, of drawing final conclusions before there is enough evidence.

Global Warming Poll

Polls should provide three basic pieces of information:

1. Were the participants randomly selected? Random selection reduces bias. If I surveyed my best friends or the people in my neighborhood, it is unlikely their views would represent the views of all the people in my state.

2. How many people participated? Did they talk to just 100 people or 1,000 people? Related to this is the response rate: how many of the people contacted actually participated. When only a small percentage of those asked complete the survey, it becomes a "volunteer" sample, and you should ask whether there might be some kind of bias in who chooses to participate. In a workplace survey, for example, if only 10 percent of the employees participate, you might worry that only the most unhappy people answered; that would give a more negative view of management than if everyone had answered the survey.

3. How accurate are the results? Because researchers want to make inferences about what people in general believe about global warming, they rely on the magic of statistics to calculate something called the margin of error (and sometimes called sampling error). Assuming the participants were randomly selected, they can calculate the margin of error based in part on the number of people who participated. In polls, researchers present this information in terms of plus or minus 3% or 5%.
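For a simple random sample, the margin of error can be approximated from the sample size alone. A short Python sketch (the function name and defaults are mine, not taken from any poll's published methodology):

```python
import math

def margin_of_error(n, p=0.5, z=1.96):
    """Approximate margin of error for a simple random sample of size n.

    Uses the normal approximation at the 95% confidence level (z = 1.96);
    p = 0.5 gives the widest, most conservative interval.
    """
    return z * math.sqrt(p * (1 - p) / n)

# A typical national poll samples about 1,000 adults:
print(round(margin_of_error(1001) * 100, 1))  # ~3.1 percentage points
```

This is why polls of roughly 1,000 respondents so often report "plus or minus 3 percent."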

What was reported in the stories?

I was surprised that none of the three sources I read (the Washington Post, Democracy Now! and the Christian Science Monitor ) contained the basic information. Maybe they were all short on space that day.

Linking to the actual survey, the Washington Post gives the needed information:

This Washington Post-ABC News poll was conducted by telephone Nov. 12-15, 2009, among a random national sample of 1,001 adults including users of both conventional and cellular phones. The results from the full survey have a margin of sampling error of plus or minus three percentage points. Sampling, data collection and tabulation by TNS of Horsham, Pa.

OK. So they used standard procedures: random selection to obtain a sample of 1,001 adults, with an estimated margin of sampling error of plus or minus 3 percent.

What does the margin of sampling error mean in practical terms? Basically, it means that if they had surveyed all adults in the United States, they are 95 percent certain (this is the confidence level, which is typically set at 95 percent) that between 69 percent (72% minus 3 percent) and 75 percent (72% plus 3 percent) would report that they believe global warming is happening.

This becomes important when comparing data. If, in a prior poll, 75 percent had said they believed global warming was happening, the margins of error of the two polls would overlap. The interval for that poll would range from 72 to 78 percent (+/-3%) and would therefore overlap with the current poll's interval. In other words, there would be no statistically significant difference between the two polls, because the difference could be explained by the margin of error in the polls.

When the ranges of the margin of error do not overlap, it is interpreted as being statistically significant—that is, there is a difference in the percent reporting they believe global warming is happening and that difference is not likely due to the error inherent in working with random sample data.
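As a rough Python sketch of this overlap rule (a conservative screen, not a formal test; a rigorous comparison would use a two-proportion significance test):

```python
def intervals_overlap(p1, p2, moe=3.0):
    """Check whether the error intervals of two poll percentages overlap,
    assuming each has the same margin of error (in percentage points)."""
    return (p1 - moe) <= (p2 + moe) and (p2 - moe) <= (p1 + moe)

print(intervals_overlap(72, 75))  # True:  69-75 overlaps 72-78
print(intervals_overlap(72, 80))  # False: 69-75 vs 77-83
```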

“Poll: Less Americans Believe in Global Warming”

Democracy Now! led with that headline on November 25, 2009. It was a variant of the Washington Post’s “Fewer Americans believe in global warming, poll shows.”

As written, some people might assume that fewer Americans believe in global warming than disbelieve in it.

The Christian Science Monitor reporting the same polling data led with this headline: “Global warming: 72 percent of Americans say it’s real, poll finds.”

Does this give you a different picture of the polling results? Which headline do you think is a more accurate portrayal of the data results?

Headlines may reflect possible spin—a way to tell the story in a way to meet a particular policy agenda. Sometimes, however, the media is trying to grab our attention. Other times the headline gets distorted when the English language is crammed into a soundbite. The first two stories wanted to make the apparent decline in global warming belief the story although readers would not know that unless they read the story. The Washington Post’s lead paragraph was:

“The percentage of Americans who believe global warming is happening has dipped from 80 to 72 percent in the past year, according to a new Washington Post-ABC News poll, even as a majority still support a national cap on greenhouse gas emissions.”

They are trying to make this a story with some drama and mystery—but if 72% of the people believe that global warming is happening, then it should be no surprise that a majority would favor a national cap on greenhouse gas emissions (assuming they believe that the gas emissions are a contributing factor in global warming).

What are the key questions sophisticated users should be asking?

In the News: Federal Budget

The U.S. Congressional Budget Office (CBO) put together this chart on federal revenues over the past three years from the largest sources: individual income taxes, corporate income taxes, social insurance (Social Security) taxes, and other revenues. Federal Revenues by Fiscal Year (click for chart)

How would you summarize this chart?

Putting 2009 revenues against expenditures, CBO's Monthly Budget Review tagged the federal budget deficit at about $1.4 trillion in fiscal year 2009, nearly $1 trillion greater than the shortfall recorded in 2008. Relative to the size of the economy, the 2009 deficit was equal to 9.9 percent of GDP (the highest since 1945), compared with 3.2 percent in 2008. Both lower revenues and increased spending contributed to the growth in the deficit.

What are the causes and consequences of this historic budget deficit?