As I had a cup of tea at my local hangout on December 23, 2009, I was surprised to see a discussion of research methods, and randomized clinical trials, in USA Today.
It was prompted by a story that raised the question: “Are celebrities crossing the line on medical advice?” Celebrities can share their experiences and views about medical treatments with a wide audience. This can be a good thing. The article presented evidence that ordinary people change their behavior based on those stories: HIV screenings increased after Magic Johnson revealed in 1991 that he had tested positive, and colorectal cancer screening rose after TV anchor Katie Couric, who lost her husband to colon cancer, had an on-air screening in 2001.
But some feel that it is also possible for celebrities to champion medical treatments that lack a scientific basis for effectiveness and, as a result, cause harm.
This brings us to asking tough questions about research. Randomized clinical trials are said to be the “gold standard” in medical research. These are the classic experimental designs in which people are randomly assigned either to a group that receives the treatment or to a group that does not and serves as the comparison group. The comparison group (also called the control group) is often given another type of treatment, and the participants do not know which group they are in. In the most rigorous studies, called double-blind studies, neither the researchers nor the participants know who is in which group.
The reason for this approach is to control for the “placebo” effect. This is a phenomenon in which some people appear to get well or have reduced symptoms because they believe they are being treated, even though they received only a sugar pill. It speaks to the power of the mind–some people will get better because they expect to get better. If the researchers do not know who is in which group, they cannot unconsciously send signals about expected results and thereby influence those results.
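The logic of a randomized trial can be illustrated with a small simulation. The sketch below is purely hypothetical: the effect sizes and the assumption that everyone improves somewhat from expectation alone (the placebo effect) are made up for illustration. It shows why the raw improvement in the treated group overstates a drug's effect, and why the difference between randomly assigned groups is the more honest estimate.

```python
import random
import statistics

random.seed(42)  # make the illustration reproducible

# Hypothetical numbers, chosen only for illustration:
PLACEBO_EFFECT = 2.0   # assumed improvement from expectation alone
DRUG_EFFECT = 3.0      # assumed additional improvement from the drug itself

def simulate_trial(n_participants=1000):
    """Randomly assign participants and return mean improvement per group."""
    treatment, control = [], []
    for _ in range(n_participants):
        noise = random.gauss(0, 1.0)        # individual variation
        if random.random() < 0.5:           # random assignment, like a coin flip
            treatment.append(PLACEBO_EFFECT + DRUG_EFFECT + noise)
        else:
            control.append(PLACEBO_EFFECT + noise)  # sugar pill: placebo only
    return statistics.mean(treatment), statistics.mean(control)

treat_mean, control_mean = simulate_trial()
# The treated group's raw improvement bundles the placebo effect with the
# drug's effect; subtracting the control group's mean isolates the drug.
print(f"treatment: {treat_mean:.1f}, control: {control_mean:.1f}, "
      f"estimated drug effect: {treat_mean - control_mean:.1f}")
```

Note that the control group also improves in this sketch, which is exactly why an anecdote of "I took it and felt better" cannot, by itself, separate the treatment from the expectation of being treated.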
Anecdotes–stories about an individual’s experience–do not constitute evidence of treatment effectiveness, however much we want to believe them. They are useful, however, in raising concerns about problems with treatments as well as in generating hypotheses about treatments that might work and that can be tested in order to determine effectiveness.
While not using the term “research perspective,” Liz Szabo, in “You shouldn’t believe that medical advice when…”, offers good advice on how to approach these stories. She suggests that warning sirens should sound when:
1. A treatment is touted as a miracle or as a cure for a chronic disease
2. The report does not mention any side effects; even treatments that work have some possible side effects
3. The story relies solely on anecdotes and personal testimonies. While the person may have truly been cured or experienced a reduction in symptoms, it may have been due to other factors–including a passionate belief that the treatment would work.
4. The story does not provide results from clinical trials that have been published. She provides links to sources of published study results: http://www.clinicaltrials.gov/ and http://www.pubmed.gov/
The article basically advises a degree of skepticism about medical advice. It is this detached research perspective that I think is necessary whenever you are using research results to make a decision. You want to look at the research itself and determine its credibility. What exactly did they measure, and how did they measure it? Did they follow best practices in conducting the research? If they are talking about cause-and-effect or impact, did they use an experimental design? If not, the conclusions are far from proven fact.
It is important to remember that it takes many studies before a theory is accepted as true. It also helps to remember that while it is tempting to believe people who passionately speak their truth, there is a danger of premature certainty–that is, of drawing final conclusions before there is enough evidence.