Meta-analysis is a useful analytical technique for making sense of many small studies. Experimental designs are often small. Science is built on a series of hypothesis tests, and conducting small but controlled experiments can provide useful insights. Large studies, however, are expensive, and few researchers have the resources to conduct them using an experimental design. Even quasi-experimental designs are costly.
Over time, however, researchers can pull together the data from many small studies that try to answer the same research question. Meta-analysis has the potential to reveal the larger patterns and determine whether there are statistically significant results. That said, it is not an easy analytic technique.
In the soft drink/obesity research, one frequently cited study is “Effects of Soft Drink Consumption on Nutrition and Health: A Systematic Review and Meta-Analysis,” by Lenny R. Vartanian, Marlene B. Schwartz, and Kelly D. Brownell, American Journal of Public Health, April 2007, vol. 97, no. 4: 667-675.
The health impacts of soft drinks have long been debated. The authors point out that in 1942, the American Medical Association warned against added sugar in drinks. Soda was not consumed in large quantities back then. The authors report that “at that time, annual US production of carbonated soft drinks was 90 8-oz servings per person; by 2000 this number had risen to more than 600 servings.” (p. 667)
Needless to say, this is a political issue. New York City Mayor Michael Bloomberg spearheaded an effort to limit the sale of soft drinks larger than 16 ounces. There was a storm of protest, and a court case declared the law “arbitrary” and struck it down. However, the issue is not likely to go away.
But what does science say? Is soda as harmful as anti-soda advocates claim? The researchers’ stated objectives “were to review the available science, examine studies that involve the use of a variety of methods, and address whether soft drink consumption is associated with increased energy intake, increased body weight, displacement of nutrients, and increased risk of chronic diseases.” (p. 667)
Meta-analysis is not easy, and it is worth spending some time on the methods these researchers used. First, they needed to find all the relevant studies; they describe their methods for computer searches using key terms. In addition, they contacted the authors of the articles to ask whether they had additional data.
Eighty-eight studies met their criteria. Of course, these studies often differ: they use different designs, different populations, different variables, and different assessment methods. These differences, the authors found, made a difference in the “effect size,” meaning the strength of the impact or of the relationship between the dependent and independent variables. (There are statistical tests used to determine the extent of the diversity of the studies, a subject for advanced researchers who will want to track the debate about which test is best, and well beyond where I want to take you.) What is important here is that they opted to group the studies by design as well as by whether the study was industry funded.
What is important to know is that they did a lot of testing and then calculated the average effect size. The effect sizes here are r values, a statistical measure of association. An r value varies between -1 and +1; the sign shows the direction of the relationship. If soda consumption goes up and weight goes up, the relationship is said to be positive and will carry a plus sign (in practice, no sign is noted and the value is assumed to be positive). If soda consumption goes up and weight goes down, the relationship is said to be negative, meaning that the changes are in opposite directions. Negative relationships always carry a minus sign. The terms “positive” and “negative” merely label the direction of change; they are not judgments about whether the relationships are good or bad.
The closer the r value is to 0, the weaker the relationship between the variables; the closer to 1 (or -1), the stronger the relationship. Converting r values into English, however, is a judgment call. The authors reported: “We considered an effect size of 0.10 or less as small, an effect size of 0.25 as medium, and an effect size of 0.40 or above as large.” (p. 668). Of course, interpreting results in English this way is problematic, because the scheme does not provide guidance for r values that fall between those cut-points.
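The arithmetic behind an r value can be sketched in a few lines of code. This is purely illustrative, with made-up numbers (not data from the study), using the authors’ cut-points to translate the number into English:

```python
import numpy as np

# Hypothetical data (not from the study): weekly sodas consumed and daily
# calorie intake for eight people.
sodas    = np.array([0, 1, 2, 3, 4, 5, 6, 7])
calories = np.array([1800, 1900, 1850, 2000, 2100, 2050, 2200, 2300])

# Pearson's r: a measure of association ranging from -1 to +1.
r = np.corrcoef(sodas, calories)[0, 1]

def label(r):
    """Translate an r value into English using the authors' cut-points
    (0.10 small, 0.25 medium, 0.40 large)."""
    size = abs(r)
    if size >= 0.40:
        return "large"
    if size >= 0.25:
        return "medium"
    return "small"

# Both series rise together, so r is positive and large here.
# Reversing one series flips the sign but not the strength.
```

In this made-up example the two series move together, so r comes out positive and large; real effect sizes in the study are far smaller.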
Is there a relationship between the consumption of soft drinks and calorie intake?
Overall, the effect size between consumption of soft drinks and calorie intake was 0.16. I would call this a weak relationship. To look more closely at the data, they grouped the studies by design: cross-sectional, longitudinal, and experimental. But even with this grouping, they did not find any large effect. The strongest effect was for experimental studies relating sugared soda to calorie intake: the r-value was 0.33. (p. 669).
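To make “average effect size” concrete: one standard way to pool r values from several studies is to convert each to Fisher’s z, average them with weights based on sample size, and convert back. This is an illustrative sketch with invented study values, and the paper’s exact weighting scheme may differ:

```python
import math

def pool_r(studies):
    """Pool per-study correlations into one average effect size.

    studies: list of (r, n) pairs, where r is a study's correlation and
    n its sample size. Uses Fisher's z transform weighted by n - 3, a
    common fixed-effect approach (illustrative, not the paper's method).
    """
    num = sum((n - 3) * math.atanh(r) for r, n in studies)
    den = sum(n - 3 for _, n in studies)
    return math.tanh(num / den)

# Hypothetical studies (r, n): the pooled value lands between the
# smallest and largest individual r, pulled toward the larger studies.
pooled = pool_r([(0.10, 50), (0.33, 120), (0.16, 80)])
```

The point of the sketch is that the pooled number is a sample-size-weighted compromise among the individual studies, which is why one big study can dominate many small ones.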
One analysis was interesting: when they compared industry-funded studies with the rest, the industry-funded studies had lower effect sizes. (p. 669).
Is there a relationship between soft drink consumption and body weight?
Overall, across all the studies, the r-value was 0.08. (p. 669). This is very close to zero, so I would say this is a darn weak relationship. But when looking at just the experimental studies, the researchers state that the effect size was 0.24 and that the results were statistically significant. In social science, it is hard to find r-values above 0.2, and this one clears that bar. Still, it is a moderate relationship, not a strong one.
Is there a relationship between soft drink consumption and milk consumption? They found a negative overall relationship: that is, when soft drink consumption increased, milk consumption decreased. The effect size was -0.12. There were no experimental studies that looked at milk consumption. There is a lot of variation here, but when soda intake was measured rather than self-reported in the longitudinal studies, the effect size was -0.58: a pretty strong relationship showing that as soft drink consumption increased, milk consumption decreased.
Again, industry-funded studies had notably lower effect sizes. (p. 671)
The last question is whether there is a relationship between soft drink consumption and health outcomes. For the most part, the r-values are close to zero. The authors report on separate studies, not always reporting r-values. It appears to me that there were insufficient studies to do a meta-analysis for this question, although the authors do not say so. From a common-sense perspective, negative health outcomes are likely to take years to develop, and a host of factors other than soda consumption are likely to contribute.
While they recognize that the “intake of soft drinks and added sugars, particularly high fructose corn syrup, has increased coincident with rising body weights and energy intakes in the population of the United States,” they also recognize that this represents “only broad correlations.” (p. 672)
This meta-analysis does not, however, firmly establish a strong link between soft drinks and health outcomes. Aside from the methodological issues raised by the variety of methods and designs (the authors suggest the use of more experimental designs), many factors impact health outcomes. They do note, however, the disparity in effect sizes in industry-funded research.
But in the end, their conclusions go beyond what they found in their meta-analysis. While it is true, as they say, that they found “a clear and consistent association between soft drink consumption and increased energy intake,” they do not also note that this is a weak relationship. And while there are many reasons to “recommend a reduction in soft drink consumption” (p. 673), not the least of which is that soft drinks with added sugar are empty calories, their study does not show a strong relationship between consumption and body weight or between consumption and unhealthy outcomes.
The challenge in science is to resist the temptation to draw conclusions that go beyond the research. The problem is that some people (reporters included) look only at the introduction and conclusion, their eyes glazing over at the tables of numbers and Greek letters. What happens when reporters report only the conclusions that go beyond the actual findings? Claims that oats reduce cholesterol, that eggs increase cholesterol, or that red meat causes heart disease make the news but may not be accurate. While some studies might find an association, rarely do the media report effect sizes or the strength of the relationship. Nor do we see a critique that asks whether the observed relationship is actually causal.
Perhaps we expect too much from science. In the policy arena, we want proof. But demonstrating causality is difficult when looking at something as complex as weight gain and disease. We get caught up in looking for a single cause to explain something as complex as the human body and how it works (or fails to work). Nutrition is a factor, without a doubt. But so are the environment, the chemicals we come into contact with, and the strength of the endocrine system. Trying to sort it all out takes time and resources. But we want the quick fix, whether in personal actions (take a pill or this “sure to lose weight” power drink) or in a policy like banning soda. And while I can’t see a nutritional downside in banning a non-food like soda (although there would be an economic impact), this highlights the challenge of figuring out public policy recommendations that should be applied to everyone.