But if you'd like to know why I'm calling bullshit on the recent hype over an epidemiological study "proving" that meat-eating is bad for low-carb dieters, read on.
First, some background about epidemiological studies in general.
Epidemiology -- the interdisciplinary, observational study of factors that impact public health -- is pretty damn cool. It's a useful tool for researchers examining everything from the source and vector of communicable diseases to the best ways to treat illnesses. Combining mathematics with biology and social science, epidemiology allows scientists to draw inferences and point out statistical correlations between certain environmental conditions and health outcomes. Epidemiology helps scientists develop and hone their hypotheses.
One celebrated example -- the pioneering example -- of epidemiology's ability to help point the way to sources of disease was the subject of Steven Johnson's suspenseful 2006 book "The Ghost Map." In it, Johnson describes how a London doctor named John Snow came to the conclusion that a horrific mid-nineteenth century cholera outbreak stemmed from a contaminated pump from which residents of the impoverished Golden Square neighborhood drew their drinking water. As described by the New York Times' book review of "The Ghost Map," Snow's on-the-ground detective work and statistical analyses led him to create a "ghost map" that "showed the correlation between cholera cases and walking distance to the Broad Street pump."
One week after the outbreak began, having heard Snow's arguments, the local Board of Governors ordered the shutdown of the Broad Street pump. Soon afterward the epidemic sputtered to an end. And that's the satisfying denouement of the tale in its often-told, bare-bones form: John Snow pioneers the science of epidemiology and, by having a pump handle removed, saves hundreds of lives. Among the points usually omitted, however, is that Snow himself never managed to see or identify what it was, in the water, making people sick. He got the epidemiology, but not the bacteriology.

And therein lies the catch: Epidemiological studies can help draw inferences and find correlations, but they can never prove causation.
Observational studies "cannot prove that a specific risk factor actually causes the disease being studied. Epidemiological evidence can only show that this risk factor is associated (correlated) with a higher incidence of disease in the population exposed to that risk factor. The higher the correlation, the more certain the association, but it cannot prove the causation."
Epidemiologists understand this, and are the first to acknowledge that correlation does not imply causation. After all, health issues are usually affected by a chain or confluence of different conditions; you can't focus on just one potential risk and ignore the other environmental factors at play. And observational studies don't take place in a sterile, controlled lab environment. By definition, real-world epidemiological studies can't control for all of the environmental factors that might be causative agents in disease or illness -- including all manner of lurking and confounding variables.
Example: I observe that diet shakes are mostly used by fat people. That's a correlation. But there's no evidence here that the diet shakes are causing people to become fat. If anything, the causation probably runs the other way: being overweight is what drives people to buy diet shakes in the first place. Other factors are doing the causal work, but they're invisible to me because I wasn't looking at those variables.
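If you want to see how easily a confounder manufactures a correlation out of thin air, here's a quick toy simulation of the diet-shake example. All the numbers are made up; the point is that the shakes are causally inert by construction, yet the "association" is enormous:

```python
import random

random.seed(42)

# Invented numbers: a person's weight drives whether they buy diet shakes,
# but the shakes themselves have ZERO effect on anyone's weight.
population = []
for _ in range(100_000):
    overweight = random.random() < 0.30                # 30% of people
    shake_odds = 0.60 if overweight else 0.05          # confounded choice
    drinks_shakes = random.random() < shake_odds
    population.append((overweight, drinks_shakes))

drinkers = [ow for ow, shakes in population if shakes]
abstainers = [ow for ow, shakes in population if not shakes]

print(f"Overweight among shake drinkers: {sum(drinkers) / len(drinkers):.0%}")
print(f"Overweight among abstainers:     {sum(abstainers) / len(abstainers):.0%}")
# Prints roughly 84% vs. 15% -- a whopping correlation, even though the
# shakes were built to do nothing at all.
```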
The best that an observational study can do is to point to the strong likelihood that a correlation could actually be a causation. And to do that properly, you gotta kick the tires. Hard.
One method of kicking the tires is to run through the widely accepted Bradford Hill criteria, which pose nine challenges to observational studies trying to establish causation (I'll put toy numbers on a couple of them right after the list):
- Strength: How strong is the association? The weaker the association, the less likely it reflects a causal relationship.
- Consistency: Are the findings consistent across different studies by different people using different sample populations in different locations? If not, you have a problem.
- Specificity: How specific is the purported cause and effect? A causal link is more plausible when a particular exposure is tied to a particular disease, rather than to a grab-bag of outcomes.
- Temporality: The effect can't precede the cause.
- Dose-Response Relationship: Does the magnitude of the health outcome change in accordance with greater or lesser exposure to the purported cause?
- Plausibility: Is there a plausible way that the "cause" could have led to the "effect"? In other words, under what theory is the cause leading to the effect?
- Coherence: Are the epidemiological findings compatible with existing theory and laboratory findings?
- Experiment: Can the "causal" nature of the relationship be re-created in a controlled experiment?
- Alternate Explanations: Have other possible explanations been taken into account, and were they ruled out? Multiple hypotheses should be considered before finding a cause-and-effect relationship.
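Two of these checks -- strength and dose-response -- lend themselves to simple arithmetic. Here's a minimal sketch using completely made-up counts, just to show what the numbers would look like:

```python
# Hypothetical 2x2 counts, purely to illustrate the arithmetic.

# Strength: relative risk from exposed vs. unexposed groups.
exposed_sick, exposed_total = 90, 1_000
unexposed_sick, unexposed_total = 30, 1_000
relative_risk = (exposed_sick / exposed_total) / (unexposed_sick / unexposed_total)
print(f"Relative risk: {relative_risk:.1f}")          # 3.0 -- fairly strong

# Dose-response: does incidence climb as exposure climbs?
incidence_per_1000_by_dose = [12, 19, 31, 46]         # low -> high exposure
rises_monotonically = all(
    lo < hi for lo, hi in zip(incidence_per_1000_by_dose,
                              incidence_per_1000_by_dose[1:])
)
print(f"Monotonic dose-response: {rises_monotonically}")   # True
```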
So how can you prove causation? You can't rely on an epidemiological study alone. Rather, you have to put the hypothesis to the test in a randomized, placebo-controlled trial. Otherwise, all you have is an unproven hypothesis.
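Why does randomization do the trick? Because a coin flip -- not the subject's own circumstances -- decides who gets the "treatment," which severs the link between the treatment and everything else about the person. Re-running the diet-shake toy model from above as a pretend randomized trial (same invented numbers), the phantom effect evaporates:

```python
import random

random.seed(7)

# Same toy world as the shake example, but now a coin flip -- not body
# weight -- decides who drinks the shakes. That's the whole trick of an RCT.
arms = {"shake": [], "control": []}
for _ in range(100_000):
    overweight = random.random() < 0.30
    arm = "shake" if random.random() < 0.5 else "control"   # randomization
    arms[arm].append(overweight)

for arm, flags in arms.items():
    print(f"{arm:>7}: {sum(flags) / len(flags):.1%} overweight")
# Both arms land near 30%. The spurious shakes/weight association is gone,
# because assignment no longer has anything to do with weight.
```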
But for whatever reason, a lot of smart people don't seem willing or able to grasp the concept that correlation doesn't imply causation. It seems like every time we turn around, we're hit in the face with headlines implying causation where -- in reality -- there may have only been the faintest hint of correlation (or none at all).
It turns out that journalists (and even scientists) don't bother to kick the tires. They don't critically examine the actual findings of the study. Instead, they just skim the abstracts, come up with an attention-grabbing headline ("IS TOO MUCH MEAT KILLING LOW-CARB EATERS?"), and let the bullshit fly. Who cares if the study's findings are accurately reported, or if the public gets the wrong idea?
And, of course, we're complicit, too. We just eat up this crap rather than putting on our thinking caps.
As Gary Taubes explained in "Epidemiology Faces Its Limits," a 1995 article in the journal Science:

[T]he problem ... lies with the very nature of epidemiologic studies -- in particular those that try to isolate causes of noninfectious disease, known variously as "observational" or "risk-factor" or "environmental" epidemiology.
The predicament of these studies is a simple one: Over the past 50 years, epidemiologists have succeeded in identifying the more conspicuous determinants of noninfectious diseases -- smoking, for instance, which can increase the risk of developing lung cancer by as much as 3000%. Now they are left to search for subtler links between diseases and environmental causes or lifestyles. And that leads to the Catch-22 of modern epidemiology.
On the one hand, these subtle risks -- say, the 30% increase in the risk of breast cancer from alcohol consumption that some studies suggest -- may affect such a large segment of the population that they have potentially huge impacts on public health.
On the other, many epidemiologists concede that their studies are so plagued with biases, uncertainties, and methodological weaknesses that they may be inherently incapable of accurately discerning such weak associations.
As Michael Thun, the director of analytic epidemiology for the American Cancer Society, puts it, "With epidemiology you can tell a little thing from a big thing. What’s very hard to do is to tell a little thing from nothing at all." Agrees Ken Rothman, editor of the journal Epidemiology: "We’re pushing the edge of what can be done with epidemiology." With epidemiology stretched to its limits or beyond, says Dimitrios Trichopoulos, head of the epidemiology department at the Harvard School of Public Health, studies will inevitably generate false positive and false negative results "with disturbing frequency."
Most epidemiologists are aware of the problem, he adds, "and tend to avoid causal inferences on the basis of isolated studies or even groups of studies in the absence of compelling biomedical evidence. However, exceptions do occur, and their frequency appears to be increasing."

Which brings us (at long last) to Tuesday's "news" that low-carb dieters are better off avoiding meat.
Yikes! If this is correct, all of us paleo eaters are super-fucked!
Here's what CBS had to say (which is pretty typical of what other news organizations regurgitated):
A new study by Harvard scientists shows that the death rate among people who eat a diet that's low in carbs and high in animal protein is higher than that of people who follow other eating patterns.

In contrast, people who ate a low-carb diet that included lots of vegetables had a lower risk of death, the study found.

To conduct the study, researchers tracked the eating habits and health of 85,000 women and 45,000 men for more than 20 years. The study was published in the September 7 issue of "Annals of Internal Medicine."

Other outlets struck the same chord:

Animal-based proteins and fats are associated with increased mortality rates, including increased cardiovascular mortality and increased cancer mortality, a new study published in the Annals of Internal Medicine concludes. But low-carbohydrate, high-protein diets composed mostly of plant-based proteins and fats were associated with lower mortality rates overall and lower cardiovascular mortality rates.

Note that the authors of these pieces were careful to say that eating meat was simply "associated with" increased risk of mortality, rather than saying outright that eating meat will kill you. But is there any doubt that most people come away with the impression that low-carb dieters who ingest animal products are at greater risk of keeling over than veggie-dominant low-carbers?
I guess that I shouldn't be surprised that the media would once again sloppily create the impression that correlation equals causation, and make hyperbolic pronouncements about a new study's findings in order to sell newspapers.
But this study is particularly vexing because it's not just the press that's distorting the study's data. The scientists responsible for the study stated in their abstract that a low-carb diet "based on animal sources was associated with higher all-cause mortality in both men and women, whereas a vegetable-based low-carbohydrate diet was associated with lower all-cause and cardiovascular disease mortality rates." They, too, are careful not to say that causation was established -- but the numbers actually tell a different story: Contrary to what the abstract says, there isn't even a correlation here.
Once again, super-brilliant stat nerd Denise Minger took the time to set the record straight. Here's just some of what she found:
- The so-called "low-carbers" in the Harvard study were actually eating up to almost two-thirds of their diet as carbohydrates. How exactly is this low carb? As Denise points out: "Even the lowest low-carb eaters were still eating over 37% of their calories from carbohydrates. Whoever decided to call this study 'low carbohydrate' is nuttier than a squirrel turd."
- The veggie-eaters "ate more fruits, vegetables, and whole grains than the Animal Group -- which begs the question: What kinds of carbohydrates filled this macronutrient void for the animal-food eaters? Could it've been refined grains and processed carbs, which the study conveniently forgot to document?"
- According to the data, the meat-eaters were "more likely to smoke and had higher BMIs" than the vegetable-eaters, which strongly suggests that the meat-eaters "in the aggregate, may have been somewhat less health-conscious than the dieters lumped into the vegetable category." But the study didn't bother looking at the possible effects of confounding variables like other diet and lifestyle choices.
- Don't be fooled into thinking that the veggie-eaters were vegetarian. In fact, "[t]hey derived almost 30% of their daily calories from animal sources (animal fat and animal protein), versus about 45% for the Animal Group. If we compare the middle (fifth) decile, the Vegetable Group was eating a greater percent of total calories from animal foods than the Animal Group was." Thus, the differences between the groups aren't as significant as they've been made out to be. Looks to me like the researchers had a difficult time distinguishing "a little thing from nothing at all."
- Even though two groupings of veggie-eaters ate "the same amount of red meat and nearly the same amount of total animal foods," cancer and cardiovascular mortality rate was lower in one group than the other. One veggie-eating group actually showed a lower risk of cardiovascular mortality than the meat-eating group, even though this bunch of veggie eaters "was eating a slightly greater proportion of animal foods." This certainly suggests that "animal products aren’t the driving force behind differences in mortality rates."
Here’s a clue. Every time the researchers made multivariate adjustments to the data to account for the risk factors they did document (including physical activity, BMI, alcohol consumption, hypertension, and smoking, among other things), the hazard ratio went down for the Animal Group (meaning it got better) and it went up for the Vegetable Group (meaning it got worse). That indicates pretty clearly that the Animal Group had more proclivity to disease right from the get go, regardless of meat consumption, and the Vegetable Group may have been more health-aware than most folks.
In other words, it looks like what this study really measured was a Standard American Diet group (aka Animal Group) and a slightly-less Standard American Diet group (aka Vegetable Group). Both ate sucky diets, but the latter had slightly less suckage. You can bet the farm that neither was anything close to “low carb.” And if you have two farms, you can bet the other one that neither diet group was anything near plant-based, so I’m not sure the vegan crowd has much to gloat about here.
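That adjustment point is worth making concrete. Here's a deliberately extreme toy example -- invented counts, not the study's actual data -- in which confounding by smoking makes the meat group look twice as deadly in the crude numbers, even though the smoking-adjusted comparison is dead even:

```python
# Invented counts (NOT the study's data), confounded by smoking.
# Within each smoking stratum, meat-eaters and veggie-eaters die at
# exactly the same rate -- yet the crude comparison damns the meat-eaters.
#                      (deaths, total)
counts = {
    "meat":   {"smokers": (160, 800), "nonsmokers": (10, 200)},
    "veggie": {"smokers": (40, 200),  "nonsmokers": (40, 800)},
}

def risk(deaths_total):
    deaths, total = deaths_total
    return deaths / total

for stratum in ("smokers", "nonsmokers"):
    rr = risk(counts["meat"][stratum]) / risk(counts["veggie"][stratum])
    print(f"Adjusted ({stratum}): RR = {rr:.2f}")      # 1.00 in both strata

crude = {
    group: sum(d for d, _ in strata.values()) / sum(t for _, t in strata.values())
    for group, strata in counts.items()
}
print(f"Crude (unadjusted): RR = {crude['meat'] / crude['veggie']:.2f}")  # ~2.1
```

Adjust for smoking and the "meat effect" vanishes entirely -- which is exactly the direction the real study's hazard ratios moved for the Animal Group every time the researchers controlled for another documented risk factor.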
Denise's summary:
Bottom line: In this study, when you look closer at the data, differences in mortality appear to be unrelated to animal product consumption. Changes in cancer and cardiovascular risk ratios occur out of sync with changes in animal food intake.
Until a controlled, large-scale, randomized study of low-carb diets is done, I wouldn't put much faith in the purportedly "causal" connections found in observational studies.