Steven Anthony

Summary

The article critiques the field of nutrition science, labeling much of it as pseudoscience due to reliance on unreliable and invalid data collection methods, lack of experimental control, and misinterpretation of results.

Abstract

The article "Nutrition Pseudoscience" by an unnamed author on Medium argues that most of what is considered nutrition science is actually pseudoscience. It explains that true science involves a systematic study through observation and experimentation, which is often not the case in nutrition studies. The author points out that many nutrition studies, such as those on coffee consumption, rely on self-reported data that are neither reliable nor valid. These studies lack experimental control and often misuse statistical modeling to draw conclusions. The author emphasizes that large, longitudinal studies without proper experimental design cannot establish causal relationships, and the use of terms like "associated" in research titles indicates a lack of causal evidence. The article calls for a more rigorous scientific approach in nutrition research, suggesting that Randomized, Controlled Trials (RCTs) are necessary to determine the actual health impacts of foods and beverages. It also criticizes the media and some nutrition practitioners for misinterpreting and misrepresenting research findings to the public.

Opinions

  • The author believes that the field of nutrition often fails to adhere to the scientific method, leading to pseudoscience rather than true scientific inquiry.
  • Self-reported diet history questionnaires are criticized for producing unreliable and invalid data due to inaccurate recall and lack of standardized measurement.
  • The author asserts that observational studies in nutrition cannot establish causality and that the term "associated" in research titles is an admission of this limitation.
  • There is a strong opinion that statistical control in observational studies is insufficient compared to actual experimental control, which is necessary to determine the health effects of dietary components.
  • The article suggests that the field of nutrition should employ more Randomized, Controlled Trials (RCTs) to obtain credible evidence.
  • The author is critical of the media and some nutrition professionals for misinterpreting research findings and conveying a sense of certainty where there is none.
  • The author advocates for better training for nutrition practitioners and journalists in interpreting research reports to prevent the spread of misinformation.

Nutrition Pseudoscience

Why we don’t know what we think we know

High-quality research in the field of nutrition is nearly impossible to conduct — so it isn’t. (Image licensed via freepik.com)

Oxford Languages defines science as “the intellectual and practical activity encompassing the systematic study of the structure and behaviour of the physical and natural world through observation and experiment.” This same source defines pseudoscience as “a collection of beliefs or practices mistakenly regarded as being based on scientific method.”

When it comes to the field of nutrition, the reason we don’t know what we think we know is because most of what we think of as “nutrition science” is, in fact, pseudoscience.

Simply put, the scientific method is the process of establishing facts through testing and experimentation. The process typically starts with an observation. Based on this observation, a hypothesis is articulated. The hypothesis suggests that certain results should occur under specific experimental conditions. An experiment is then conducted, and the actual results of the experiment are compared to those expected based on the hypothesis. If the actual and expected results match, the researcher has evidence to support their hypothesis. If the results don’t match those expected by the hypothesis, the researcher will typically refine the hypothesis, design a new experiment to test this refined hypothesis, run the new experiment and compare results. This process is repeated until a refined hypothesis is supported, or the basic hypothesis is abandoned.
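
That loop can be sketched in code. In this toy version (entirely my own illustration), `run_experiment` and `refine` are hypothetical placeholders for the real lab work:

```python
# Illustrative sketch of the observe-hypothesize-test-refine loop.
def scientific_method(hypothesis, run_experiment, refine, max_rounds=10):
    """Repeat experimentation and refinement until a hypothesis is
    supported by the evidence, or abandon it after max_rounds."""
    for _ in range(max_rounds):
        actual = run_experiment(hypothesis)       # controlled experiment
        if actual == hypothesis.expected_result:  # results match prediction
            return hypothesis                     # hypothesis supported
        hypothesis = refine(hypothesis, actual)   # adjust and try again
    return None                                   # hypothesis abandoned
```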

It’s a process of observation, developing hypotheses, experimentation, discovery, and refinement.

I was inspired to write this article because I’ve seen so many stories here on Medium discuss research reports in the field of nutrition and completely fail to understand the level of pseudoscience they are dealing with.

Coffee is coffee, right? Wrong! (Image licensed via freepik.com)

Recently, I’ve seen a few stories here discussing how the amount of coffee you drink can impact your health. Each one misrepresented the already poor analysis done by people in the field of “Nutrition Science.” Just as an example, one such story has the title “A Recent Study Has Revealed the Long-Term Impact of Drinking Coffee Every Day.” The title of the research report the Medium story is based on is “Light-to-moderate coffee drinking associated with health benefits.” (see a summary here)

So, what is it about the research report, published by the European Society of Cardiology, that makes it pseudoscience? Let’s look at the details of the research paper through the lens of the definition of science and the scientific method to see why the research paper falls short of being “science.”

Science starts with observation. And, as it turns out, the research paper starts with observation — a lot of observation. According to the research report, “[The] study investigated the association between usual coffee intake and incident heart attack, stroke, and death. The study included 468,629 participants of the UK Biobank with no signs of heart disease at the time of recruitment.”

So, they observed how much coffee 468,629 people drank, and they report doing so for between 10 and 15 years for each participant. After all the data were collected, they grouped people into one of three groups “according to their usual coffee intake: none (did not consume coffee on a regular basis), light-to-moderate (0.5 to 3 cups/day) and high (more than 3 cups/day)” and looked at the proportion of each group that suffered from various health conditions, including death.

Are we to believe that the amount of coffee 468,629 people drank was observed, and recorded, every day for 10 to 15 years? Hardly. The clue here is the term I emphasized above: according to their usual coffee intake. Each participant’s “usual coffee intake” is based on their self-reported responses to a Diet History Questionnaire. This is a link to a popular one.

The researchers of this particular report don’t mention the specific questionnaire used, but the one in the link is typical — over 100 pages long, asking questions like:

Coffee, caffeinated or decaffeinated (including brewed coffee, instant coffee, or espresso shots; NOT including espresso drinks such as latte, mocha, etc.)

o [Check here if…] You drank coffee in the past 12 months.

Over the past 12 months, how many cups of coffee, caffeinated or decaffeinated (including brewed coffee, instant coffee, or espresso shots; NOT including espresso drinks such as latte, mocha, etc.), did you drink?

o Less than 1 cup per month

o 1–3 cups per month

o 1 cup per week

o 2–4 cups per week

o 5–6 cups per week

o 1 cup per day

o 2–3 cups per day

o 4–5 cups per day

o 6 or more cups per day

The questionnaire asks about many other foods and beverages, as well as various physical and lifestyle activities.

The researchers then take the annual number of cups of coffee reported from their questionnaire and calculate an “average daily coffee consumption” amount for each participant. It is then assumed that each participant consumed their “average daily coffee consumption” amount every day for the next 10–15 years, without variation.
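
The report doesn’t publish its coding scheme, but the derivation it describes presumably looks something like this sketch. The per-category midpoint values are my invented assumptions; the group cutoffs are the ones the study reports:

```python
# Hypothetical coding of questionnaire frequency categories into an
# "average daily coffee consumption" value, then into study groups.
# The midpoints are illustrative guesses, not the researchers' scheme.
CUPS_PER_DAY = {
    "Less than 1 cup per month": 0.02,
    "1-3 cups per month": 0.07,   # ~2 cups/month midpoint
    "1 cup per week": 1 / 7,
    "2-4 cups per week": 3 / 7,
    "5-6 cups per week": 5.5 / 7,
    "1 cup per day": 1.0,
    "2-3 cups per day": 2.5,
    "4-5 cups per day": 4.5,
    "6 or more cups per day": 6.0,
}

def assign_group(answer: str) -> str:
    """Bucket a participant using the study's published cutoffs:
    none, light-to-moderate (0.5-3 cups/day), high (>3 cups/day)."""
    daily = CUPS_PER_DAY[answer]
    if daily < 0.5:
        return "none"
    if daily <= 3:
        return "light-to-moderate"
    return "high"

print(assign_group("2-3 cups per day"))  # light-to-moderate
```

Whatever midpoints the researchers actually used, the effect is the same: one coarse category, answered once, becomes a precise-looking daily number.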

Clearly, to base “average daily coffee consumption,” over 10 to 15 years, on actual daily coffee consumption for each participant would be impossible. So, the method used provides a practical solution. But is it a good solution? If you drink coffee, do you remember how many cups you had last week, let alone over the last year?

The answer to both questions is no.

For one thing, people can’t remember how much they consume over a year’s time. This means the data aren’t reliable. If you took the survey twice, three days apart, you’d likely give very different answers across the 119 pages, and neither set of answers would be accurate.
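
Unreliability here has a precise meaning: test-retest reliability is the correlation between two administrations of the same instrument. A small simulation (my own, with an assumed amount of recall noise) shows how badly noisy recall degrades it:

```python
import numpy as np

rng = np.random.default_rng(7)

# 1,000 participants with some true average daily intake, each asked
# twice; the recall-noise level is an assumed value for illustration.
true_daily = rng.uniform(0, 4, 1000)

def recall(true_values):
    """Coarse, noisy self-report, rounded to whole cups."""
    noisy = true_values + rng.normal(0, 1.5, true_values.shape)
    return np.clip(np.round(noisy), 0, None)

first_pass = recall(true_daily)
second_pass = recall(true_daily)

# Test-retest reliability is the correlation between administrations.
print(np.corrcoef(first_pass, second_pass)[0, 1])  # ~0.3-0.4, far from 1
```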

For another, diet history questionnaires like the one linked above don’t measure in standard units. The cup I drink from might be twice, or half, the size of yours, so my “1 cup per day” might equal someone else’s “4–5 cups per day” in terms of volume. This means the data aren’t valid: they don’t measure the amount of coffee consumed, only the number of cups, regardless of the volume those cups hold. They also don’t measure coffee strength. So, the only valid measure we have is of cups, not of the coffee in those cups. This is a fatal flaw in the approach to the question of whether light-to-moderate coffee drinking is associated with health benefits, because the researchers don’t really have a reliable or valid measure of coffee drinking.
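
To make the cup-size problem concrete, here’s a quick simulation (my own; the volumes are invented): participants who all give the identical answer can differ several-fold in actual intake.

```python
import random

random.seed(1)

# "Cups" is not a standard unit: simulate participants who all report
# the same category but drink from cups of different sizes.
reported_cups_per_day = 2.5   # everyone answers "2-3 cups per day"
cup_sizes_ml = [random.uniform(100, 500) for _ in range(5)]

for size in cup_sizes_ml:
    daily_ml = reported_cups_per_day * size
    print(f"cup size {size:5.0f} mL -> {daily_ml:6.0f} mL of coffee per day")
# Identical questionnaire answers; up to ~5x differences in actual intake.
```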

Another hit to data validity comes from assuming people consumed the same amount of coffee every day for 10 to 15 years. These studies typically ask people to fill out the questionnaire when they start the study. If I had started this study 15 years ago, I’d be in the None Group. Now, I’d be in the High Group. When looking to see if a food is related to health conditions, this kind of group assignment error is a big one!
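
A hypothetical participant like the one I just described looks like this in miniature:

```python
# One-time questionnaires freeze group assignment at baseline. A
# hypothetical participant who takes up coffee after year 5 stays in
# the "none" group even though their true average is well above it.
yearly_cups_per_day = [0.0] * 5 + [4.0] * 10   # 15 years of true habit

assigned_at_baseline = "none"                   # based on the year-0 answer
true_average = sum(yearly_cups_per_day) / len(yearly_cups_per_day)
print(assigned_at_baseline, f"vs true average {true_average:.1f} cups/day")
# -> none vs true average 2.7 cups/day: a misclassified participant
```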

Garbage In — Garbage Out. (Image licensed via freepik.com)

So, the researchers start with a bunch of data that are neither reliable nor valid measures of what they are trying to look at, and they concoct an “average daily coffee consumption” value out of it for each participant. This meaningless value is then used to group people — and not without error.

Then, more observations are made: what proportion of each group ends up having various health issues. At least these observations are relatively straightforward, enough so that we can take them at face value.

The reported results show an association between their invalid measure of coffee consumption and, to pick just one, cardiovascular outcomes. They report a “17% lower risk of death from cardiovascular disease for those in the moderate consumption group (0.5 to 3 cups per day).” That seems like a big difference — but is it? The difference in risk of cancer between smokers and non-smokers was between 1000% and 3000%, depending on the study. Risk differences of under 200% are typically viewed as statistical noise — unless you are trying to get something published.
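
For the arithmetic behind that comparison (my framing, not the report’s): a 17% lower risk corresponds to a risk ratio of about 0.83, while risk increases of 1000% to 3000% correspond to risk ratios of roughly 11 to 31.

```python
def percent_change_in_risk(risk_ratio: float) -> float:
    """Express a risk ratio as a percent change in risk."""
    return (risk_ratio - 1.0) * 100

print(f"{percent_change_in_risk(0.83):+.0f}%")  # -17%   (the coffee finding)
print(f"{percent_change_in_risk(11.0):+.0f}%")  # +1000% (low-end smoking estimate)
print(f"{percent_change_in_risk(31.0):+.0f}%")  # +3000% (high-end smoking estimate)
```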

The researchers conclude that “[Their] findings suggest that coffee consumption of up to 3 cups per day is associated with favourable cardiovascular outcomes.” Of course, this conclusion is not supported by the data, as they don’t have a reliable or valid measure of coffee consumption. And the effect is also so small, their conclusion isn’t supported by the analysis, either.

You aren’t using the scientific method if you aren’t conducting experiments. (Image licensed via freepik.com)

And did you catch that all they have are observations? There is no experiment here and because of that, no experimental control. We have no idea if coffee consumption has anything to do with cardiovascular issues. Even if the measures they had were reliable and valid, there were no controls on the other things people ate, drank or did — and any one or combination of things could have been the cause of the cardiovascular events they observed across the groups. The researchers do point out that they applied some statistical modeling to control for some factors like smoking and activity level. But as someone with an advanced degree in research methods and applied statistics, I can assure you that statistical control isn’t the same as actual, experimental control.
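
In practice, “statistical control” usually means adding the measured confounders as covariates in a regression model. Here is a minimal sketch with simulated data (the paper’s actual models would have been more elaborate); the catch is that it can only adjust for confounders that were actually measured:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 10_000

# Simulated confounding: smoking drives both coffee drinking and heart
# events, while coffee itself (by construction) does nothing.
smoker = rng.binomial(1, 0.3, n)
coffee = rng.binomial(1, 0.3 + 0.3 * smoker)   # smokers drink more coffee
event = rng.binomial(1, 0.05 + 0.10 * smoker)  # smoking raises event risk

# Unadjusted model: coffee looks "associated" with heart events.
naive = sm.Logit(event, sm.add_constant(coffee)).fit(disp=False)

# "Statistical control": add the measured confounder as a covariate.
X_adj = sm.add_constant(np.column_stack([coffee, smoker]))
adjusted = sm.Logit(event, X_adj).fit(disp=False)

print(naive.params[1])     # spuriously positive coffee coefficient
print(adjusted.params[1])  # shrinks toward 0 once smoking is included

# This only adjusts for confounders you happened to measure. An
# unmeasured common cause of coffee drinking and heart disease would
# bias the coffee coefficient just the same, with no warning.
```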

These large, longitudinal studies seem impressive. But they are not experiments. They are not part of the scientific method. Even meta-analyses, which combine the results of several of these large, longitudinal studies, are no stronger than the quality of the data in the original studies they combine. Combining 20 piles of crappy data just makes a mountain of crappy data.
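
That point can be shown numerically (my example, not from the article): if every study’s estimate carries the same measurement bias, an inverse-variance pooled estimate reproduces that bias with great precision, no matter how many studies are combined.

```python
import numpy as np

rng = np.random.default_rng(42)

true_effect = 0.0    # coffee truly does nothing (by construction)
shared_bias = 0.20   # every study mismeasures exposure the same way

# 20 noisy study estimates, each carrying the same bias.
estimates = true_effect + shared_bias + rng.normal(0, 0.05, 20)
variances = np.full(20, 0.05 ** 2)

# Inverse-variance weighted (fixed-effect) pooled estimate.
weights = 1 / variances
pooled = np.sum(weights * estimates) / np.sum(weights)
print(f"pooled estimate: {pooled:.3f}")  # ~0.20: the bias, precisely estimated
```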

To determine if coffee has a beneficial (or detrimental) effect on our health, you’d need to rely on an experimental design referred to as a Randomized, Controlled Trial (RCT).

With an RCT, the researcher randomly assigns participants to one of at least 2 groups. The Control Group gets no treatment or intervention; the Test Group gets the treatment or intervention. This is how drug trials are done: the Test Group gets the new drug while the Control Group doesn’t; they get a placebo. After a course of treatment, researchers look at whether the health markers the drug is supposed to improve actually improved for the Test Group versus the Control Group.
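
The random assignment itself is simple; here is a generic sketch (not any specific trial’s protocol):

```python
import random

def randomize(participants, groups=("Control", "Test")):
    """Randomly assign participants to groups so that known and unknown
    confounders balance out across groups on average."""
    order = random.sample(participants, len(participants))  # random order
    return {person: groups[i % len(groups)] for i, person in enumerate(order)}

print(randomize([f"participant_{i}" for i in range(6)]))
```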

Examining the effect of coffee drinking on long-term health would bring a couple of complications for an RCT. For one thing, if you randomly assigned people to the Control and Test groups (and you could have multiple Test Groups to look at levels of coffee consumption), you’d have to force non-coffee drinkers who were randomly assigned to one of the Test Groups to drink coffee, and force coffee drinkers assigned to the Control Group to not drink any. This would likely cause compliance issues over 10 to 15 years.

You’d also want to standardize how much coffee, and what type of coffee, was consumed by each person in each Test Group. The most difficult thing would be to control all the other foods and beverages each Study participant ingests over the period of the study. If your hypothesis is that coffee is what impacts health, you need to control the other foods that could impact health, and you’d also need to control the activities of Study participants for the 10 to 15 years.

The good news is you don’t need to have 400,000 participants in an RCT (because of the strength of the experimental design). However, with a study lasting up to 15 years, you’d still need to start with maybe 5,000 per Group in order to account for people dropping out of the study for various reasons. And even short-term RCTs in the field of nutrition fail to adequately control the experimental conditions. Most diet studies, even those of a few months in length, simply instruct Groups what to eat — with no actual control over compliance.
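
As a back-of-the-envelope check on those numbers (my arithmetic, with an assumed dropout rate):

```python
# Assumed 7% annual dropout, compounded over 15 years (illustrative).
start_per_group = 5_000
annual_dropout = 0.07

remaining = start_per_group * (1 - annual_dropout) ** 15
print(f"{remaining:.0f} of {start_per_group} remain")  # ~1,680 participants
```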

It’s easy to see why the field of Nutrition doesn’t turn to the scientific method for insights, but this is not a license to treat the non-scientific examination of poor-quality data as if it were. And the fact that this pseudoscience is treated like science is why there is so much confusion over what is good for us and what isn’t.

Finally, we have an issue related to the inadequate training of practitioners in the field of nutrition and of the writers who report on nutrition research: training specific to reading and interpreting the research reports found in various journals. I don’t know if the author of the Medium story I reference has training in the field, but I’ve seen those with credentials in the field make the error described below.

They see a research report with a title like “Light-to-moderate coffee drinking associated with health benefits,” and a conclusion in the report saying something like “[Their] findings suggest that coffee consumption of up to 3 cups per day is associated with favourable cardiovascular outcomes,” and end up writing a story with a title like “A Recent Study Has Revealed the Long-Term Impact of Drinking Coffee Every Day.”

Here’s the problem: When a researcher finds a causal relationship between, say, a food and a health condition — i.e., reveals the impact — they will never include the word “associated” in the title of their report. When you see “associated” in a title or a conclusion, it’s an admission that no causal link was found.

And you’ll be hard-pressed to find a study looking at data from one of those diet and activity questionnaires, given to people over 5, 10, 15 or more years, that says it found a causal link between anything, because those big databases lack the data quality and experimental controls needed to make claims of causation. So, what you see is the word “associated” or “linked.” To be clear, the researchers said “[Their] findings suggest that coffee consumption of up to 3 cups per day is associated with favourable cardiovascular outcomes” because they don’t know what, if anything, caused the observed differences in cardiovascular outcomes across the groups.

So, to the author of that Medium story (and to those who have authored similar stories on the impact of a particular food on health): No, a recent study HAS NOT REVEALED the long-term impact of drinking coffee every day. You can’t treat pseudoscience like science.

Thank you for reading this article — hopefully it contained something you found useful.

If you aren’t a member of Medium but are thinking of joining, please join through my page! If you do sign up for Medium through my page, some of your membership fee goes to me (but you still pay the normal membership price).

With a paid membership to Medium, you will get to read more of my work plus you get unlimited access to thousands of Medium writers. And it’s only about $5.00 a month!
