Learn → Module 09

Evaluating any nutrition claim

A reusable seven-question checklist for assessing any nutrition claim — a TikTok, a podcast hot take, a news headline, a supplement label, a doctor's offhand remark — in under a minute. Synthesizes the study-design hierarchy, the relative-vs-absolute-risk trap, and the industry-funding signature into a tool you can run on anything.

TL;DR. A claim shows up. A TikTok clip. A podcast quote. A headline. A supplement label. An aside from your doctor. You can run 7 questions on any of them in under a minute. What is the claim? Where did it come from? Who paid for the study? What kind of study was it? What is the real change in risk, in plain numbers? Did anyone replicate it? Who is telling you, and what do they sell? This module is a checklist. The goal is to stop you from redoing your kitchen every time a confident stranger talks.

What you'll learn

  • The 7-question checklist, in the order that filters fastest.
  • How to turn a scary percentage into the real change in risk.
  • How to find the actual paper behind any "studies show" headline in 2 clicks.
  • Why most supplements fail randomized trials but win observational ones.
  • The animal / cell / human gradient, and why dose ratios usually answer the question.
  • The vitamin-E-in-pills-vs-vitamin-E-in-almonds problem, and why it shows up again and again.
  • What Mendelian randomization can and cannot tell you.
  • The repeating tells of wellness-influencer and journalist content.
  • When the honest answer is "we don't know," and what to do then.

1. The 7 questions

Here is the checklist. Run the questions in order. Each one filters out a class of bad claims so the later questions have less to do.

Q1. What is the actual claim? Most viral statements fall apart when you ask for specifics. "Seed oils cause inflammation." In whom? At what dose? Compared to what? "Coffee causes cancer." Which preparation? Which cancer? How much coffee? A claim with no population, no dose, no outcome, and no comparison is a slogan, not a finding.

Q2. Where did it come from? A study, a headline, an anecdote, or a vibe? Trace it backward. A WhatsApp forward cites a podcast. The podcast cites an article. The article should cite a paper. The paper has a DOI (a unique digital ID). Each hop adds error. If the chain ends with no paper, that is your answer.

Q3. Who paid for the study? Lesser et al.'s 2007 PLoS Medicine analysis found that industry-funded beverage studies favor the sponsor 4 to 8 times more often than independent ones. Spector cites a subset where industry-funded drink studies favor the sponsor 20 times more often. Food companies fund about 11 times more nutrition research than the NIH does. This does not make industry findings wrong. It shifts the prior. Read the funding line, the author disclosures, and the funding of any meta-analysis (a study that pools many studies together) that includes the paper.

Q4. What kind of study is it? A short version of the hierarchy from module D1. A randomized controlled trial (RCT, where people are randomly assigned to one diet or another) with hard outcomes is the strongest. Then a large prospective cohort (a group followed forward in time) with repeated diet check-ins, replicated across populations. Then Mendelian randomization (MR, a method that uses inherited genes as a stand-in for lifelong exposure). Then a controlled feeding trial. Then a single cohort. Then a cross-sectional or ecological study (a snapshot of a whole group, which can lead to the ecological fallacy: drawing personal conclusions from group-level data). Then a case-control study (comparing sick people to healthy people after the fact, which suffers from recall bias). Then animal or cell-culture work, which can only show a mechanism. Then anecdote, which is not evidence. A 30-person feeding trial and a 500,000-person cohort with 20-year follow-up should not produce the same level of confidence even if they reach the same number. Rank the design before you read the headline.

Q5. What is the absolute (not relative) change in risk? This is the most weaponized number in nutrition reporting. A 30% relative increase sounds huge. The real change is usually tiny. We work this out in the next section.

Q6. Was it peer-reviewed and replicated? A single paper is one roll of a noisy die. A finding holds up when it shows up in different populations, different research groups, and different study designs. A finding that lives in 1 design only (only mice, only case-control, only 1 cohort) is weaker.

Q7. Who is telling you, and what do they make money on? Every claim arrives through a human with a product, a brand, a clinic, a supplement line, or a book to sell. A registered dietitian on salary at a hospital and an influencer pushing a $79 "metabolic reset" course are not the same source, even if they say the same sentence.

Most claims die at Q1 or Q2. They cannot survive being asked what they mean or where they came from.

2. The relative-risk trap, with worked examples

Almost every viral nutrition headline reports relative risk. Almost none reports absolute risk. The gap is the largest source of public confusion in the field.

Example 1: processed meat and colorectal cancer. The IARC's 2015 Group 1 classification produced headlines like "processed meat raises colorectal cancer risk by 18%." That 18% is a relative risk per 50g daily serving. The absolute numbers underneath: lifetime colorectal cancer risk in the Western population is about 5%. An 18% relative jump on a 5% baseline lifts lifetime risk to about 5.9%. That is a 0.9-percentage-point change. Spector's version of the same data: the average Italian meat eater's processed-meat cancer risk equals smoking 3 cigarettes per year. Same data. Different framing.

Example 2: the 2018 Lancet alcohol study. "No safe level of alcohol" ran the same trick in the other direction. The absolute numbers: 1 drink per day raises the risk of an alcohol-related event by about 1 case per 1.25 million bottles of wine. The statement is technically true. A non-zero risk exists. At moderate intake the size of the risk is trivial.

Three rules to apply when a percentage hits the page.

  1. What is the baseline rate? Without it, a relative risk has no meaning. A 50% jump on a 0.0002% risk is a different sentence from a 50% jump on a 5% risk.
  2. What is the absolute change? Translate "X% higher" into "X.Y percentage points," or "1 extra case per Z thousand people-years," or into number needed to harm (NNH: how many people need the exposure for 1 to be affected).
  3. What is the sample size and follow-up length? A 2% relative effect across 500,000 people over 20 years is plausible. The same effect in 200 people over 6 weeks is noise.
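The three rules above are arithmetic, so they fit in a few lines. This is a minimal sketch (the helper name `absolute_change` is mine, not from any library), run on the processed-meat numbers from Example 1: a 5% lifetime baseline and an 18% relative increase.

```python
def absolute_change(baseline, relative_risk):
    """Convert a relative risk on a baseline rate into the new rate,
    the absolute percentage-point change, and number needed to harm."""
    new_rate = baseline * relative_risk
    diff = new_rate - baseline                    # absolute change (as a fraction)
    nnh = 1 / diff if diff > 0 else float("inf")  # people exposed per 1 extra case
    return new_rate, diff, nnh

# Worked example from the text: ~5% lifetime colorectal-cancer
# baseline, 18% relative increase per 50g daily serving (RR 1.18).
new_rate, diff, nnh = absolute_change(0.05, 1.18)
print(f"new lifetime risk: {new_rate:.1%}")                      # ~5.9%
print(f"absolute change:   {diff * 100:.1f} percentage points")  # ~0.9
print(f"NNH:               ~{nnh:.0f}")                          # ~111
```

Running the same function on rule 1's contrast (a 50% jump on a 0.0002% baseline versus on a 5% baseline) makes the point numerically: the NNH goes from roughly 1 million down to 40.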

The habit of turning relative into absolute is the single best move a reader can build. Most headlines die on it.

3. The "studies show" tell: find the actual paper

Headlines do not link the paper. Press releases do not link the paper. Podcasts do not link the paper. The 2-click recovery: search Google Scholar or PubMed for the lead author's last name, plus the journal, plus the year. The paper will surface near the top. Find the DOI in the citation. Resolve it at doi.org, or look for an open-access version on the author's institutional page, PubMed Central, medRxiv, or bioRxiv.
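The author-plus-journal-plus-year lookup can also be done programmatically against PubMed's documented E-utilities API. A hedged sketch (it only builds the query URL; `pubmed_query_url` is my own helper name, and the field tags `[Author]`, `[Journal]`, `[pdat]` are standard PubMed search syntax):

```python
from urllib.parse import urlencode

# NCBI E-utilities search endpoint (a public, documented API).
EUTILS = "https://eutils.ncbi.nlm.nih.gov/entrez/eutils/esearch.fcgi"

def pubmed_query_url(author, journal, year):
    """Build a PubMed search URL from the same three facts
    the 2-click recovery uses: lead author, journal, year."""
    term = f"{author}[Author] AND {journal}[Journal] AND {year}[pdat]"
    params = {"db": "pubmed", "term": term, "retmode": "json"}
    return f"{EUTILS}?{urlencode(params)}"

# The Lesser 2007 funding paper from Q3:
url = pubmed_query_url("Lesser", "PLoS Medicine", 2007)
print(url)
```

Fetching that URL returns the matching PubMed IDs as JSON; from there the record page carries the DOI to resolve at doi.org.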

Then read 3 things before anything else. The abstract conclusion. The funding declaration in the small-print footer. The limitations section. A surprising number of "groundbreaking" headline papers list, in their own limitations section, the same confounders that should have stopped the headline.

If the chain ends with no paper, the claim is anecdote, mechanism speculation, or marketing.

4. The supplement evidence question

Half of Americans take supplements. The global market is heading toward about $200 billion. Most major supplements tested in a proper randomized trial have failed or shown harm. The structural reason is healthy-user bias: people who take supplements differ from people who do not on dozens of measurable axes (income, education, smoking, exercise, regular medical care) and dozens of unmeasured ones. Observational studies see those differences and read them as a supplement effect. Randomized trials assign supplements at random, so the differences cancel out. They do not see the effect.

The pattern repeats.

  • Vitamin D for fractures. Glowing observational data, then an MR analysis of over 500,000 people and 188,000 fractures found no causal effect.
  • Fish oil for heart disease. Dropped by the AHA after a 25,000-person trial and a 79-trial meta-analysis covering 112,000 people.
  • Calcium for bones. The WHI trial in 35,000+ women found small density gains but no fewer hip fractures, plus more kidney stones.
  • Vitamin E for cancer prevention. SELECT increased prostate cancer.

When a new supplement claim shows up, ask 3 things. Is the supporting evidence observational (and so probably healthy-user-biased)? Is there any RCT with hard outcomes? What did it show? A supplement with a strong observational signal and either no RCT or a null RCT usually does nothing useful.

5. The animal / in vitro / human gradient

Most viral mechanism claims start in cell culture or rodent studies. Mice are not 70 kg humans. Three filters.

  • Species. Rodent metabolism differs from human metabolism in many specific ways. The acrylamide cancer scare came from rodents fed doses no human reaches in a lifetime. Saccharin caused bladder cancer in rats via a pH mechanism that does not exist in humans.
  • Dose. Cell-culture studies routinely use doses 10 to 10,000 times higher than physiological levels. A compound that kills cancer cells at 100 micromolar may never reach 1 micromolar in any real human cell. Always check the dose against realistic intake.
  • Outcome. Mechanism studies measure stand-in markers (gene expression, cytokine levels, oxidative stress). These may or may not match anything clinical. A diet that "reduces inflammation markers" may or may not reduce any inflammatory disease.

Spector's rule: mechanism work is the start of a question, not the end. A finding stuck in mice or cells is hypothesis-generating. If it survives translation to humans, at realistic doses, with real outcomes, it earns the next tier of trust.

6. The single-nutrient fallacy: vitamin E in pills versus almonds

The clearest case study in nutritionism's failure is vitamin E. Observational data through the 1990s showed people with higher dietary vitamin E intake had less heart disease. The mechanism story was plausible: vitamin E is an antioxidant that protects LDL from oxidation. The pharmaceutical move was obvious. Put vitamin E in a pill and test it.

The HOPE-TOO trial (Lonn et al., JAMA 2005), GISSI Prevention, and the Women's Health Study together randomized tens of thousands of people to vitamin E or placebo. None showed a cardiovascular benefit. HOPE-TOO showed a small, real increase in heart-failure hospitalizations in the vitamin E group. SELECT showed more prostate cancer in the vitamin E group.

The observational data were not wrong about almonds. Vitamin E does not work alone. Almonds contain vitamin E plus dozens of other compounds: monounsaturated fats, magnesium, fiber, polyphenols, plant sterols. They act together. Pull out alpha-tocopherol, put it in a gelcap, and the system that did the work is gone. Pollan's line: a leaf of thyme contains dozens of antioxidants; isolate 1 and you have a different compound in a different context.

The same lesson shows up elsewhere. Beta-carotene supplements increased lung cancer in smokers (CARET). Calcium supplements appear to raise cardiovascular risk; calcium from food does not. Whenever you see "compound X has benefits, and you can take it as a pill," start from the prior that the pill does not act like the food.

7. Mendelian randomization: what it can and cannot tell you

Mendelian randomization (MR) is one of the strongest tools modern nutrition epidemiology has. It is also one of the easiest to misread. The method uses genetic variants linked to an exposure (for example, a variant that raises lifetime LDL, or a variant that alters lifetime vitamin D) as a stand-in for the exposure itself. Genes are handed out at random at conception, so the variant works like a lifelong randomized trial that lifestyle cannot confound. If gene-variant carriers have more or less of an outcome, that is causal evidence, not just correlation.

The standard positive case is LDL and cardiovascular disease. Multiple MR analyses across hundreds of thousands of people show that lifelong genetically higher LDL causes CVD. The standard negative case is vitamin D and fractures. The MR analysis of over 500,000 people and 188,000 fractures found no causal effect of vitamin D status on fracture risk. Observational data had suggested benefit for years. The genetic design said no.

MR is a sharp tool for a narrow class of questions. It cannot speak to exposures without strong genetic instruments (most foods, most dietary patterns). It cannot speak to non-linear effects. It cannot speak to short-term effects. When a claim cites MR, check the instrument and the population size. When a claim could be tested by MR but has not been, that is also a tell.

8. The wellness-influencer pattern

A short list of moves shows up over and over in influencer-driven nutrition content. Spotting them is most of the work.

Credential laundering. "Doctor" can mean MD, DO, PhD, ND (naturopath), DC (chiropractor), or an honorary doctorate. "Nutritionist" is unregulated in most US states. "Registered dietitian" (RD) is the regulated, credentialed title. "Biohacker" has no certification. "Functional medicine practitioner" is a private credential, not a state license. Read the actual letters.

The mechanism aside. A confident-sounding mechanism story ("seed oils oxidize and damage your mitochondria"), served up as if it were outcome data. Mechanism stories are cheap. Outcome data are expensive. A persuasive mechanism with no outcome data is a hypothesis, not a finding.

The sponsorship disclosure. Most influencer videos contain a "use code X for 20% off" segment. The product on offer is the answer to Q7.

The single-study cite. 1 study. Often small. Often in mice. The rest of the literature (systematic reviews, contradicting cohorts, null trials) is not mentioned. Find the paper, then run Google Scholar's "cited by" link to read the response literature.

The "they don't want you to know" frame. An authority-distrust narrative borrowed from the real history of industry capture (covered in module D3) and applied to claims that have nothing to do with industry. Distrust authorities. That includes the influencer.

The "everything you've been told is wrong" tell. A creator whose every claim is a reversal is selling the reversal as a product.

9. The journalist tells

Headlines are written by editors, not researchers. Phrases that should slow your reading.

  • "Scientists baffled." Almost no scientists are baffled. The headline is doing emotional work.
  • "This 1 trick." No real finding fits in a trick.
  • "The food they don't want you to eat." No "they" exists. The food is in every supermarket.
  • "New study finds." "New" usually means "single," which is the weakest position in evidence.
  • "Linked to." Almost always observational. Almost always a relative risk inflated for the headline.
  • "May reduce risk of." "May" is doing the work. Often a mechanism study or a single weak link.
  • "Up to X percent." "Up to" is doing the work. The real effect could be near zero.
  • "Doctors hate this." Marketing copy.

None of these tells proves the underlying paper is bad. They prove the headline is not a fair summary of it.

10. What "I don't know" means

The honest answer in nutrition is often uncertainty. The field is young. The tools are limited. The funding is captured. The replication crisis is real. A reader who can say "I don't know" is in a better position than a reader who is confident about the latest TikTok.

The point is not paralysis. The point is calibration. Build your eating around the findings that have lasted. Patterns that show up across study designs, populations, and decades. Mediterranean style. Mostly plants. Less ultra-processed food. Less sugar-sweetened soda. Whole grains over refined. These are not maybes. They are the strongest signals the field produces. Hold the rest with light hands.

When a new claim shows up, run the 7 questions. Most die at Q1, Q2, or Q3 and need no more attention. The ones that survive deserve real reading. The very few that survive that reading deserve a small change in behavior. Usually small. Often reversible. Never the kitchen rebuild the headline implied.

FAQ

Q: Should I trust [X] podcast?

Trust the claim, not the host. A podcaster with strong credentials can make a poorly evidenced claim. A podcaster with no credentials can repeat a well-evidenced finding. Run the checklist on the specific claim, not on the personality.

Q: Why do studies contradict each other?

People respond differently. PREDICT showed a 10-fold variation in glucose response to the same meal. Different study designs catch different signals. At the usual significance threshold, about 5% of well-run studies of a nonexistent effect will produce a "significant" finding by chance alone. And industry funding skews the literature in predictable directions. The real signal is when results converge across designs, populations, and laboratories.

Q: Are RCTs always best?

No. Willett spends Chapter 3 of Eat, Drink, and Be Healthy defending cohort designs against the reflex that "RCTs are gold." Long-term food RCTs are mostly not feasible. You cannot blind people to broccoli. Diet adherence collapses. Hard outcomes take decades to show up. The $415-million WHI low-fat trial got undermined when the "low-fat" arm did not eat much less fat than the control arm. A well-run prospective cohort with repeated diet check-ins can be more informative than a small short RCT.

Q: What is a meta-analysis, and should I trust them?

A meta-analysis pools many studies on the same question into 1 number. The quality depends on the studies inside it. A meta-analysis of 15 weak observational studies produces a confident-looking number that is no stronger than its inputs. Read the funding line. Read the inclusion rules. Check whether the pooled studies actually agree with each other. The 2019 Canadian-led meta-analysis that declared red meat safe is the cautionary example: its lead author had undisclosed prior ILSI-linked funding, and its "safe" conclusion rested on downgrading the harm-direction evidence that other groups had judged meaningful.

Q: Who counts as a credible nutrition source?

A starter list. Registered dietitians with no product line. The Harvard T.H. Chan School of Public Health Nutrition Source. The NIH Office of Dietary Supplements. The Cochrane Collaboration. Researchers with long publication records who declare their conflicts. None are perfect. The 7 questions still apply.

Q: Is anything in PubMed reliable?

PubMed lists papers. It does not vouch for them. A paper on PubMed has cleared peer review at some journal. The journal may or may not be a strong one. Reliability depends on design, sample size, funding, and replication. Not on the fact of being indexed.

Q: Should I follow my doctor or my dietitian?

For medical and drug questions: doctor. For day-to-day nutrition planning and disease-specific eating patterns: dietitian. Most physicians get only a few hours of practical nutrition training across an entire medical degree. Registered dietitians are the credentialed specialists. For complex cases (diabetes, kidney disease, eating disorders, IBD): both, coordinated.

Q: What if I run the 7 questions and the answer is still ambiguous?

That is often the correct answer. Most live nutrition questions sit in an evidence gray zone. The right response is calibration, not certainty. Make small changes you can sustain. Avoid big changes based on small evidence. The goal is not to know everything. The goal is to stop getting knocked around by every confident stranger.

Sources

  • Pollan, M. In Defense of Food: An Eater's Manifesto — Chapter 9, "Bad Science," for the methodological dissection and the FFQ underreporting figures. Penguin, 2008.
  • Nestle, M. Food Politics — Chapters 5 and 6 for the anatomy of industry funding of nutrition professionals and journals. University of California Press, 2002 (10th-anniversary ed. 2013).
  • Spector, T. Spoon-Fed — Introduction, Chapter 5 (supplements), Chapter 9 (meat and the IARC), and Chapter 20 (alcohol) for the relative-vs-absolute-risk worked examples. Vintage, 2020.
  • Willett, W. Eat, Drink, and Be Healthy — Chapter 3 for the study-design hierarchy from the Harvard cohorts' perspective. Free Press, revised 2017.
  • Schatzker, M. The Dorito Effect — Chapters 2 and 3 for the dilution effect and the legal/process distinction behind "natural flavor." Simon & Schuster, 2015.
  • Kearns, C., Schmidt, L., Glantz, S. "Sugar industry and coronary heart disease research: a historical analysis of internal industry documents." JAMA Internal Medicine 176(11):1680-1685, 2016. DOI: 10.1001/jamainternmed.2016.5394.
  • Lesser, L. et al. "Relationship between funding source and conclusion among nutrition-related scientific articles." PLoS Medicine 4(1):e5, 2007. DOI: 10.1371/journal.pmed.0040005.
  • Lonn, E. et al. "Effects of long-term vitamin E supplementation on cardiovascular events and cancer: the HOPE and HOPE-TOO trial." JAMA 293(11):1338-1347, 2005. DOI: 10.1001/jama.293.11.1338.
  • Hall, K. et al. "Ultra-processed diets cause excess calorie intake and weight gain." Cell Metabolism 30(1):67-77.e3, 2019. DOI: 10.1016/j.cmet.2019.05.008.
  • IARC Monograph 114 (2018), "Red Meat and Processed Meat," for the source of the 18-percent-per-50g processed-meat colorectal cancer relative risk used as the worked example.

Related glossary

  • Epidemiology — the study of how diseases distribute across populations.
  • Randomized controlled trial — the gold-standard design, and its limits in food research.
  • Cohort study — the prospective design that powers most nutritional epidemiology.
  • Case-control study — the retrospective design that produces most early cancer-and-food findings, with its recall-bias caveat.
  • P-hacking — selective analysis and reporting that inflates the false-positive rate.
  • Ecological fallacy — drawing individual-level conclusions from population-level data.
  • Relative risk — the ratio between exposed and unexposed groups' incidence; the most weaponized statistic in nutrition reporting.
  • Absolute risk — the actual percentage-point change in incidence; the number that should anchor any reporting.
  • Number needed to treat / harm — how many people need the exposure for one to be affected; the most intuitive translation of effect size.
  • Conflict of interest — disclosed financial ties between researchers and product manufacturers.
  • Mendelian randomization — using genetic variants as instrumental variables for lifelong exposures.
  • Systematic review — a pre-specified search-and-inclusion protocol summarizing a literature.
  • Meta-analysis — a statistical pooling of multiple studies' results.