How we work

Our methodology

Every entry on Evidentia Nutrition follows the same appraisal process. This page explains how we assess evidence, assign ratings, and handle uncertainty. The full methodology and decision appendix are available on request via the contact page.

How do you decide what counts as good evidence?

We follow an evidence hierarchy that prioritises human outcomes over surrogate markers, and controlled evidence over observational data. Systematic reviews and meta-analyses of randomised controlled trials sit at the top. Mechanistic and animal studies are used to assess biological plausibility, but they do not form the basis of evidence ratings. We appraise studies with established tools: RoB 2 for randomised trials, AMSTAR 2 for systematic reviews, and the Newcastle-Ottawa Scale for observational studies.

We also acknowledge the specific methodological challenges of nutrition research: the dominance of short-duration trials, reliance on surrogate endpoints, industry funding bias, and the difficulty of blinding dietary interventions. These limitations are noted in each entry where they affect the evidence.

What do the ratings mean?

Ratings are assigned to specific outcomes, not to ingredients in general. An ingredient may have a Strong rating for deficiency correction and an Emerging rating for cognitive performance. Both appear on the same entry.

Strong

Multiple well-designed RCTs or a high-quality systematic review with consistent findings, adequate sample sizes, low risk of bias, and outcomes directly relevant to the claim. Requires independent replication.

Moderate

Some controlled evidence with limitations in scale, design consistency, or applicability. May include well-designed observational evidence with plausible mechanistic support.

Emerging

Early-stage evidence: single trials, small samples, short duration, or predominantly mechanistic data. Positive signal exists but is insufficient for confident claims.

Insufficient

Human evidence is absent, too weak to interpret, or so contradictory that no directional conclusion is possible. This is not a negative finding: it means we do not yet know.

How do you handle conflicting studies?

Conflicting evidence is the norm in nutrition research, not the exception. Where studies reach different conclusions, entries explain the conflict and explore possible reasons: differences in population studied, baseline status, dose, form, duration, outcome measurement, or study quality. We give readers the tools to understand why studies disagree rather than presenting a false consensus.

Negative trials and null findings are as important as positive ones and are always included in evidence summaries.

What is form-specific evidence and why does it matter?

Evidence for one preparation of an ingredient does not automatically transfer to other preparations of the same ingredient. Magnesium bisglycinate, citrate, malate, and oxide have different bioavailability profiles and different evidence bases. The same is true for curcumin phytosome versus standard curcumin, methylcobalamin versus cyanocobalamin, and methylfolate versus folic acid.

Where evidence is form-specific, entries state clearly which form was studied and do not extend that evidence to other forms without a specific basis for doing so. This is one of the most consistently under-communicated principles in public nutrition information.

Who writes and reviews the entries?

Entries are written and reviewed by the founding director, a surgeon and professor with an Oxford MSc in Evidence-Based Healthcare. An advisory board of registered dietitians, clinical academics, and public health specialists provides oversight of the methodology and reviews entries in areas requiring specialist input before publication.

Conflicts of interest are declared on the about page and where relevant on individual entries. The founding director holds a clinical leadership role at Personally, a personalised supplement company. This relationship is disclosed, and a recusal framework governs which entries are independently reviewed as a result.

How often is content updated?

All entries are reviewed on a standard cycle of 18 to 24 months, depending on the pace of evidence development in that area. Entries are also reviewed out of cycle when a significant new trial is published, when a regulatory decision or safety alert is relevant, or when a credible correction is submitted by a reader or practitioner. The last reviewed date and version number appear on every entry. Material corrections are noted visibly on the entry rather than silently amended.

Full methodology documents

The complete Editorial and Appraisal Methodology, Decision Appendix, and Governance Framework are available on request. Contact us via the contact page.