Short, citable methodology briefs covering the analytic and design conventions the Initiative uses in its validation work — and that we recommend to other research groups producing comparable evidence.
Methodology brief · Feb 17, 2026 · Okafor & Henriksen
Claims that a new dietary assessment method is non-inferior to an established one require a pre-specified equivalence margin with documented clinical or operational justification. This brief describes the Initiative's convention for selecting, pre-specifying, and reporting equivalence margins for image-based and AI-assisted assessment studies.
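The core of any such claim is a comparison between a confidence bound and the pre-specified margin. As a minimal sketch (not the Initiative's documented procedure), the following checks non-inferiority by asking whether the upper bound of a one-sided 95% confidence interval for the mean error difference (new method minus reference) stays below the margin; the function name and the numeric inputs are hypothetical, and a normal approximation is assumed.

```python
from statistics import NormalDist

def noninferiority_check(diff_mean, diff_se, margin, alpha=0.05):
    """Non-inferiority via the one-sided CI bound (normal approximation).

    diff_mean, diff_se: mean and standard error of (new - reference) error.
    margin: pre-specified equivalence margin, same units as diff_mean.
    Returns the upper CI bound and whether it falls below the margin.
    """
    z = NormalDist().inv_cdf(1 - alpha)
    upper = diff_mean + z * diff_se
    return upper, upper < margin

# Hypothetical example: mean error difference of 12 kcal (SE 8 kcal)
# against a pre-specified margin of 30 kcal.
upper, ok = noninferiority_check(12.0, 8.0, 30.0)
```

Note that the margin itself must be justified before the data are seen; the code only formalises the comparison, not the justification.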
Methodology brief · Dec 9, 2025 · Okafor & Rivera
Many dietary assessment validation studies rely on a single rater for reference coding, which leaves measurement reliability undocumented. This brief describes a blinded re-rating protocol for a random sub-sample, with concordance metrics, acceptance thresholds, and reporting requirements.
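For categorical reference codes, one standard concordance metric for a blinded re-rating is Cohen's kappa, which corrects raw agreement for agreement expected by chance. A self-contained sketch (the brief's actual metric set and acceptance thresholds are not reproduced here):

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters' paired categorical codes."""
    if len(rater_a) != len(rater_b) or not rater_a:
        raise ValueError("raters must supply equal-length, non-empty codings")
    n = len(rater_a)
    # Observed agreement: proportion of items coded identically.
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # Chance agreement: from each rater's marginal category frequencies.
    freq_a, freq_b = Counter(rater_a), Counter(rater_b)
    expected = sum(freq_a[c] * freq_b.get(c, 0) for c in freq_a) / n ** 2
    return (observed - expected) / (1 - expected)
```

For continuous reference measurements (e.g. coded gram weights), an intraclass correlation coefficient would play the analogous role.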
Methodology brief · Sep 1, 2025 · Rivera & Weiss
Weighed-food and image-based dietary assessment studies sit at a boundary between minimal-risk food science and human-subjects research with identifiable images and health data. This brief summarises the ethical review considerations that the Initiative applies, including consent for image data, incidental finding policies, and data-retention rules.
Methodology brief · Jul 7, 2025 · Patel
Cuisine-level stratification of evaluation sets is common in image-based dietary assessment yet inconsistent across studies. This brief proposes definitions, an allocation scheme, and minimum stratum sizes for stratified inference, drawing on a pragmatic taxonomy rather than a contested cultural one.
Methodology brief · May 13, 2025 · Okafor
Sample-size planning in image-based dietary assessment validation is frequently retrospective and underpowered. This brief sets out pre-specification rules for n based on the width of the MAPE confidence interval, the LoA confidence interval, and category-stratified inference needs.
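When the planning target is the width of a confidence interval rather than power against an alternative, the required n follows from inverting the interval's half-width formula. A minimal sketch under a normal approximation (the anticipated SD and target half-width below are hypothetical planning inputs; the briefs' MAPE intervals are bootstrap-based, so this serves only as a pre-study approximation):

```python
import math
from statistics import NormalDist

def n_for_halfwidth(sd, target_halfwidth, conf=0.95):
    """Smallest n whose normal-approximation CI half-width
    z * sd / sqrt(n) does not exceed the target."""
    z = NormalDist().inv_cdf(0.5 + conf / 2)
    return math.ceil((z * sd / target_halfwidth) ** 2)

# Hypothetical inputs: anticipated SD of per-item absolute percentage
# error of 25 points; target 95% CI half-width of 5 points.
n = n_for_halfwidth(25.0, 5.0)
```

Category-stratified inference multiplies this: each stratum that must support its own interval needs its own n of this form.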
Methodology brief · Mar 10, 2025 · Weiss & Henriksen
The dietary assessment literature often cites accuracy figures drawn from vendor white papers alongside figures from independent validation studies, without distinguishing provenance. This brief proposes an editorial convention for labelling vendor-reported and independently replicated numbers in Initiative-produced evidence summaries.
Methodology brief · Jan 27, 2025 · Rivera & Patel
Weighed-food reference measurements are only as reliable as the scale behind them. This brief sets out a calibration, verification, and documentation checklist for kitchen scales used as reference instruments in dietary assessment validation studies, including a drift-check schedule and tare-handling rules.
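The drift-check step reduces to comparing readings of certified reference masses against a tolerance. A minimal sketch (the reference masses, tolerance, and flagging format below are hypothetical, not the brief's documented schedule):

```python
def drift_check(reference_g, readings_g, tolerance_g):
    """Return (reference, observed, deviation) triples for any reading
    whose deviation from the certified mass exceeds the tolerance."""
    failures = []
    for ref, obs in zip(reference_g, readings_g):
        deviation = obs - ref
        if abs(deviation) > tolerance_g:
            failures.append((ref, obs, deviation))
    return failures

# Hypothetical check: 100 g and 500 g reference masses, 1 g tolerance.
flags = drift_check([100.0, 500.0], [100.4, 501.6], 1.0)
```

A scale that produces any flagged reading would be recalibrated before further reference weighing, and the check itself logged either way.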
Methodology brief · Dec 4, 2024 · Rivera
USDA FoodData Central (FDC) exposes multiple, partially overlapping data types with different analytical provenance and intended uses. This brief summarises the distinctions between Foundation Foods, FNDDS (Survey), and SR Legacy, and offers decision rules for selecting the appropriate entry in validation and epidemiologic work.
Methodology brief · Oct 21, 2024 · Okafor
Mean Absolute Percentage Error (MAPE) is widely reported for image-based and AI-assisted dietary assessment, but conventions for rounding, thresholds, and uncertainty differ. This brief describes the rounding rule, reporting thresholds, and bootstrap confidence interval procedure used in Initiative work.
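A percentile bootstrap over paired (reference, estimate) observations is one standard way to attach uncertainty to a MAPE point estimate. The sketch below illustrates the mechanics only; the Initiative's actual resample count, seed-handling, and rounding rules are documented in the brief itself, and the example data are hypothetical.

```python
import random

def mape(actual, predicted):
    """Mean absolute percentage error, in percent."""
    return 100.0 * sum(abs((p - a) / a)
                       for a, p in zip(actual, predicted)) / len(actual)

def bootstrap_mape_ci(actual, predicted, n_boot=2000, conf=0.95, seed=0):
    """Percentile-bootstrap CI for MAPE, resampling paired observations."""
    rng = random.Random(seed)
    pairs = list(zip(actual, predicted))
    stats = []
    for _ in range(n_boot):
        resample = [rng.choice(pairs) for _ in pairs]
        a, p = zip(*resample)
        stats.append(mape(a, p))
    stats.sort()
    lo = stats[int(n_boot * (1 - conf) / 2)]
    hi = stats[int(n_boot * (1 - (1 - conf) / 2)) - 1]
    return lo, hi

# Hypothetical validation pairs: reference grams vs. estimated grams.
actual = [120, 85, 210, 60, 150, 95, 180, 70, 130, 200]
predicted = [110, 90, 190, 66, 155, 85, 200, 63, 140, 185]
lo, hi = bootstrap_mape_ci(actual, predicted)
```

Resampling pairs (rather than errors) preserves the coupling between item size and error magnitude, which matters when percentage errors vary systematically with portion size.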
Methodology brief · Sep 17, 2024 · Okafor
The Initiative adopts a consistent convention for reporting 95% limits of agreement (LoA) in dietary assessment validation. This brief describes the Bland-Altman procedure we follow, how we handle proportional bias, and what should appear in every agreement plot and table.
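The classical Bland-Altman quantities are the mean of the paired differences (bias) and that mean plus or minus 1.96 sample standard deviations (the 95% limits of agreement). A minimal sketch of just that computation, with hypothetical paired measurements; handling of proportional bias and the full plot/table requirements are what the brief itself specifies.

```python
from statistics import mean, stdev

def limits_of_agreement(method_a, method_b):
    """Bias and 95% Bland-Altman limits of agreement for paired data.

    Returns (bias, lower LoA, upper LoA), where bias is the mean of
    (a - b) and the limits are bias +/- 1.96 sample SDs.
    """
    diffs = [a - b for a, b in zip(method_a, method_b)]
    bias = mean(diffs)
    sd = stdev(diffs)  # sample SD (n - 1 denominator)
    return bias, bias - 1.96 * sd, bias + 1.96 * sd

# Hypothetical paired gram weights from two assessment methods.
bias, lo, hi = limits_of_agreement([10, 20, 30, 40, 50],
                                   [9, 18, 27, 36, 45])
```

If the differences grow with the magnitude of the measurement (proportional bias), these constant limits mislead, which is why the convention addresses that case explicitly.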