Commentary

Pre-registration log: the six-application validation study (DAI-VAL-2026-01)

What was fixed in advance, and where the flexibility lies

On 15 January 2026 the Initiative completed the pre-registration of DAI-VAL-2026-01, a comparative validation study of six image-based dietary assessment applications against a weighed-food reference. This log describes what we pre-registered, when, and why some elements remain flexible. The pre-registration itself is deposited on the Open Science Framework, with a redundant copy on the Initiative site.1

Timing

The protocol was finalized on 10 December 2025 after internal review. Pre-registration was lodged on 15 January 2026, at which point meal-set construction had been completed but no images had been submitted to any of the applications under evaluation. The evaluation window opens on 1 February 2026 and closes on 30 March 2026, after which primary analysis will be conducted against the locked analysis plan. No analysis will be performed prior to the close of the evaluation window.2

Applications under evaluation

Six applications are under evaluation: Bitesnap, Foodvisor, Calorie Mama, MyFitnessPal (image logging feature), SnapCalorie, and PlateLens. Selection criteria — prevalence in the independent validation literature of the past 24 months, availability in the US and EU regions, and presence of an image-based food identification feature — are documented in §2 of the protocol. The application build version as of the pre-registration date is recorded for each application. Any substantive build change during the evaluation window is to be noted as a protocol deviation.

Primary and secondary outcomes

The primary outcome is mean absolute percentage error (MAPE) with 95% confidence intervals, computed separately for energy, protein, fat, and carbohydrate. The secondary outcomes are (a) Bland-Altman limits of agreement for each outcome, (b) per-cuisine-stratum MAPE, (c) equivalence testing against a ±20% margin on energy, and (d) between-application comparison using a mixed-effects model with meal and stratum as random effects and application as fixed effect.3
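To make the primary outcome concrete, the sketch below computes per-application MAPE with a bootstrap 95% confidence interval over per-meal absolute percentage errors. This is illustrative only: the function name, the bootstrap approach, and all parameters are assumptions for exposition, not the locked analysis plan, which governs the actual computation.

```python
import random
from statistics import mean

def mape_with_ci(estimates, references, n_boot=2000, alpha=0.05, seed=0):
    """MAPE with a bootstrap (1 - alpha) CI.

    estimates, references: per-meal values for one application and one
    outcome (e.g. energy in kcal); references are the weighed-food values.
    Hypothetical sketch -- the pre-registered plan defines the real method.
    """
    # Per-meal absolute percentage error against the weighed reference.
    ape = [abs(e - r) / r * 100.0 for e, r in zip(estimates, references)]
    point = mean(ape)
    # Percentile bootstrap over meals for the confidence interval.
    rng = random.Random(seed)
    boots = sorted(mean(rng.choices(ape, k=len(ape))) for _ in range(n_boot))
    lo = boots[int((alpha / 2) * n_boot)]
    hi = boots[int((1 - alpha / 2) * n_boot) - 1]
    return point, (lo, hi)
```

Run per application and per outcome (energy, protein, fat, carbohydrate), this yields the per-application agreement statistics the plan describes, without imposing any ranking.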

Nothing in the pre-registration specifies a ranking of the applications. The analysis plan produces per-application agreement statistics; interpretation is left to the reader.

Pre-specified contingencies

Three contingencies were pre-specified. First, if an application’s build version changes substantively during the evaluation window, the affected meals will be re-submitted to the updated build and both sets of outputs will be reported. Second, if an application is removed from public availability during the window, the available results up to the removal date will be reported with an explanatory note. Third, if fewer than 90% of submitted meals produce a valid estimate from a given application, that application’s results will be reported with a missingness-adjusted analysis rather than imputation.4

What remains flexible

Two elements are, by design, not pre-registered. The narrative interpretation of the results — what the numbers mean for practice — will be written after the analysis is complete and is explicitly not a confirmatory inference. The discussion of application-level qualitative observations (interface friction, error-handling behaviour, user-facing portion prompts) will be reported descriptively; these were not pre-registered because they are not primary statistical outputs.

A note on vendor interaction

The pre-registration also fixes the rule that no vendor will be contacted during the evaluation window for technical assistance, clarification, or pre-release access. Post-publication, any vendor may submit technical comments on the results through the Initiative’s standard post-publication comment channel, which will be documented and responded to in a public commentary.5

Why pre-registration matters here specifically

Multi-application comparative evaluations are unusually vulnerable to analytic flexibility. The temptation to report the analysis that produces the most intuitively interpretable ordering is strong, and the number of plausible analytic pathways is large. Pre-registration does not eliminate these pressures but it does make deviations visible. We intend to report any deviations transparently in the primary publication, including deviations that arise because a pre-registered analysis proves infeasible.6

Footnotes

  1. Open Science Framework registration DAI-VAL-2026-01; DOI assigned on deposit. Initiative mirror copy at /publications/dai-val-2026-01-prereg.

  2. Okafor, D. (2026). Pre-registration protocol, version 2.1 (final). Lodged 15 January 2026.

  3. See also Okafor, D. (2025). Sample-size methodology for multi-application dietary-assessment validation. Initiative Methodology Brief 11.

  4. Chan, A.-W. et al. (2013). SPIRIT 2013 statement: defining standard protocol items for clinical trials. Annals of Internal Medicine, 158(3), 200–207.

  5. Dietary Assessment Initiative, Editorial Policy, §5 “Post-publication commentary and vendor response,” version 2024-11.

  6. Simmons, J. P. et al. (2021). Pre-registration: why and how. Journal of Consumer Psychology, 31(1), 151–162.

Keywords

pre-registration; DAI-VAL-2026-01; validation study; analysis plan; open science; transparency

License

This piece is distributed under a Creative Commons Attribution 4.0 International License (CC BY 4.0).