# Compare Stored Results

## Overview
VIPR can compare multiple stored inference results and generate a new stored result that summarizes their shared artifacts, fit metrics, and domain-specific overlays.

This feature is available in two places:

- **Web App**, via the Compare Results dialog
- **CLI**, via `vipr compare run`

Compare does not rerun inference. It works on results that already exist in result storage.
## What Compare Uses
Compare expects at least two stored inference results. Typical inputs are:

- two runs of the same sample with different model settings
- a baseline run and a wider-bounds run
- results produced by different predictors that emit compatible diagrams and tables

In the Web App, only stored results with `result_kind = inference` are selectable.
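The selection rule above amounts to filtering stored results on their `result_kind`. The sketch below illustrates that filter; the `StoredResult` shape and its field names are assumptions made for the example, not VIPR's actual data model:

```python
from dataclasses import dataclass

@dataclass
class StoredResult:
    # Hypothetical stand-in for a stored result record.
    result_id: str
    result_kind: str  # e.g. "inference" or "compare"

def selectable_for_compare(results):
    """Keep only stored inference results, mirroring the dialog's filter."""
    return [r for r in results if r.result_kind == "inference"]

stored = [
    StoredResult("Fe_Pt_DN", "inference"),
    StoredResult("Fe_Pt_DN_compare", "compare"),
    StoredResult("Fe_Pt_DN_broad_bounds", "inference"),
]
print([r.result_id for r in selectable_for_compare(stored)])
```

Note that a previously generated compare result (`result_kind = compare`) is excluded by this filter, so compare results cannot themselves be compared in the dialog.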
## Web App Workflow
Location: result controls below the pipeline configuration.

1. Run at least two inference jobs first.
2. Click **Compare Results**.
3. Select two or more stored inference results.
4. Start the compare.
5. Wait for the background task to finish.

VIPR then loads the generated compare result into the normal results view.
The compare task uses the same background-task infrastructure as inference:

- the frontend starts the task through `/api/compare/run`
- FastAPI hands the work to Celery
- the worker runs `vipr compare run`
- the generated compare result is stored like a normal UI result
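The dispatch flow above can be mimicked with nothing but the standard library: a thread pool stands in for FastAPI and Celery, and a dict stands in for result storage. Everything below except the documented flow itself is hypothetical:

```python
from concurrent.futures import ThreadPoolExecutor

RESULT_STORE = {}  # stand-in for VIPR's result storage

def run_compare(source_ids):
    # In VIPR the worker runs `vipr compare run`; here we just
    # fabricate a stored record so the flow is visible end to end.
    result_id = "_vs_".join(source_ids) + "_compare"
    RESULT_STORE[result_id] = {"result_kind": "compare", "sources": source_ids}
    return result_id

# The frontend's request to /api/compare/run corresponds to this submit;
# Celery's role is played by a one-worker thread pool.
with ThreadPoolExecutor(max_workers=1) as pool:
    task = pool.submit(run_compare, ["Fe_Pt_DN", "Fe_Pt_DN_broad_bounds"])
    new_id = task.result()  # the UI polls the task; result() blocks until done

print(new_id, RESULT_STORE[new_id]["result_kind"])
```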
## Output
The generated compare result is stored as its own result with:

- `result_kind = compare`
- metadata about the source results in `batch_metadata.compare_source_results`
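As an illustration, a generated compare record could be shaped like the dictionary below. Only `result_kind` and `batch_metadata.compare_source_results` come from this page; the remaining fields and the `build_compare_record` helper are hypothetical:

```python
def build_compare_record(source_ids, result_id=None):
    """Sketch of the generated record's shape. Only result_kind and
    batch_metadata.compare_source_results are documented; the default
    ID scheme and everything else here is illustrative."""
    return {
        "result_id": result_id or "compare_" + "_".join(source_ids),
        "result_kind": "compare",
        "batch_metadata": {
            "compare_source_results": list(source_ids),
        },
    }

record = build_compare_record(
    ["Fe_Pt_DN", "Fe_Pt_DN_broad_bounds"],
    result_id="Fe_Pt_DN_compare",
)
print(record["result_kind"], record["batch_metadata"]["compare_source_results"])
```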
Depending on the available source artifacts, the compare result can include:

- result overview tables
- timing summaries
- fit-metrics tables
- shared and partial artifact summaries
- plugin-specific enrichment such as reflectometry overlay diagrams

For reflectometry workflows, compare can add overlay diagrams such as:

- `compare_reflectivity_overlay`
- `compare_sld_overlay`
## CLI Workflow
Compare can also be started directly from the CLI.
### Minimal example
```shell
vipr compare run --results RESULT_A RESULT_B
```
### With config file
```yaml
compare:
  results:
    - Fe_Pt_DN
    - Fe_Pt_DN_broad_bounds
    - Fe_Pt_DN_NSF_model
  result_id: Fe_Pt_DN_compare
  overlay:
    series_kinds:
      - polished
    include_experimental: true
```
Run it with:

```shell
vipr --config ./Fe_Pt_DN_compare.yaml compare run
```
### Supported compare config fields
- `compare.results`: stored result IDs or unambiguous short UUID prefixes
- `compare.result_id`: optional explicit ID for the generated compare result
- `compare.overlay`: optional overlay filtering for compare-enrichment plugins
CLI arguments take precedence over YAML config when both are provided.
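That precedence rule can be sketched as a plain dictionary merge in which CLI values, when given, shadow YAML values; `effective_compare_config` is a hypothetical helper, not VIPR's actual config loader:

```python
def effective_compare_config(yaml_cfg, cli_args):
    """CLI values win over YAML for the same field; YAML fills the rest.
    A generic merge sketch under stated assumptions, not VIPR's loader."""
    merged = dict(yaml_cfg)
    # Treat None as "not provided on the CLI" so YAML values survive.
    merged.update({k: v for k, v in cli_args.items() if v is not None})
    return merged

yaml_cfg = {
    "results": ["Fe_Pt_DN", "Fe_Pt_DN_broad_bounds"],
    "result_id": "Fe_Pt_DN_compare",
}
cli_args = {"results": ["RESULT_A", "RESULT_B"], "result_id": None}
print(effective_compare_config(yaml_cfg, cli_args))
```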
## Notes and Constraints
- Compare requires at least two distinct stored results.
- Compare resolves exact result IDs first, then falls back to unambiguous short prefixes.
- Compare is structural: it summarizes and aligns stored artifacts; it does not retrain or repredict.
- Domain plugins can enrich compare output through hooks, so the available overlay diagrams depend on the installed plugins and the source results.
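The ID-resolution rule (exact match first, then unambiguous short prefix) can be sketched as follows; `resolve_result_id` and its error handling are a hypothetical illustration, not VIPR's implementation:

```python
def resolve_result_id(requested, stored_ids):
    """Exact match wins even when the requested ID is also a prefix of
    other IDs; otherwise a prefix is accepted only if it is unambiguous."""
    if requested in stored_ids:
        return requested
    matches = [s for s in stored_ids if s.startswith(requested)]
    if len(matches) == 1:
        return matches[0]
    raise ValueError(f"{requested!r} is missing or ambiguous: {matches}")

stored = ["Fe_Pt_DN", "Fe_Pt_DN_broad_bounds", "Fe_Pt_DN_compare"]
print(resolve_result_id("Fe_Pt_DN", stored))    # exact match, despite being a prefix of others
print(resolve_result_id("Fe_Pt_DN_b", stored))  # unique prefix
```

Checking exact matches before prefixes keeps short IDs usable even when longer IDs share them as a prefix, as in the first call above.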