# Configuration Examples
Complete workflow configurations demonstrating VIPR framework integration patterns.
## Reflectorch XRR Workflow

### Basic Configuration
```yaml
vipr:
  inference:
    # DataLoader handler selection
    load_data:
      handler: csv_spectrareader          # Handler label from registry
      parameters:
        data_path: '@vipr_reflectometry/reflectorch/examples/data/PTCDI-C3.txt'
        column_mapping:                   # Handler-specific parameters
          q: 0                            # via @discover_data_loader
          I: 1

    # ModelLoader handler selection
    load_model:
      handler: reflectorch                # Handler label from registry
      parameters:
        config_name: b_mc_point_xray_conv_standard_L2_InputQ
        device: cpu

    # Predictor handler selection
    prediction:
      handler: reflectorch_predictor      # Handler label from registry
      parameters:
        calc_pred_curve: true             # Predictor-specific parameters
        calc_pred_sld_profile: true       # via @discover_predictor
        clip_prediction: true
        polish_prediction: true
        prior_bounds:
          thicknesses: [[1, 1500]]
          slds: [[1.5, 6.5]]
          roughnesses: [[0, 10]]
          log10_background: [-10, -4]
          r_scale: [0.9, 1.1]
```
**Framework Integration Points:**

- ✅ Handler selection via registry labels
- ✅ Parameter exposure via discovery decorators
- ✅ DataSet flow (CSV → Reflectorch predictor)
- ✅ No explicit filter/hook configuration (default workflow)
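The handler-selection mechanism above can be sketched in a few lines of Python. This is an illustrative sketch, not VIPR's actual API: the `REGISTRY` dict and `run_step` function are assumptions standing in for the framework's registry lookup and parameter injection.

```python
# Hypothetical handler registry keyed by label (names are illustrative,
# not VIPR internals). Each workflow step names a handler and its parameters.
REGISTRY = {
    "csv_spectrareader": lambda **p: f"loading {p['data_path']}",
    "reflectorch": lambda **p: f"model {p['config_name']} on {p['device']}",
}

def run_step(step_cfg):
    """Resolve the handler by label and invoke it with the step's parameters."""
    handler = REGISTRY[step_cfg["handler"]]
    return handler(**step_cfg.get("parameters", {}))

# Mirrors the 'load_model' step from the YAML above
step = {
    "handler": "reflectorch",
    "parameters": {"config_name": "b_mc_point_xray_conv_standard_L2_InputQ",
                   "device": "cpu"},
}
print(run_step(step))
```

The point of the indirection is that swapping handlers requires only a label change in YAML, not a code change.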
## Reflectorch NR Workflow with Data Cleaning

### Configuration with Preprocessing Filters
```yaml
vipr:
  inference:
    # Filter configuration (explicit activation)
    filters:
      INFERENCE_PREPROCESS_PRE_FILTER:
        # Filter 1: data cleaning (weight=-10, runs first)
        - class: vipr_reflectometry.shared.preprocessing.neutron_data_cleaner.NeutronDataCleaner
          enabled: true                    # Explicit activation required
          method: clean_experimental_data  # Filter method from @discover_filter
          parameters:                      # Filter parameters (passed as kwargs)
            error_threshold: 0.5           # Relative error threshold (dR/R)
            consecutive_errors: 3          # Truncation trigger
            remove_single_errors: false
          weight: -10                      # Execution order (lowest first)

        # Filter 2: interpolation (weight=0, runs after cleaning)
        - class: vipr_reflectometry.reflectorch.reflectorch_extension.Reflectorch
          enabled: true
          method: _preprocess_interpolate  # Extension method as filter
          parameters: null                 # No parameters for this filter
          weight: 0

    # Data loading with error bars
    load_data:
      handler: csv_spectrareader
      parameters:
        data_path: '@vipr_reflectometry/reflectorch/examples/data/D17_SiO.dat'
        column_mapping:
          q: 0
          I: 1
          dI: 2                            # Intensity errors (required for cleaning)
          dQ: 3                            # Q resolution

    # Neutron reflectometry model
    load_model:
      handler: reflectorch
      parameters:
        config_name: e_mc_point_neutron_conv_standard_L1_InputQDq_n256_size1024
        device: cpu

    # Prediction with neutron-specific parameters
    prediction:
      handler: reflectorch_predictor
      parameters:
        calc_pred_curve: true
        calc_pred_sld_profile: true
        clip_prediction: true
        q_resolution: 0.1                  # Neutron-specific: Q resolution
        upper_phase_sld: 0                 # Neutron-specific: D2O/air
        use_q_shift: false
        prior_bounds:
          thicknesses: [[1, 1500]]
          slds: [[1.5, 6.5], [1.5, 6.5]]
          roughnesses: [[0, 10], [0, 10]]
          log10_background: [-10, -4]
          r_scale: [0.9, 1.1]
```
**Framework Integration Points:**

- ✅ Filter chain: cleaning (weight=-10) → interpolation (weight=0)
- ✅ DataSet transformation: raw → cleaned → interpolated
- ✅ Error propagation through the filter chain (dI, dQ)
- ✅ Extension method used as a filter (`Reflectorch._preprocess_interpolate`)
- ✅ Configuration-driven filter activation
### Filter Execution Flow
```text
LoadDataStep
  → DataSet(x=Q, y=R, dx=dQ, dy=dI, batch_size=1)
    ↓
PreprocessStep:
  → NeutronDataCleaner.clean_experimental_data (weight=-10)
      Input:  DataSet (raw, may have negative R, high dR/R)
      Output: DataSet (cleaned, padded, dR/R < threshold)
    ↓
  → Reflectorch._preprocess_interpolate (weight=0)
      Input:  DataSet (cleaned, variable Q-grid)
      Output: DataSet (interpolated, model Q-grid)
    ↓
PredictionStep
  → ReflectorchPredictor._predict
      Input:  DataSet (preprocessed)
      Output: dict(predictions, pred_curve, sld_profile)
```
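The weight-based ordering in this flow can be sketched as follows. This is a minimal Python stand-in (not VIPR's implementation): `apply_filters`, the `func` key, and the toy `clean`/`interpolate` functions are assumptions used to show that filters run sorted by weight, lowest first, each consuming the previous filter's output.

```python
# Illustrative sketch: enabled filters are sorted by weight (lowest first)
# and applied in sequence, each receiving the prior filter's DataSet.
def apply_filters(dataset, filter_cfgs):
    active = sorted((f for f in filter_cfgs if f["enabled"]),
                    key=lambda f: f["weight"])
    for cfg in active:
        dataset = cfg["func"](dataset, **(cfg.get("parameters") or {}))
    return dataset

def clean(data, error_threshold=0.5, **_):
    # toy stand-in for NeutronDataCleaner.clean_experimental_data
    return [x for x in data if x >= 0]

def interpolate(data, **_):
    # toy stand-in for Reflectorch._preprocess_interpolate
    return sorted(data)

chain = [
    {"enabled": True, "weight": 0, "func": interpolate, "parameters": None},
    {"enabled": True, "weight": -10, "func": clean,
     "parameters": {"error_threshold": 0.5}},
]
print(apply_filters([3.0, -1.0, 2.0], chain))  # [2.0, 3.0]
```

Even though interpolation is listed first, cleaning runs before it because its weight (-10) is lower.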
## Flow Model Workflow

### NSF XRR with Posterior Sampling
```yaml
vipr:
  inference:
    # Flow-specific preprocessing filter
    filters:
      INFERENCE_PREPROCESS_PRE_FILTER:
        - class: vipr_reflectometry.flow_models.flow_preprocessor.FlowPreprocessor
          enabled: true
          method: _preprocess_flow             # Log-scale transformation
          parameters: null
          weight: 0

    # Postprocessing visualization hook
    hooks:
      INFERENCE_POSTPROCESS_PRE_PRE_FILTER_HOOK:
        - class: vipr_reflectometry.flow_models.postprocessors.basic_corner_plot.BasicCornerPlot
          enabled: true
          method: _create_basic_corner_plot    # Hook for visualization
          parameters: null
          weight: 0

    # Data loading
    load_data:
      handler: csv_spectrareader
      parameters:
        data_path: '@vipr_reflectometry/reflectorch/examples/data/PTCDI-C3.txt'
        column_mapping:
          q: 0
          I: 1

    # Flow model loading
    load_model:
      handler: flow_model_loader               # Custom model loader
      parameters:
        config_name: mc1_nsf                   # Neural Spline Flow configuration
        device: cpu
        model_dir: saved_models
        weights_format: pt

    # Flow predictor (generates posterior samples)
    prediction:
      handler: flow_predictor                  # Custom predictor for flows
      parameters:
        num_samples: 2000                      # Posterior samples to generate
        return_scaled: false                   # Return in original parameter space
```
**Framework Integration Points:**

- ✅ Custom preprocessing filter (log-scale transformation)
- ✅ Hook system for visualization (corner plot)
- ✅ Custom model loader (flow architectures)
- ✅ Custom predictor (posterior sampling vs. point prediction)
- ✅ Data collector pattern for UI integration
### Flow Model Output
```python
# Predictor returns a dictionary (not a DataSet)
{
    'posterior_samples': np.ndarray,  # (num_samples, n_parameters)
    'num_samples': 2000,
    'batch_size': 1
}

# Hook adds the visualization to a data collector
app.flow_dc.add_plot('corner_plot', fig)
```
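Downstream code typically reduces the `(num_samples, n_parameters)` sample array to per-parameter summaries. A minimal pure-Python sketch (the `summarize` helper is hypothetical; real code would operate on the NumPy array directly):

```python
import statistics

def summarize(samples):
    """Return (mean, stdev) per parameter column of a samples-by-params table."""
    columns = list(zip(*samples))
    return [(statistics.mean(c), statistics.stdev(c)) for c in columns]

# Tiny stand-in for 'posterior_samples': 3 samples of 2 parameters
posterior = [[10.0, 2.1], [12.0, 2.3], [11.0, 2.2]]
for mean, sd in summarize(posterior):
    print(f"mean={mean:.2f} sd={sd:.2f}")
```

This is the sense in which a flow predictor differs from a point predictor: the output is a distribution to be summarized, not a single estimate.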
## Flow Model with Clustering Postprocessing

### Complete Configuration with All Features
```yaml
vipr:
  inference:
    # Preprocessing
    filters:
      INFERENCE_PREPROCESS_PRE_FILTER:
        - class: vipr_reflectometry.flow_models.flow_preprocessor.FlowPreprocessor
          enabled: true
          method: _preprocess_flow
          parameters: null
          weight: 0

    # Multiple postprocessing hooks
    hooks:
      INFERENCE_POSTPROCESS_PRE_PRE_FILTER_HOOK:
        # Hook 1: clustering analysis (weight=0)
        - class: vipr_reflectometry.flow_models.postprocessors.cluster.clustering.hook.ClusterHook
          enabled: true
          method: run_clustering_analysis
          parameters:
            algorithm: kmeans          # Clustering algorithm
            n_clusters: 3              # Number of clusters
          weight: 0

        # Hook 2: cluster diagnostics (weight=1, runs after clustering)
        - class: vipr_reflectometry.flow_models.postprocessors.cluster.model_selection.hook.ClusterDiagnosticsHook
          enabled: true
          method: generate_diagnostics
          parameters: null
          weight: 1

        # Hook 3: visualizations (weight=2, runs last)
        - class: vipr_reflectometry.flow_models.postprocessors.marginal_distributions.MarginalDistributions
          enabled: true
          method: plot_marginals
          parameters: null
          weight: 2

    load_data:
      handler: hdf5_spectrareader      # Batch loading (N spectra)
      parameters:
        data_path: 'data/xrr_dataset.h5'
        group_path: '/spectra'

    load_model:
      handler: flow_model_loader
      parameters:
        config_name: mc1_cinn          # Conditional INN
        device: cuda
        model_dir: saved_models
        weights_format: pt

    prediction:
      handler: flow_predictor
      parameters:
        num_samples: 5000
        return_scaled: false
```
**Framework Integration Points:**

- ✅ Multiple hooks with weight-based execution order
- ✅ Hook chain: clustering → diagnostics → visualization
- ✅ Batch processing (HDF5 loader, batch_size > 1)
- ✅ Complex postprocessing workflow
- ✅ Multiple data collectors for different visualizations
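Unlike filters, which transform a dataset, the hooks above accumulate results: each runs in weight order against a shared context, so diagnostics can rely on clustering having already written its output. A toy sketch (not VIPR's hook API; all names here are illustrative):

```python
# Hooks run in weight order and write into a shared context dict,
# mirroring the clustering -> diagnostics -> visualization chain.
def run_hooks(context, hook_cfgs):
    for cfg in sorted((h for h in hook_cfgs if h["enabled"]),
                      key=lambda h: h["weight"]):
        cfg["func"](context, **(cfg.get("parameters") or {}))
    return context

def cluster(ctx, n_clusters=3, **_):
    # toy labeling standing in for a real k-means run
    ctx["labels"] = [i % n_clusters for i in range(len(ctx["samples"]))]

def diagnostics(ctx, **_):
    ctx["n_clusters_found"] = len(set(ctx["labels"]))  # requires 'labels'

def visualize(ctx, **_):
    ctx["plots"] = ["marginals"]

hooks = [
    {"enabled": True, "weight": 2, "func": visualize, "parameters": None},
    {"enabled": True, "weight": 0, "func": cluster,
     "parameters": {"n_clusters": 3}},
    {"enabled": True, "weight": 1, "func": diagnostics, "parameters": None},
]
ctx = run_hooks({"samples": list(range(9))}, hooks)
print(ctx["n_clusters_found"])  # diagnostics ran after clustering
```

If the diagnostics hook had a lower weight than the clustering hook, it would fail for lack of `labels`; the weights encode that dependency.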
## PANPE Workflow

### Configuration (Optional Submodule)
```yaml
vipr:
  inference:
    load_data:
      handler: csv_spectrareader
      parameters:
        data_path: '@vipr_reflectometry/panpe/examples/data/xrr.dat'
        column_mapping:
          q: 0
          I: 1

    load_model:
      handler: panpe_model_loader      # PANPE-specific loader
      parameters:
        model_name: panpe_xrr_model
        device: cpu

    prediction:
      handler: panpe_predictor         # PANPE-specific predictor
      parameters:
        num_samples: 2000
        adaptive_prior: true           # PANPE feature
```
**Note:** PANPE requires a separate installation (`pip install -e .[plugin-panpe]`).
## Multi-File Batch Processing

### HDF5 Batch Configuration
```yaml
vipr:
  inference:
    # Preprocessing with cleaning
    filters:
      INFERENCE_PREPROCESS_PRE_FILTER:
        - class: vipr_reflectometry.shared.preprocessing.neutron_data_cleaner.NeutronDataCleaner
          enabled: true
          method: clean_experimental_data
          parameters:
            error_threshold: 0.5
            consecutive_errors: 3
          weight: -10

        - class: vipr_reflectometry.reflectorch.reflectorch_extension.Reflectorch
          enabled: true
          method: _preprocess_interpolate
          parameters: null
          weight: 0

    # Batch data loading (N spectra)
    load_data:
      handler: hdf5_spectrareader      # Batch-capable loader
      parameters:
        data_path: 'storage/reflectorch/experimental_data.h5'
        group_path: '/measurements'

    load_model:
      handler: reflectorch
      parameters:
        config_name: e_mc_point_neutron_conv_standard_L1_InputQDq_n256_size1024
        device: cuda                   # GPU for batch processing

    prediction:
      handler: reflectorch_predictor
      parameters:
        calc_pred_curve: true
        calc_pred_sld_profile: true
        clip_prediction: true
        prior_bounds:
          thicknesses: [[1, 1500]]
          slds: [[1.5, 6.5], [1.5, 6.5]]
          roughnesses: [[0, 10], [0, 10]]
```
**Batch Processing Flow:**

```text
HDF5SpectraReaderDataLoader
  → DataSet(batch_size=N, n_points=variable)
    ↓
NeutronDataCleaner (processes each spectrum independently)
  → DataSet(batch_size=N, n_points=max_len, padded=True)
    ↓
Reflectorch._preprocess_interpolate (batch operation)
  → DataSet(batch_size=N, n_points=model_grid_size)
    ↓
ReflectorchPredictor (batch prediction)
  → dict(predictions=[N x parameters], pred_curves=[N x n_points])
```
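The `n_points=variable` → `n_points=max_len, padded=True` step implied above can be sketched as simple pad-to-longest logic. This is an assumption for illustration, not NeutronDataCleaner's actual code:

```python
# Pad variable-length cleaned spectra to the longest spectrum so they can
# be stacked into a single batch (hypothetical stand-in for the real step).
def pad_batch(spectra, pad_value=0.0):
    max_len = max(len(s) for s in spectra)
    return [s + [pad_value] * (max_len - len(s)) for s in spectra]

batch = pad_batch([[0.9, 0.5, 0.2], [0.8, 0.4]])
print(batch)  # [[0.9, 0.5, 0.2], [0.8, 0.4, 0.0]] — all rows share n_points = 3
```

After interpolation onto the model's Q-grid, the padding becomes irrelevant because every spectrum lands on the same fixed-length grid.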
## Key Configuration Patterns

### Handler Selection Pattern
```yaml
step_name:
  handler: label_from_registry    # Handler lookup by label
  parameters:                     # Handler-specific parameters
    param1: value1                # Exposed via @discover_* decorator
    param2: value2
```
### Filter Configuration Pattern
```yaml
filters:
  FILTER_SLOT_NAME:               # e.g., INFERENCE_PREPROCESS_PRE_FILTER
    - class: fully.qualified.ClassName
      enabled: true               # Explicit activation
      method: filter_method       # Method with @discover_filter
      parameters:                 # Passed as kwargs to the filter
        param1: value1
      weight: 0                   # Execution order (lower first)
```
### Hook Configuration Pattern
```yaml
hooks:
  HOOK_SLOT_NAME:                 # e.g., INFERENCE_POSTPROCESS_PRE_PRE_FILTER_HOOK
    - class: fully.qualified.ClassName
      enabled: true               # Explicit activation
      method: hook_method         # Method with @discover_hook
      parameters: null            # Passed as kwargs to the hook
      weight: 0                   # Execution order (lower first)
```
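Both patterns name a class by its fully qualified path and a method by name, which suggests resolution via dynamic import. A plausible sketch (the framework's actual resolver may differ), demonstrated here with a stdlib class instead of a plugin class:

```python
import collections
import importlib

def resolve(class_path, method_name):
    """Import 'pkg.module.ClassName' and return the named method."""
    module_path, class_name = class_path.rsplit(".", 1)
    cls = getattr(importlib.import_module(module_path), class_name)
    return getattr(cls, method_name)

# Stand-in demonstration: resolve a stdlib class/method pair the same way
# a filter entry would resolve its 'class' and 'method' fields.
most_common = resolve("collections.Counter", "most_common")
print(most_common(collections.Counter("aab")))  # [('a', 2), ('b', 1)]
```

This is why the configuration requires fully qualified class names: the string must be importable as-is from the running environment.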
## Package Path Resolution

The plugin uses VIPR's path resolution for data files:
```yaml
# Package path (resolved to the plugin location)
data_path: '@vipr_reflectometry/reflectorch/examples/data/D17_SiO.dat'

# Alternative syntax
data_path: 'vipr_reflectometry/reflectorch/examples/data/D17_SiO.dat'

# Absolute path
data_path: '/home/user/data/experiment.dat'

# Relative to the working directory
data_path: 'data/experiment.dat'
```
**Pattern:** Use `@package_name/path` for plugin-bundled data files.
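A hedged sketch of how `@package/relative` resolution might work (VIPR's actual resolver may differ; `resolve_path` is hypothetical): strip the `@`, locate the installed package, and join the remainder onto the package directory, passing other path forms through untouched.

```python
import importlib.util
import os

def resolve_path(path):
    """Resolve '@package/rel/path' against the installed package directory."""
    if not path.startswith("@"):
        return path  # absolute or working-directory-relative: pass through
    package, _, rel = path[1:].partition("/")
    spec = importlib.util.find_spec(package)
    pkg_dir = os.path.dirname(spec.origin)
    return os.path.join(pkg_dir, rel)

print(resolve_path("/abs/data.dat"))            # unchanged
print(resolve_path("@json/examples/x.dat"))     # under the installed json package
```

The demonstration uses the stdlib `json` package as a stand-in for `vipr_reflectometry`, since resolution works the same way for any importable package.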
## CLI Usage
```bash
# Run with a configuration file
vipr --config path/to/config.yaml inference run

# Run with a package-path configuration
vipr --config '@vipr_reflectometry/reflectorch/examples/configs/D17_SiO.yaml' inference run

# Run with an environment override
REFLECTORCH_ROOT_DIR=/custom/path vipr --config config.yaml inference run
```
## Integration Summary

### VIPR Framework Concepts Demonstrated
- **Handler Registry:** dynamic handler selection by label
- **Discovery System:** parameter exposure via decorators
- **DataSet Flow:** immutable data transfer between steps
- **Filter Chain:** weight-based preprocessing pipeline
- **Hook System:** event-driven workflow customization
- **Extension Pattern:** domain-specific utilities via `app.extend()`
- **Configuration-Driven:** YAML-based workflow definition
- **Batch Processing:** N spectra processed in a single workflow
- **Error Propagation:** uncertainties transformed with the data
- **Package Resolution:** plugin-bundled resource access
### Workflow Integration Points
```text
Configuration (YAML)
        ↓
Handler Selection (Registry Lookup)
        ↓
Parameter Injection (Discovery System)
        ↓
Data Loading → DataSet Creation
        ↓
Filter Chain (Weight-Based Order)
        ↓
Model Loading (Environment Integration)
        ↓
Prediction (Workflow State Access)
        ↓
Hook Execution (Visualization, Analysis)
        ↓
Result Output (Data Collectors)
```
All configuration examples demonstrate how the Reflectometry Plugin leverages VIPR’s framework patterns to create flexible, domain-specific workflows without modifying core framework code.