vipr.plugins.inference package¶
Subpackages¶
- vipr.plugins.inference.steps package
- Submodules
- vipr.plugins.inference.steps.load_data_step module
- vipr.plugins.inference.steps.load_model_step module
LoadModelInferenceStep
  - LoadModelInferenceStep.POST_FILTER
  - LoadModelInferenceStep.POST_POST_FILTER_HOOK
  - LoadModelInferenceStep.POST_PRE_FILTER_HOOK
  - LoadModelInferenceStep.PRE_FILTER
  - LoadModelInferenceStep.PRE_POST_FILTER_HOOK
  - LoadModelInferenceStep.PRE_PRE_FILTER_HOOK
  - LoadModelInferenceStep.execute()
  - LoadModelInferenceStep.run()
- vipr.plugins.inference.steps.postprocess_step module
PostprocessInferenceStep
  - PostprocessInferenceStep.POST_FILTER
  - PostprocessInferenceStep.POST_POST_FILTER_HOOK
  - PostprocessInferenceStep.POST_PRE_FILTER_HOOK
  - PostprocessInferenceStep.PRE_FILTER
  - PostprocessInferenceStep.PRE_POST_FILTER_HOOK
  - PostprocessInferenceStep.PRE_PRE_FILTER_HOOK
  - PostprocessInferenceStep.execute()
  - PostprocessInferenceStep.run()
- vipr.plugins.inference.steps.prediction_step module
PredictionInferenceStep
  - PredictionInferenceStep.POST_FILTER
  - PredictionInferenceStep.POST_POST_FILTER_HOOK
  - PredictionInferenceStep.POST_PRE_FILTER_HOOK
  - PredictionInferenceStep.PRE_FILTER
  - PredictionInferenceStep.PRE_POST_FILTER_HOOK
  - PredictionInferenceStep.PRE_PRE_FILTER_HOOK
  - PredictionInferenceStep.execute()
  - PredictionInferenceStep.run()
- vipr.plugins.inference.steps.preprocess_step module
PreprocessInferenceStep
  - PreprocessInferenceStep.POST_FILTER
  - PreprocessInferenceStep.POST_POST_FILTER_HOOK
  - PreprocessInferenceStep.POST_PRE_FILTER_HOOK
  - PreprocessInferenceStep.PRE_FILTER
  - PreprocessInferenceStep.PRE_POST_FILTER_HOOK
  - PreprocessInferenceStep.PRE_PRE_FILTER_HOOK
  - PreprocessInferenceStep.execute()
  - PreprocessInferenceStep.run()
- Module contents
Submodules¶
vipr.plugins.inference.abstract_step module¶
AbstractStep Base Class for the Workflow.
This file defines the abstract base class for all workflow steps in the VIPR Framework. Each step follows a uniform pattern with hooks and filters.
- class vipr.plugins.inference.abstract_step.AbstractInferenceStep(app)¶
Abstract base class for all workflow steps.
Each step in the workflow follows a uniform pattern with hooks and filters that are executed before and after the main operation. The class is parameterized with a return type variable to allow type checking of step outputs while providing maximum flexibility for input parameter definition in concrete steps.
- app¶
The Cement app instance
- abstract execute(*args, **kwargs) Any¶
Step-specific core logic. Each concrete step defines its own signature.
- get_step_config(step_name: str) dict[str, Any]¶
Retrieves the configuration for a specific step.
Supports the nested vipr.inference.step_name structure for clean architecture:
- New nested structure: vipr.inference.load_data, vipr.inference.load_model, etc.
- Legacy fallback: vipr.load_data (for backward compatibility)
- Parameters:
step_name – The name of the step in the configuration
- Returns:
The configuration for the step or an empty dict if not found
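The nested-then-legacy lookup can be sketched as follows. This is an illustrative stand-in operating on plain dicts, not the real Cement config object; the function name mirrors the method above, but the signature is simplified.

```python
# Sketch of the nested-then-legacy config lookup (plain dicts stand in
# for the real Cement config object).
from typing import Any


def get_step_config(config: dict[str, Any], step_name: str) -> dict[str, Any]:
    """Return step config, preferring vipr.inference.<step> over vipr.<step>."""
    vipr = config.get("vipr", {})
    # New nested structure: vipr.inference.<step_name>
    nested = vipr.get("inference", {}).get(step_name)
    if nested is not None:
        return nested
    # Legacy fallback: vipr.<step_name>; empty dict if not found
    return vipr.get(step_name, {})


config = {"vipr": {"inference": {"load_data": {"handler": "csv"}}}}
print(get_step_config(config, "load_data"))   # {'handler': 'csv'}
print(get_step_config(config, "load_model"))  # {}
```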
- log_step_end()¶
Logs the end of the step and unregisters it as active.
- log_step_start()¶
Logs the start of the step and registers it as active.
- abstract run(*args, **kwargs) R¶
Executes the entire step, including all hooks and filters.
This is the main entry point called by the workflow orchestrator. Concrete implementations override this method and execute all hooks and filters in the correct order.
Each concrete implementation defines its own parameters based on its needs:
- load_data_step.run(**config_overrides) -> DataSet
- load_model_step.run(**config_overrides) -> Any (model)
- preprocess_step.run(data: DataSet, **config_overrides) -> DataSet
- prediction_step.run(data: DataSet, **config_overrides) -> dict[str, Any]
- postprocess_step.run(prediction_data: dict, **config_overrides) -> Any
- Parameters:
*args – Positional arguments specific to each step
**kwargs – config_overrides — runtime overrides merged with YAML config
- Returns:
The output data of this step (type depends on concrete implementation)
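The uniform hook-and-filter order that concrete run() implementations follow can be sketched as below. Hook names are taken from the constants listed for the concrete steps (PRE_PRE_FILTER_HOOK, etc.); the dispatch mechanism here is a simplified stand-in, not the real Cement hook system.

```python
# Sketch of the uniform run() pattern: hooks fire around each filter
# stage, filters transform the payload, execute() holds the core logic.
from typing import Any, Callable

Filter = Callable[[Any], Any]


def run_step(params: dict,
             execute: Callable[[dict], Any],
             pre_filters: list[Filter],
             post_filters: list[Filter],
             hooks: dict[str, Callable[[Any], None]]) -> Any:
    noop = lambda _payload: None
    hooks.get("PRE_PRE_FILTER_HOOK", noop)(params)
    for f in pre_filters:              # PRE_FILTER stage (often dict -> dict)
        params = f(params)
    hooks.get("POST_PRE_FILTER_HOOK", noop)(params)

    result = execute(params)           # step-specific core logic

    hooks.get("PRE_POST_FILTER_HOOK", noop)(result)
    for f in post_filters:             # POST_FILTER stage (result -> result)
        result = f(result)
    hooks.get("POST_POST_FILTER_HOOK", noop)(result)
    return result


out = run_step({"n": 2},
               execute=lambda p: p["n"] * 10,
               pre_filters=[lambda p: {**p, "n": p["n"] + 1}],
               post_filters=[lambda r: r + 1],
               hooks={})
print(out)  # 31
```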
- vipr.plugins.inference.abstract_step.REPLACE¶
alias of _ReplaceValue
vipr.plugins.inference.base_inference module¶
Base Inference implementation containing common functionality.
This file implements the BaseInference class that contains all shared code for inference workflows.
- class vipr.plugins.inference.base_inference.BaseInference(app)¶
Bases: ABC
Base class for all inference implementations.
Contains common functionality shared between full and simplified inference workflows:
- Common constants and attributes
- Step initialization
- Error bar caching logic
- Common hook execution patterns
- INFERENCE_BEFORE_START_HOOK = 'INFERENCE_BEFORE_START_HOOK'¶
- INFERENCE_COMPLETE_HOOK = 'INFERENCE_COMPLETE_HOOK'¶
- INFERENCE_START_HOOK = 'INFERENCE_START_HOOK'¶
- abstract define_namespaces()¶
Define hook and filter namespaces.
Subclasses must implement their specific namespace definitions.
- abstract run(**config_overrides)¶
Run the inference workflow.
Subclasses must implement their specific workflow logic.
- Parameters:
**config_overrides – Per-step config overrides, keyed by step name.
- Returns:
The final workflow result
vipr.plugins.inference.controller module¶
vipr.plugins.inference.dataset module¶
DataSet – batch-first tensor container for the VIPR inference pipeline.
A purely structural container with no domain knowledge. The only invariant enforced is that all present arrays share the same batch dimension (axis 0). Domain-specific constraints (e.g. x.shape == y.shape for 1D spectra) belong in the handler that consumes the data, not here.
- class vipr.plugins.inference.dataset.DataSet(*, x: ~numpy.ndarray, y: ~numpy.ndarray | None = None, dx: ~numpy.ndarray | None = None, dy: ~numpy.ndarray | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)¶
Bases: BaseModel
Batch-first immutable tensor container for the VIPR inference pipeline.
A purely structural container – it has no knowledge of domain concepts like spectra, images, or labels. All validation is limited to structural consistency: every present array must share the same shape[0] (batch size).
All present arrays are made read-only after construction to guarantee immutability throughout the pipeline.
- copy_with_updates(**kwargs) DataSet¶
Create a new instance with selective field overrides.
Example:
new_ds = ds.copy_with_updates(x=preprocessed_x, metadata={...})
- get_item(index: int) DataSet¶
Extract a single item from the batch (works for both spectra and images).
- Parameters:
index – Zero-based item index in [0, batch_size).
- Returns:
A new DataSet with batch_size == 1.
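The batch extraction that get_item() describes comes down to slicing with a length-1 range, which preserves the batch axis. A minimal numpy illustration (plain arrays stand in for the DataSet fields):

```python
# Slicing with index:index+1 keeps the batch axis, so the extracted item
# still has batch_size == 1 rather than collapsing to a 1-D array.
import numpy as np

x = np.arange(12).reshape(3, 4)   # batch of 3 items, 4 points each
item = x[1:2]                     # shape (1, 4), not (4,)
print(item.shape)  # (1, 4)
```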
- get_spectrum(index: int) DataSet¶
Legacy alias for get_item() (kept for backward compatibility).
Deprecated: use get_item() instead.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- property n_points: int¶
Number of elements per item (legacy, use data_shape instead).
- validate_and_make_immutable()¶
Promote 1-D arrays to batch-first and validate batch consistency.
Validation rule: every present array must share the same shape[0] (batch size). No domain-specific shape constraints are enforced here; those belong in the handler that consumes the data.
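The structural checks described above can be sketched in plain numpy (a stand-in for the real pydantic validator): promote 1-D arrays to batch-first, require a shared batch size, then freeze every array.

```python
# Sketch of validate_and_make_immutable(): 1-D arrays become batch-first,
# all arrays must agree on shape[0], and the result is made read-only.
import numpy as np


def validate_batch(arrays: dict[str, np.ndarray]) -> dict[str, np.ndarray]:
    out = {}
    for name, arr in arrays.items():
        if arr.ndim == 1:                 # promote 1-D to batch-first
            arr = arr[np.newaxis, :]
        out[name] = arr
    sizes = {a.shape[0] for a in out.values()}
    if len(sizes) > 1:
        raise ValueError(f"inconsistent batch sizes: {sizes}")
    for arr in out.values():
        arr.flags.writeable = False       # enforce immutability
    return out


batch = validate_batch({"x": np.zeros((1, 5)), "y": np.ones(5)})
print(batch["y"].shape)  # (1, 5)
```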
- x: ndarray¶
vipr.plugins.inference.inference module¶
Workflow implementation for the inference process.
This file implements the Workflow class, which orchestrates all the steps in the inference workflow.
- class vipr.plugins.inference.inference.Inference(app)¶
Bases: BaseInference
Main workflow class for the inference process.
This class orchestrates the execution of all the steps in the inference workflow with all 36 extension points (full workflow).
- define_namespaces()¶
Define all hook and filter namespaces used by the workflow.
- run(**config_overrides)¶
Run the entire workflow with the given parameters.
- Parameters:
**config_overrides – Per-step config overrides, keyed by step name (e.g. load_data={…}, prediction={…}). Routed to each step via _ovr().
- Returns:
The final workflow result
- class vipr.plugins.inference.inference.InferenceResult(**extra_data: Any)¶
Bases: BaseModel
Container for inference results and metadata.
This class is used to store the results of the inference workflow and can be extended with additional information.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
vipr.plugins.inference.models module¶
Configuration models for VIPR inference pipeline.
This module contains Pydantic models that define the structure of inference configurations, providing type safety and validation for the core workflow.
- class vipr.plugins.inference.models.ClassBasedComponent(*, enabled: bool, weight: int = 0, parameters: dict[str, Any] | None = None, class_: str, method: str)¶
Bases: PipelineComponentBase
Component referenced by class and method.
- model_config: ClassVar[ConfigDict] = {'populate_by_name': True, 'validate_by_alias': True, 'validate_by_name': True}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class vipr.plugins.inference.models.HandlerConfig(*, handler: str | None = '', parameters: dict[str, Any] | None = {})¶
Bases: BaseModel
Configuration for a handler.
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class vipr.plugins.inference.models.InferenceConfig(*, hooks: dict[str, list[~vipr.plugins.inference.models.ClassBasedComponent | ~vipr.plugins.inference.models.NameBasedComponent]] = <factory>, filters: dict[str, list[~vipr.plugins.inference.models.ClassBasedComponent | ~vipr.plugins.inference.models.NameBasedComponent]] = <factory>, load_data: ~vipr.plugins.inference.models.HandlerConfig, load_model: ~vipr.plugins.inference.models.HandlerConfig, prediction: ~vipr.plugins.inference.models.HandlerConfig, normalize: ~vipr.plugins.inference.models.HandlerConfig | None = None, preprocess: ~vipr.plugins.inference.models.HandlerConfig | None = None, postprocess: ~vipr.plugins.inference.models.HandlerConfig | None = None)¶
Bases: BaseModel
Pure inference configuration - core VIPR pipeline.
- filters: dict[str, list[ClassBasedComponent | NameBasedComponent]]¶
- hooks: dict[str, list[ClassBasedComponent | NameBasedComponent]]¶
- load_data: HandlerConfig¶
- load_model: HandlerConfig¶
- model_config: ClassVar[ConfigDict] = {'exclude_none': True}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- normalize: HandlerConfig | None¶
- postprocess: HandlerConfig | None¶
- prediction: HandlerConfig¶
- preprocess: HandlerConfig | None¶
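The configuration shape InferenceConfig validates can be sketched as a plain dict: handler blocks are required for load_data, load_model, and prediction, while normalize, preprocess, and postprocess are optional. Handler names and parameter values below are illustrative, not part of the documented API.

```python
# Sketch of an inference config as InferenceConfig would accept it
# (handler names and parameters are made-up examples).
inference_config = {
    "hooks": {},
    "filters": {},
    "load_data": {"handler": "csv_loader", "parameters": {"path": "in.csv"}},
    "load_model": {"handler": "model_loader", "parameters": {}},
    "prediction": {"handler": "default", "parameters": {"batch_size": 8}},
    # optional sections (normalize, preprocess, postprocess) may be omitted:
    "postprocess": {"handler": "to_json", "parameters": {}},
}

required = {"load_data", "load_model", "prediction"}
print(required <= inference_config.keys())  # True
```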
- class vipr.plugins.inference.models.NameBasedComponent(*, enabled: bool, weight: int = 0, parameters: dict[str, Any] | None = None, name: str)¶
Bases: PipelineComponentBase
Component referenced by name.
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class vipr.plugins.inference.models.PipelineComponentBase(*, enabled: bool, weight: int = 0, parameters: dict[str, Any] | None = None)¶
Bases: BaseModel
Base model for hooks and filters.
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class vipr.plugins.inference.models.VIPRInference(*, inference: InferenceConfig, config_name: str | None = None)¶
Bases: BaseModel
VIPR inference configuration with metadata wrapper.
- inference: InferenceConfig¶
- model_config: ClassVar[ConfigDict] = {}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
vipr.plugins.inference.progress_tracker module¶
Progress tracking plugin for inference workflow.
This plugin registers hooks to track progress through the 5-step inference workflow.
- class vipr.plugins.inference.progress_tracker.ProgressTracker(app)¶
Bases: object
Progress tracking functionality for inference workflow.
- track_load_data_start(app, **kwargs)¶
Track start of load data step.
- track_load_model_start(app, **kwargs)¶
Track start of load model step.
- track_postprocess_start(app, **kwargs)¶
Track start of postprocess step.
- track_prediction_start(app, **kwargs)¶
Track start of prediction step.
- track_preprocess_start(app, **kwargs)¶
Track start of preprocess step.
- vipr.plugins.inference.progress_tracker.load(app)¶
Load the progress tracking plugin.
Module contents¶
VIPR inference module.
This module contains the inference workflow and steps for vipr.
- class vipr.plugins.inference.AbstractInferenceStep(app)¶
Abstract base class for all workflow steps.
Each step in the workflow follows a uniform pattern with hooks and filters that are executed before and after the main operation. The class is parameterized with a return type variable to allow type checking of step outputs while providing maximum flexibility for input parameter definition in concrete steps.
- app¶
The Cement app instance
- abstract execute(*args, **kwargs) Any¶
Step-specific core logic. Each concrete step defines its own signature.
- get_step_config(step_name: str) dict[str, Any]¶
Retrieves the configuration for a specific step.
Supports the nested vipr.inference.step_name structure for clean architecture:
- New nested structure: vipr.inference.load_data, vipr.inference.load_model, etc.
- Legacy fallback: vipr.load_data (for backward compatibility)
- Parameters:
step_name – The name of the step in the configuration
- Returns:
The configuration for the step or an empty dict if not found
- log_step_end()¶
Logs the end of the step and unregisters it as active.
- log_step_start()¶
Logs the start of the step and registers it as active.
- abstract run(*args, **kwargs) R¶
Executes the entire step, including all hooks and filters.
This is the main entry point called by the workflow orchestrator. Concrete implementations override this method and execute all hooks and filters in the correct order.
Each concrete implementation defines its own parameters based on its needs:
- load_data_step.run(**config_overrides) -> DataSet
- load_model_step.run(**config_overrides) -> Any (model)
- preprocess_step.run(data: DataSet, **config_overrides) -> DataSet
- prediction_step.run(data: DataSet, **config_overrides) -> dict[str, Any]
- postprocess_step.run(prediction_data: dict, **config_overrides) -> Any
- Parameters:
*args – Positional arguments specific to each step
**kwargs – config_overrides — runtime overrides merged with YAML config
- Returns:
The output data of this step (type depends on concrete implementation)
- class vipr.plugins.inference.DataSet(*, x: ~numpy.ndarray, y: ~numpy.ndarray | None = None, dx: ~numpy.ndarray | None = None, dy: ~numpy.ndarray | None = None, metadata: ~typing.Dict[str, ~typing.Any] = <factory>)¶
Bases: BaseModel
Batch-first immutable tensor container for the VIPR inference pipeline.
A purely structural container – it has no knowledge of domain concepts like spectra, images, or labels. All validation is limited to structural consistency: every present array must share the same shape[0] (batch size).
All present arrays are made read-only after construction to guarantee immutability throughout the pipeline.
- copy_with_updates(**kwargs) DataSet¶
Create a new instance with selective field overrides.
Example:
new_ds = ds.copy_with_updates(x=preprocessed_x, metadata={...})
- get_item(index: int) DataSet¶
Extract a single item from the batch (works for both spectra and images).
- Parameters:
index – Zero-based item index in [0, batch_size).
- Returns:
A new DataSet with batch_size == 1.
- get_spectrum(index: int) DataSet¶
Legacy alias for get_item() (kept for backward compatibility).
Deprecated: use get_item() instead.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- property n_points: int¶
Number of elements per item (legacy, use data_shape instead).
- validate_and_make_immutable()¶
Promote 1-D arrays to batch-first and validate batch consistency.
Validation rule: every present array must share the same shape[0] (batch size). No domain-specific shape constraints are enforced here; those belong in the handler that consumes the data.
- x: ndarray¶
- class vipr.plugins.inference.Inference(app)¶
Bases: BaseInference
Main workflow class for the inference process.
This class orchestrates the execution of all the steps in the inference workflow with all 36 extension points (full workflow).
- define_namespaces()¶
Define all hook and filter namespaces used by the workflow.
- run(**config_overrides)¶
Run the entire workflow with the given parameters.
- Parameters:
**config_overrides – Per-step config overrides, keyed by step name (e.g. load_data={…}, prediction={…}). Routed to each step via _ovr().
- Returns:
The final workflow result
- class vipr.plugins.inference.InferenceResult(**extra_data: Any)¶
Bases: BaseModel
Container for inference results and metadata.
This class is used to store the results of the inference workflow and can be extended with additional information.
- model_config: ClassVar[ConfigDict] = {'arbitrary_types_allowed': True, 'extra': 'allow'}¶
Configuration for the model, should be a dictionary conforming to [ConfigDict][pydantic.config.ConfigDict].
- class vipr.plugins.inference.LoadDataInferenceStep(app)¶
Bases: AbstractInferenceStep[DataSet]
Step 1: Load Data — reads raw data via a configured DataLoader handler.
run(**config_overrides) -> DataSet
**config_overrides → merged into YAML config via _merge_config
- Hooks:
PRE_PRE (app, params: dict)
POST_PRE (app, params: dict)
PRE_POST (app, data: DataSet)
POST_POST (app, data: DataSet)
- Filters:
PRE: dict -> dict
POST: DataSet -> DataSet
Config: inference.load_data.handler / .parameters
- class vipr.plugins.inference.LoadModelInferenceStep(app)¶
Bases: AbstractInferenceStep[Any]
Step 2: Load Model — loads the inference model via a configured handler.
run(**config_overrides) -> Any
**config_overrides → merged into YAML config via _merge_config
- Hooks:
PRE_PRE (app, params: dict)
POST_PRE (app, params: dict)
PRE_POST (app, model: Any)
POST_POST (app, model: Any)
- Filters:
PRE: dict -> dict
POST: Any -> Any
Config: inference.load_model.handler / .parameters
- class vipr.plugins.inference.PostprocessInferenceStep(app)¶
Bases: AbstractInferenceStep[Any]
Step 5: Postprocess — transforms prediction results for final output.
run(prediction_data: dict[str, Any], **config_overrides) -> Any
**config_overrides → merged into YAML config via _merge_config.
Result stored in app.inference.result via _on_complete(). If no handler is configured, returns the data unchanged.
- Hooks:
PRE_PRE (app, data: dict)
POST_PRE (app, data: dict)
PRE_POST (app, data: Any)
POST_POST (app, result: Any)
- Filters:
PRE: dict -> dict
POST: Any -> Any
Config: inference.postprocess.handler / .parameters (optional)
- class vipr.plugins.inference.PredictionInferenceStep(app)¶
Bases: AbstractInferenceStep[dict[str, Any]]
Step 4: Prediction — applies the loaded model to preprocessed data.
run(data: DataSet, **config_overrides) -> dict[str, Any]
**config_overrides → merged into YAML config via _merge_config.
Model accessed via self.app.inference.model (set in Step 2). PRE_FILTER filters config_overrides, NOT the DataSet.
- Hooks:
PRE_PRE (app, data: DataSet, params: dict)
POST_PRE (app, data: DataSet, params: dict)
PRE_POST (app, data: dict) <- prediction result, not DataSet!
POST_POST (app, result: dict)
- Filters:
PRE: dict -> dict
POST: dict -> dict
Config: inference.prediction.handler / .parameters
- class vipr.plugins.inference.PreprocessInferenceStep(app)¶
Bases: AbstractInferenceStep[DataSet]
Step 3: Preprocess — transforms data via registered filter chain.
run(data: DataSet, **config_overrides) -> DataSet
**config_overrides → merged into YAML config via _merge_config
- Hooks:
PRE_PRE (app, data: DataSet, params: dict)
POST_PRE (app, data: DataSet, params: dict)
PRE_POST (app, data: DataSet)
POST_POST (app, data: DataSet)
- Filters:
PRE: DataSet -> DataSet (data transform!)
POST: DataSet -> DataSet
Note: Unlike other steps, PRE_FILTER transforms data (DataSet -> DataSet), not config. Filter params come from YAML via _wrap_with_params. Config overrides flow only to execute() via _merge_config.
Config: inference.preprocess.handler / .parameters (optional)
- execute(data: DataSet, params: dict[str, Any] | None = None) DataSet¶
Performs the preprocessing of the data.
Currently a passthrough — all transformation happens in PRE_FILTER. Can be extended with handler-based preprocessing via YAML config.
- Parameters:
data – DataSet to preprocess
params – Config overrides (merged into YAML config via _merge_config)
- Returns:
Preprocessed data
- Return type:
DataSet
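The preprocess PRE_FILTER chain described above, where filters transform the data itself rather than the config, can be sketched as a pipeline of DataSet -> DataSet transforms applied in order (plain numpy arrays stand in for the real DataSet):

```python
# Sketch of the preprocess filter chain: each registered filter maps the
# data to new data; execute() stays a passthrough.
from typing import Callable

import numpy as np

DataTransform = Callable[[np.ndarray], np.ndarray]


def apply_pre_filters(x: np.ndarray,
                      filters: list[DataTransform]) -> np.ndarray:
    for f in filters:          # filters run in registration (weight) order
        x = f(x)
    return x


normalize = lambda x: x / x.max()   # scale into [0, 1]
center = lambda x: x - x.mean()     # shift to zero mean

x = np.array([[1.0, 2.0, 4.0]])
out = apply_pre_filters(x, [normalize, center])
print(out)
```

Because each filter returns a fresh array, the original immutable input is never modified in place, matching the read-only guarantee of DataSet.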