HTTP API Routes¶
This page is generated by scripts/generate_api_route_docs.py from the runtime Cement registry and the manually declared FastAPI routers.
This reference combines:

- routes created from @api-annotated Cement controller methods
- routes declared manually in vipr-api FastAPI routers
Summary¶
- Generated: 2026-03-13 16:16:11 UTC
- Total routes: 45
- Auto-generated Cement routes: 16
- Manual FastAPI routes: 29
Auto-generated Cement Routes¶
discovery¶
| Method | Path |
|---|---|
| GET | /api/discovery/components |
| GET | /api/discovery/data-loaders |
| GET | /api/discovery/filter-types |
| GET | /api/discovery/filters |
| GET | /api/discovery/hook-types |
| GET | /api/discovery/hooks |
| GET | /api/discovery/model-loaders |
| GET | /api/discovery/plugins |
| GET | /api/discovery/postprocessors |
| GET | /api/discovery/predictors |
Descriptions¶
- GET /api/discovery/components: Lists all available components. No input required for this GET endpoint.
- GET /api/discovery/data-loaders: Lists all available data loaders.
- GET /api/discovery/filter-types: Lists all available filter types.
- GET /api/discovery/filters: Lists all available filters.
- GET /api/discovery/hook-types: Lists all available hook types.
- GET /api/discovery/hooks: Lists all available hooks.
- GET /api/discovery/model-loaders: Lists all available model loaders.
- GET /api/discovery/plugins: Lists all available plugins by type.
- GET /api/discovery/postprocessors: Lists all available postprocessing handlers.
- GET /api/discovery/predictors: Lists all available prediction handlers.
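All discovery endpoints are parameterless GETs, so a client only needs to build the URL and parse the JSON response. A minimal sketch, assuming the API is served at http://localhost:8000 (the host and port are not stated on this page):

```python
"""Query the discovery endpoints listed above.

Assumption: BASE_URL is a local development deployment; adjust it
for your environment. The resource names come from the route table.
"""
import json
import urllib.request

BASE_URL = "http://localhost:8000"  # assumed host/port

DISCOVERY_RESOURCES = [
    "components", "data-loaders", "filter-types", "filters",
    "hook-types", "hooks", "model-loaders", "plugins",
    "postprocessors", "predictors",
]


def discovery_url(resource: str) -> str:
    """Build the GET URL for one discovery resource."""
    return f"{BASE_URL}/api/discovery/{resource}"


def list_resource(resource: str) -> dict:
    """Fetch one discovery listing (requires a running server)."""
    with urllib.request.urlopen(discovery_url(resource)) as resp:
        return json.loads(resp.read())


if __name__ == "__main__":
    # Print every discovery URL; no network access needed for this.
    for resource in DISCOVERY_RESOURCES:
        print(discovery_url(resource))
```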
ui¶
| Method | Path |
|---|---|
| GET | /api/ui/result |
| GET | /api/ui/result/config |
| GET | /api/ui/result/status |
| GET | /api/ui/results/list |
| POST | /api/ui/storage/cleanup |
| GET | /api/ui/storage/info |
Descriptions¶
- GET /api/ui/result: Retrieve a stored UI result by ID (UUID or human-readable).
- GET /api/ui/result/config: Retrieve the configuration file used for a specific result.
- GET /api/ui/result/status: Check whether a UI result exists.
- GET /api/ui/results/list: Get a list of all stored UI results with metadata (sorted by date, newest first).
- POST /api/ui/storage/cleanup: Clean up old UI results and config files.
- GET /api/ui/storage/info: Get statistics about UI result storage.
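The result endpoints take the result ID as input. A minimal URL-building sketch; the query-parameter name (`result_id`) and BASE_URL are assumptions, since this page does not show the request model:

```python
"""Build URLs for the UI result endpoints.

Assumptions: the server runs at BASE_URL and the ID is passed as a
`result_id` query parameter (parameter name not confirmed here).
"""
from urllib.parse import urlencode

BASE_URL = "http://localhost:8000"  # assumed host/port

_UI_PATHS = {
    "result": "/api/ui/result",
    "config": "/api/ui/result/config",
    "status": "/api/ui/result/status",
}


def result_url(result_id: str, what: str = "result") -> str:
    """URL for fetching a result, its config, or its existence status."""
    query = urlencode({"result_id": result_id})
    return f"{BASE_URL}{_UI_PATHS[what]}?{query}"
```

A client would typically GET the `status` URL first to confirm the result exists, then fetch the `result` URL.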
Manual FastAPI Routes¶
debug¶
| Method | Path |
|---|---|
| GET | /api/debug/api-mapping |
| GET | /api/debug/plugin-status |
Descriptions¶
- GET /api/debug/api-mapping: Shows all Plugin → API mappings for debugging and documentation. Returns: list of all HTTP ↔ CLI command mappings.
- GET /api/debug/plugin-status: Shows the status of the plugin introspection system. Returns: status information about plugin discovery.
files¶
| Method | Path |
|---|---|
| GET | /api/fetch_file/{file_path:path} |
| POST | /api/fileupload |
| GET | /api/hdf5_metadata/{file_path:path} |
| POST | /api/hdf5_preview |
Descriptions¶
- GET /api/fetch_file/{file_path:path}: Fetch file content from the server based on data_path. Handles both relative paths (using VIPR's resolve_file_path) and absolute paths (such as /tmp files) to provide file access for the frontend. Args: file_path: the path to the file to fetch. Returns: FileResponse: the file content with appropriate headers. Raises: HTTPException: if the file is not found, access is denied, or another error occurs.
- POST /api/fileupload: Validates and saves an uploaded file. Supports both text-based files (CSV, TXT, etc.) and HDF5 files, with appropriate validation for each type. Args: file: the uploaded file (UTF-8 text for CSV/TXT or binary for HDF5). Returns: FileValidationResponse: validation result with the file path on success. Raises: HTTPException: with an appropriate status code and detail message on failure.
- GET /api/hdf5_metadata/{file_path:path}: Get metadata from an HDF5 file, including available datasets and spectrum counts. Uses the SpectraReader from the HDF5SpectraReaderDataLoader to extract the metadata. Args: file_path: the path to the HDF5 file. Returns: HDF5Metadata: metadata containing datasets, spectrum counts, and file info. Raises: HTTPException: if the file is not found, is not HDF5, or metadata extraction fails.
- POST /api/hdf5_preview: Get preview data for a specific dataset/spectrum combination from an HDF5 file. Args: request: dictionary containing filePath (path to the HDF5 file), datasetName (name of the dataset to preview), spectrumIndex (optional, for single-spectrum preview), and batchProcessing (optional, whether to preview batch data). Returns: HDF5PreviewData: preview data with Q/intensity ranges and metadata. Raises: HTTPException: if file access or preview extraction fails.
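Because {file_path:path} parameters accept slashes, clients must URL-encode the path without escaping the slashes themselves. A sketch of that encoding, with BASE_URL as an assumed deployment address:

```python
"""Encode server-side paths into the path-typed file routes above.

`quote` with safe="/" escapes spaces and other unsafe characters
while preserving the path structure that {file_path:path} expects.
"""
from urllib.parse import quote

BASE_URL = "http://localhost:8000"  # assumed host/port


def fetch_file_url(file_path: str) -> str:
    """URL for GET /api/fetch_file/{file_path:path}."""
    return f"{BASE_URL}/api/fetch_file/{quote(file_path, safe='/')}"


def hdf5_metadata_url(file_path: str) -> str:
    """URL for GET /api/hdf5_metadata/{file_path:path}."""
    return f"{BASE_URL}/api/hdf5_metadata/{quote(file_path, safe='/')}"
```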
inference¶
| Method | Path |
|---|---|
| POST | /api/inference/cancel/{task_id} |
| POST | /api/inference/export-cli-yaml |
| GET | /api/inference/health |
| GET | /api/inference/progress/{task_id} |
| POST | /api/inference/run |
Descriptions¶
- POST /api/inference/cancel/{task_id}: Cancel a running VIPR inference task. Args: task_id: unique identifier of the Celery task to cancel. Returns: dict containing the cancellation status. Raises: HTTPException: if cancellation fails.
- POST /api/inference/export-cli-yaml: Export the frontend configuration as a CLI-compatible YAML file. Generates a semantic result_id (config_name + short UUID) for human readability ("PTCDI-C3_XRR_a1b2c3d4"), uniqueness (the UUID prevents collisions), and traceability (the config name is part of the ID). Args: config: VIPRInference containing the VIPR configuration. Returns: StreamingResponse: YAML file download with a semantic filename. Raises: HTTPException: if the export fails.
- GET /api/inference/health: Health check endpoint for Celery backend status. Reports the Celery execution mode (eager/async), whether tasks can be executed, Redis availability (for async mode), and helpful error messages if the service is unavailable. Returns: HealthCheckResponse containing health status information.
- GET /api/inference/progress/{task_id}: Get the progress of a running VIPR inference task. Args: task_id: unique identifier of the Celery task. Returns: TaskProgressResponse containing task status, progress, and results. Raises: HTTPException: if the task is not found or invalid.
- POST /api/inference/run: Run VIPR inference using Celery background tasks. This generic, domain-agnostic endpoint validates the configuration against the registry before starting a Celery task, providing defense against malicious configs and ensuring only registry-approved hooks/filters are used. Args: config: VIPRInference containing the VIPR configuration. Returns: dict containing a task_id for progress tracking. Raises: HTTPException: if configuration validation or task creation fails.
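A typical client flow is run → poll progress → read results. The polling logic can be sketched independently of the HTTP layer by injecting the progress-fetching function; the field names (`status`, `progress`) and the Celery terminal states used here are assumptions, since this page does not show the TaskProgressResponse schema:

```python
"""Poll a task-progress callback until the task reaches a terminal state.

In a real client, `get_progress` would GET /api/inference/progress/{task_id}
and return the parsed JSON body; here it is injected so the loop can be
exercised without a server.
"""
import time
from typing import Callable, Dict

TERMINAL_STATES = {"SUCCESS", "FAILURE", "REVOKED"}  # assumed Celery states


def poll_until_done(get_progress: Callable[[], Dict],
                    interval: float = 0.0,
                    max_polls: int = 100) -> Dict:
    """Repeatedly fetch progress until a terminal state or max_polls."""
    for _ in range(max_polls):
        progress = get_progress()
        if progress.get("status") in TERMINAL_STATES:
            return progress
        time.sleep(interval)
    raise TimeoutError("task did not finish within max_polls polls")


if __name__ == "__main__":
    # Fake progress sequence standing in for the progress endpoint.
    states = iter([{"status": "PENDING"},
                   {"status": "STARTED"},
                   {"status": "SUCCESS", "progress": 100}])
    print(poll_until_done(lambda: next(states)))
```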
panpe¶
| Method | Path |
|---|---|
| GET | /api/panpe/models |
| GET | /api/panpe/{config_name}/parameters |
Descriptions¶
- GET /api/panpe/models: Get available PANPE models for reflectometry analysis. Returns: List[AvailableModel]: available PANPE models with their configurations. Raises: HTTPException: if the configuration directory is not found.
- GET /api/panpe/{config_name}/parameters: Get parameters for a specific PANPE model configuration (as for Reflectorch). Args: config_name (str): name of the PANPE config (e.g., 'panpe-2layers-xrr'). Returns: ReflectometryModelParameters: parameters for the model. Raises: HTTPException: if the config is not found or cannot be parsed.
plugins¶
| Method | Path |
|---|---|
| GET | /api/plugins/ |
| DELETE | /api/plugins/batch |
| POST | /api/plugins/batch/toggle |
| POST | /api/plugins/upload |
| DELETE | /api/plugins/{plugin_name} |
| PUT | /api/plugins/{plugin_name}/toggle |
Descriptions¶
- GET /api/plugins/: List all builtin and runtime plugins with their status.
- DELETE /api/plugins/batch: Delete multiple runtime plugins in a single operation.
- POST /api/plugins/batch/toggle: Enable or disable multiple runtime plugins in a single operation.
- POST /api/plugins/upload: Upload and validate a new runtime plugin.
- DELETE /api/plugins/{plugin_name}: Delete a runtime plugin (builtin plugins cannot be deleted).
- PUT /api/plugins/{plugin_name}/toggle: Enable or disable a runtime plugin.
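The batch endpoints accept a list of plugins in one request body. A sketch of serializing such a payload; the field names (`plugin_names`, `enabled`) are assumptions, since the request model is not shown on this page:

```python
"""Serialize an assumed request body for POST /api/plugins/batch/toggle.

The JSON field names are illustrative; check the actual request model
in vipr-api before relying on them.
"""
import json
from typing import List


def batch_toggle_body(plugin_names: List[str], enabled: bool) -> str:
    """JSON body enabling or disabling several runtime plugins at once."""
    return json.dumps({"plugin_names": plugin_names, "enabled": enabled})
```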
reflectorch¶
| Method | Path |
|---|---|
| GET | /api/reflectorch/models |
| GET | /api/reflectorch/standard-config |
| POST | /api/reflectorch/validate-config |
| GET | /api/reflectorch/{model}/parameters |
Descriptions¶
- GET /api/reflectorch/models: Get available models for reflectometry analysis. Returns: List[AvailableModel]: available models with their configurations. Raises: HTTPException: if the configuration directory is not found.
- GET /api/reflectorch/standard-config: Generate a standard Reflectorch configuration. This endpoint migrated from vipr-core to vipr-framework; it calls the VIPR runner to generate the standard configuration. Returns: dict containing the standard VIPR configuration. Raises: HTTPException: if config generation fails.
- POST /api/reflectorch/validate-config: Validate a configuration without running prediction. Also completes the configuration by adding missing parameters with defaults. Uses the same security validation as the /run endpoint.
- GET /api/reflectorch/{model}/parameters: Get parameters for a specific reflectometry model. Args: model (str): name of the model. Returns: ReflectometryModelParameters: parameters for the model. Raises: HTTPException: if the model configuration is not found.
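The validate-config endpoint completes a partial configuration with defaults, so a client can send a sparse config and use the completed one for inference. The completion idea can be illustrated as a defaults merge; this is a sketch of the concept, not the server's actual logic:

```python
"""Illustrative defaults merge, mirroring what validate-config does
conceptually: user-supplied keys win, missing keys fall back to defaults.
All key names below are made up for the example.
"""
from typing import Dict


def complete_config(partial: Dict, defaults: Dict) -> Dict:
    """Fill missing top-level keys of a partial config with defaults."""
    completed = dict(defaults)   # start from the default values
    completed.update(partial)    # user-supplied keys take precedence
    return completed
```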
streaming¶
| Method | Path |
|---|---|
| GET | /api/streaming/consumers |
| POST | /api/streaming/start |
| GET | /api/streaming/stats/{consumer_id} |
| POST | /api/streaming/stop-all |
| POST | /api/streaming/stop/{consumer_id} |
| GET | /api/streaming/tasks/{consumer_id} |
Descriptions¶
- GET /api/streaming/consumers: List all active streaming consumers. Returns: dict containing the list of active consumer IDs and summary stats.
- POST /api/streaming/start: Start a streaming prediction consumer for real-time processing. Starts a RabbitMQ consumer that listens for spectral data messages and triggers an individual VIPR inference task for each subscan message. Args: request: StreamingStartRequest containing config and RabbitMQ settings. Returns: StreamingResponse with consumer ID and status. Raises: HTTPException: if consumer startup fails.
- GET /api/streaming/stats/{consumer_id}: Get statistics for a specific streaming consumer. Args: consumer_id: unique identifier of the consumer. Returns: ConsumerStatsResponse with detailed consumer statistics. Raises: HTTPException: if the consumer is not found.
- POST /api/streaming/stop-all: Stop all active streaming consumers. Returns: dict containing final statistics for all stopped consumers. Raises: HTTPException: if the operation fails.
- POST /api/streaming/stop/{consumer_id}: Stop a running streaming prediction consumer. Args: consumer_id: unique identifier of the consumer to stop. Returns: dict containing final consumer statistics. Raises: HTTPException: if the consumer is not found or the stop fails.
- GET /api/streaming/tasks/{consumer_id}: Get the task list for a specific streaming consumer. Args: consumer_id: unique identifier of the consumer; limit: maximum number of tasks to return (most recent first); since: ISO timestamp to return only tasks triggered after this time. Returns: ConsumerTasksResponse containing the task list and metadata. Raises: HTTPException: if the consumer is not found.
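The tasks endpoint takes optional `limit` and `since` filters. A sketch of building its URL with only the filters that are set; BASE_URL is an assumed deployment address:

```python
"""Build GET /api/streaming/tasks/{consumer_id} URLs with the optional
`limit` and `since` query parameters described above.
"""
from typing import Optional
from urllib.parse import urlencode

BASE_URL = "http://localhost:8000"  # assumed host/port


def tasks_url(consumer_id: str,
              limit: Optional[int] = None,
              since: Optional[str] = None) -> str:
    """URL for a consumer's task list; omits unset filters entirely."""
    params = {k: v for k, v in {"limit": limit, "since": since}.items()
              if v is not None}
    url = f"{BASE_URL}/api/streaming/tasks/{consumer_id}"
    return f"{url}?{urlencode(params)}" if params else url
```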