WaveOrder
Version v0.1.0 released 24 Nov 2025
License
BSD-3-Clause license
Repository
https://github.com/mehta-lab/waveorder
WaveOrder is a physics-informed, predictive model that unifies forward and inverse wave-optics to reconstruct phase, absorption, birefringence, diattenuation, and fluorescence density/orientation from multi-contrast microscopy, with machine learning auto-tuning for shift-variant blind deconvolution.
Developed By
Model Details
Model Architecture

WaveOrder is a physics-informed, operator-based reconstruction framework implemented in PyTorch. Instead of a single monolithic neural network, it composes analytic, differentiable linear imaging operators (illumination pupil, Green's tensor spectrum, detection pupil) into transfer functions that map specimen properties to measured channels. Reconstructions use closed-form Tikhonov-regularized pseudo-inverses and small learned/tuned parameter vectors (e.g., per-tile illumination/detection misalignment and aberration parameters θ) optimized by backpropagation. The same graph supports label-free (phase, absorption, birefringence, diattenuation) and fluorescence (density, orientation) contrasts across common geometries (widefield, DPC, defocus, oblique light-sheet/plane, confocal).
This architecture preserves physical interpretability, enables rapid, stable reconstruction, and allows ML-based auto-tuning for shift-variant restoration without requiring large training datasets.
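The core inversion described above — a closed-form Tikhonov-regularized pseudo-inverse applied in the Fourier domain — can be sketched in PyTorch. This is a minimal, hypothetical illustration for a single scalar channel with a shift-invariant transfer function; it is not the library's actual implementation, and the function and variable names are illustrative.

```python
import torch

def tikhonov_invert(measurement, transfer_function, eta=1e-3):
    """Closed-form Tikhonov-regularized pseudo-inverse in the Fourier domain.

    measurement: real-valued volume (Z, Y, X)
    transfer_function: complex OTF of the same shape
    eta: Tikhonov regularization strength
    """
    data_f = torch.fft.fftn(measurement)
    # Regularized filter: H* / (|H|^2 + eta), stable where |H| is small
    filt = transfer_function.conj() / (transfer_function.abs() ** 2 + eta)
    return torch.fft.ifftn(filt * data_f).real

# Toy demonstration: blur a point object with a stand-in OTF, then invert
torch.manual_seed(0)
obj = torch.zeros(8, 32, 32)
obj[4, 16, 16] = 1.0                              # point object
otf = torch.fft.fftn(torch.randn(8, 32, 32))      # stand-in OTF (not physical)
blurred = torch.fft.ifftn(otf * torch.fft.fftn(obj)).real
recon = tikhonov_invert(blurred, otf, eta=1e-2)
```

Because the inverse is closed-form, there is no iterative optimization in this step; η trades noise amplification against resolution.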
Parameters
A flexible operator network characterized by:
- Analytic core operators
- Low-dimensional learned quantities per tile (typically 5-25 parameters per tile across many thousands of tiles)
Learned parameters include:
- defocus
- illumination tilt
- illumination numerical aperture (NA)
- detection NA
- detection aberrations (astigmatism, coma, etc.)
- polarization states
Optional scalar losses/hyperparameters:
- mid-band frequency losses
- regularization scalars
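The low-dimensional learned quantities above can be collected into a small per-tile parameter vector θ that feeds a differentiable pupil model. The sketch below is a hypothetical parameterization (three scalars: defocus and two illumination tilts) written to show how θ stays differentiable; the actual pupil models in the library are more detailed.

```python
import torch

def pupil(theta, na_det=1.0, n_pix=64):
    """Hypothetical parametric detection pupil; theta = (defocus, tilt_y, tilt_x)."""
    defocus, tilt_y, tilt_x = theta
    fy = torch.fft.fftfreq(n_pix).reshape(-1, 1)
    fx = torch.fft.fftfreq(n_pix).reshape(1, -1)
    rho2 = fy**2 + fx**2
    # Hard circular aperture set by the detection NA (toy normalization)
    aperture = (rho2 <= (na_det * 0.5) ** 2).to(torch.complex64)
    # Low-order phase: defocus (quadratic) plus tip/tilt (linear)
    phase = defocus * rho2 + tilt_y * fy + tilt_x * fx
    return aperture * torch.exp(2j * torch.pi * phase)

# A tile's full learnable state: just a handful of scalars
theta = torch.nn.Parameter(torch.zeros(3))
P = pupil(theta)
```

Because θ has only a few entries per tile, backpropagation remains cheap even across many thousands of tiles.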
Model Card Author
Talon Chandler (Biohub)
Citation
If you use WaveOrder, please cite:
- Chandler T., Ivanov I.E., Hirata-Miyasaki E., et al. "WaveOrder: Physics-informed ML for auto-tuned multi-contrast computational microscopy from cells to organisms." arXiv:2412.09775 (2025).
- WaveOrder software repository: mehta-lab/waveorder (PyPI: waveorder).
Primary Contact Email
talon.chandler@czbiohub.org
To submit feature requests or report issues with the model, please open an issue on the GitHub repository.
System Requirements
- Compute Requirements: CPU
Intended Use
Primary Use Cases
- Quantitative phase and absorption reconstruction (label-free).
- Polarization-resolved label-free reconstruction (birefringence, diattenuation).
- Fluorescence deconvolution (density) and fluorescence orientation estimation (dipole second moments).
- Optical (Contrast) Transfer Function (OTF) estimation and correction.
- Blind, shift-variant restoration via physics-guided auto-tuning (per-tile PSF/pupil).
- Multi-contrast fusion for correlative imaging (label-free + fluorescence) from organelles → tissues → small organisms.
Example applied contexts:
- Optical pooled screens: Deblurring/contrast normalization across large wells prior to segmentation.
- Developmental imaging (e.g., zebrafish): Improved sectioning and phenotyping in label-free and fluorescence.
- Polarization microscopy: Mapping transverse birefringence and slow-axis orientation.
Out-of-Scope or Unauthorized Use Cases
Do not use the model for the following purposes:
- Use that violates applicable laws, regulations (including trade compliance laws), or third party rights such as privacy or intellectual property rights.
- Any use that is prohibited by the BSD-3-Clause license and Acceptable Use Policy.
Training Data
WaveOrder does not require large supervised datasets. It operates on acquired multi-channel volumes and optionally optimizes small parameter vectors θ on the fly (self-supervised, physics-guided). The following data can be used for demonstrations and evaluation.
Public Dataset:
Datasets available on request:
- thin adherent cells (e.g., A549)
- zebrafish (light-sheet, oblique/straight paths)
- anisotropy phantoms
- cardiomyocytes under multiple oblique illuminations
- multispectral iPSCs (label-free + fluorescence)
Training Procedure
There is no offline pretraining of a large network. For each dataset (or tile):
- Preprocess: Deskew/registration as required by geometry, apply optional background/Stokes calibration for polarization, apply channel normalization.
- Model setup: Choose contrast mode(s) and geometry; construct transfer functions from illumination/scattering/detection submodels.
- Reconstruction: Tikhonov-regularized pseudo-inverse (closed-form in Fourier domain) to estimate properties.
- Physics-guided auto-tuning (optional, per-tile): Define a scalar image-quality loss (e.g., mid-band frequency energy, symmetry), backpropagate through the differentiable forward model to update θ; re-invert; iterate to convergence.
- Tile fusion: Blend overlapping tiles and (optionally) stitch.
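The reconstruct-then-tune loop above can be sketched end to end. Everything here is a toy stand-in — the single-scalar "OTF", the mid-band loss cutoffs, and the optimizer settings are illustrative choices, not the library's operators — but it shows the key mechanism: the scalar image-quality loss backpropagates through the differentiable inversion to the tile's parameter θ.

```python
import torch

def make_otf(defocus, n=32):
    """Hypothetical OTF controlled by a single 'defocus' scalar (toy stand-in)."""
    fy = torch.fft.fftfreq(n).reshape(-1, 1)
    fx = torch.fft.fftfreq(n).reshape(1, -1)
    rho2 = fy**2 + fx**2
    return torch.exp(-50.0 * rho2 * (1.0 + defocus**2)) + 0j

def invert(meas, otf, eta=1e-3):
    # Closed-form Tikhonov pseudo-inverse in the Fourier domain
    filt = otf.conj() / (otf.abs() ** 2 + eta)
    return torch.fft.ifft2(filt * torch.fft.fft2(meas)).real

def midband_energy(img):
    # Scalar image-quality loss: spectral energy in a mid-frequency annulus
    f = torch.fft.fft2(img)
    fy = torch.fft.fftfreq(img.shape[0]).reshape(-1, 1)
    fx = torch.fft.fftfreq(img.shape[1]).reshape(1, -1)
    rho2 = fy**2 + fx**2
    band = (rho2 > 0.05**2) & (rho2 < 0.25**2)
    return (f.abs() ** 2)[band].sum()

# One auto-tuning iteration for one tile
torch.manual_seed(0)
meas = torch.rand(32, 32)                      # stand-in measured tile
theta = torch.nn.Parameter(torch.tensor(0.5))  # per-tile parameter
opt = torch.optim.Adam([theta], lr=0.05)

opt.zero_grad()
recon = invert(meas, make_otf(theta))
loss = -midband_energy(recon)  # example objective; sign and weights are design choices
loss.backward()
opt.step()
```

In practice this iterate-to-convergence loop runs per tile, after which the re-inverted tiles are blended and stitched.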
Training Code
Repository: https://github.com/mehta-lab/waveorder
PyPI: https://pypi.org/project/waveorder/
Contains operator construction, inversion, auto-tuning loops, and example notebooks/napari workflows.
Speeds, Sizes, Times
- Throughput depends on volume size, number of channels/properties, and tiling.
- Scalar (phase/fluorescence) inversions of 10 x 2k x 2k volumes typically complete in under 1 second (a single FFT-based pass plus, optionally, a few auto-tuning iterations).
- Vector (polarization) inversions apply a bank of filters across properties/channels; expect higher memory and compute cost.
- Checkpointing is not required; intermediate θ and reconstructions can be saved per tile.
Training Hyperparameters
- Numeric precision: PyTorch fp32 by default
- Regularization (per reconstruction): Tikhonov η (user-set)
- Auto-tuning: step size, iteration budget, choice/weight of scalar loss(es)
- Tiling: tile size/stride; blending window
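The knobs above can be bundled into a single settings object. The dataclass below is a hypothetical configuration sketch (names and defaults are illustrative, not the library's API):

```python
from dataclasses import dataclass, field

@dataclass
class ReconstructionSettings:
    """Hypothetical settings bundle mirroring the hyperparameters listed above."""
    eta: float = 1e-3              # Tikhonov regularization strength (user-set)
    autotune_lr: float = 0.05      # auto-tuning step size
    autotune_iters: int = 50       # auto-tuning iteration budget
    loss_weights: dict = field(default_factory=lambda: {"midband": 1.0})
    tile_shape: tuple = (256, 256)
    tile_stride: tuple = (192, 192)  # overlap between strides enables blending

settings = ReconstructionSettings(eta=1e-2)
```

Keeping these in one place makes per-dataset (or per-tile) overrides explicit and reproducible.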
Data Sources
Datasets used for demonstrations and evaluation are listed under Training Data and Evaluation Datasets.
Performance Metrics
Metrics
- Frequency-domain metrics: Mid-band energy (for tuning); empirical transverse/axial modulation transfer estimates from profiles.
- Reconstruction quality: SNR/contrast improvement, defocus-ambiguity removal (phase sign), polarization consistency.
- Downstream task metrics:
  - Segmentation F1/precision/recall vs. manual annotations (e.g., CellPose on pooled-screen tiles).
  - ROC/AUC for cell-type classification (e.g., neuromasts: mantle vs. hair/support) using simple texture metrics pre/post reconstruction.
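For reference, ROC AUC can be computed directly from per-cell scores without thresholding, via its equivalence to the normalized Mann-Whitney U statistic. This is an illustrative implementation, not the evaluation code behind the reported numbers.

```python
import torch

def roc_auc(scores, labels):
    """Rank-based ROC AUC: fraction of (positive, negative) pairs
    where the positive outranks the negative, counting ties as half."""
    pos = scores[labels == 1].reshape(-1, 1)
    neg = scores[labels == 0].reshape(1, -1)
    greater = (pos > neg).float().sum()
    ties = (pos == neg).float().sum()
    return ((greater + 0.5 * ties) / (pos.numel() * neg.numel())).item()

# Perfectly separated scores give AUC = 1.0
scores = torch.tensor([0.1, 0.2, 0.8, 0.9])
labels = torch.tensor([0, 0, 1, 1])
```

An AUC of 0.5 indicates chance-level separation; the improvement reported below (~0.66 to ~0.86) reflects clearer class separation after reconstruction.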
Evaluation Datasets
- Phase/polarization demo dataset
- Optical pooled-screen tiles across 35-mm wells (internal)
- Zebrafish embryo/larval datasets (light-sheet label-free + fluorescence; internal)
- Anisotropy phantoms (laser-etched spokes; internal)
- Cardiomyocytes with multi-aperture oblique illumination (internal)
- Multispectral iPSCs (label-free + unmixed fluorescence; internal)
Internal datasets are available upon request.
Evaluation Results
- Pooled screens: Auto-tuned reconstructions restore periphery tiles (oblique illumination) and improve segmentation F1 vs. raw and nominal (untuned) reconstructions.
- Neuromasts: Fluorescence homogeneity metric shows clearer class separation after reconstruction; ROC AUC improves (e.g., ~0.66 → ~0.86).
- Cardiomyocytes: Multi-aperture fusion enhances z-disc modulation over single-aperture and raw data; line profiles confirm expected sarcomere spacing.
- Zebrafish: Improved sectioning/contrast in label-free and fluorescence; anatomy-guided unwraps (notochord/retina) become cleaner and more quantifiable.
- Vector reconstructions: Birefringence maps yield interpretable slow-axis orientation; wave-optical inversion reduces defocus-symmetry artifacts vs. ray-based voxel inversions.
Biases, Risks, and Limitations
Potential Biases
- Results will reflect biases present in the input data.
- Reconstructions inherit the assumptions of the linear, single-scattering forward model; specimens or geometries outside these assumptions may reconstruct poorly.
- Auto-tuning objectives (e.g., mid-band energy) implicitly bias reconstructions toward specific frequency content.
Risks
Areas of risk may include but are not limited to:
- Misinterpretation of reconstructed intensities as direct, calibrated physical measurements.
- Over-regularization can erase subtle structures while under-regularization can amplify noise/ringing.
Limitations
- Assumes channel linearity, spatial linearity (no saturation), weak single scattering, and contrast separability between label-free and fluorescence channels.
- Thick, multiply scattering tissues and strong nonlinear effects are out of scope.
- Orientation reconstructions from Stokes data are sensitive to noise and background correction. The current implementation uses simple least-squares priors (Tikhonov), not full noise models.
Caveats and Recommendations
- Validate reconstructed contrasts with controls (beads, phantoms, known structures) and cross-modal checks (e.g., label-free vs. fluorescence).
- We are committed to advancing the responsible development and use of artificial intelligence. Please follow our Acceptable Use Policy when using the model.
- Should you have any security or privacy issues or questions related to the model, please reach out to our team at security@chanzuckerberg.com or privacy@chanzuckerberg.com.