bio.rodeo
Imaging

Cytoland

Chan Zuckerberg Biohub / Mehta Lab

A suite of virtual staining models that translate label-free microscopy images into fluorescent-equivalent staining of nuclei and plasma membranes.

Released: 2024

Overview

Cytoland is a collection of three pretrained virtual staining models developed by the Mehta Lab at Chan Zuckerberg Biohub San Francisco. The models translate label-free light microscopy images — including quantitative phase, Zernike phase contrast, and brightfield — into accurate fluorescent-equivalent predictions of nuclei and plasma membranes. By generating virtual fluorescence from label-free contrast alone, Cytoland decouples cellular landmark imaging from molecular fluorescence, freeing optical channels for additional reporters, live-cell time-lapse experiments, and photomanipulation studies that are otherwise incompatible with fluorescent labels.

Prior virtual staining methods were fragile to variations in imaging parameters, culture conditions, and cell type. Cytoland addresses these limitations by combining physics-inspired data augmentation with a scalable convolutional architecture, making the models robust across microscopes, magnifications, and cell types not seen during training. The work was published in Nature Machine Intelligence in 2025 and the models are distributed through the CZI Virtual Cells Platform alongside the VisCy open-source Python package.

The three variants in the Cytoland collection cover distinct experimental contexts: 3D volumetric imaging of human cell lines, high-throughput 2D plate-based screening, and developmental imaging of zebrafish lateral-line neuromasts — together spanning a broad range of cell biology and developmental biology use cases.

Key Features

  • Three specialized pretrained variants: VSCyto3D targets volumetric imaging of human cell lines (HEK293T, A549, iPSC-derived neurons), VSCyto2D targets high-throughput 2D plate-based screening across multiple human cell lines, and VSNeuromast targets 3D imaging of zebrafish lateral-line hair cells during development.
  • Dual-target simultaneous prediction: each model predicts both a nuclear channel and a plasma membrane channel from a single label-free input stack, providing the two landmarks most commonly required for cell segmentation and tracking.
  • Robust generalization across imaging conditions: training incorporates physically motivated augmentations simulating defocus, shot noise, and illumination nonuniformity, enabling transfer to microscopes and magnifications not represented in training data.
  • Label rescue capability: recovers biological signal in scenarios where experimental fluorescent labels are missing, non-uniformly expressed, or degraded by photobleaching — situations where conventional fluorescence imaging fails outright.
  • Open-source inference and training pipeline: the VisCy PyTorch package provides CLI tools and Python APIs for inference, fine-tuning, and training from scratch, with data handling following community standards for microscopy (OME-Zarr / iohub).
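The physically motivated augmentations listed above can be sketched with standard NumPy/SciPy operations. This is an illustrative approximation only, not the actual VisCy augmentation code: a Gaussian blur stands in for defocus, Poisson resampling for shot noise, and a smooth planar ramp for illumination nonuniformity.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def augment_label_free(img, rng):
    """Illustrative physics-inspired augmentations for a 2D label-free
    image (sketch only -- not the actual VisCy implementation)."""
    out = img.astype(np.float64)
    # Defocus: blur with a randomly chosen sigma.
    out = gaussian_filter(out, sigma=rng.uniform(0.0, 2.0))
    # Shot noise: treat intensities as photon counts and resample.
    photons = rng.uniform(50, 500)
    out = rng.poisson(np.clip(out, 0, None) * photons) / photons
    # Illumination nonuniformity: multiply by a gentle planar gradient.
    h, w = out.shape
    _, xx = np.mgrid[0:h, 0:w]
    ramp = 1.0 + rng.uniform(-0.2, 0.2) * (xx / max(w - 1, 1) - 0.5)
    return out * ramp

rng = np.random.default_rng(0)
img = rng.random((64, 64))
aug = augment_label_free(img, rng)
```

Applying such perturbations during training forces the network to rely on contrast features that survive realistic optical and detector variation, which is what allows transfer to unseen microscopes.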

Technical Details

All Cytoland models are built on UNeXt2, an asymmetric encoder-decoder architecture designed for 3D microscopy image-to-image translation. The encoder is based on the ConvNeXt v2 backbone, which uses depthwise separable convolutions and layer normalization for efficient feature extraction. A key design choice is the use of 2D intermediate feature maps within a 3D volumetric processing framework, reducing GPU memory requirements while preserving spatial context across the z-axis. Standard multi-scale skip connections carry fine-grained spatial detail from the encoder to the decoder for accurate organelle boundary prediction. The ConvNeXt v2 backbone is modularly scalable from lightweight to high-capacity configurations, supporting deployment on workstations through to multi-GPU clusters.
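The memory-saving idea of 2D intermediate features inside a volumetric pipeline can be illustrated with a toy PyTorch module. This is a sketch of the reshaping trick only, not the UNeXt2 architecture: the z-axis is folded into the batch dimension so all convolutions are 2D, and the volume shape is restored at the output.

```python
import torch
import torch.nn as nn

class Toy2DInVolumeNet(nn.Module):
    """Toy illustration of processing a 3D stack with 2D intermediate
    features (a sketch of the idea, not UNeXt2 itself)."""

    def __init__(self, in_ch=1, out_ch=2, width=16):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(in_ch, width, 3, padding=1), nn.GELU(),
            nn.Conv2d(width, width, 3, padding=1), nn.GELU(),
        )
        # Two output channels: nuclei and plasma membrane.
        self.head = nn.Conv2d(width, out_ch, 1)

    def forward(self, x):  # x: (B, C, Z, Y, X)
        b, c, z, y, w = x.shape
        # Fold Z into the batch dimension -> every op below is 2D.
        planes = x.permute(0, 2, 1, 3, 4).reshape(b * z, c, y, w)
        out = self.head(self.encoder(planes))  # (B*Z, out_ch, Y, X)
        # Restore the volumetric layout.
        return out.reshape(b, z, -1, y, w).permute(0, 2, 1, 3, 4)

net = Toy2DInVolumeNet()
stack = torch.randn(1, 1, 8, 32, 32)  # one label-free volume
pred = net(stack)                     # shape (1, 2, 8, 32, 32)
```

Because 2D feature maps are far smaller than their 3D counterparts at equal width, this layout keeps GPU memory roughly proportional to slice size rather than stack size.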

Training data for each variant consisted of paired acquisitions of label-free and fluorescently labeled images collected at Chan Zuckerberg Biohub. VSCyto3D and VSNeuromast process full volumetric stacks, while VSCyto2D operates on 2D fields of view. Model performance was evaluated using a multi-tier framework: pixel-level regression metrics (Pearson Correlation Coefficient and Structural Similarity Index Measure), segmentation metrics (Intersection over Union and Average Precision on instance segmentations derived from virtual versus experimental staining), and application-specific biological measurements including cell area distributions and cell counts. Downstream measurements obtained from virtual stains were quantitatively concordant with those from experimental fluorescence.
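Two of the evaluation tiers above are simple to state precisely. The sketch below shows pixel-level Pearson correlation and mask-level intersection over union on synthetic data; the array names and threshold are illustrative, not taken from the paper's evaluation code.

```python
import numpy as np

def pearson(pred, target):
    """Pixel-level Pearson correlation between two images."""
    p = pred.ravel() - pred.mean()
    t = target.ravel() - target.mean()
    return float(p @ t / (np.linalg.norm(p) * np.linalg.norm(t)))

def iou(mask_a, mask_b):
    """Intersection over union of two boolean segmentation masks."""
    inter = np.logical_and(mask_a, mask_b).sum()
    union = np.logical_or(mask_a, mask_b).sum()
    return float(inter / union) if union else 1.0

rng = np.random.default_rng(1)
experimental = rng.random((64, 64))
# A virtual stain that closely tracks the experimental ground truth.
virtual = experimental + 0.05 * rng.standard_normal((64, 64))
r = pearson(virtual, experimental)
j = iou(virtual > 0.5, experimental > 0.5)
```

Segmentation-level metrics such as IoU matter because a virtual stain can have high pixel correlation yet still merge or split instances that downstream cell tracking depends on.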

Applications

Cytoland is designed for researchers who need nuclear and membrane landmarks without consuming fluorescent channels or applying labels that perturb live cells. In live-cell imaging, the models enable long-term tracking of nuclei and membranes without phototoxicity from fluorescent reporters. In multiplexed experiments, freeing spectral channels allows simultaneous imaging of molecular reporters such as signaling probes or drug-response indicators alongside virtually stained landmarks. VSCyto2D supports scalable high-throughput plate-based screening across cell lines without the cost and variability of fluorescent transfection. VSNeuromast enables quantitative analysis of zebrafish neuromast development using label-free confocal or light-sheet microscopy. Virtually stained images are compatible as direct inputs to established segmentation pipelines such as Cellpose and StarDist, integrating seamlessly into standard image analysis workflows.
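Because a virtual nuclear stain is just an image, downstream measurements follow directly. The minimal sketch below (thresholding plus connected-component labeling via SciPy, on a synthetic image) stands in for full pipelines like Cellpose or StarDist, which accept the same input; the function name and threshold are illustrative.

```python
import numpy as np
from scipy import ndimage

def count_and_measure(nuclei_img, threshold=0.5):
    """Segment a stained nuclear image by thresholding and
    connected-component labeling, then report per-cell areas.
    A minimal stand-in for Cellpose/StarDist-style pipelines."""
    labels, n = ndimage.label(nuclei_img > threshold)
    areas = ndimage.sum(np.ones_like(labels), labels, index=range(1, n + 1))
    return n, areas

# Synthetic "virtual stain": two bright square nuclei on a dark field.
img = np.zeros((64, 64))
img[10:20, 10:20] = 1.0
img[40:52, 30:42] = 1.0
n_cells, areas = count_and_measure(img)  # n_cells == 2
```

Cell counts and area distributions computed this way are exactly the application-specific measurements used in the evaluation framework described earlier.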

Impact

Cytoland represents a meaningful advance in virtual staining methodology by systematically addressing the generalization problem that limited earlier approaches. Its publication in Nature Machine Intelligence and distribution through the CZI Virtual Cells Platform reflects both peer-reviewed validation and institutional commitment to broad accessibility. The VisCy package, installable via PyPI, lowers the barrier to adoption for labs without specialized deep learning infrastructure. Notable limitations include dependence on the specific label-free contrasts used during training (generalization to other modalities such as DIC may require fine-tuning), restriction to nuclei and plasma membranes (organelles such as mitochondria and ER are not predicted), and GPU memory scaling for very large volumetric stacks. Adapting the models to a new imaging system requires paired label-free and fluorescent acquisitions from the target microscope, which may be a practical barrier for some labs.

Citation

Robust virtual staining of landmark organelles with Cytoland

Liu, Z., et al. (2025) Robust virtual staining of landmark organelles with Cytoland. Nature Machine Intelligence.

DOI: 10.1038/s42256-025-01046-2

Metrics

GitHub

Stars: 91
Forks: 12
Open Issues: 47
Contributors: 10
Last Push: 1d ago
Language: Python
License: BSD-3-Clause

Citations

Total Citations: 8
Influential: 0
References: 61

Tags

image translation, segmentation, virtual staining, vision model, cell biology, microscopy

Resources

  • GitHub Repository
  • Research Paper
  • Official Website
  • Documentation