bio.rodeo

© 2026 bio.rodeo. All rights reserved.
Imaging

DynaCLR

Chan Zuckerberg Initiative

A self-supervised contrastive learning method for embedding cell and organelle dynamics from time-lapse microscopy using temporal regularization and single-cell tracking.

Released: 2024

Overview

DynaCLR is a self-supervised representation learning method for encoding cell and organelle dynamics from time-lapse microscopy, developed by Eduardo Hirata-Miyasaki, Soorya Pradeep, Ziwen Liu, and colleagues at the Chan Zuckerberg Biohub San Francisco in the laboratory of Shalin B. Mehta. The method was introduced in a preprint posted to arXiv in October 2024 (arXiv:2410.11281) and is part of CZI's Virtual Cells Platform. DynaCLR stands for Dynamic Contrastive Learning of Representations, and it is implemented within the VisCy (Visual Cytometry) framework.

The biological challenge that motivates DynaCLR is the labor intensity of annotating live-cell imaging data. Time-lapse perturbation experiments — in which cells are filmed over hours or days after infection, drug treatment, or genetic manipulation — generate data volumes that are impractical to label exhaustively by hand. Human annotation is also inherently biased: annotators must decide in advance which states to distinguish, potentially missing emergent or unexpected behaviors. DynaCLR addresses both problems by learning informative embeddings of cell dynamics in a fully self-supervised manner, without requiring labeled training images. The resulting embeddings can then be applied to classification, clustering, cross-modal learning, and track-alignment tasks with only sparse human labels.

The approach integrates two innovations that have not previously been combined in biological imaging: single-cell tracking, which provides the identity and trajectory of individual cells across time frames, and time-aware contrastive sampling, which explicitly uses temporal proximity as a structural signal during representation learning. By treating cells close together in time as positive pairs and temporally distant or biologically distinct cells as negatives, DynaCLR learns representations that are smooth across time yet sensitive to biological state transitions.
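The pair-construction rule described above can be sketched in a few lines. This is an illustrative simplification, not the released VisCy API: the function name `build_pairs`, the `(track_id, frame)` tuple format, and the `window` parameter are all hypothetical, and the actual implementation additionally applies image augmentations.

```python
import itertools

def build_pairs(cells, window=2):
    """Partition all pairs of tracked single-cell crops into positives
    and negatives for contrastive training.

    cells: list of (track_id, frame) tuples, one per cell crop.
    Positives: the same tracked cell observed within `window` frames.
    Negatives: different tracks, or the same track far apart in time.
    """
    positives, negatives = [], []
    for (i, a), (j, b) in itertools.combinations(enumerate(cells), 2):
        same_track = a[0] == b[0]
        close_in_time = abs(a[1] - b[1]) <= window
        if same_track and close_in_time:
            positives.append((i, j))
        else:
            negatives.append((i, j))
    return positives, negatives
```

Note that the same tracked cell at distant time points counts as a negative: that is what makes the embedding sensitive to state transitions rather than merely to cell identity.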

Key Features

  • Temporal regularization: The contrastive sampling strategy incorporates the time axis explicitly — cells from nearby frames in the same tracked trajectory form positive pairs, while cells far apart in time or from distinct tracked lineages serve as negatives. This encourages the learned embedding space to vary smoothly along biological time courses while remaining discriminative across distinct dynamic states.
  • Single-cell tracking integration: Rather than treating individual frames as independent samples, DynaCLR incorporates cell tracking information to associate images of the same cell across frames. This provides richer self-supervisory structure than frame-level sampling and allows the model to learn the temporal context of dynamic events such as mitosis or infection progression.
  • Cross-modal distillation: DynaCLR can transfer representations from fluorescence imaging channels — which provide cell-type-specific markers — to label-free brightfield or phase contrast channels. This allows biological state information encoded by fluorescent dyes to supervise embedding learning in label-free data, enabling downstream analysis without requiring fluorescent labeling in future experiments.
  • Generalization to out-of-distribution data: Embeddings trained on one cell type and imaging system transfer effectively to held-out datasets acquired with different microscopes, cell lines, and experimental conditions, demonstrating robust generalization beyond the training distribution.
  • Lightweight annotation requirement: Once the embeddings have been trained without labels, a small number of human-annotated examples is sufficient to train high-accuracy classifiers for cell division, infection state, and migration phenotype — dramatically reducing annotation burden compared to fully supervised approaches.

Technical Details

DynaCLR is implemented as a contrastive encoder within the VisCy framework using PyTorch. The encoder architecture processes single-cell image crops extracted from tracked time-lapse sequences and maps them to a lower-dimensional embedding space. Training uses a contrastive objective — analogous to SimCLR or MoCo — but the positive and negative pair construction is governed by temporal proximity and track identity rather than random augmentation alone. Specifically, frames from the same tracked cell within a configurable temporal window form positive pairs, while frames from distant time points or separate cells provide negative supervision.
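The contrastive objective described above is the standard InfoNCE form; the only DynaCLR-specific change is where the positive and negatives come from (temporal window and track identity rather than augmentation alone). A minimal stdlib-only version for a single anchor, with hypothetical function names:

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def info_nce(anchor, positive, negatives, temperature=0.1):
    """InfoNCE loss for one anchor embedding.

    The positive is a crop of the same tracked cell from a nearby frame;
    negatives are crops from other tracks or from distant time points.
    """
    logits = [cosine(anchor, positive) / temperature]
    logits += [cosine(anchor, n) / temperature for n in negatives]
    m = max(logits)  # subtract the max to stabilize the log-sum-exp
    log_z = m + math.log(sum(math.exp(l - m) for l in logits))
    return log_z - logits[0]  # negative log-softmax of the positive
```

The loss is near zero when the anchor matches its temporal positive and is far from all negatives, and grows when a negative sits closer to the anchor than the positive does.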

The temporal regularization is controlled by a sampling temperature that modulates how sharply positive pair selection falls off with temporal distance, allowing the model to capture both fast-changing events (such as mitotic entry) and slow dynamics (such as progressive infection). On held-out benchmarks, DynaCLR achieves F1 scores exceeding 0.98 for cell cycle state and infection state classification, substantially outperforming baselines without temporal regularization. The time-aware sampling strategy is shown to improve embedding smoothness and dynamic range along biological trajectories compared to frame-level contrastive sampling. The model also demonstrates effective cross-modal alignment between fluorescence and label-free imaging, enabling label-free classification of biological states originally distinguished only by fluorescent markers. Code for model training and inference, together with a napari visualization plugin, is available via the mehta-lab/viscy GitHub repository and the czbiohub-sf/napari-iohub plugin.
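One natural way to realize a temperature-controlled falloff in positive selection is an exponential decay over temporal offset. The schedule below is an illustrative choice, not necessarily the exact one used in the DynaCLR code; `temporal_sampling_weights` and `tau` are hypothetical names.

```python
import math

def temporal_sampling_weights(deltas, tau=1.0):
    """Probability of selecting each candidate frame as the positive.

    deltas: temporal offsets (in frames) from the anchor frame.
    tau: sampling temperature. A small tau concentrates probability on
    adjacent frames (fast events such as mitotic entry); a large tau
    spreads it over longer horizons (slow dynamics such as progressive
    infection).
    """
    weights = [math.exp(-abs(d) / tau) for d in deltas]
    total = sum(weights)
    return [w / total for w in weights]
```

Sweeping `tau` thus trades off sensitivity to rapid transitions against smoothness over long trajectories.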

Applications

DynaCLR is designed for researchers working with time-lapse fluorescence or label-free microscopy datasets in which labeling every frame is impractical. Primary use cases include classifying cell cycle stages and infection progression in high-throughput perturbation screens, clustering heterogeneous migration phenotypes in wound healing or chemotaxis assays, and aligning asynchronous cellular responses across replicate experiments or broken cell tracks. The cross-modal distillation capability is particularly useful in experimental contexts where fluorescent labels are transient, phototoxic, or incompatible with long-term imaging — allowing the model to learn biologically meaningful representations from label-free data supervised by co-acquired fluorescent channels. The napari integration and sparse-annotation classifier training pipeline make DynaCLR accessible to biologists without deep machine learning expertise.
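The cross-modal distillation mentioned above amounts to aligning the label-free (student) embedding of each cell with the fluorescence-derived (teacher) embedding of the same cell. A minimal alignment objective is mean (1 − cosine similarity) over matched pairs; this formulation and the function names are illustrative sketches, and the released VisCy code may use a different loss.

```python
import math

def cosine(u, v):
    dot = sum(a * b for a, b in zip(u, v))
    nu = math.sqrt(sum(a * a for a in u))
    nv = math.sqrt(sum(b * b for b in v))
    return dot / (nu * nv)

def distillation_loss(student_embs, teacher_embs):
    """Mean (1 - cosine similarity) between label-free (student) and
    fluorescence (teacher) embeddings of the same tracked cells.
    Zero when every student embedding points the same way as its teacher.
    """
    losses = [1.0 - cosine(s, t) for s, t in zip(student_embs, teacher_embs)]
    return sum(losses) / len(losses)
```

Once trained this way, the label-free encoder can be deployed on experiments acquired without any fluorescent channel.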

Impact

DynaCLR advances the practice of self-supervised learning in live-cell imaging by explicitly incorporating the temporal structure of cell dynamics into the representation learning objective. The strong generalization to out-of-distribution imaging systems suggests that the learned representations capture genuine biological variation rather than instrument-specific artifacts, which is a persistent challenge in microscopy-based machine learning. The model's inclusion in CZI's Virtual Cells Platform reflects its positioning as a component of a broader computational infrastructure for virtual cell modeling. A current limitation is that the approach requires reliable cell tracking as a prerequisite — track fragmentation or tracking errors can corrupt the temporal pair construction. Extensions to three-dimensional volumetric time-lapse data and to additional perturbation modalities beyond infection are natural directions for future development.

Tags

cell state classification · representation learning · transformer · CNN · self-supervised · contrastive learning · live-cell imaging · cell biology

Resources

  • GitHub Repository
  • Research Paper
  • Official Website