All Competitors

Every biological foundation model, evaluated and ranked by the bio.rodeo team

Showing 1–20 of 20 filtered models

Imaging

Cellpose-SAM

HHMI Janelia Research Campus

Generalist cell segmentation model combining SAM's ViT-L backbone with Cellpose flow fields. First model to surpass average human annotators on the Cellpose benchmark.

Imaging

OmniEM

Peking University

Unified electron microscopy image analysis toolkit built on EM-DINO, a vision foundation model pretrained on 5 million diverse EM images.

Imaging

Cellpose 3

HHMI Janelia Research Campus

Generalist cell segmentation framework with a super-generalist cyto3 model and one-click image restoration networks optimized for downstream segmentation quality.

Imaging

SubCell

Chan Zuckerberg Initiative / Human Protein Atlas / Lundberg Lab

Self-supervised Vision Transformer models trained on proteome-wide fluorescence microscopy images from the Human Protein Atlas for subcellular protein localization.

Imaging

BiomedParse

Microsoft Research

A biomedical foundation model for joint segmentation, detection, and recognition across nine imaging modalities using natural language prompts.

Imaging

OpenPhenom-S/16

Recursion Pharmaceuticals

Channel-agnostic Vision Transformer trained on 3M+ Cell Painting images via masked autoencoder, producing 384-dimensional morphological embeddings for zero-shot phenotypic analysis.

Imaging

Cryo-IEF

Westlake University

Foundation model pre-trained on 65M cryo-EM particle images via contrastive learning, enabling zero-shot classification, pose clustering, and quality assessment.

Imaging

CryoFM

ByteDance Seed

Generative foundation model for cryo-EM density maps using flow matching, enabling zero-shot denoising, map sharpening, and missing wedge restoration.

Imaging

CryoSAM

Xu Lab

Training-free cryo-ET tomogram segmentation that adapts SAM and DINOv2 for 3D volumetric data, enabling full tomogram segmentation from a single user prompt.

Imaging

CryoViT

Stanford University

Semi-supervised cryo-ET segmentation framework that adapts DINOv2 vision transformers for 3D organelle annotation using sparse 2D slice labels.

Imaging

Cytoland

Chan Zuckerberg Biohub / Mehta Lab

A suite of virtual staining models that translate label-free microscopy images into fluorescent-equivalent staining of nuclei and plasma membranes.

Imaging

CellSeg3D

Mathis Lab

Self-supervised 3D cell segmentation for fluorescence microscopy using WNet3D and Swin-UNetR, achieving supervised-level performance without annotated training data.

Imaging

UniFMIR

Fudan University

Foundation model for fluorescence microscopy image restoration, unifying super-resolution, denoising, isotropic reconstruction, projection, and volumetric reconstruction in one Swin transformer.

Imaging

CONCH

Mahmood Lab / Brigham and Women's Hospital

Vision-language foundation model for computational pathology, pretrained on 1.17M histopathology image-caption pairs with contrastive and captioning objectives.

Imaging

CellSAM

Van Valen Lab

Universal cell segmentation model adapting Meta's SAM for biology. Segments mammalian cells, yeast, and bacteria across diverse imaging modalities with human-level accuracy.

Imaging

PLIP

Stanford University

CLIP-based vision-language foundation model for pathology, fine-tuned on 208,414 image-text pairs. Enables zero-shot tissue classification and image retrieval.

Imaging

CellViT

Institute for AI in Medicine

Vision Transformer for cell instance segmentation and classification in H&E digital pathology, extended by CellViT++ with foundation model backbones and few-shot adaptation.

Imaging

BiomedCLIP

Microsoft Research

Multimodal biomedical foundation model trained on 15M PubMed Central figure-caption pairs via contrastive learning, achieving state-of-the-art zero-shot performance across imaging modalities.

Imaging

Cellpose 2.0

HHMI Janelia Research Campus

Human-in-the-loop cell segmentation framework enabling custom model training from as few as 100-200 corrected annotations.

Imaging

Cellpose

HHMI Janelia Research Campus

Generalist deep learning algorithm for cell and nucleus instance segmentation using simulated diffusion flows, without per-dataset retraining.
