All Competitors

Every biological foundation model, evaluated and ranked by the bio.rodeo team


Showing 9 models

Imaging

Cellpose-SAM

HHMI Janelia Research Campus

Generalist cell segmentation model combining SAM's ViT-L backbone with Cellpose flow fields. First model to surpass average human annotators on the Cellpose benchmark.

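
The flow-field idea behind Cellpose-style segmentation can be sketched in a few lines: the network predicts, per pixel, a vector pointing toward that pixel's cell center, and pixels whose trajectories converge to the same sink are grouped into one instance mask. The toy flow, grid size, and helper names below are illustrative, not Cellpose-SAM's actual code:

```python
def sign(x):
    return (x > 0) - (x < 0)

def make_flow(h, w, centers):
    """Toy 'predicted' flow: each pixel steps toward its nearest center."""
    flow = {}
    for y in range(h):
        for x in range(w):
            cy, cx = min(centers, key=lambda c: abs(c[0] - y) + abs(c[1] - x))
            flow[(y, x)] = (sign(cy - y), sign(cx - x))
    return flow

def follow_flows(h, w, flow, steps=20):
    """Advect each pixel along the flow; label it by its final sink."""
    masks, sinks = {}, {}
    for y in range(h):
        for x in range(w):
            py, px = y, x
            for _ in range(steps):
                dy, dx = flow[(py, px)]
                py, px = py + dy, px + dx
            label = sinks.setdefault((py, px), len(sinks) + 1)
            masks[(y, x)] = label
    return masks

# two hand-picked "cell centers" yield two instance labels
masks = follow_flows(6, 6, make_flow(6, 6, [(1, 1), (4, 4)]))
print(sorted(set(masks.values())))  # [1, 2]
```

In the real model the flow field comes from the network (here, SAM's ViT-L backbone) rather than from hand-picked centers, and the grouping step runs on the predicted gradients.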
Imaging

OmniEM

Peking University

Unified electron microscopy image analysis toolkit built on EM-DINO, a vision foundation model pretrained on 5 million diverse EM images.

Imaging

SubCell

Chan Zuckerberg Initiative / Human Protein Atlas / Lundberg Lab

Self-supervised Vision Transformer models trained on proteome-wide fluorescence microscopy images from the Human Protein Atlas for subcellular protein localization.

Imaging

OpenPhenom-S/16

Recursion Pharmaceuticals

Channel-agnostic Vision Transformer trained on 3M+ Cell Painting images with a masked-autoencoder objective, producing 384-dimensional morphological embeddings for zero-shot phenotypic analysis.

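
The masked-autoencoder recipe used here can be sketched simply: split each image into patches, hide a large fraction of them, and let the encoder embed only the visible patches while the decoder learns to reconstruct the rest. The patch size, 75% mask ratio, and helper names below are illustrative assumptions, not Recursion's implementation:

```python
import random

def patchify(img, patch):
    """Split an H x W grid (list of lists) into patch x patch tiles."""
    h, w = len(img), len(img[0])
    tiles = []
    for y in range(0, h, patch):
        for x in range(0, w, patch):
            tiles.append([img[y + dy][x + dx]
                          for dy in range(patch) for dx in range(patch)])
    return tiles

def random_mask(n_tiles, mask_ratio, rng):
    """Return (visible, masked) index lists, MAE-style."""
    order = list(range(n_tiles))
    rng.shuffle(order)
    n_vis = int(n_tiles * (1 - mask_ratio))
    return sorted(order[:n_vis]), sorted(order[n_vis:])

rng = random.Random(0)
img = [[y * 16 + x for x in range(16)] for y in range(16)]
tiles = patchify(img, 4)                      # 16 patches of 4x4 pixels
vis, hid = random_mask(len(tiles), 0.75, rng)
print(len(tiles), len(vis), len(hid))          # 16 4 12
```

Only the 4 visible patches would pass through the encoder; the reconstruction loss on the 12 hidden ones is what drives the self-supervised pretraining.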
Imaging

Cryo-IEF

Westlake University

Foundation model pre-trained on 65M cryo-EM particle images via contrastive learning, enabling zero-shot classification, pose clustering, and quality assessment.

Imaging

CryoViT

Stanford University

Semi-supervised cryo-ET segmentation framework that adapts DINOv2 vision transformers for 3D organelle annotation using sparse 2D slice labels.

Pathology

Prov-GigaPath

Microsoft Research

Whole-slide pathology foundation model pretrained on 1.3 billion tiles from 171,189 clinical WSIs. Achieves state-of-the-art on 25 of 26 pathology benchmark tasks.

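
To see where tile counts like 1.3 billion come from (about 7,600 tiles per slide, averaged over 171,189 slides), note that a gigapixel WSI is cut into a grid of fixed-size tiles before pretraining, with background tiles typically discarded. A minimal sketch of the grid step, with illustrative slide and tile sizes:

```python
def tile_grid(width, height, tile=256):
    """Top-left (x, y) coordinates of every full tile in a slide."""
    return [(x, y)
            for y in range(0, height - tile + 1, tile)
            for x in range(0, width - tile + 1, tile)]

# a hypothetical 100,000 x 80,000 px slide at 256 px tiles
coords = tile_grid(100_000, 80_000, tile=256)
print(len(coords))  # 390 * 312 = 121,680 candidate tiles
```

The gap between ~120K candidate tiles per slide and the ~7,600 kept on average reflects tissue filtering and the working magnification, both of which discard most of the grid.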
Pathology

Virchow

Paige AI

Family of self-supervised vision transformer foundation models for computational pathology, ranging from 632 million to 1.9 billion parameters and pre-trained on up to 3.1 million whole slide images.

Imaging

BiomedCLIP

Microsoft Research

Multimodal biomedical foundation model trained on 15M PubMed Central figure-caption pairs via contrastive learning, achieving state-of-the-art zero-shot performance across imaging modalities.

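
Zero-shot classification in a CLIP-style model reduces to cosine similarity: embed the image once, embed one text prompt per candidate class, and pick the class whose prompt embedding is closest. The three-dimensional vectors below are toys (real BiomedCLIP embeddings are far higher-dimensional), and the prompt strings are only examples:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb)

def zero_shot(image_emb, class_embs):
    """Pick the class whose text embedding best matches the image."""
    return max(class_embs,
               key=lambda name: cosine(image_emb, class_embs[name]))

class_embs = {
    "a chest X-ray": [0.9, 0.1, 0.0],
    "an H&E histology slide": [0.1, 0.8, 0.3],
}
image_emb = [0.2, 0.7, 0.4]  # pretend encoder output
print(zero_shot(image_emb, class_embs))  # an H&E histology slide
```

Because the 15M figure-caption pairs were trained contrastively, matched image and text embeddings land close together, which is what makes this prompt-only classification work without any task-specific fine-tuning.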