All Competitors
Every biological foundation model, evaluated and ranked by the bio.rodeo team
Cellpose-SAM
HHMI Janelia Research Campus
Generalist cell segmentation model combining SAM's ViT-L backbone with Cellpose flow fields. First model to surpass average human annotators on the Cellpose benchmark.
OmniEM
Peking University
Unified electron microscopy image analysis toolkit built on EM-DINO, a vision foundation model pretrained on 5 million diverse EM images.
SubCell
Chan Zuckerberg Initiative / Human Protein Atlas / Lundberg Lab
Self-supervised Vision Transformer models trained on proteome-wide fluorescence microscopy images from the Human Protein Atlas for subcellular protein localization.
OpenPhenom-S/16
Recursion Pharmaceuticals
Channel-agnostic Vision Transformer trained on 3M+ Cell Painting images via masked autoencoder, producing 384-dimensional morphological embeddings for zero-shot phenotypic analysis.
Cryo-IEF
Westlake University
Foundation model pretrained on 65M cryo-EM particle images via contrastive learning, enabling zero-shot classification, pose clustering, and quality assessment.
CryoViT
Stanford University
Semi-supervised cryo-ET segmentation framework that adapts DINOv2 vision transformers for 3D organelle annotation using sparse 2D slice labels.
Prov-GigaPath
Microsoft Research
Whole-slide pathology foundation model pretrained on 1.3 billion image tiles from 171,189 clinical whole-slide images. Achieves state-of-the-art performance on 25 of 26 pathology benchmark tasks.
Virchow
Paige AI
Self-supervised vision transformer foundation models for computational pathology, ranging from 632M to 1.9B parameters and pretrained on up to 3.1 million whole-slide images.
BiomedCLIP
Microsoft Research
Multimodal biomedical foundation model trained on 15M PubMed Central figure-caption pairs via contrastive learning, achieving state-of-the-art zero-shot performance across imaging modalities.