All Competitors
Every biological foundation model, evaluated and ranked by the bio.rodeo team
SubCell
Chan Zuckerberg Initiative / Human Protein Atlas / Lundberg Lab
Self-supervised Vision Transformer models trained on proteome-wide fluorescence microscopy images from the Human Protein Atlas for subcellular protein localization.
OpenPhenom-S/16
Recursion Pharmaceuticals
Channel-agnostic Vision Transformer trained as a masked autoencoder on 3M+ Cell Painting images, producing 384-dimensional morphological embeddings for zero-shot phenotypic analysis.
Hibou
HistAI
DINOv2-based Vision Transformer foundation models for digital pathology, trained on over 1 million whole-slide images. Available as Hibou-B (86M parameters) and Hibou-L (307M parameters) under Apache 2.0.
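A minimal sketch of tile-level feature extraction with Hibou-B, assuming the checkpoint is published on the Hugging Face Hub as histai/hibou-b with a custom model class (hence trust_remote_code); the file path is a placeholder:

    import torch
    from PIL import Image
    from transformers import AutoImageProcessor, AutoModel

    # Load processor and model; trust_remote_code pulls in HistAI's model class.
    processor = AutoImageProcessor.from_pretrained("histai/hibou-b", trust_remote_code=True)
    model = AutoModel.from_pretrained("histai/hibou-b", trust_remote_code=True)
    model.eval()

    # One H&E tile cropped from a whole-slide image.
    tile = Image.open("tile.png").convert("RGB")
    inputs = processor(images=tile, return_tensors="pt")
    with torch.no_grad():
        outputs = model(**inputs)
    embedding = outputs.pooler_output  # one feature vector per tile

Hibou-L would swap in histai/hibou-L; the rest stays the same.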
Prov-GigaPath
Microsoft Research
Whole-slide pathology foundation model pretrained on 1.3 billion tiles from 171,189 clinical whole-slide images. Achieves state-of-the-art results on 25 of 26 pathology benchmark tasks.
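A sketch of how the tile encoder might be used, assuming it is hosted on the Hugging Face Hub as prov-gigapath/prov-gigapath and loadable through timm; the slide-level aggregator that pools tile embeddings into a WSI representation is a separate model and is omitted here:

    import timm
    import torch
    from PIL import Image
    from torchvision import transforms

    # Tile encoder (a ViT); weights stream from the Hugging Face Hub.
    tile_encoder = timm.create_model("hf_hub:prov-gigapath/prov-gigapath", pretrained=True)
    tile_encoder.eval()

    # ImageNet-style normalization is an assumption here; check the release
    # for the exact preprocessing.
    transform = transforms.Compose([
        transforms.Resize(256),
        transforms.CenterCrop(224),
        transforms.ToTensor(),
        transforms.Normalize(mean=(0.485, 0.456, 0.406), std=(0.229, 0.224, 0.225)),
    ])

    tile = transform(Image.open("tile.png").convert("RGB")).unsqueeze(0)
    with torch.no_grad():
        features = tile_encoder(tile)  # one embedding per tile; a WSI yields thousands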
CellSeg3D
Mathis Lab
Self-supervised 3D cell segmentation for fluorescence microscopy using WNet3D and SwinUNETR, achieving supervised-level performance without annotated training data.
UNI
Mahmood Lab
Self-supervised pathology foundation model (ViT-L/16 trained with DINOv2) pretrained on 100M+ H&E tiles from 100,000+ whole-slide images, with state-of-the-art performance across 34 clinical pathology tasks.
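A loading sketch, assuming the gated weights live on the Hugging Face Hub under MahmoodLab/UNI (license acceptance and an auth token required) and follow timm's ViT conventions:

    import timm
    import torch

    # DINOv2-style ViT-L/16; init_values enables the LayerScale weights.
    model = timm.create_model(
        "hf-hub:MahmoodLab/UNI",
        pretrained=True,
        init_values=1e-5,
        dynamic_img_size=True,  # accept tile sizes other than 224x224
    )
    model.eval()

    # A batch of normalized H&E tiles (random tensors as stand-ins).
    tiles = torch.randn(8, 3, 224, 224)
    with torch.no_grad():
        embeddings = model(tiles)  # (8, 1024) features for downstream tasks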
Virchow
Paige AI
Family of self-supervised Vision Transformer foundation models for computational pathology, ranging from 632M to 1.9B parameters and pre-trained on up to 3.1 million whole-slide images.
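A sketch for the base 632M-parameter model, assuming the checkpoint is gated on the Hugging Face Hub as paige-ai/Virchow and built on timm's ViT with a packed SwiGLU MLP; the class-token-plus-mean-patch pooling shown is one common recipe for these models, not necessarily the official one:

    import timm
    import torch
    from timm.layers import SwiGLUPacked

    model = timm.create_model(
        "hf-hub:paige-ai/Virchow",
        pretrained=True,
        mlp_layer=SwiGLUPacked,   # Virchow's MLP variant as timm exposes it
        act_layer=torch.nn.SiLU,
    )
    model.eval()

    tile = torch.randn(1, 3, 224, 224)   # one normalized H&E tile
    with torch.no_grad():
        tokens = model(tile)             # (1, 1 + num_patches, hidden_dim)
    cls_token = tokens[:, 0]
    patch_mean = tokens[:, 1:].mean(dim=1)
    embedding = torch.cat([cls_token, patch_mean], dim=-1)  # pooled tile embedding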
ProtTrans
Rostlab
A suite of six protein language models, including ProtBERT and ProtT5, trained on up to 393 billion amino acids using large-scale HPC infrastructure.
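A short embedding sketch with the ProtT5 encoder, using Rostlab/prot_t5_xl_uniref50 as one published checkpoint and the usual ProtTrans preprocessing (space-separated residues, rare amino acids mapped to X); the example sequence is arbitrary:

    import re
    import torch
    from transformers import T5EncoderModel, T5Tokenizer

    name = "Rostlab/prot_t5_xl_uniref50"
    tokenizer = T5Tokenizer.from_pretrained(name, do_lower_case=False)
    model = T5EncoderModel.from_pretrained(name)
    model.eval()

    # ProtT5 expects residues separated by spaces; U/Z/O/B map to X.
    seqs = ["MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"]
    seqs = [" ".join(re.sub(r"[UZOB]", "X", s)) for s in seqs]

    batch = tokenizer(seqs, padding=True, return_tensors="pt")
    with torch.no_grad():
        out = model(**batch)
    per_residue = out.last_hidden_state        # (batch, length + 1, 1024); last token is </s>
    per_protein = per_residue[:, :-1].mean(1)  # mean-pool residues for a protein-level vector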