All Competitors
Every biological foundation model, evaluated and ranked by the bio.rodeo team
H-optimus-0
Bioptimus
A 1.1B-parameter open-source vision transformer for histopathology, trained on 500,000+ H&E whole-slide images from 4,000 clinical practices worldwide.
Hibou
HistAI
DINOv2-based Vision Transformer foundation models for digital pathology, trained on over 1 million whole-slide images. Available as Hibou-B (86M) and Hibou-L (307M) under Apache 2.0.
Prov-GigaPath
Microsoft Research
Whole-slide pathology foundation model pretrained on 1.3 billion tiles from 171,189 clinical WSIs. Achieves state-of-the-art on 25 of 26 pathology benchmark tasks.
UNI
Mahmood Lab
Self-supervised pathology foundation model (ViT-L/16, DINOv2) pretrained on 100M+ H&E tiles from 100,000+ whole-slide images. State-of-the-art on 34 pathology tasks.
CONCH
Mahmood Lab / Brigham and Women's Hospital
Vision-language foundation model for computational pathology, pretrained on 1.17M histopathology image-caption pairs with contrastive and captioning objectives.
Virchow
Paige AI
Family of self-supervised vision transformer foundation models for computational pathology, ranging from 632M to 1.9B parameters and pre-trained on up to 3.1 million whole-slide images.
PLIP
Stanford University
CLIP-based vision-language foundation model for pathology, fine-tuned on 208,414 image-text pairs. Enables zero-shot tissue classification and image retrieval.
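PLIP's zero-shot classification follows the standard CLIP recipe: embed the image and one text prompt per candidate class into a shared space, then rank classes by cosine similarity. A minimal sketch of that scoring step, using toy fixed vectors in place of real encoder outputs (the actual model supplies the image and text encoders; the labels and prompts here are illustrative):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    """Rank candidate labels by cosine similarity in a shared embedding space."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                        # cosine similarity per label
    logits = sims * 100.0                   # CLIP-style temperature scaling
    probs = np.exp(logits - logits.max())   # stable softmax
    probs /= probs.sum()
    order = np.argsort(-probs)
    return [(labels[i], float(probs[i])) for i in order]

# Toy embeddings standing in for PLIP's encoder outputs.
image_emb = np.array([0.9, 0.1, 0.0])
text_embs = np.array([
    [1.0, 0.0, 0.0],   # e.g. "an H&E image of tumor tissue"
    [0.0, 1.0, 0.0],   # e.g. "an H&E image of normal tissue"
])
labels = ["tumor", "normal"]
print(zero_shot_classify(image_emb, text_embs, labels))
```

With a real checkpoint the same ranking step runs unchanged; only the embeddings come from the trained encoders instead of toy vectors.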
CellViT
Institute for AI in Medicine
Vision Transformer for cell instance segmentation and classification in H&E digital pathology, extended by CellViT++ with foundation model backbones and few-shot adaptation.