Every biological foundation model, evaluated and ranked by the bio.rodeo team
Mahmood Lab / Brigham and Women's Hospital
Vision-language foundation model for computational pathology, pretrained on 1.17M histopathology image-caption pairs with contrastive and captioning objectives.
Stanford University
CLIP-based vision-language foundation model for pathology, fine-tuned on 208,414 image-text pairs. Enables zero-shot tissue classification and image retrieval.
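Zero-shot classification with a CLIP-style model works by embedding the image and a set of text prompts (one per candidate class) into a shared space, then scoring the image against each prompt by cosine similarity. A minimal NumPy sketch of that mechanism, using toy embeddings rather than a real checkpoint (the prompt strings and temperature value are illustrative assumptions, not details from either model above):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, temperature=0.07):
    # L2-normalize so dot products become cosine similarities
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # similarity logits, scaled by a CLIP-style temperature
    logits = txt @ img / temperature
    # softmax over the candidate class prompts
    exp = np.exp(logits - logits.max())
    return exp / exp.sum()

# Toy 4-dim embeddings; a real model would produce these with its
# image and text encoders from an H&E patch and class prompts.
image_emb = np.array([1.0, 0.1, 0.0, 0.0])
text_embs = np.array([
    [0.9, 0.2, 0.0, 0.0],   # e.g. "an H&E image of tumor tissue"
    [0.0, 0.0, 1.0, 0.0],   # e.g. "an H&E image of normal tissue"
])
probs = zero_shot_classify(image_emb, text_embs)
```

The same normalized embeddings also support image retrieval: rank a corpus of image embeddings by cosine similarity to a text query embedding and return the top matches.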