Every biological foundation model, evaluated and ranked by the bio.rodeo team
Stanford University
CLIP-based vision-language foundation model for pathology, fine-tuned on 208,414 image-text pairs. Enables zero-shot tissue classification and image retrieval.
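The zero-shot classification this enables works by embedding an image and a set of candidate label prompts into a shared space, then picking the label whose text embedding is most similar to the image embedding. A minimal sketch of that mechanism, using toy NumPy vectors in place of the model's actual image and text encoders (the tissue labels are illustrative, not from the model's training data):

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels):
    # L2-normalize embeddings, as CLIP-style models do before the dot product
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    # Cosine similarity between the image and each label prompt
    sims = txt @ img
    # Softmax over similarities yields per-label probabilities
    probs = np.exp(sims) / np.exp(sims).sum()
    return labels[int(np.argmax(probs))], probs

labels = ["adenocarcinoma", "normal colon mucosa", "lymphocytes"]
# Toy embeddings standing in for encoder outputs
rng = np.random.default_rng(0)
text_embs = rng.normal(size=(3, 8))
# Construct an image embedding close to the second label's text embedding
image_emb = text_embs[1] + 0.1 * rng.normal(size=8)

pred, probs = zero_shot_classify(image_emb, text_embs, labels)
```

In practice the embeddings would come from the model's image and text encoders, with each label wrapped in a prompt template before encoding; the similarity-and-softmax step above is the same.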