All Competitors
Every biological foundation model, evaluated and ranked by the bio.rodeo team
13 models listed
NatureLM
Microsoft Research AI for Science
A unified science foundation model that treats molecules, proteins, RNA, DNA, and materials as a shared sequence language, enabling generation across domains.
Evolla
Westlake University
An 80B-parameter multimodal protein-language model that decodes protein function through natural language dialogue, integrating sequence, structure, and evolutionary context.
ProteinDT
UC Berkeley
A multimodal framework for text-guided protein design, enabling sequence generation, zero-shot editing, and property prediction via contrastive learning.
SubCell
Chan Zuckerberg Initiative / Human Protein Atlas / Lundberg Lab
Self-supervised Vision Transformer models trained on proteome-wide fluorescence microscopy images from the Human Protein Atlas for subcellular protein localization.
BiomedParse
Microsoft Research
A biomedical foundation model for joint segmentation, detection, and recognition across nine imaging modalities using natural language prompts.
Chai-1
Chai Discovery
Multi-modal foundation model for biomolecular structure prediction covering proteins, small molecules, DNA, RNA, and glycans in a unified diffusion framework.
ProTrek
Westlake University
Tri-modal protein language model unifying sequence, structure, and function via contrastive learning, enabling natural-language protein search across billions of entries.
AlphaFold 3
Google DeepMind
Unified diffusion-based model that predicts, with atomic accuracy, the structures of complexes containing proteins, nucleic acids, small molecules, ions, and modified residues.
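Chai-1 and AlphaFold 3 above both generate structures with a denoising diffusion process: coordinates are progressively noised during training, and the model learns to reverse that noising at inference. A minimal numpy sketch of the core forward/reverse machinery on toy coordinates (the linear beta schedule, step count, and function names are illustrative assumptions, not either model's actual code):

```python
import numpy as np

# Toy DDPM-style schedule (illustrative values, not from any real model)
T = 100
betas = np.linspace(1e-4, 0.02, T)   # per-step noise variances
alphas = 1.0 - betas
alpha_bar = np.cumprod(alphas)       # cumulative signal retention

def q_sample(x0, t, noise):
    """Forward process: jump from clean coordinates x0 straight to
    noised step t using the closed-form marginal."""
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

def p_step(xt, t, eps_hat, rng):
    """One reverse (denoising) step, given a noise prediction eps_hat.
    In a real model eps_hat comes from a trained network; here it is
    supplied by the caller."""
    mean = (xt - betas[t] / np.sqrt(1.0 - alpha_bar[t]) * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean                  # final step is deterministic
    return mean + np.sqrt(betas[t]) * rng.normal(size=xt.shape)
```

With an oracle noise prediction (`eps_hat` equal to the true noise), a single reverse step at t = 0 recovers `x0` exactly, which is a quick sanity check that the forward and reverse formulas match.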
RoseTTAFold All-Atom
Baker Lab
Deep network that predicts structures of full biological assemblies containing proteins, nucleic acids, small molecules, metals, and covalent modifications simultaneously.
BiomedCLIP
Microsoft Research
Multimodal biomedical foundation model trained on 15M PubMed Central figure-caption pairs via contrastive learning, achieving state-of-the-art zero-shot performance across imaging modalities.
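Several entries here (ProteinDT, ProTrek, BiomedCLIP) share the same training recipe: contrastive learning that pulls embeddings of matched pairs (e.g. an image and its caption) together while pushing mismatched in-batch pairs apart. A minimal numpy sketch of the symmetric InfoNCE objective behind this family (the function name, temperature value, and shapes are illustrative assumptions, not any listed model's code):

```python
import numpy as np

def info_nce_loss(a_emb, b_emb, temperature=0.07):
    """Symmetric InfoNCE loss for a batch of paired embeddings.

    a_emb, b_emb: (batch, dim) arrays where row i of each side forms
    one matched pair (e.g. image i and caption i). Every other in-batch
    combination serves as a negative.
    """
    # L2-normalize so dot products are cosine similarities
    a = a_emb / np.linalg.norm(a_emb, axis=1, keepdims=True)
    b = b_emb / np.linalg.norm(b_emb, axis=1, keepdims=True)

    logits = a @ b.T / temperature        # (batch, batch) similarity matrix
    labels = np.arange(len(logits))       # positives sit on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)               # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # average the a->b and b->a directions
    return 0.5 * (cross_entropy(logits, labels) + cross_entropy(logits.T, labels))
```

Feeding identical embeddings on both sides drives the loss toward zero, while misaligned pairs yield a higher loss; that gradient toward alignment is what makes zero-shot retrieval (as in ProTrek's natural-language protein search) possible.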
DPI
Xiamen University
End-to-end single-cell multimodal analysis framework using deep parametric inference to integrate RNA and protein data into a unified latent space.
ProtST
DeepGraphLearning
Multi-modal protein language model that jointly learns from protein sequences and biomedical text, enabling zero-shot functional prediction and retrieval.
Galactica
Meta AI
A large language model trained on 48 million scientific papers and knowledge bases to store, combine, and reason about scientific knowledge.