All Competitors
Every biological foundation model, evaluated and ranked by the bio.rodeo team
Boltz-2
MIT CSAIL / Recursion Pharmaceuticals
Open model that jointly predicts biomolecular structure and small-molecule binding affinity, approaching FEP+ accuracy in seconds on a single GPU.
NatureLM
Microsoft Research AI for Science
Unified science foundation model treating molecules, proteins, RNA, DNA, and materials as a shared sequence language for cross-domain generation.
ProteinDT
UC Berkeley
Multimodal framework for text-guided protein design, enabling sequence generation, zero-shot editing, and property prediction via contrastive learning.
Boltz-1
MIT
Open-source deep learning model for biomolecular structure prediction achieving AlphaFold3-level accuracy, trained entirely on publicly available data.
OpenPhenom-S/16
Recursion Pharmaceuticals
Channel-agnostic Vision Transformer trained on 3M+ Cell Painting images via masked autoencoder, producing 384-dimensional morphological embeddings for zero-shot phenotypic analysis.
Chai-1
Chai Discovery
Multi-modal foundation model for biomolecular structure prediction covering proteins, small molecules, DNA, RNA, and glycans in a unified diffusion framework.
BioT5+
Microsoft Research Asia
Enhanced T5-based encoder-decoder that unifies molecule, protein, and text understanding via IUPAC name integration and multi-task instruction tuning.
BioT5
Renmin University of China
Pre-training framework bridging molecules, proteins, and natural language using T5 with SELFIES representations for cross-modal biological understanding.