All Competitors

Every biological foundation model, evaluated and ranked by the bio.rodeo team

Showing 8 of 18 models

Protein

Boltz-2

MIT CSAIL / Recursion Pharmaceuticals

Open model that jointly predicts biomolecular structure and small-molecule binding affinity, approaching FEP+ accuracy in seconds on a single GPU.

See the scorecard
Multimodalities

NatureLM

Microsoft Research AI for Science

Unified science foundation model treating molecules, proteins, RNA, DNA, and materials as a shared sequence language for cross-domain generation.

See the scorecard
Protein

ProteinDT

UC Berkeley

A multimodal framework for text-guided protein design, enabling sequence generation, zero-shot editing, and property prediction via contrastive learning.

See the scorecard
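The CLIP-style contrastive objective behind text-guided frameworks like ProteinDT can be sketched in a few lines. This is an illustrative NumPy implementation of a symmetric InfoNCE loss over paired text and protein embeddings, not ProteinDT's actual training code; the function name and temperature value are assumptions.

```python
import numpy as np

def info_nce(text_emb, prot_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired text/protein embeddings.

    Matched pairs sit on the diagonal of the similarity matrix; the loss
    pulls each text embedding toward its own protein and pushes it away
    from the other proteins in the batch.
    """
    # L2-normalise so dot products are cosine similarities.
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    p = prot_emb / np.linalg.norm(prot_emb, axis=1, keepdims=True)
    logits = t @ p.T / temperature            # shape (batch, batch)
    labels = np.arange(len(logits))

    def xent(l):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[labels, labels].mean()

    # Average the text->protein and protein->text directions.
    return (xent(logits) + xent(logits.T)) / 2

rng = np.random.default_rng(0)
loss = info_nce(rng.normal(size=(4, 8)), rng.normal(size=(4, 8)))
```

Aligned pairs score a lower loss than shuffled ones, which is what lets a model trained this way do zero-shot retrieval between descriptions and sequences.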
Protein

Boltz-1

MIT

Open-source deep learning model for biomolecular structure prediction achieving AlphaFold3-level accuracy, trained entirely on publicly available data.

See the scorecard
Imaging

OpenPhenom-S/16

Recursion Pharmaceuticals

Channel-agnostic Vision Transformer trained on 3M+ Cell Painting images via masked autoencoder, producing 384-dimensional morphological embeddings for zero-shot phenotypic analysis.

See the scorecard
Protein

Chai-1

Chai Discovery

Multi-modal foundation model for biomolecular structure prediction covering proteins, small molecules, DNA, RNA, and glycans in a unified diffusion framework.

See the scorecard
Multimodalities

BioT5+

Microsoft Research Asia

An enhanced T5-based encoder-decoder that unifies molecule, protein, and text understanding via IUPAC integration and multi-task instruction tuning.

See the scorecard
Multimodalities

BioT5

Renmin University of China

Pre-training framework bridging molecules, proteins, and natural language using T5 with SELFIES representations for cross-modal biological understanding.

See the scorecard
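The SELFIES representation BioT5 builds on is what makes molecules tractable for a T5-style vocabulary: a SELFIES string is a flat sequence of self-contained bracketed symbols, and any such sequence decodes to a chemically valid molecule. A minimal sketch of the tokenization step (the regex splitter here is illustrative, not BioT5's actual tokenizer; in practice the `selfies` library handles encoding and decoding):

```python
import re

def selfies_tokens(selfies_string):
    """Split a SELFIES string into its bracketed symbols.

    Unlike SMILES, where a one-character edit can produce invalid syntax,
    SELFIES is robust at the token level, so a sequence model can treat
    each [..] symbol as a single vocabulary item.
    """
    return re.findall(r"\[[^\]]*\]", selfies_string)

# SELFIES string for benzene.
tokens = selfies_tokens("[C][=C][C][=C][C][=C][Ring1][=Branch1]")
```

Each token then maps to one entry in the shared text/molecule vocabulary, letting the same encoder-decoder read a sentence and a molecule in a single sequence.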