All Competitors
Every biological foundation model, evaluated and ranked by the bio.rodeo team
Showing 24 of 122 models.
AlphaGenome
Google DeepMind
Sequence-to-function model that predicts thousands of functional genomic tracks at single base-pair resolution from megabase-scale DNA sequences.
Boltz-2
MIT CSAIL / Recursion Pharmaceuticals
Open model that jointly predicts biomolecular structure and small-molecule binding affinity, approaching FEP+ accuracy in seconds on a single GPU.
Cellpose-SAM
HHMI Janelia Research Campus
Generalist cell segmentation model combining SAM's ViT-L backbone with Cellpose flow fields. First model to surpass average human annotators on the Cellpose benchmark.
TranscriptFormer
Chan Zuckerberg Initiative
A generative cross-species foundation model for single-cell transcriptomics, trained on 112 million cells from 12 species spanning 1.5 billion years of evolution.
OmniEM
Peking University
Unified electron microscopy image analysis toolkit built on EM-DINO, a vision foundation model pretrained on 5 million diverse EM images.
scPRINT
Institut Pasteur / CNRS
Foundation model pre-trained on 50 million single cells for robust gene network inference, with zero-shot denoising, batch correction, and cell type prediction.
Pinal
Westlake University
A 16B-parameter framework for de novo protein design from natural language, converting text descriptions into functional protein sequences via two-stage structure-conditioned generation.
Evo 2
Arc Institute
Genomic foundation model trained on 9.3 trillion DNA base pairs spanning all domains of life, with 40B parameters and a 1-million-token context window.
NatureLM
Microsoft Research AI for Science
Unified science foundation model treating molecules, proteins, RNA, DNA, and materials as a shared sequence language for cross-domain generation.
Evolla
Westlake University
An 80B-parameter multimodal protein-language model that decodes protein function through natural language dialogue, integrating sequence, structure, and evolutionary context.
ProteinDT
UC Berkeley
A multimodal framework for text-guided protein design that aligns text and protein representations via contrastive learning, enabling sequence generation, zero-shot editing, and property prediction.
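As a concrete illustration of the contrastive alignment the ProteinDT entry refers to, here is a minimal sketch of a CLIP-style symmetric InfoNCE loss over paired text and protein embeddings. The embedding dimensions and temperature are illustrative assumptions, not ProteinDT's actual configuration.

```python
# Illustrative sketch: CLIP-style contrastive alignment of text and protein
# embeddings. Shapes and temperature are assumptions, not the model's values.
import torch
import torch.nn.functional as F

def contrastive_loss(text_emb, prot_emb, temperature=0.07):
    """Symmetric InfoNCE over a batch of paired (text, protein) embeddings."""
    text_emb = F.normalize(text_emb, dim=-1)
    prot_emb = F.normalize(prot_emb, dim=-1)
    logits = text_emb @ prot_emb.T / temperature   # pairwise similarities
    targets = torch.arange(len(logits))            # matched pairs sit on the diagonal
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.T, targets)) / 2

# Toy usage: 8 paired embeddings of dimension 256.
loss = contrastive_loss(torch.randn(8, 256), torch.randn(8, 256))
```

Each text embedding is pulled toward its paired protein embedding and pushed away from the other proteins in the batch, and vice versa.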
SubCell
Chan Zuckerberg Initiative / Human Protein Atlas / Lundberg Lab
Self-supervised Vision Transformer models trained on proteome-wide fluorescence microscopy images from the Human Protein Atlas for subcellular protein localization.
BioEmu-1
Microsoft Research
Generative deep learning model that emulates protein equilibrium ensembles at up to 100,000x the speed of molecular dynamics simulation.
ESM Cambrian
EvolutionaryScale
A family of protein language models (300M, 600M, and 6B parameters) for representation learning, substantially outperforming ESM-2 at equivalent or smaller scale.
BiomedParse
Microsoft Research
A biomedical foundation model for joint segmentation, detection, and recognition across nine imaging modalities using natural language prompts.
Evo
Arc Institute
A 7B-parameter genomic foundation model using the StripedHyena architecture to model prokaryotic DNA, RNA, and proteins at single-nucleotide resolution with a 131k-token context window.
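Single-nucleotide resolution in models like Evo comes from tokenizing DNA one base at a time rather than in k-mers. The sketch below shows the idea with a toy vocabulary; the mapping and the fallback token are illustrative assumptions, not Evo's actual byte-level tokenizer.

```python
# Toy single-nucleotide tokenizer: one integer id per base.
# Vocabulary and "N" fallback are illustrative, not Evo's byte-level scheme.
DNA_VOCAB = {"A": 0, "C": 1, "G": 2, "T": 3, "N": 4}

def tokenize(seq: str) -> list[int]:
    """Map each base to an id; unknown characters fall back to N."""
    return [DNA_VOCAB.get(base, DNA_VOCAB["N"]) for base in seq.upper()]

print(tokenize("acgtXga"))  # [0, 1, 2, 3, 4, 2, 0]
```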
Boltz-1
MIT
Open-source deep learning model for biomolecular structure prediction achieving AlphaFold3-level accuracy, trained entirely on publicly available data.
OpenPhenom-S/16
Recursion Pharmaceuticals
Channel-agnostic Vision Transformer trained on 3M+ Cell Painting images via masked-autoencoder pretraining, producing 384-dimensional morphological embeddings for zero-shot phenotypic analysis.
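The masked-autoencoder pretraining named in the OpenPhenom entry amounts to hiding most patch tokens and reconstructing them from the rest. Below is a minimal sketch of the random masking step in PyTorch; the 75% mask ratio and patch count are illustrative assumptions (only the 384-dimensional embedding size comes from the entry above).

```python
# Sketch of MAE-style random patch masking. The mask ratio and patch grid
# are assumptions, not OpenPhenom's actual configuration.
import torch

def random_masking(patches, mask_ratio=0.75):
    """Keep a random subset of patch tokens; a decoder later reconstructs the rest."""
    B, N, D = patches.shape                      # batch, num patches, embed dim
    n_keep = int(N * (1 - mask_ratio))
    noise = torch.rand(B, N)                     # per-patch random scores
    keep_idx = noise.argsort(dim=1)[:, :n_keep]  # lowest-scoring patches are kept
    kept = torch.gather(patches, 1,
                        keep_idx.unsqueeze(-1).expand(-1, -1, D))
    return kept, keep_idx

tokens = torch.randn(4, 196, 384)                # e.g. a 14x14 patch grid, 384-d tokens
visible, idx = random_masking(tokens)            # visible: (4, 49, 384)
```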
Cryo-IEF
Westlake University
Foundation model pre-trained on 65M cryo-EM particle images via contrastive learning, enabling zero-shot classification, pose clustering, and quality assessment.
RhoFold+
ml4bio
End-to-end RNA 3D structure prediction combining the RNA-FM language model with Invariant Point Attention, achieving state-of-the-art results on RNA-Puzzles and CASP15.
SFM-Protein
Microsoft Research
A transformer protein language model using integrative co-evolutionary pre-training to capture both short-range and long-range residue interactions from sequence alone.
Orthrus
Bo Wang Lab
Mamba-based mature-RNA foundation model trained with contrastive learning on splice isoforms and orthologous transcripts from 400+ mammalian species for mRNA property prediction.
CryoFM
ByteDance Seed
Generative foundation model for cryo-EM density maps using flow matching, enabling zero-shot denoising, map sharpening, and missing wedge restoration.
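Flow matching, which the CryoFM entry names, trains a velocity field to transport noise samples to data samples along straight-line paths. Here is a minimal sketch of the training objective; the stand-in MLP and flattened 64-dimensional crops are illustrative assumptions, not CryoFM's architecture.

```python
# Sketch of a conditional flow matching objective: the network v_theta
# regresses the straight-line velocity between noise x0 and data x1.
# The tiny MLP and data shape are illustrative assumptions.
import torch
import torch.nn as nn

v_theta = nn.Sequential(nn.Linear(65, 128), nn.ReLU(), nn.Linear(128, 64))

def flow_matching_loss(x1):
    """x1: batch of flattened density-map crops, shape (B, 64)."""
    x0 = torch.randn_like(x1)                   # noise sample
    t = torch.rand(x1.size(0), 1)               # uniform time in [0, 1]
    xt = (1 - t) * x0 + t * x1                  # point on the linear path
    target = x1 - x0                            # constant path velocity
    pred = v_theta(torch.cat([xt, t], dim=-1))  # condition the field on time
    return ((pred - target) ** 2).mean()

loss = flow_matching_loss(torch.randn(16, 64))
```

Sampling then integrates the learned velocity field from noise at t=0 to a generated map at t=1, which is what enables the zero-shot restoration tasks listed above.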
Chai-1
Chai Discovery
Multi-modal foundation model for biomolecular structure prediction covering proteins, small molecules, DNA, RNA, and glycans in a unified diffusion framework.
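Finally, a minimal sketch of the denoising objective behind diffusion-based structure predictors such as Chai-1: corrupt coordinates with Gaussian noise and train a network to predict the injected noise. The toy network, flattened coordinates, and linear noise schedule are illustrative assumptions, not the model's actual recipe.

```python
# Sketch of a noise-prediction diffusion loss. Network, data shape, and
# schedule are illustrative assumptions, not Chai-1's configuration.
import torch
import torch.nn as nn

eps_theta = nn.Sequential(nn.Linear(13, 64), nn.ReLU(), nn.Linear(64, 12))

def diffusion_loss(x0):
    """x0: batch of flattened atom coordinates, shape (B, 12)."""
    t = torch.rand(x0.size(0), 1)                       # noise level in [0, 1]
    eps = torch.randn_like(x0)                          # Gaussian corruption
    xt = torch.sqrt(1 - t) * x0 + torch.sqrt(t) * eps   # noisy coordinates
    pred = eps_theta(torch.cat([xt, t], dim=-1))        # predict injected noise
    return ((pred - eps) ** 2).mean()

loss = diffusion_loss(torch.randn(8, 12))
```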