All Competitors
Every biological foundation model, evaluated and ranked by the bio.rodeo team
17 models
TranscriptFormer
Chan Zuckerberg Initiative
A generative cross-species foundation model for single-cell transcriptomics, trained on 112 million cells from 12 species spanning 1.5 billion years of evolution.
Borzoi
Calico Life Sciences
Deep learning model predicting cell-type-specific RNA-seq coverage at 32 bp resolution from 524 kb of DNA sequence, jointly modeling transcription, splicing, and polyadenylation.
ESM Cambrian
EvolutionaryScale
A family of protein language models (300M, 600M, and 6B parameters) focused on representation learning, substantially outperforming ESM-2 at equivalent or smaller scale.
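As a quick orientation, the sketch below pulls per-residue embeddings from the 300M ESM C checkpoint, following the usage pattern documented in EvolutionaryScale's esm package README; names and the "esmc_300m" identifier are taken from that package and should be treated as a sketch to verify against the current release.

```python
from esm.models.esmc import ESMC
from esm.sdk.api import ESMProtein, LogitsConfig

# Load the 300M ESM C checkpoint (identifier per the esm package README).
client = ESMC.from_pretrained("esmc_300m").to("cpu")  # or "cuda"

protein = ESMProtein(sequence="MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ")
protein_tensor = client.encode(protein)

# Request per-residue embeddings alongside the language-model logits.
output = client.logits(
    protein_tensor, LogitsConfig(sequence=True, return_embeddings=True)
)
print(output.embeddings.shape)  # (1, sequence length + special tokens, hidden dim)
```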
RhoFold+
ml4bio
End-to-end RNA 3D structure prediction combining the RNA-FM language model with Invariant Point Attention, achieving state-of-the-art accuracy on RNA-Puzzles and CASP15 targets.
scPRINT
Institut Pasteur
Single-cell foundation model pre-trained on 50 million cells for gene network inference, denoising, and cell type prediction.
CellFM
Sun Yat-sen University
An 800M-parameter single-cell foundation model built on a RetNet architecture and pre-trained on 100 million human cells, supporting cell annotation, perturbation prediction, and gene analysis.
CellSeg3D
Mathis Lab
Self-supervised 3D cell segmentation for fluorescence microscopy using WNet3D and Swin-UNetR, achieving supervised-level performance without annotated training data.
Nicheformer
Helmholtz Munich / Technical University of Munich
Transformer foundation model pretrained on 110M single-cell and spatially resolved transcriptomics profiles, enabling spatial context prediction for dissociated cells.
CONCH
Mahmood Lab / Brigham and Women's Hospital
Vision-language foundation model for computational pathology, pretrained on 1.17M histopathology image-caption pairs with contrastive and captioning objectives.
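For readers unfamiliar with the contrastive half of that objective, here is a generic CLIP-style loss over paired image and caption embeddings. This is an illustrative sketch of the technique, not the CONCH training code, and it omits the captioning objective the model also uses.

```python
import torch
import torch.nn.functional as F

def contrastive_loss(img_emb: torch.Tensor, txt_emb: torch.Tensor,
                     temperature: float = 0.07) -> torch.Tensor:
    # Normalize both modalities so dot products are cosine similarities.
    img = F.normalize(img_emb, dim=-1)
    txt = F.normalize(txt_emb, dim=-1)
    logits = img @ txt.t() / temperature  # (batch, batch) similarity matrix
    # Matching image-caption pairs lie on the diagonal.
    targets = torch.arange(img.size(0), device=img.device)
    # Symmetric cross-entropy: image-to-text and text-to-image.
    return (F.cross_entropy(logits, targets) +
            F.cross_entropy(logits.t(), targets)) / 2
```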
RNAformer
University of Freiburg
Axial-attention transformer for RNA secondary structure prediction from single sequences, without MSAs. Achieves state-of-the-art accuracy via homology-aware training.
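To make "axial attention" concrete, the minimal PyTorch block below attends along the rows and then the columns of an L x L pair representation, so every position exchanges information with its full row and column at a fraction of the cost of full 2D attention. Dimensions and layer choices are assumptions for illustration, not the RNAformer authors' implementation.

```python
import torch
import torch.nn as nn

class AxialAttentionBlock(nn.Module):
    def __init__(self, dim: int = 64, heads: int = 4):
        super().__init__()
        self.row_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.col_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, pair: torch.Tensor) -> torch.Tensor:
        # pair: (batch, L, L, dim) pairwise representation.
        b, L, _, d = pair.shape
        # Row attention: treat each row as an independent sequence of length L.
        x = self.norm1(pair).reshape(b * L, L, d)
        x, _ = self.row_attn(x, x, x)
        pair = pair + x.reshape(b, L, L, d)
        # Column attention: transpose so columns become sequences, then undo.
        x = self.norm2(pair).transpose(1, 2).reshape(b * L, L, d)
        x, _ = self.col_attn(x, x, x)
        pair = pair + x.reshape(b, L, L, d).transpose(1, 2)
        return pair
```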
xTrimoPGLM
BioMap / Tsinghua University
Unified 100-billion-parameter protein language model combining autoencoding and autoregressive objectives for protein understanding and generation.
UCE
Stanford University
Zero-shot foundation model for single-cell gene expression that generates species-agnostic cell embeddings using protein language model representations of gene products.
trRosettaRNA
Yang Lab
Deep learning pipeline for RNA 3D structure prediction: a transformer network (also named RNAformer, distinct from the University of Freiburg model above) predicts inter-nucleotide geometries, which are converted into restraints for Rosetta energy minimization.
CellViT
Institute for AI in Medicine
Vision Transformer for cell instance segmentation and classification in H&E digital pathology, extended by CellViT++ with foundation model backbones and few-shot adaptation.
MRM-BERT
Nanjing University of Science and Technology
A hybrid deep learning model predicting 12 types of RNA modifications by fine-tuning DNABERT representations fused with CNN-encoded sequence features.
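A hypothetical sketch of that fusion pattern: a pooled DNABERT embedding concatenated with CNN features from the one-hot sequence, feeding a 12-way modification classifier. All class names and dimensions here are illustrative assumptions, not the published MRM-BERT code.

```python
import torch
import torch.nn as nn

class HybridModificationHead(nn.Module):
    # Hypothetical fusion head: DNABERT supplies a pooled contextual embedding,
    # a parallel CNN encodes the raw one-hot sequence, and the two feature
    # vectors are concatenated before classification into 12 modification types.
    def __init__(self, bert_dim: int = 768, conv_channels: int = 64,
                 n_mod_types: int = 12):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv1d(4, conv_channels, kernel_size=7, padding=3),  # A/C/G/U in
            nn.ReLU(),
            nn.AdaptiveMaxPool1d(1),
        )
        self.classifier = nn.Linear(bert_dim + conv_channels, n_mod_types)

    def forward(self, bert_cls_embedding: torch.Tensor,
                onehot_seq: torch.Tensor) -> torch.Tensor:
        # bert_cls_embedding: (batch, bert_dim) pooled DNABERT output
        # onehot_seq: (batch, 4, seq_len) one-hot encoded RNA sequence
        cnn_feat = self.cnn(onehot_seq).squeeze(-1)            # (batch, conv_channels)
        fused = torch.cat([bert_cls_embedding, cnn_feat], -1)  # feature fusion
        return self.classifier(fused)                          # logits, 12 classes
```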
scMoFormer
Michigan State University
Transformer framework for single-cell multi-omics that predicts one modality from another (e.g., surface protein abundance from gene expression) using heterogeneous graphs of cells, genes, and proteins.
ProtTrans
Rostlab
A suite of six protein language models, including ProtBERT and ProtT5, trained on up to 393 billion amino acids using large-scale HPC infrastructure.
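As a practical starting point, the sketch below extracts per-residue ProtT5 embeddings through Hugging Face transformers, following Rostlab's published usage pattern; the model ID is assumed to be available on the Hub, so verify it before use.

```python
import re
import torch
from transformers import T5EncoderModel, T5Tokenizer

# Model ID as published by Rostlab on the Hugging Face Hub (assumed available).
name = "Rostlab/prot_t5_xl_uniref50"
tokenizer = T5Tokenizer.from_pretrained(name, do_lower_case=False)
model = T5EncoderModel.from_pretrained(name).eval()

seq = "MKTAYIAKQRQISFVKSHFSRQLEERLGLIEVQ"  # toy sequence
# ProtT5 expects space-separated residues; rare amino acids are mapped to X.
spaced = " ".join(re.sub(r"[UZOB]", "X", seq))
inputs = tokenizer(spaced, return_tensors="pt")

with torch.no_grad():
    hidden = model(**inputs).last_hidden_state  # (1, len(seq) + 1, 1024)

per_residue = hidden[0, : len(seq)]    # drop the trailing </s> token
per_protein = per_residue.mean(dim=0)  # simple mean-pooled protein embedding
```

Mean pooling over residues is one common way to get a fixed-length protein vector for downstream classifiers; per-residue embeddings feed tasks like secondary structure prediction.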