bio.rodeo
Protein

ProteinBench

ByteDance Research

A holistic evaluation framework for protein foundation models, assessing 25+ models across 8 tasks using four-dimensional metrics: quality, novelty, diversity, and robustness.

Released: 2024

Overview

ProteinBench is a comprehensive evaluation framework designed to assess protein foundation models across the breadth of tasks they are deployed for in research and drug discovery. Developed by researchers at ByteDance Research, the framework addresses a critical gap in the field: despite the rapid proliferation of protein foundation models for design, folding, and dynamics, no unified standard existed for comparing their capabilities across diverse objectives. ProteinBench establishes that standard by organizing evaluation around a principled taxonomy of protein tasks and a multi-dimensional scoring approach that captures what matters to practitioners, not just peak performance on a single metric.

The framework covers eight task categories spanning three major problem areas: protein design (inverse folding, backbone design, sequence design, structure-sequence co-design, motif scaffolding, and antibody design), three-dimensional structure prediction, and conformational dynamics (single-state and multi-state/ensemble prediction). Across these tasks, ProteinBench evaluates more than 25 models, including widely used systems such as RFdiffusion, ProteinMPNN, ESM3, AlphaFold2, Chroma, EvoDiff, and AlphaFlow, among others.

A central insight behind ProteinBench is that different researchers have different priorities. A laboratory designing de novo enzymes cares about novelty and structural quality. A group targeting a therapeutic protein may prioritize diversity across a sampled ensemble. By decomposing performance into four orthogonal dimensions — quality, novelty, diversity, and robustness — ProteinBench allows users to consult the leaderboard relative to their specific objective rather than relying on a single aggregate score that obscures tradeoffs.
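This objective-relative reading of the leaderboard can be illustrated with a toy score table. All model names and numbers below are invented for illustration, not actual ProteinBench results:

```python
# Hypothetical per-dimension scores (0-1, higher is better) showing why a
# single aggregate ranking can obscure tradeoffs between objectives.
scores = {
    "model_a": {"quality": 0.92, "novelty": 0.40, "diversity": 0.35, "robustness": 0.80},
    "model_b": {"quality": 0.75, "novelty": 0.70, "diversity": 0.72, "robustness": 0.60},
}

def best_for(objective, table):
    """Pick the model with the highest score on a single dimension."""
    return max(table, key=lambda m: table[m][objective])

def best_aggregate(table):
    """Pick the model with the highest unweighted mean across dimensions."""
    return max(table, key=lambda m: sum(table[m].values()) / len(table[m]))

# A quality-focused user and a diversity-focused user arrive at different
# choices, even though the aggregate ranking names only one "winner".
print(best_for("quality", scores))    # model_a
print(best_for("diversity", scores))  # model_b
print(best_aggregate(scores))         # model_b
```

The aggregate winner here matches neither user's priorities exactly, which is the point of publishing per-dimension scores rather than one blended number.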

Key Features

  • Four-dimensional evaluation: Performance is measured along quality (structural plausibility via scTM, scRMSD, pLDDT, and clash rates), novelty (maximum TM-score comparison against the entire PDB via Foldseek), diversity (pairwise structural clustering), and robustness (out-of-distribution generalization tested on de novo backbone inputs).
  • Eight task categories: Covers the principal challenges in the protein domain — inverse folding, backbone design, sequence design, co-design, motif scaffolding, antibody design, single-state structure prediction, and conformational ensemble prediction — under a unified taxonomic framework.
  • 25+ models evaluated: Benchmarks a broad set of current protein foundation models side by side, including RFdiffusion, ProteinMPNN, ESM3, ESM-IF1, Chroma, FrameFlow, DPLM, EvoDiff, AlphaFold2, RoseTTAFold2, AlphaFlow, ConfDiff, dyMEAN, AbDPO, and more.
  • User-objective-aligned analysis: Results are stratified by practitioner goals, enabling targeted model selection. For example, inverse folding comparisons separate performance on native PDB distributions from performance on de novo backbones, revealing that top-ranked models differ depending on the use case.
  • Living public leaderboard: An open leaderboard and modular evaluation toolkit allow ongoing community contributions and the addition of new models as the field advances.
  • Curated evaluation datasets: Uses established benchmarks including CAMEO, CASP15, ATLAS molecular dynamics trajectories, the RAbD antibody dataset (55 complexes), and SAbDab, ensuring evaluations reflect realistic in-distribution and out-of-distribution challenges.
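As a concrete sketch of the diversity dimension above, the following assumes batch diversity is summarized as one minus the mean pairwise TM-score over generated structures; the exact aggregation ProteinBench uses may differ, and the matrix values here are made up:

```python
from itertools import combinations

def pairwise_diversity(tm):
    """Diversity of a sampled batch as 1 minus the mean pairwise TM-score.

    `tm` is a symmetric matrix (list of lists) of pairwise TM-scores
    between generated structures. Higher return values mean the batch
    covers more distinct folds.
    """
    pairs = list(combinations(range(len(tm)), 2))
    return 1.0 - sum(tm[i][j] for i, j in pairs) / len(pairs)

# Three generated backbones: two near-identical, one structurally distinct.
tm = [
    [1.00, 0.95, 0.30],
    [0.95, 1.00, 0.28],
    [0.30, 0.28, 1.00],
]
print(round(pairwise_diversity(tm), 2))  # 0.49
```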

Technical Details

ProteinBench is not a trained model but an evaluation framework, so its technical contribution lies in the design of metrics and datasets rather than architectural innovations. Quality for generative tasks is assessed using structure prediction as an oracle: generated sequences or backbones are folded by ESMFold and scored against the original input structure using self-consistency TM-score (scTM) and self-consistency RMSD (scRMSD), with pLDDT used as an additional confidence proxy. Novelty is quantified by comparing generated structures to the entire PDB using Foldseek's fast structural alignment, computing the maximum TM-score across all database hits; structures with lower maximum TM-scores represent more genuinely novel folds. Diversity is measured by pairwise TM-scores within a sampled batch, with additional structural clustering. Robustness evaluations challenge models with inputs outside their training distribution, such as de novo backbones for sequence design models trained primarily on PDB chains.
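The self-consistency oracle described above can be sketched as a short loop. Here `esmfold` and `tm_align` are placeholder callables standing in for real ESMFold and structural-alignment bindings, not actual ProteinBench APIs:

```python
def self_consistency(designed_seq, reference_backbone, esmfold, tm_align):
    """Fold a designed sequence and compare it to the input backbone.

    `esmfold` is assumed to return (predicted_structure, mean_plddt);
    `tm_align` is assumed to return (tm_score, rmsd) for two structures.
    Both are hypothetical interfaces for whatever bindings you have.
    """
    predicted, plddt = esmfold(designed_seq)
    sc_tm, sc_rmsd = tm_align(predicted, reference_backbone)
    return {
        "scTM": sc_tm,      # self-consistency TM-score (higher is better)
        "scRMSD": sc_rmsd,  # self-consistency RMSD in angstroms (lower is better)
        "pLDDT": plddt,     # folding-confidence proxy for the design
    }
```

The same pattern applies whether the generative model produced a sequence (fold it, compare to the target backbone) or a backbone (design a sequence for it first, then fold and compare).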

Key empirical findings from the benchmark include substantial length-dependent performance degradation across all backbone design methods beyond 300 residues, a consistent quality-diversity tradeoff in sequence design (DPLM achieves the highest pLDDT scores around 85-93 but lower diversity; EvoDiff achieves broader structural diversity but lower quality), and a clear advantage for MSA-based folding methods over language-model-only approaches for single-state structure prediction. In antibody CDR-H3 design, dyMEAN achieved the highest amino acid recovery rate (40.95%) but all evaluated methods showed substantial gaps relative to natural antibody benchmarks, underscoring that antibody design remains an open challenge.
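The amino acid recovery rate cited for CDR-H3 design is a positional identity fraction between the designed and native sequences. A minimal sketch, assuming pre-aligned, equal-length sequences (the example strings are arbitrary, not real antibody loops):

```python
def aa_recovery(designed, native):
    """Fraction of positions where the designed sequence matches the native one."""
    if len(designed) != len(native):
        raise ValueError("sequences must be aligned to equal length")
    matches = sum(d == n for d, n in zip(designed, native))
    return matches / len(native)

# Toy 10-residue loop with the first 4 positions recovered -> 0.4 (40%).
print(aa_recovery("ARDYGGGGGG", "ARDYAAAAAA"))  # 0.4
```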

Applications

ProteinBench is primarily a tool for the research community rather than an end-user application. It is most useful to computational biologists and ML researchers who are selecting a protein foundation model for a specific task and need to know which system will best serve their objective. Drug discovery teams evaluating generative design tools, academic groups benchmarking new model architectures, and platform developers building protein AI pipelines can all use the leaderboard and associated toolkit to make informed, evidence-based choices. The modular codebase supports adding custom models and evaluation tasks, making it a practical infrastructure layer for ongoing comparative research.

Impact

ProteinBench establishes one of the first rigorous, multi-task benchmarking standards for protein foundation models, addressing a recognized reproducibility and comparability problem in the field. Its central finding, that no single model currently excels across all protein design objectives, has practical implications for how researchers select models and how developers frame claims about performance. The four-dimensional evaluation schema gives the community a shared vocabulary for discussing model tradeoffs, made accessible through the public leaderboard. Released as a preprint in September 2024, the framework arrived as the protein AI landscape was becoming crowded enough that principled benchmarking tools were essential infrastructure, and it is positioned to serve as a reference point for future model evaluations as the field evolves.

Citation

ProteinBench: A Holistic Evaluation of Protein Foundation Models

Preprint

Ye, F., et al. (2024). ProteinBench: A Holistic Evaluation of Protein Foundation Models. International Conference on Learning Representations.

DOI: 10.48550/arXiv.2409.06744

Metrics

Citations

Total citations: 19
Influential citations: 1
References: 77

Tags

benchmark, evaluation, foundation model

Resources

  • GitHub Repository
  • Research Paper
  • Official Website
  • Leaderboard