bio.rodeo

The authoritative source for evaluating biological foundation models. No hype, just honest analysis.

© 2026 bio.rodeo. All rights reserved.
RNA

RhoFold+

ml4bio

End-to-end RNA 3D structure prediction combining the RNA-FM language model with Invariant Point Attention, achieving SOTA on RNA-Puzzles and CASP15.

Released: 2024

Overview

Determining the three-dimensional structure of RNA is essential for understanding its regulatory, catalytic, and structural roles in the cell, yet experimental methods such as X-ray crystallography and cryo-EM remain slow and resource-intensive. Computational approaches have historically depended on multiple sequence alignments (MSAs) to derive co-evolutionary signals, limiting their applicability to RNAs with few known homologs. RhoFold+ addresses both constraints through an end-to-end deep learning pipeline that predicts all-atom RNA 3D structures from sequence alone, with no MSA required.

Published in Nature Methods in November 2024 by the ml4bio group at the Chinese University of Hong Kong, RhoFold+ couples the RNA-FM language model — a 12-layer BERT-style transformer pretrained on 23.7 million non-coding RNA sequences — with an Invariant Point Attention (IPA) structure module adapted from AlphaFold 2. This design allows rich sequence representations to feed directly into geometric coordinate regression, producing complete structural models in approximately 0.14 seconds per sequence on a GPU. The model achieved state-of-the-art performance on both the RNA-Puzzles blind prediction challenge and the RNA-specific targets in CASP15, which included a dedicated RNA category for the first time.

Key Features

  • Single-sequence input: No MSA or template structures required, making RhoFold+ applicable to novel or poorly characterized RNAs where homologs are scarce.
  • Sub-second inference: Predictions complete in roughly 0.14 seconds per sequence on GPU, enabling large-scale structural genomics workflows across entire transcriptomes.
  • RNA-FM language model backbone: A 12-layer BERT-architecture transformer pretrained on 23.7 million non-coding RNA sequences from RNAcentral via masked-token prediction, encoding secondary structure and 3D proximity without labeled structural data.
  • Invariant Point Attention structure module: The IPA module, adapted from AlphaFold 2, operates on per-nucleotide rigid-body frames and iteratively refines all-atom coordinates while remaining invariant to global rotations and translations.
  • End-to-end joint training: RNA-FM embeddings and IPA coordinate regression are fine-tuned together on PDB RNA structures, allowing the structural head to shape the pretrained representations.
  • Open access: Source code, pretrained weights, and a web server for interactive predictions are all publicly available.
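The masked-token pretraining behind the RNA-FM backbone can be illustrated with a minimal data-preparation sketch. The vocabulary, mask rate, and helper function below are illustrative assumptions for exposition, not RNA-FM's actual code:

```python
import random

# Minimal sketch of BERT-style masked-token pretraining data preparation
# for RNA sequences, in the spirit of RNA-FM. The vocabulary, mask rate,
# and function names are illustrative assumptions.
VOCAB = {"A": 0, "C": 1, "G": 2, "U": 3, "<mask>": 4}
MASK_RATE = 0.15  # the standard BERT masking fraction

def mask_sequence(seq, rng):
    """Return (input_ids, target_ids); targets are -1 at unmasked positions."""
    input_ids, targets = [], []
    for base in seq:
        tok = VOCAB[base]
        if rng.random() < MASK_RATE:
            input_ids.append(VOCAB["<mask>"])
            targets.append(tok)   # model must recover the hidden base
        else:
            input_ids.append(tok)
            targets.append(-1)    # ignored by the training loss
    return input_ids, targets

rng = random.Random(1)
inp, tgt = mask_sequence("AUGGCUACGU", rng)
```

Using -1 as the target at unmasked positions follows the common ignore-index convention, so the cross-entropy loss is computed only where bases were hidden; this is what lets the model learn structural signal without any labeled data.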

Technical Details

RhoFold+ consists of two tightly coupled components trained end-to-end. The RNA-FM backbone is a 12-layer transformer encoder producing 640-dimensional per-nucleotide embeddings; it was pretrained on 23.7 million sequences from RNAcentral100 via masked-token prediction and captures information about secondary structure, 3D proximity, and evolutionary conservation without requiring structural labels. These embeddings are passed to an IPA structure module that defines a local coordinate frame for each nucleotide and applies attention over both invariant scalar features and point coordinates expressed in each frame's reference, making computation invariant to the molecule's global orientation. Successive IPA layers iteratively update backbone frames and nucleotide-specific heavy-atom positions to produce a complete all-atom model.
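The invariance property that IPA exploits can be verified numerically: a point expressed in a nucleotide's local frame, R_i^T(x - t_i), is unchanged when the whole molecule is rotated and translated, because the frame moves with it. The sketch below demonstrates this with NumPy; shapes and names are illustrative, not RhoFold+'s implementation:

```python
import numpy as np

# Demonstrate the frame-invariance that IPA relies on: coordinates
# expressed in a per-nucleotide rigid-body frame are unchanged by any
# global rotation Q and translation s of the molecule.

def random_rotation(rng):
    """Random 3x3 rotation via QR decomposition of a Gaussian matrix."""
    q, _ = np.linalg.qr(rng.standard_normal((3, 3)))
    if np.linalg.det(q) < 0:
        q[:, 0] = -q[:, 0]  # flip one axis so det(q) = +1
    return q

def to_local(points, frame_R, frame_t):
    """Express global points in a frame's local basis: R^T (x - t)."""
    return (points - frame_t) @ frame_R

rng = np.random.default_rng(0)
points = rng.standard_normal((8, 3))   # e.g. heavy-atom positions
R_i = random_rotation(rng)             # one nucleotide's frame rotation
t_i = rng.standard_normal(3)           # frame origin

Q = random_rotation(rng)               # arbitrary global rotation
s = rng.standard_normal(3)             # arbitrary global translation

# Transform everything globally: atoms and the frame move together.
local_before = to_local(points, R_i, t_i)
local_after = to_local(points @ Q.T + s, Q @ R_i, Q @ t_i + s)

assert np.allclose(local_before, local_after)  # features are invariant
```

Because attention scores are computed from such frame-relative quantities, the structure module's output does not depend on how the input coordinates happen to be oriented in space.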

The structure module was trained on RNA structures deposited in the Protein Data Bank, with redundancy reduction and temporal splits to prevent data leakage on RNA-Puzzles and CASP15 evaluation sets. On benchmarks, RhoFold+ outperformed prior automated deep learning methods on both RNA-Puzzles and CASP15 RNA targets. Inference speed is approximately 0.14 seconds per sequence, compared to hours for MSA-dependent or physics-based pipelines.
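The temporal-split idea is simple to state in code: any structure deposited on or after the evaluation cutoff is excluded from training. The entry records and cutoff date below are illustrative assumptions, not the actual split used:

```python
from datetime import date

# Sketch of a deposition-date split to prevent evaluation leakage.
# The cutoff and the second entry are hypothetical examples.
CUTOFF = date(2022, 1, 1)

entries = [
    {"pdb_id": "4TNA", "deposited": date(1978, 4, 1)},   # classic tRNA
    {"pdb_id": "7XYZ", "deposited": date(2022, 6, 15)},  # post-cutoff
]

train = [e for e in entries if e["deposited"] < CUTOFF]
holdout = [e for e in entries if e["deposited"] >= CUTOFF]
```

In practice this date filter is combined with sequence-identity clustering, so that near-duplicates of evaluation targets deposited before the cutoff are removed as well.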

Applications

RhoFold+ is particularly valuable for RNAs in newly sequenced genomes or novel regulatory elements where MSA construction is impractical. Its throughput makes it feasible to generate structural models for comprehensive non-coding RNA databases such as Rfam or for entire transcriptomes in structural genomics campaigns. In drug discovery, accurate 3D models support virtual screening for small molecules targeting structured RNAs, including riboswitches and viral RNA elements. Predictions from RhoFold+ also serve as inputs for inverse RNA design pipelines and can guide hypothesis-driven mutagenesis experiments for RNAs of unknown function.
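At sub-second per-sequence speed, a transcriptome-scale run reduces to a simple loop over FASTA records. The `predict_structure` function below is a hypothetical stand-in for whatever entry point the RhoFold+ release exposes; its name, signature, and return fields are assumptions:

```python
# Hypothetical batch-prediction loop for a transcriptome-scale run.
# `predict_structure` is a placeholder, not the actual RhoFold+ API.

def predict_structure(seq: str) -> dict:
    """Stand-in for a single RhoFold+ call returning an all-atom model."""
    return {"sequence": seq, "n_residues": len(seq), "pdb": "..."}

def batch_predict(fasta_records):
    """Run single-sequence prediction over (name, sequence) pairs."""
    return {name: predict_structure(seq) for name, seq in fasta_records}

records = [("ncRNA-1", "AUGGCUACGU"), ("ncRNA-2", "GGGAAACCC")]
models = batch_predict(records)
```

Because no MSA is built per target, each prediction is independent, so a run like this parallelizes trivially across GPUs.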

Impact

RhoFold+ represents a significant advance in RNA structural bioinformatics, extending the single-sequence prediction paradigm that AlphaFold 2 established for proteins into the RNA domain. Its CASP15 performance was notable as a first major test of deep learning approaches on RNA in the community's most rigorous blind prediction setting. Key limitations include reduced accuracy for very large or conformationally dynamic RNAs, potential disadvantages relative to MSA-aware methods on RNA families with abundant homologs, and the general challenge of capturing long-range tertiary contacts in complex multi-domain architectures. The availability of open weights and a web server has enabled broad community adoption, and the integrated RNA-FM backbone has influenced subsequent work in RNA representation learning and structure-informed design.

Citation

Accurate RNA 3D structure prediction using a language model-based deep learning approach

Shen, T., et al. (2024) Accurate RNA 3D structure prediction using a language model-based deep learning approach. Nature Methods.

DOI: 10.1038/s41592-024-02487-0

Metrics

GitHub

Stars: 227
Forks: 27
Open Issues: 9
Contributors: 4
Last Push: 11 months ago
Language: Python
License: Apache-2.0

Citations

Total Citations: 186
Influential: 16
References: 53

Tags

structure prediction, transformer, foundation model, language model, 3D structure

Resources

GitHub Repository
Research Paper
Official Website