bio.rodeo

The authoritative source for evaluating biological foundation models. No hype, just honest analysis.

© 2026 bio.rodeo. All rights reserved.
Imaging

Brain TokenGT

National University of Singapore

A tokenized graph transformer for embedding longitudinal brain functional connectomes, enabling interpretable neurodegenerative disease diagnosis and progression prediction from fMRI.

Released: 2023

Overview

Brain TokenGT (Brain Tokenized Graph Transformer) is a deep learning framework for longitudinal brain functional connectome analysis, developed by Zijian Dong, Yilei Wu, Yu Xiao, Joanna Su Xian Chong, Yueming Jin, and Juan Helen Zhou at the National University of Singapore. Presented at MICCAI 2023 and available on arXiv (DOI: 10.48550/arXiv.2307.00858), Brain TokenGT addresses a critical gap in neuroimaging AI: most existing models analyze brain connectivity at a single time point, discarding the trajectory information that is often most informative for understanding progressive neurodegenerative diseases.

Resting-state fMRI captures the functional connectivity (FC) of the brain — the correlations in activity between different brain regions — as a graph structure where nodes are brain regions and edges represent functional coupling. In Alzheimer's disease and related dementias, FC patterns change systematically over years, and the trajectory of these changes carries diagnostic and prognostic information beyond any snapshot. Brain TokenGT is designed to embed these temporal trajectories into a unified representation that captures both the within-timepoint graph structure and the between-timepoint evolution.
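The FC graph construction described above can be sketched in a few lines. This is a minimal illustration, not the paper's pipeline: the region count, the percentile-based edge thresholding, and the random stand-in BOLD signals are all assumptions made for the example.

```python
import numpy as np

# Hypothetical dimensions: 100 parcellated brain regions, 200 fMRI time points.
rng = np.random.default_rng(0)
n_regions, n_timepoints = 100, 200
bold = rng.standard_normal((n_regions, n_timepoints))  # stand-in BOLD time series

# Functional connectivity: pairwise Pearson correlation between region time series.
fc = np.corrcoef(bold)  # (n_regions, n_regions), symmetric, unit diagonal

# Build a sparse graph by keeping only the strongest couplings
# (percentile thresholding is one common heuristic, not the only one).
upper = np.abs(fc[np.triu_indices(n_regions, k=1)])
threshold = np.percentile(upper, 95)
adjacency = (np.abs(fc) >= threshold) & ~np.eye(n_regions, dtype=bool)
edges = np.argwhere(np.triu(adjacency))  # undirected edge list (i, j) with i < j
```

Repeating this per scan session yields one FC graph per visit, which is the longitudinal sequence the model embeds.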

The model draws inspiration from TokenGT, a general graph transformer that converts graph nodes and edges into sequences of tokens processable by standard transformer encoders. Brain TokenGT adapts this tokenization strategy to the spatio-temporal setting by introducing a Graph and Interaction Vocabulary Embedding (GIVE) module that generates node and edge tokens augmented with temporal position and identity information, and a Brain Interaction Graph Transformer (BIGTR) module that processes these tokens through a transformer encoder. The combination enables the model to attend globally across brain regions and time points simultaneously, capturing complex spatio-temporal patterns of FC trajectory.
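The tokenization idea can be made concrete with a toy sketch. Following the TokenGT convention, each node token carries its own identifier (twice) and each edge token carries the identifiers of both endpoints; a type embedding distinguishes node from edge tokens, and a temporal position embedding marks the visit. All sizes, the random lookup tables, and the edge list below are illustrative assumptions, not the published configuration.

```python
import numpy as np

rng = np.random.default_rng(1)
n_regions, n_times, d = 4, 2, 8  # toy sizes; real models are far larger

# Stand-ins for learnable lookup tables: per-region node identifiers,
# node/edge type embeddings, and per-visit temporal position embeddings.
node_id = rng.standard_normal((n_regions, d))
type_emb = {"node": rng.standard_normal(d), "edge": rng.standard_normal(d)}
time_emb = rng.standard_normal((n_times, d))

node_feat = rng.standard_normal((n_times, n_regions, d))  # per-visit region features
edge_list = [(0, 1), (1, 2), (2, 3)]                      # toy spatial edges
edge_feat = {e: rng.standard_normal(d) for e in edge_list}

tokens = []
for t in range(n_times):
    for i in range(n_regions):
        # Node token: feature + its own identifier twice ([id_i, id_i] convention)
        # + node type + temporal position.
        tokens.append(node_feat[t, i] + 2 * node_id[i]
                      + type_emb["node"] + time_emb[t])
    for (i, j) in edge_list:
        # Edge token: feature + identifiers of both endpoints
        # + edge type + temporal position.
        tokens.append(edge_feat[(i, j)] + node_id[i] + node_id[j]
                      + type_emb["edge"] + time_emb[t])

tokens = np.stack(tokens)  # one flat sequence for a standard transformer encoder
```

The payoff of this flattening is that no graph-specific architecture is needed downstream: a plain transformer encoder consumes the sequence directly.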

Key Features

  • Longitudinal connectome embedding: Unlike single-timepoint approaches, Brain TokenGT explicitly models the trajectory of functional connectivity across multiple fMRI sessions, capturing disease progression patterns over time.
  • Graph tokenization: The GIVE module converts brain graph nodes and spatio-temporal edges into discrete tokens augmented with type and node identifiers, enabling the use of standard transformer encoders on graph-structured neuroimaging data.
  • Global attention across regions and time: The BIGTR transformer encoder attends across all tokenized brain region and interaction representations simultaneously, allowing long-range dependencies between spatially distant brain regions and temporally separated time points.
  • Interpretability: The model provides attention-based interpretability, allowing identification of which brain regions and temporal connections are most informative for a given clinical prediction task.
  • Adaptable to multiple clinical tasks: The framework is evaluated on three distinct clinical prediction tasks on Alzheimer's disease datasets, demonstrating versatility beyond a single diagnostic objective.

Technical Details

Brain TokenGT processes longitudinal resting-state fMRI data preprocessed into functional connectivity matrices following established pipelines (Kong et al. 2019; Li et al. 2019). The GIVE module generates node embeddings for each brain region and spatio-temporal edge embeddings for each pairwise region interaction across time, encoded with type tokens and node identifier tokens as introduced in the TokenGT framework. These token sequences are passed to the BIGTR module, a standard transformer encoder that computes multi-head self-attention across the full set of tokenized graph elements. The architecture draws on EvolveGCN-style graph recurrent computation for capturing temporal dynamics alongside the TokenGT attention mechanism.
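The core computation the BIGTR encoder performs over the token sequence is standard scaled dot-product self-attention, in which every token (any region or interaction, at any visit) can attend to every other. The single-head numpy sketch below shows only that mechanism; the real module stacks multi-head attention with feed-forward layers, and the weight matrices here are random placeholders.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(tokens, wq, wk, wv):
    """Single-head scaled dot-product self-attention over the full
    token sequence: global mixing across regions and time points."""
    q, k, v = tokens @ wq, tokens @ wk, tokens @ wv
    attn = softmax(q @ k.T / np.sqrt(k.shape[-1]))  # (n_tokens, n_tokens)
    return attn @ v, attn

rng = np.random.default_rng(2)
n_tokens, d = 14, 8  # e.g. node + edge tokens pooled from two visits
tokens = rng.standard_normal((n_tokens, d))
wq, wk, wv = (rng.standard_normal((d, d)) for _ in range(3))
out, attn = self_attention(tokens, wq, wk, wv)
```

Because the attention matrix spans all tokens at once, a connection at visit 1 can directly influence the representation of a distant region at visit 3 without message passing through intermediate hops.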

The model was evaluated on two public longitudinal Alzheimer's disease neuroimaging datasets: ADNI (Alzheimer's Disease Neuroimaging Initiative) and OASIS-3 (Open Access Series of Imaging Studies), covering resting-state fMRI collected across multiple visits. Brain TokenGT was assessed on three clinical tasks: (1) distinguishing mild cognitive impairment (MCI) from cognitively normal controls, (2) predicting conversion from MCI to dementia, and (3) classifying amyloid-positive versus amyloid-negative cognitively normal individuals. The authors report that the model outperformed the benchmark comparators on all three tasks while also offering attention-based interpretability through attention weight analysis.

Applications

Brain TokenGT is designed for clinical neuroscience researchers studying neurodegenerative diseases through longitudinal neuroimaging. In Alzheimer's disease research, the model supports early detection of cognitive decline by identifying characteristic FC trajectory signatures in individuals who will subsequently convert from MCI to dementia — a window where preventive intervention may be most effective. The framework is also applicable to other progressive neurological conditions including Parkinson's disease, frontotemporal dementia, and multiple sclerosis, where longitudinal fMRI data captures disease-related connectivity changes. Interpretability through attention weights allows neuroscientists to identify which functional connections and brain regions drive predictions, informing both clinical hypotheses and potential biomarker discovery. The model is compatible with standard fMRI preprocessing pipelines and publicly available longitudinal neuroimaging cohorts.
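One simple way to turn attention weights into region-level saliency, as suggested by the interpretability use case above, is to aggregate the attention each region token receives across the sequence and rank regions by that total. This is a generic post-hoc recipe, not the paper's exact analysis; the attention matrix and region labels below are synthetic placeholders.

```python
import numpy as np

rng = np.random.default_rng(3)
n_regions = 6
region_names = [f"region_{i}" for i in range(n_regions)]  # placeholder labels

# Toy attention matrix over region tokens (rows sum to 1), standing in for
# weights averaged over heads and layers after a forward pass.
attn = rng.random((n_regions, n_regions))
attn /= attn.sum(axis=1, keepdims=True)

# Saliency proxy: total attention each region token *receives* from all others.
received = attn.sum(axis=0)
ranking = sorted(zip(region_names, received), key=lambda p: -p[1])
top3 = [name for name, _ in ranking[:3]]
```

In practice such rankings are inspected against known disease-relevant networks (e.g. default mode regions in Alzheimer's disease) before being treated as candidate biomarkers.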

Impact

Brain TokenGT established graph tokenization as an effective strategy for incorporating the temporal dimension of brain connectivity into transformer-based neuroimaging models, overcoming the limitation of single-timepoint FC analysis that characterized most prior deep learning approaches to connectome-based diagnosis. Its presentation at MICCAI 2023 placed it within a growing body of work applying attention mechanisms to brain graph data, and its demonstration on multiple clinical tasks across two independent cohorts strengthened the case for longitudinal modeling in neuroimaging AI. A limitation is that the model requires multiple fMRI sessions per subject, which restricts applicability to datasets with longitudinal coverage; cross-sectional datasets — the most commonly available — cannot directly benefit from the trajectory embedding capability that distinguishes this approach.

Tags

image analysis · graph neural network · transformer · representation learning · cell biology

Resources

GitHub Repository
Research Paper