---
name: higher-order-topological-ad-alzheimer
description: "Extracting interpretable higher-order topological features across multiple scales for Alzheimer's Disease classification using persistent homology. Captures connected components, cycles, and cavities from fMRI brain networks. Activation: higher-order topology, Alzheimer classification, persistent homology, brain network topology, topological features."
---
# Higher-Order Topological Features for Alzheimer's Disease Classification
> Multi-scale persistent homology framework extracting connected components, cycles, and cavities from fMRI brain networks for interpretable Alzheimer's Disease diagnosis.
## Metadata
- **Source**: arXiv:2509.14634v1
- **Authors**: Dengyi Zhao, Shanyong Li, Yunping Wang, Chenfei Wang, Zhiheng Zhou
- **Published**: 2025-09-18
- **Category**: q-bio.NC, cs.LG
## Core Methodology
### Key Innovation
Traditional brain network analysis focuses on lower-order topological features (node degrees, clustering coefficients), missing critical higher-order structures. This methodology introduces **multi-scale persistent homology** to extract interpretable higher-order features (connected components, cycles, cavities) that capture complex neural organization patterns in Alzheimer's Disease.
### Technical Framework
#### 1. Brain Network Construction
- **Input**: Resting-state fMRI time series
- **Nodes**: Brain regions (atlas-based parcellation)
- **Edges**: Pearson correlation between regional time series
- **Output**: Weighted undirected graph G = (V, E, w)
#### 2. Persistent Homology Pipeline
**Step 1: Filtration Construction**
```
For threshold ε from 0 to max_weight:
    G_ε = subgraph with edges w > ε
    Track birth/death of topological features
```
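The H0 part of this filtration can be sketched directly: sweep the threshold over a toy weighted graph and count connected components with a union-find. A minimal, self-contained sketch (the 4-node correlation values are made up for illustration; the real pipeline uses GUDHI, see Step 3 below):

```python
import numpy as np

def count_components(weights, eps):
    """Union-find count of connected components in the subgraph with w > eps."""
    n = weights.shape[0]
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for i in range(n):
        for j in range(i + 1, n):
            if weights[i, j] > eps:
                parent[find(i)] = find(j)  # union the two components
    return len({find(i) for i in range(n)})

# Toy 4-node correlation matrix (symmetric, illustrative values only)
W = np.array([[0.0, 0.9, 0.2, 0.1],
              [0.9, 0.0, 0.8, 0.1],
              [0.2, 0.8, 0.0, 0.1],
              [0.1, 0.1, 0.1, 0.0]])

# Betti-0 across thresholds: the network fragments as eps rises
betti0 = [count_components(W, eps) for eps in (0.05, 0.5, 0.95)]
print(betti0)  # → [1, 2, 4]
```

As ε increases, weak edges drop out and the component count climbs from 1 (fully connected) to 4 (fully fragmented); the thresholds at which components merge or split are exactly the birth/death values tracked by persistent homology.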
**Step 2: Multi-Scale Feature Extraction**
- **H0 (0-dimensional)**: Connected components
  - Tracks network fragmentation
  - Betti-0: number of connected components
- **H1 (1-dimensional)**: Cycles/loops
  - Captures recurrent connectivity patterns
  - Betti-1: number of independent cycles
- **H2 (2-dimensional)**: Cavities/voids
  - Represents enclosed network spaces
  - Betti-2: number of cavities (voids enclosed by 2D surfaces)
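For a plain graph (no filled triangles or higher simplices), the first two Betti numbers have a closed form: Betti-0 is the number of connected components C, and Betti-1 = E − V + C is the rank of the cycle space. A minimal pure-Python sketch of that special case (H2 requires genuine simplicial machinery such as GUDHI and has no such shortcut):

```python
def betti_graph(n_nodes, edges):
    """Betti-0 and Betti-1 of an undirected graph.

    Betti-0 = number of connected components C (found via DFS);
    Betti-1 = E - V + C (rank of the cycle space).
    """
    adj = {v: [] for v in range(n_nodes)}
    for u, v in edges:
        adj[u].append(v)
        adj[v].append(u)
    seen, components = set(), 0
    for start in range(n_nodes):
        if start in seen:
            continue
        components += 1
        stack = [start]
        while stack:
            node = stack.pop()
            if node in seen:
                continue
            seen.add(node)
            stack.extend(adj[node])
    return components, len(edges) - n_nodes + components

# A 4-cycle plus an isolated edge: two components, one independent loop
b0, b1 = betti_graph(6, [(0, 1), (1, 2), (2, 3), (3, 0), (4, 5)])
print(b0, b1)  # → 2 1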
#### 3. Persistence Diagram Features
Extract summary statistics from persistence diagrams:
- **Persistence entropy**: Shannon entropy of persistence values
- **Persistence landscape**: Functional summaries for ML
- **Betti curves**: Evolution of Betti numbers across scales
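The entropy and Betti-curve summaries can be computed directly from (birth, death) pairs. A minimal NumPy sketch on a made-up toy diagram (production code would typically use `gudhi.representations` instead):

```python
import numpy as np

def persistence_entropy(diagram):
    """Shannon entropy of the normalized bar lifetimes."""
    lifetimes = np.array([death - birth for birth, death in diagram])
    p = lifetimes / lifetimes.sum()
    return float(-np.sum(p * np.log(p)))

def betti_curve(diagram, grid):
    """Number of bars alive at each grid value t (birth <= t < death)."""
    return [sum(1 for birth, death in diagram if birth <= t < death)
            for t in grid]

# Toy H1 diagram: two cycles with equal lifetimes
diagram = [(0.1, 0.4), (0.3, 0.6)]
print(persistence_entropy(diagram))            # equal bars → log(2) ≈ 0.693
print(betti_curve(diagram, [0.0, 0.35, 0.7]))  # → [0, 2, 0]
```

Equal lifetimes maximize the entropy (log of the bar count), while a diagram dominated by one long bar drives it toward zero; the Betti curve samples how many features are alive at each scale.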
#### 4. Classification Architecture
```
fMRI → Preprocessing → Network Construction
→ Persistent Homology → Feature Vector
→ Random Forest/SVM → AD Diagnosis
```
### Higher-Order Feature Interpretability
| Feature Type | Neurobiological Meaning | AD Relevance |
|-------------|------------------------|--------------|
| Connected Components | Network fragmentation | Disrupted integration |
| Cycles | Recurrent circuits | Altered feedback loops |
| Cavities | Enclosed information spaces | Compromised segregation |
## Implementation Guide
### Prerequisites
```bash
# Required libraries
pip install gudhi         # persistent homology
pip install networkx      # graph construction
pip install nilearn       # fMRI preprocessing
pip install scikit-learn  # classification
```
### Step-by-Step Implementation
#### Step 1: fMRI Preprocessing
```python
from nilearn import datasets
from nilearn.maskers import NiftiLabelsMasker
import numpy as np

# Load atlas; func_data is the subject's preprocessed 4D fMRI image
atlas = datasets.fetch_atlas_schaefer_2018(n_rois=100)
func_data = load_fmri_data(subject_path)  # placeholder: load your own data here

# Extract regional time series (n_timepoints x n_rois)
masker = NiftiLabelsMasker(labels_img=atlas.maps, standardize=True)
time_series = masker.fit_transform(func_data)
```
#### Step 2: Network Construction
```python
import numpy as np

# Correlation matrix between regional time series (n_rois x n_rois)
corr_matrix = np.corrcoef(time_series.T)

# Optional: threshold and binarize (the persistence pipeline below
# operates on the weighted matrix directly)
threshold = 0.3
adj_matrix = (np.abs(corr_matrix) > threshold).astype(int)
np.fill_diagonal(adj_matrix, 0)
```
#### Step 3: Persistent Homology Computation
```python
import gudhi as gd

# Convert correlation to a distance matrix and build the Rips filtration
rips_complex = gd.RipsComplex(
    distance_matrix=1 - np.abs(corr_matrix),
    max_edge_length=1.0
)
simplex_tree = rips_complex.create_simplex_tree(max_dimension=2)

# Compute persistence once; no need to recompute inside the loop below
persistence = simplex_tree.persistence()

# Extract Betti numbers across the filtration
betti_numbers = []
for epsilon in np.linspace(0, 1, 50):
    betti = simplex_tree.persistent_betti_numbers(epsilon, epsilon)
    betti_numbers.append(betti)
```
#### Step 4: Feature Extraction
```python
def extract_topological_features(persistence):
    """Extract summary features from a persistence diagram."""
    features = {}
    # Separate intervals by homology dimension
    dim0 = [p for p in persistence if p[0] == 0]  # H0: components
    dim1 = [p for p in persistence if p[0] == 1]  # H1: cycles
    dim2 = [p for p in persistence if p[0] == 2]  # H2: cavities
    # Persistence statistics for each dimension
    for dim_name, dim_pers in [('h0', dim0), ('h1', dim1), ('h2', dim2)]:
        persistences = [death - birth for _, (birth, death) in dim_pers
                        if death != float('inf')]
        if persistences:  # guard: all bars may be infinite or absent
            features[f'{dim_name}_mean_persistence'] = np.mean(persistences)
            features[f'{dim_name}_entropy'] = compute_entropy(persistences)
            features[f'{dim_name}_count'] = len(persistences)
    return features

def compute_entropy(persistence_values):
    """Compute persistence entropy (Shannon entropy of normalized lifetimes)."""
    if not persistence_values:
        return 0.0
    probs = np.array(persistence_values) / sum(persistence_values)
    return -np.sum(probs * np.log(probs + 1e-10))
```
#### Step 5: Classification
```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Build feature matrix; sorting keys keeps column order consistent across subjects
X, y = [], []
for subject in subjects:
    features = extract_subject_features(subject)  # dict of topological features
    X.append([features[k] for k in sorted(features)])
    y.append(subject.diagnosis)

# Train and evaluate classifier
clf = RandomForestClassifier(n_estimators=100, random_state=42)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Cross-validation accuracy: {scores.mean():.3f} ± {scores.std():.3f}")
```
## Applications
- **Alzheimer's Disease diagnosis** from resting-state fMRI
- **Mild Cognitive Impairment (MCI)** detection
- **Disease progression tracking** through longitudinal analysis
- **Treatment response prediction**
## Pitfalls
- **Computational complexity**: Persistent homology scales O(n³); use approximations for large networks
- **Parameter sensitivity**: Edge threshold selection affects results; consider multi-threshold approach
- **Interpretation**: Higher-order features need domain expertise for clinical interpretation
- **Small sample sizes**: Neuroimaging datasets often limited; use regularization
## Related Skills
- homology-morphometry-brain-atrophy
- higher-order-brain-networks
- brain-graph-neural
- combinatorial-complex-brain-fmri
## References
```bibtex
@article{zhao2025higherorder,
title={Extracting Interpretable Higher-Order Topological Features across Multiple Scales for Alzheimer's Disease Classification},
author={Zhao, Dengyi and Li, Shanyong and Wang, Yunping and Wang, Chenfei and Zhou, Zhiheng},
journal={arXiv preprint arXiv:2509.14634},
year={2025}
}
```