Clone the repo or copy the file directly into your Claude Code skills directory:

`npx versuz@latest install hiyenwong-ai-collection-collection-skills-elsa-hierarchical-brain-networks`

or

`git clone https://github.com/hiyenwong/ai_collection.git`

`cp ai_collection/SKILL.MD ~/.claude/skills/hiyenwong-ai-collection-collection-skills-elsa-hierarchical-brain-networks/SKILL.md`

---
name: elsa-hierarchical-brain-networks---emergent-language-symbolic-autoencoder
description: Model hierarchically organized brain networks from resting-state fMRI with an Emergent Language Symbolic Autoencoder (ELSA) that produces symbolic sentences and imagery for clinical interpretability
---
# elsa-hierarchical-brain-networks - Emergent Language Symbolic Autoencoder (ELSA) with Weak Supervision to Model Hierarchical Brain Networks
## Description
A symbolic autoencoder, informed by weak supervision and an emergent language framework, that produces hierarchical clusters and corresponding imagery described by symbolic sentences, improving the clinical interpretability of hierarchically organized brain networks. A generalized hierarchical loss function ensures that both the sentences and the images accurately reflect the hierarchical structure of functional brain networks derived from resting-state fMRI.
**Source:** arXiv:2404.10031v1
**Utility:** 0.91
## Activation Keywords
- ELSA brain networks
- emergent language autoencoder
- hierarchical brain networks
- symbolic autoencoder
- weak supervision fMRI
- hierarchical clustering brain
- clinical interpretability neuroimaging
## Core Concepts
### 1. ELSA Framework Overview
```
rs-fMRI Data → Hierarchical Clustering → Symbolic Sentences + Imagery → Clinical Interpretation
```
**Key innovation:** Moves beyond flat classifiers to produce hierarchical, interpretable representations.
### 2. Architecture Components
| Component | Function |
|-----------|----------|
| **Hierarchical Autoencoder** | Learns hierarchical representations of brain networks |
| **Emergent Language Framework** | Generates symbolic sentences describing networks |
| **Imagery Generator** | Produces visual representations |
| **Weak Supervision** | Guides hierarchical learning |
### 3. Hierarchical Loss Function
**Generalized hierarchical loss** (schematic form below):
- Ensures sentences reflect hierarchy
- Ensures images reflect hierarchy
- Multi-scale representation from broad to granular
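Schematically, and matching the weights used in the Step 4 code below (the paper's exact formulation may differ), the total objective has the form:

```latex
\mathcal{L}_{\text{total}}
  = \mathcal{L}_{\text{rec}}
  + \lambda_{\text{hier}} \left( \mathcal{L}_{\text{hier}} + \mathcal{L}_{\text{cons}} \right)
```

where \(\mathcal{L}_{\text{rec}}\) is the reconstruction loss, \(\mathcal{L}_{\text{hier}}\) the per-level hierarchical consistency loss, and \(\mathcal{L}_{\text{cons}}\) the sentence/image consistency loss.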
### 4. Hierarchical Consistency Metric
**Quantitative assessment** (formula below):
- Measures hierarchical consistency
- Achieves >97% consistency for brain network images
- Validates symbolic representations
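As computed in Step 5 below, the metric is the fraction of per-level cluster assignments that agree with the hierarchical labels:

```latex
\text{consistency}
  = \frac{1}{L N} \sum_{\ell=1}^{L} \sum_{n=1}^{N}
    \mathbb{1}\left[ \hat{c}^{(\ell)}_{n} = y^{(\ell)}_{n} \right]
```

where \(L\) is the number of hierarchy levels, \(N\) the number of test samples, \(\hat{c}^{(\ell)}_{n}\) the predicted cluster of sample \(n\) at level \(\ell\), and \(y^{(\ell)}_{n}\) the corresponding label.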
### 5. Clinical Interpretability
| Level | Representation |
|-------|---------------|
| **Coarse** | Broad functional networks |
| **Intermediate** | Sub-network structures |
| **Fine** | Detailed local patterns |
## Step-by-Step Instructions
### 1. Hierarchical Autoencoder Architecture
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class HierarchicalAutoencoder(nn.Module):
    """
    Hierarchical autoencoder for brain network representation.

    Args:
        input_dim: Input dimension (flattened brain network)
        hidden_dims: List of hidden dimensions, one per hierarchy level
        latent_dim: Latent space dimension
    """
    def __init__(self, input_dim, hidden_dims, latent_dim):
        super().__init__()
        self.n_levels = len(hidden_dims)

        # Encoder (hierarchical): one block per hierarchy level
        self.encoders = nn.ModuleList()
        prev_dim = input_dim
        for h_dim in hidden_dims:
            self.encoders.append(nn.Sequential(
                nn.Linear(prev_dim, h_dim),
                nn.ReLU(),
                nn.BatchNorm1d(h_dim)
            ))
            prev_dim = h_dim

        # Latent projection
        self.latent = nn.Linear(prev_dim, latent_dim)

        # Decoder (hierarchical): mirrors the encoder
        self.decoders = nn.ModuleList()
        prev_dim = latent_dim
        for h_dim in reversed(hidden_dims):
            self.decoders.append(nn.Sequential(
                nn.Linear(prev_dim, h_dim),
                nn.ReLU(),
                nn.BatchNorm1d(h_dim)
            ))
            prev_dim = h_dim

        # Output projection back to input space
        self.output = nn.Linear(prev_dim, input_dim)

    def encode(self, x):
        """
        Hierarchical encoding.

        Args:
            x: Input brain network [batch, input_dim]

        Returns:
            latent: Latent representation
            hierarchy_features: Features at each hierarchy level
        """
        hierarchy_features = []
        for encoder in self.encoders:
            x = encoder(x)
            hierarchy_features.append(x)
        latent = self.latent(x)
        return latent, hierarchy_features

    def decode(self, latent):
        """
        Hierarchical decoding.

        Args:
            latent: Latent representation [batch, latent_dim]

        Returns:
            reconstruction: Reconstructed brain network
        """
        x = latent
        for decoder in self.decoders:
            x = decoder(x)
        reconstruction = self.output(x)
        return reconstruction

    def forward(self, x):
        latent, hierarchy_features = self.encode(x)
        reconstruction = self.decode(latent)
        return reconstruction, latent, hierarchy_features
```
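A quick shape check for the autoencoder; the dimensions below are illustrative and match the training example later in this file:

```python
# Illustrative shape check (dimensions match the training example below)
ae = HierarchicalAutoencoder(input_dim=1000, hidden_dims=[512, 256, 128], latent_dim=64)
x = torch.randn(8, 1000)          # batch of 8 flattened brain networks
recon, latent, feats = ae(x)
print(recon.shape)                # torch.Size([8, 1000])
print(latent.shape)               # torch.Size([8, 64])
print([f.shape for f in feats])   # per-level features: [8, 512], [8, 256], [8, 128]
```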
### 2. Emergent Language Framework
```python
class EmergentLanguageFramework(nn.Module):
    """
    Generate symbolic sentences describing brain networks.

    Args:
        latent_dim: Latent space dimension
        vocab_size: Vocabulary size
        max_seq_len: Maximum sentence length
    """
    def __init__(self, latent_dim, vocab_size, max_seq_len=50):
        super().__init__()
        self.vocab_size = vocab_size
        self.max_seq_len = max_seq_len

        # Sentence generator: maps latent to per-position vocabulary logits
        self.sentence_generator = nn.Sequential(
            nn.Linear(latent_dim, 256),
            nn.ReLU(),
            nn.Linear(256, vocab_size * max_seq_len)
        )

        # Symbol embedding
        self.symbol_embedding = nn.Embedding(vocab_size, 64)

    def generate_sentence(self, latent):
        """
        Generate symbolic sentence from latent representation.

        Args:
            latent: Latent representation [batch, latent_dim]

        Returns:
            sentence: Symbolic sentence [batch, seq_len]
            probs: Sentence probabilities [batch, seq_len, vocab_size]
        """
        # Generate logits and reshape to [batch, seq_len, vocab_size]
        logits = self.sentence_generator(latent)
        logits = logits.view(-1, self.max_seq_len, self.vocab_size)

        # Greedy (argmax) decoding; probs carry the differentiable signal
        probs = F.softmax(logits, dim=-1)
        sentence = torch.argmax(probs, dim=-1)
        return sentence, probs

    def sentence_to_embedding(self, sentence):
        """
        Convert sentence to embedding.

        Args:
            sentence: Symbolic sentence [batch, seq_len]

        Returns:
            embedding: Sentence embedding [batch, seq_len, 64]
        """
        return self.symbol_embedding(sentence)
```
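A minimal smoke test for the sentence generator (sizes are illustrative). Note that the argmax decoding is non-differentiable, so training losses should consume `probs` rather than `sentence`:

```python
lang = EmergentLanguageFramework(latent_dim=64, vocab_size=100, max_seq_len=50)
latent = torch.randn(8, 64)
sentence, probs = lang.generate_sentence(latent)
print(sentence.shape)    # torch.Size([8, 50]) - discrete symbol IDs
print(probs.shape)       # torch.Size([8, 50, 100])
emb = lang.sentence_to_embedding(sentence)
print(emb.shape)         # torch.Size([8, 50, 64])
```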
### 3. Imagery Generator
```python
class ImageryGenerator(nn.Module):
    """
    Generate visual representations of brain networks.

    Args:
        latent_dim: Latent space dimension
        image_size: Output image size (height, width)
    """
    def __init__(self, latent_dim, image_size=(64, 64)):
        super().__init__()
        self.image_size = image_size

        # Initial projection to a 4x4 feature map so that the four stride-2
        # deconvolutions below (4 -> 8 -> 16 -> 32 -> 64) match the 64x64 default
        self.proj = nn.Linear(latent_dim, 512 * 4 * 4)

        # Deconvolution layers
        self.deconv = nn.Sequential(
            nn.ConvTranspose2d(512, 256, 4, 2, 1),
            nn.BatchNorm2d(256),
            nn.ReLU(),
            nn.ConvTranspose2d(256, 128, 4, 2, 1),
            nn.BatchNorm2d(128),
            nn.ReLU(),
            nn.ConvTranspose2d(128, 64, 4, 2, 1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            nn.ConvTranspose2d(64, 1, 4, 2, 1),
            nn.Sigmoid()
        )

    def forward(self, latent):
        """
        Generate imagery from latent representation.

        Args:
            latent: Latent representation [batch, latent_dim]

        Returns:
            image: Generated image [batch, 1, H, W]
        """
        # Project and reshape to [batch, 512, 4, 4]
        x = self.proj(latent)
        x = x.view(-1, 512, 4, 4)

        # Upsample to the output image
        image = self.deconv(x)
        return image
```
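A corresponding shape check for the imagery generator; with the 4x4 starting grid above, the four stride-2 deconvolutions produce the default 64x64 output:

```python
gen = ImageryGenerator(latent_dim=64)   # default image_size=(64, 64)
latent = torch.randn(8, 64)
image = gen(latent)
print(image.shape)                      # torch.Size([8, 1, 64, 64])
```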
### 4. Hierarchical Loss Function
```python
class HierarchicalLoss(nn.Module):
    """
    Generalized hierarchical loss for brain network modeling.

    Args:
        n_levels: Number of hierarchy levels
        lambda_hierarchy: Weight for hierarchical consistency
    """
    def __init__(self, n_levels, lambda_hierarchy=0.5):
        super().__init__()
        self.n_levels = n_levels
        self.lambda_hierarchy = lambda_hierarchy

    def reconstruction_loss(self, reconstruction, target):
        """
        Reconstruction loss (MSE between reconstruction and target).
        """
        return F.mse_loss(reconstruction, target)

    def hierarchical_consistency_loss(self, hierarchy_features, labels):
        """
        Hierarchical consistency loss.

        Treats the features at each level as class logits, so the label at
        level i must index within hidden_dims[i].

        Args:
            hierarchy_features: Features at each hierarchy level
            labels: Hierarchical labels [batch, n_levels]

        Returns:
            loss: Hierarchical consistency loss
        """
        loss = 0.0
        for i, features in enumerate(hierarchy_features):
            # Compute consistency at each level;
            # higher levels should agree with the coarser labels
            level_loss = F.cross_entropy(features, labels[:, i])
            loss += level_loss
        return loss / len(hierarchy_features)

    def sentence_image_consistency_loss(self, sentence_probs, image, target_image):
        """
        Ensure sentence and image are consistent.

        Simplified here to an image reconstruction term; a fuller version
        would also align sentence_probs with the image (e.g. via KL divergence)
        so both represent the same hierarchy.
        """
        image_loss = F.mse_loss(image, target_image)
        consistency_loss = image_loss
        return consistency_loss

    def forward(self, reconstruction, target, hierarchy_features, labels,
                sentence_probs, image, target_image):
        """
        Total hierarchical loss.

        Returns:
            total_loss: Combined loss
            loss_dict: Individual loss components
        """
        rec_loss = self.reconstruction_loss(reconstruction, target)
        hier_loss = self.hierarchical_consistency_loss(hierarchy_features, labels)
        cons_loss = self.sentence_image_consistency_loss(sentence_probs, image, target_image)

        total_loss = rec_loss + self.lambda_hierarchy * (hier_loss + cons_loss)
        loss_dict = {
            'reconstruction': rec_loss.item(),
            'hierarchical': hier_loss.item(),
            'consistency': cons_loss.item()
        }
        return total_loss, loss_dict
```
### 5. Hierarchical Consistency Metric
```python
def compute_hierarchical_consistency(model, test_data, hierarchy_labels):
    """
    Compute hierarchical consistency of symbolic representations.

    Args:
        model: ELSA model
        test_data: Iterable of test brain network batches
        hierarchy_labels: Iterable of hierarchical labels per batch [batch, n_levels]

    Returns:
        consistency: Hierarchical consistency score (0-1)
    """
    model.eval()
    correct = 0
    total = 0
    with torch.no_grad():
        for batch, labels in zip(test_data, hierarchy_labels):
            # Get latent and per-level hierarchy features
            latent, hierarchy_features = model.autoencoder.encode(batch)

            # Generate sentence and image from the shared latent
            sentence, _ = model.language.generate_sentence(latent)
            image = model.imagery(latent)

            # Check whether the hierarchy is preserved:
            # at each level, verify the cluster assignment matches the label
            for i, features in enumerate(hierarchy_features):
                # Treat per-level features as cluster logits
                clusters = torch.argmax(features, dim=-1)
                matches = (clusters == labels[:, i]).sum().item()
                correct += matches
                total += labels.shape[0]
    consistency = correct / total
    return consistency
```
### 6. Full ELSA Model
```python
class ELSAModel(nn.Module):
    """
    Complete ELSA model for hierarchical brain network modeling.

    Args:
        input_dim: Input brain network dimension
        hidden_dims: Hidden dimensions for each hierarchy level
        latent_dim: Latent space dimension
        vocab_size: Vocabulary size for symbolic sentences
        image_size: Output image size
    """
    def __init__(self, input_dim, hidden_dims, latent_dim, vocab_size=100, image_size=(64, 64)):
        super().__init__()
        self.autoencoder = HierarchicalAutoencoder(input_dim, hidden_dims, latent_dim)
        self.language = EmergentLanguageFramework(latent_dim, vocab_size)
        self.imagery = ImageryGenerator(latent_dim, image_size)

    def forward(self, x):
        """
        Full ELSA forward pass.

        Args:
            x: Input brain network [batch, input_dim]

        Returns:
            outputs: Dictionary with all outputs
        """
        # Encode and reconstruct
        reconstruction, latent, hierarchy_features = self.autoencoder(x)

        # Generate symbolic sentence from the shared latent
        sentence, sentence_probs = self.language.generate_sentence(latent)

        # Generate imagery from the same latent
        image = self.imagery(latent)

        outputs = {
            'reconstruction': reconstruction,
            'latent': latent,
            'hierarchy_features': hierarchy_features,
            'sentence': sentence,
            'sentence_probs': sentence_probs,
            'image': image
        }
        return outputs
```
## Tools Used
- `torch` - PyTorch neural networks
- `numpy` - Numerical computations
- `sklearn` - Clustering algorithms (see the sketch after this list)
- `exec` - Run training scripts
- `read` - Load fMRI data
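The implementation above does not call `sklearn` directly. One way to use it, consistent with the tools list, is to hierarchically cluster the learned latents for inspection; this is a sketch under that assumption, not part of the paper's pipeline, and `cluster_latents` is a hypothetical helper:

```python
from sklearn.cluster import AgglomerativeClustering

def cluster_latents(latents, n_clusters_per_level=(4, 8, 16)):
    """Hypothetical helper: cluster ELSA latents at several granularities.

    latents: [n_samples, latent_dim] array, e.g. obtained from
    model.autoencoder.encode(...) and detached to NumPy.
    """
    assignments = {}
    for k in n_clusters_per_level:
        # Agglomerative (bottom-up) clustering mirrors the hierarchical framing
        assignments[k] = AgglomerativeClustering(n_clusters=k).fit_predict(latents)
    return assignments
```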
## Example Use Cases
### 1. Training ELSA
```python
# Create ELSA model
model = ELSAModel(
    input_dim=1000,
    hidden_dims=[512, 256, 128],
    latent_dim=64,
    vocab_size=100
)

# Create loss function
criterion = HierarchicalLoss(n_levels=3)

# Training loop (brain_networks, hierarchy_labels, and target_images
# are assumed to be pre-loaded tensors)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
for epoch in range(100):
    outputs = model(brain_networks)
    loss, loss_dict = criterion(
        outputs['reconstruction'], brain_networks,
        outputs['hierarchy_features'], hierarchy_labels,
        outputs['sentence_probs'], outputs['image'], target_images
    )
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    print(f"Epoch {epoch}: {loss_dict}")
```
### 2. Evaluating Consistency
```python
# Compute hierarchical consistency
consistency = compute_hierarchical_consistency(model, test_data, test_labels)
print(f"Hierarchical Consistency: {consistency:.2%}")
# Expected: >97%
```
### 3. Generating Interpretations
```python
# Generate sentence and image for a single brain network [1, input_dim]
model.eval()  # BatchNorm layers require eval mode for single-sample input
outputs = model(brain_network)
sentence = outputs['sentence']
image = outputs['image']

# Decode sentence to text (vocab_decoder is user-supplied; see sketch below)
symbols = vocab_decoder(sentence)
print(f"Brain Network: {' '.join(symbols)}")
```
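`vocab_decoder` above is user-supplied and not defined in this skill. A minimal stand-in, assuming a `vocab` list of length `vocab_size` that maps each symbol ID to a token string, might look like:

```python
# Hypothetical stand-in for the user-supplied vocab_decoder
def make_vocab_decoder(vocab):
    def vocab_decoder(sentence):
        # sentence: LongTensor of symbol IDs, e.g. shape [1, seq_len]
        return [vocab[i] for i in sentence.flatten().tolist()]
    return vocab_decoder

vocab_decoder = make_vocab_decoder([f"sym{i}" for i in range(100)])
```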
## Instructions for Agents
Follow the Step-by-Step Instructions above in order: build the hierarchical autoencoder (Step 1), add the emergent language framework (Step 2) and imagery generator (Step 3), train with the hierarchical loss (Step 4), evaluate hierarchical consistency (Step 5), and assemble the full ELSA model (Step 6).
## Examples
### Example 1: Basic Application
**User:** I need to apply elsa-hierarchical-brain-networks (ELSA with weak supervision for hierarchical brain networks) to my analysis.
**Agent:** I'll help you apply elsa-hierarchical-brain-networks. First, let me understand your specific use case...
**Context:** Applying the ELSA methodology to resting-state fMRI data
### Example 2: Advanced Application
**User:** What are the key considerations for elsa-hierarchical-brain-networks?
**Agent:** Based on the methodology, the key considerations are matching `hidden_dims` (and the loss's `n_levels`) to the depth of your hierarchical labels, tuning `lambda_hierarchy` to balance reconstruction against hierarchical consistency, and validating results with the hierarchical consistency metric (>97% is reported for brain network images).
## Related Skills
- `brain-network-joint-embedding` - Brain network embedding
- `functional-connectome-fingerprint` - Functional connectivity
- `multimodal-brain-connectivity-gnn` - Multimodal brain analysis
## References
- Latheef, A. A. P. (2024). "Emergent Language Symbolic Autoencoder (ELSA) with Weak Supervision to Model Hierarchical Brain Networks" arXiv:2404.10031v1 [q-bio.NC]
---
**Created:** 2026-03-29 19:05
**Author:** Aerial (from arXiv:2404.10031v1)