---
name: in-context-brain-decoding
description: "Meta-learning approach for training-free cross-subject brain decoding from fMRI. Enables zero-shot generalization to novel subjects by conditioning on a few image-brain activation examples. Use when working with: (1) Cross-subject fMRI decoding, (2) Meta-learning for neuroscience, (3) Visual reconstruction from brain signals, (4) Subject-invariant neural representations. Activation: brain decoding, meta-learning fMRI, cross-subject decoding, in-context brain mapping."
---
# In-Context Brain Decoding
Meta-learning framework for training-free cross-subject brain decoding via in-context learning. The model generalizes to novel subjects without fine-tuning by conditioning on a few image-fMRI example pairs.
## Core Concept
Traditional brain decoding requires training separate models or fine-tuning for each subject due to substantial variability in neural representations across individuals. This approach uses meta-learning to learn a prior over neural encoding patterns, enabling zero-shot adaptation to new subjects.
## Methodology
### In-Context Learning Framework
```python
# High-level approach:
# 1. Meta-train on multiple subjects to learn subject-agnostic patterns.
# 2. At inference, condition on K examples from the new subject.
# 3. The model infers that subject's unique neural encoding in-context.
from typing import List, Tuple

import numpy as np

def decode_brain_activity(
    target_fmri: np.ndarray,
    context_examples: List[Tuple["Image", "fMRI"]],  # K examples from new subject
    meta_model: "MetaLearner",
) -> "DecodedImage":
    # Model infers the subject-specific encoding from the context examples
    subject_embedding = meta_model.infer_subject_encoding(context_examples)
    # Decode the target activity using the inferred encoding
    decoded = meta_model.decode(target_fmri, subject_embedding)
    return decoded
```
### Key Components
1. **Meta-Learning Objective**
- Learn initialization that enables fast adaptation
- Optimize for few-shot generalization across subjects
- Minimize expected loss over subject distribution
2. **Subject Encoding Inference**
- Extract subject-specific neural encoding patterns
- Use attention mechanisms over context examples
- Encode variability in spatial and temporal responses
3. **Decoding Architecture**
- Condition generation on inferred subject encoding
- Use diffusion models or VAEs for visual reconstruction
- Preserve semantic and perceptual features
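The attention-based subject-encoding step described above can be sketched as scaled dot-product self-attention over the K context examples, pooled into a single subject embedding. The following NumPy illustration is a minimal sketch, not the paper's exact architecture; the feature dimension, the self-attention form, and the mean-pooling choice are all assumptions:

```python
import numpy as np

def softmax(x: np.ndarray, axis: int = -1) -> np.ndarray:
    # Numerically stable softmax
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def infer_subject_embedding(context_feats: np.ndarray) -> np.ndarray:
    """Attend over K context examples, then mean-pool to one embedding.

    context_feats: (K, d) features of the image-fMRI context pairs.
    Returns: (d,) subject-level embedding.
    """
    K, d = context_feats.shape
    # Self-attention: each context example attends to every other example
    scores = context_feats @ context_feats.T / np.sqrt(d)  # (K, K)
    attn = softmax(scores, axis=-1)                        # rows sum to 1
    attended = attn @ context_feats                        # (K, d)
    # Pool across examples into a single subject embedding
    return attended.mean(axis=0)

rng = np.random.default_rng(0)
emb = infer_subject_embedding(rng.standard_normal((5, 16)))
print(emb.shape)  # (16,)
```

In a full model this pooling would feed the latent subject embedding that conditions the decoder.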
## Implementation Guide
### Data Preparation
```python
# fMRI preprocessing pipeline
import numpy as np

class fMRIPreprocessor:
    def __init__(self, tr: float = 2.0, standardize: bool = True):
        self.tr = tr  # Repetition time in seconds
        self.standardize = standardize

    def preprocess(self, raw_bold: np.ndarray) -> np.ndarray:
        # Typical pipeline (usually delegated to tools such as fMRIPrep or nilearn):
        # 1. Slice timing correction
        # 2. Motion correction
        # 3. Spatial normalization to MNI space
        # 4. Temporal filtering (high-pass)
        # 5. Z-score normalization
        raise NotImplementedError
```
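Steps 4-5 of the pipeline can be illustrated with a minimal NumPy stand-in: low-order polynomial detrending as a crude high-pass filter, followed by per-voxel z-scoring. Real pipelines would use dedicated tools such as fMRIPrep or nilearn; this sketch only shows the shape of the operation:

```python
import numpy as np

def detrend_and_zscore(bold: np.ndarray, order: int = 2) -> np.ndarray:
    """Remove a slow polynomial trend per voxel, then z-score each voxel.

    bold: (T, V) array of T timepoints x V voxels.
    """
    T, _ = bold.shape
    t = np.arange(T, dtype=float)
    X = np.vander(t, order + 1)       # (T, order+1) polynomial design matrix
    beta, *_ = np.linalg.lstsq(X, bold, rcond=None)
    detrended = bold - X @ beta       # crude high-pass: drop the slow trend
    return (detrended - detrended.mean(0)) / (detrended.std(0) + 1e-8)

rng = np.random.default_rng(0)
t = np.arange(200, dtype=float)
drift = 0.01 * t[:, None]             # simulated slow scanner drift
bold = drift + rng.standard_normal((200, 50))
z = detrend_and_zscore(bold)
print(z.shape)  # (200, 50)
```

After this step each voxel's timecourse has approximately zero mean and unit variance, which is what the z-score normalization in step 5 requires.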
### Model Architecture
```python
import torch
import torch.nn as nn

class InContextBrainDecoder(nn.Module):
    """Meta-learned brain decoder with in-context subject adaptation."""

    def __init__(
        self,
        fmri_dim: int = 10000,   # Number of voxels
        latent_dim: int = 512,   # Subject embedding dimension
        num_context: int = 5,    # Number of context examples
    ):
        super().__init__()
        self.num_context = num_context
        # Subject encoder: attends over the K context examples
        self.subject_encoder = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model=fmri_dim, nhead=8, batch_first=True),
            num_layers=4,
        )
        # Projects the encoded context to a latent subject embedding
        self.subject_embedder = nn.Sequential(
            nn.Linear(fmri_dim, latent_dim),
            nn.ReLU(),
            nn.Linear(latent_dim, latent_dim),
        )
        # Decoder conditioned on the subject embedding
        # (ConditionalDiffusionDecoder is assumed to be defined elsewhere)
        self.decoder = ConditionalDiffusionDecoder(
            condition_dim=latent_dim,
            output_size=(3, 224, 224),
        )

    def encode_subject(
        self, context_fmri: torch.Tensor, context_images: torch.Tensor
    ) -> torch.Tensor:
        # Attend over context examples, then pool to one embedding per subject
        encoded = self.subject_encoder(context_fmri)       # (B, K, fmri_dim)
        return self.subject_embedder(encoded.mean(dim=1))  # (B, latent_dim)

    def forward(
        self,
        target_fmri: torch.Tensor,
        context_fmri: torch.Tensor,
        context_images: torch.Tensor,
    ) -> torch.Tensor:
        # Infer the subject encoding from context, then decode the target fMRI
        subject_encoding = self.encode_subject(context_fmri, context_images)
        reconstruction = self.decoder(target_fmri, subject_encoding)
        return reconstruction
```
### Training Procedure
```python
import torch
import torch.nn.functional as F

class MetaTrainingLoop:
    """Meta-training for cross-subject generalization."""

    def __init__(self, model, meta_lr=1e-4, inner_lr=1e-3):
        self.model = model
        self.meta_optimizer = torch.optim.Adam(model.parameters(), lr=meta_lr)
        self.inner_lr = inner_lr

    def meta_train_step(self, batch):  # batch: list of per-subject datasets
        """MAML-style meta-training step."""
        meta_loss = 0.0
        for subject_data in batch:
            # Sample context (support) and target (query) examples
            context = subject_data.sample_context(k=5)
            target = subject_data.sample_target()
            # Inner loop: adapt to the subject using the context examples
            # (inner_loop_adaptation takes gradient steps with self.inner_lr)
            adapted_params = self.inner_loop_adaptation(context)
            # Outer loop: evaluate the adapted parameters on the target
            prediction = self.model.forward_with_params(target.fmri, adapted_params)
            meta_loss = meta_loss + F.mse_loss(prediction, target.image)
        # Meta-gradient update across all subjects in the batch
        self.meta_optimizer.zero_grad()
        meta_loss.backward()
        self.meta_optimizer.step()
        return meta_loss.item()
```
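The inner/outer structure of the loop above can be made concrete with a tiny first-order example on linear regression: each "subject" is a synthetic task with its own weight vector, the inner loop adapts on a context set, and the outer loop averages gradients taken at the adapted parameters. This is the first-order MAML approximation (the full method also differentiates through the inner update); all task parameters here are synthetic:

```python
import numpy as np

def mse_grad(w, X, y):
    """Gradient of the mean squared error 0.5*mean((Xw - y)^2) w.r.t. w."""
    return X.T @ (X @ w - y) / len(y)

def fomaml_step(w, tasks, inner_lr=0.1, meta_lr=0.05):
    """One first-order MAML meta-update over a batch of linear tasks.

    tasks: list of (X_context, y_context, X_target, y_target) tuples.
    """
    meta_grad = np.zeros_like(w)
    for Xc, yc, Xt, yt in tasks:
        # Inner loop: one gradient step on the context (support) set
        w_adapted = w - inner_lr * mse_grad(w, Xc, yc)
        # Outer loop (first-order): target-set gradient taken at w_adapted
        meta_grad += mse_grad(w_adapted, Xt, yt)
    return w - meta_lr * meta_grad / len(tasks)

rng = np.random.default_rng(0)
d = 4
w_center = np.array([1.0, -1.0, 0.5, 0.0])  # structure shared across subjects
w = np.zeros(d)
for _ in range(200):
    tasks = []
    for _ in range(4):  # 4 synthetic "subjects" per meta-batch
        w_true = w_center + 0.1 * rng.standard_normal(d)
        X = rng.standard_normal((20, d))
        y = X @ w_true
        tasks.append((X[:10], y[:10], X[10:], y[10:]))
    w = fomaml_step(w, tasks)
# w is now a meta-initialization close to the shared task structure,
# so one inner step adapts it to any individual "subject"
```

The same logic scales up in the fMRI setting: the shared structure is the subject-invariant encoding, and the inner loop plays the role of the in-context adaptation to a new subject.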
## Applications
### Visual Reconstruction
Decode perceived or imagined visual stimuli from fMRI:
```python
# Example: Reconstruct seen images
context_pairs = load_subject_calibration_data(subject_id)
target_fmri = record_fmri_during_viewing(subject_id, stimulus)
reconstruction = decoder.decode(target_fmri, context_pairs)
```
### Cross-Subject Transfer
Apply trained model to new subjects without retraining:
```python
# New subject - only need K calibration examples
new_subject_context = collect_calibration_trials(new_subject_id, k=5)
# Ready to decode immediately
test_reconstruction = decoder.decode(test_fmri, new_subject_context)
```
## Best Practices
### Data Collection
1. **Calibration Examples**: Collect 5-10 diverse examples per subject
2. **Stimulus Variety**: Include diverse visual categories
3. **Quality Control**: Check fMRI data quality before decoding
### Model Selection
1. **Architecture**: Transformer-based encoders work well for spatial patterns
2. **Conditioning**: Diffusion models provide high-quality reconstructions
3. **Regularization**: Use dropout and weight decay to prevent overfitting
### Evaluation
1. **Metrics**: Use SSIM, LPIPS, and semantic accuracy
2. **Baselines**: Compare against subject-specific trained models
3. **Generalization**: Test on held-out subjects and stimuli
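Beyond SSIM and LPIPS (which need dedicated libraries), a common lightweight check in the decoding literature is pairwise identification accuracy: a reconstruction counts as correct if it correlates more strongly with its own target than with any other target. A minimal NumPy version, treating flattened images as vectors (the correlation-based similarity is an assumption; any similarity metric would do):

```python
import numpy as np

def identification_accuracy(recons: np.ndarray, targets: np.ndarray) -> float:
    """Fraction of reconstructions most correlated with their own target.

    recons, targets: (N, D) arrays of N flattened images.
    """
    def norm(x):
        # Row-center and row-normalize so dot products are Pearson correlations
        x = x - x.mean(axis=1, keepdims=True)
        return x / (np.linalg.norm(x, axis=1, keepdims=True) + 1e-8)

    sim = norm(recons) @ norm(targets).T  # (N, N) correlation matrix
    return float((sim.argmax(axis=1) == np.arange(len(sim))).mean())

rng = np.random.default_rng(0)
targets = rng.standard_normal((8, 64))
noisy = targets + 0.3 * rng.standard_normal((8, 64))  # decent reconstructions
print(identification_accuracy(noisy, targets))
```

Reporting this alongside SSIM/LPIPS, and comparing against subject-specific baselines as suggested above, gives a fuller picture of generalization.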
## Limitations
- Requires some calibration data from each new subject
- Performance depends on alignment between training and test subjects
- May struggle with atypical neural patterns
## References
- Paper: "Meta-learning In-Context Enables Training-Free Cross Subject Brain Decoding" (arXiv:2604.08537v1, 2026)
- Related: MAML (Finn et al.), Neural Decoding (Kay et al.)
## Activation Keywords
- in-context brain decoding
- meta-learning fMRI
- cross-subject brain decoding
- training-free neural decoding
- subject-invariant representations
- brain decoding generalization