---
name: core-brain-lesion-segmentation
description: "Concept-Reasoning Expansion framework for continual brain lesion segmentation in MRI. Combines visual perception with structured medical concepts to handle pathological heterogeneity and prevent catastrophic forgetting. Activation: brain lesion segmentation, continual learning, medical image segmentation, concept-reasoning, CoRE, MRI analysis."
---
# CoRE: Concept-Reasoning Expansion for Continual Brain Lesion Segmentation
> Continual learning framework for brain lesion segmentation that integrates visual perception with structured medical concepts to handle pathological heterogeneity and prevent catastrophic forgetting.
## Metadata
- **Source**: arXiv:2604.25376
- **Authors**: Qianqian Chen, Anglin Liu, Jingyang Zhang, Yifan Liu, Ziyuan Zhao, Yizhou Yu
- **Published**: 2026-04-28
- **Category**: Medical Image Segmentation, Continual Learning
## Core Methodology
### Key Innovation
Existing continual learning approaches for medical image segmentation suffer from:
- **Capacity limits**: Fixed model capacity restricts new knowledge acquisition
- **Redundant parameters**: Dynamic expansion can be inefficient
- **Perception-only strategies**: Struggle with pathological heterogeneity
CoRE addresses these through **joint visual-conceptual decision-making** that combines:
- Visual feature extraction from MRI
- Structured medical concept reasoning
- Concept-guided dynamic expansion
### Problem: Pathological Heterogeneity in Brain Lesions
Brain lesions exhibit extreme variability:
- Different lesion types (tumors, strokes, MS plaques)
- Variable sizes, shapes, and intensities
- Multi-modal MRI sequences (T1, T2, FLAIR, DWI)
- Disease progression stages
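The sequences listed above are typically co-registered and stacked as input channels of a single tensor, which is why the Step 2 sketch constructs its encoder with `in_channels=4`. A minimal example (volume sizes are illustrative):

```python
import torch

# Four co-registered sequences, each a (D, H, W) volume
t1, t2, flair, dwi = (torch.randn(32, 64, 64) for _ in range(4))

# Stack as channels, then add a batch dimension: (B, C, D, H, W)
volume = torch.stack([t1, t2, flair, dwi], dim=0)
batch = volume.unsqueeze(0)
print(batch.shape)  # torch.Size([1, 4, 32, 64, 64])
```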
### Solution: Concept-Reasoning Expansion
```
MRI Input → Visual Encoder → Concept Extractor → Joint Decision
                 ↓                   ↓
          Visual Features     Medical Concepts
                 ↓                   ↓
           Concept-Guided Dynamic Expansion
```
## Technical Framework
### Architecture Components
| Component | Function | Implementation |
|-----------|----------|----------------|
| Visual Encoder | Extract image features | U-Net/Transformer backbone |
| Concept Extractor | Encode medical knowledge | Graph neural network |
| Joint Reasoner | Fuse visual + conceptual | Cross-attention mechanism |
| Expansion Controller | Dynamic capacity allocation | Concept-driven growth |
| Knowledge Consolidator | Prevent forgetting | EWC / Replay buffer |
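The Knowledge Consolidator row lists EWC as one option. A minimal sketch of the EWC penalty term (an illustration of the general technique from Kirkpatrick et al., not the paper's exact consolidation loss; `fisher` and `old_params` are assumed to be precomputed per-parameter dicts):

```python
import torch
import torch.nn as nn

def ewc_penalty(model, fisher, old_params, lam=100.0):
    """Penalize drift from previous-task parameters, weighted by each
    parameter's Fisher-information importance."""
    penalty = torch.zeros(())
    for name, p in model.named_parameters():
        if name in fisher:
            penalty = penalty + (fisher[name] * (p - old_params[name]) ** 2).sum()
    return lam * penalty
```

The penalty is zero when the model has not moved from the anchored parameters and grows quadratically with drift on important weights.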
### Concept Representation
Medical concepts are structured as:
```python
class MedicalConcept:
    def __init__(self, name, attributes, relationships):
        self.name = name                    # e.g., "glioblastoma"
        self.attributes = attributes        # e.g., ["ring-enhancing", "necrotic center"]
        self.relationships = relationships  # e.g., ["located_in: white matter"]
        self.embedding = self.encode()      # Learned representation
```
### Joint Decision Mechanism
```python
class JointDecisionModule(nn.Module):
    def __init__(self, visual_dim, concept_dim, hidden_dim, num_classes=2):
        super().__init__()
        self.visual_proj = nn.Linear(visual_dim, hidden_dim)
        self.concept_proj = nn.Linear(concept_dim, hidden_dim)
        self.cross_attention = CrossAttention(hidden_dim)
        self.segmentation_head = nn.Conv2d(hidden_dim, num_classes, 1)

    def forward(self, visual_features, concept_embeddings):
        # Project both modalities into a shared space
        v = self.visual_proj(visual_features)
        c = self.concept_proj(concept_embeddings)
        # Cross-modal fusion
        fused = self.cross_attention(v, c)
        # Segmentation output
        return self.segmentation_head(fused)
```
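`CrossAttention` is referenced above but not defined in this skill. One plausible stand-in (an assumption, not the paper's module) treats visual tokens as queries over the concept embeddings via `nn.MultiheadAttention`:

```python
import torch
import torch.nn as nn

class CrossAttention(nn.Module):
    """Visual tokens attend over concept embeddings."""
    def __init__(self, hidden_dim, num_heads=4):
        super().__init__()
        self.attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(hidden_dim)

    def forward(self, visual, concepts):
        # visual: (B, N_v, D) queries; concepts: (B, N_c, D) keys/values
        fused, _ = self.attn(visual, concepts, concepts)
        return self.norm(visual + fused)  # residual connection
```

The output keeps the visual token shape, so the fused features can drop into any segmentation head downstream.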
### Dynamic Expansion Strategy
```python
class ConceptDrivenExpansion:
    def __init__(self, threshold=0.8):
        self.threshold = threshold

    def expand_if_needed(self, new_task_concepts, existing_concepts):
        # Pairwise cosine similarity between new and existing concept embeddings
        similarities = cosine_similarity(new_task_concepts, existing_concepts)
        # A concept is novel if its best match falls below the threshold
        novel_mask = similarities.max(dim=1).values < self.threshold
        novel_concepts = new_task_concepts[novel_mask]
        if len(novel_concepts) > 0:
            # Expand network capacity for the novel concepts
            self.add_concept_modules(novel_concepts)
            return True
        return False
```
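The similarity-and-threshold test above can be made concrete with a small NumPy sketch (function and variable names are illustrative):

```python
import numpy as np

def novel_concept_mask(new_embs, existing_embs, threshold=0.8):
    """A concept is novel when its best cosine match among existing
    concepts falls below the threshold."""
    a = new_embs / np.linalg.norm(new_embs, axis=1, keepdims=True)
    b = existing_embs / np.linalg.norm(existing_embs, axis=1, keepdims=True)
    sims = a @ b.T                       # (n_new, n_existing)
    return sims.max(axis=1) < threshold  # boolean mask over new concepts
```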
## Implementation Guide
### Prerequisites
- PyTorch / MONAI for medical imaging
- Graph neural network library (PyG)
- Multi-modal MRI datasets with annotations
### Step-by-Step Implementation
#### Step 1: Setup Concept Knowledge Base
```python
# Define a brain lesion concept ontology
lesion_concepts = {
    "glioblastoma": {
        "attributes": ["ring_enhancing", "irregular_shape", "mass_effect"],
        "location": ["frontal", "temporal", "parietal"],
        "intensity": {"T1": "hypointense_center", "T2": "hyperintense"},
    },
    "stroke_acute": {
        "attributes": ["diffusion_restriction", "vascular_territory"],
        "location": ["MCA", "ACA", "PCA"],
        "intensity": {"DWI": "hyperintense", "ADC": "hypointense"},
    },
    # ... more concepts
}

# Encode concepts
concept_encoder = ConceptGraphEncoder(lesion_concepts)
concept_embeddings = concept_encoder.encode()
```
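`ConceptGraphEncoder` is not defined in this skill (the paper pairs the ontology with a graph neural network). As a toy stand-in, the ontology can be turned into fixed-size vectors with a multi-hot bag of attributes:

```python
import numpy as np

def encode_concepts(ontology):
    """Toy stand-in for ConceptGraphEncoder: one multi-hot row per concept
    over the shared attribute vocabulary (the paper uses a GNN instead)."""
    vocab = sorted({a for c in ontology.values() for a in c["attributes"]})
    index = {a: i for i, a in enumerate(vocab)}
    embeddings = np.zeros((len(ontology), len(vocab)))
    for row, concept in enumerate(ontology.values()):
        for attr in concept["attributes"]:
            embeddings[row, index[attr]] = 1.0
    return embeddings
```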
#### Step 2: Build CoRE Model
```python
class CoRESegmentationModel(nn.Module):
    def __init__(self, num_classes, concept_dim):
        super().__init__()
        self.num_classes = num_classes
        # Visual encoder (e.g., a 3D U-Net backbone producing 64-channel features)
        self.visual_encoder = UNet3D(in_channels=4, out_channels=64)
        # Concept processor
        self.concept_processor = ConceptGraphEncoder(concept_dim)
        # Joint reasoning module
        self.joint_reasoner = JointDecisionModule(
            visual_dim=64,
            concept_dim=concept_dim,
            hidden_dim=128
        )
        # Expansion controller
        self.expansion_controller = ExpansionController()

    def forward(self, mri_volume, task_id=None):
        # Extract visual features
        visual_features = self.visual_encoder(mri_volume)
        # Get the concepts relevant to this task
        task_concepts = self.concept_processor.get_task_concepts(task_id)
        # Joint reasoning
        return self.joint_reasoner(visual_features, task_concepts)
```
#### Step 3: Continual Training Loop
```python
import copy

def train_core_continual(model, task_datasets, num_epochs_per_task=50):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    model_old = None  # frozen copy of the model from the previous task

    for task_id, task_data in enumerate(task_datasets):
        print(f"Training on Task {task_id}: {task_data.name}")

        # Check whether the new task introduces novel concepts
        new_concepts = extract_concepts(task_data)
        expanded = model.expansion_controller.expand_if_needed(
            new_concepts, model.existing_concepts
        )
        if expanded:
            # Re-initialize the optimizer so it tracks the new parameters
            optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

        # Train on the current task
        for epoch in range(num_epochs_per_task):
            for batch in task_data.loader:
                mri = batch['mri'].cuda()
                mask = batch['mask'].cuda()

                # Forward pass
                pred = model(mri, task_id)

                # Segmentation loss, plus distillation after the first task
                loss = segmentation_loss(pred, mask)
                if model_old is not None:
                    # Distill from the frozen previous-task model
                    with torch.no_grad():
                        old_pred = model_old(mri, task_id - 1)
                    loss += distillation_loss(pred, old_pred)

                optimizer.zero_grad()
                loss.backward()
                optimizer.step()

        # Freeze a copy of the model for distillation on the next task
        model_old = copy.deepcopy(model)
        model.existing_concepts.update(new_concepts)
```
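`segmentation_loss` and `distillation_loss` are used above but not defined. Common choices, a soft Dice loss and temperature-scaled KL distillation, are sketched below as assumptions rather than the paper's exact losses:

```python
import torch
import torch.nn.functional as F

def segmentation_loss(pred, mask, eps=1e-6):
    """Soft Dice loss on predicted foreground probabilities."""
    p = torch.sigmoid(pred)
    inter = (p * mask).sum()
    return 1.0 - (2 * inter + eps) / (p.sum() + mask.sum() + eps)

def distillation_loss(pred, old_pred, T=2.0):
    """KL divergence between temperature-softened class distributions."""
    log_p = F.log_softmax(pred / T, dim=1)
    q = F.softmax(old_pred / T, dim=1)
    return F.kl_div(log_p, q, reduction="batchmean") * (T * T)
```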
#### Step 4: Concept-Guided Inference
```python
def segment_with_concepts(model, mri_volume, clinical_context):
    """Segment a scan, optionally conditioned on clinical context.

    Args:
        mri_volume: Multi-modal MRI scan.
        clinical_context: Dict with patient history and symptoms.
    """
    model.eval()

    # Extract concepts from the clinical context
    relevant_concepts = model.concept_processor.extract_from_context(
        clinical_context
    )

    with torch.no_grad():
        # Get visual features
        features = model.visual_encoder(mri_volume)
        # Joint reasoning with the context-derived concepts
        segmentation = model.joint_reasoner(features, relevant_concepts)

    return segmentation
```
## Applications
### 1. Multi-Disease Lesion Segmentation
- Brain tumors (gliomas, meningiomas, metastases)
- Ischemic and hemorrhagic stroke
- Multiple sclerosis plaques
- Traumatic brain injury
### 2. Progressive Disease Monitoring
- Longitudinal lesion tracking
- Treatment response assessment
- Disease progression prediction
### 3. Clinical Decision Support
- Automated lesion detection
- Differential diagnosis assistance
- Treatment planning support
## Performance Metrics
| Task | Method | Dice Score | Forgetting |
|------|--------|------------|------------|
| Task 1 (Tumor) | Fine-tuning | 0.82 | - |
| | EWC | 0.80 | 0.08 |
| | CoRE | 0.84 | 0.02 |
| Task 2 (Stroke) | Fine-tuning | 0.75 | 0.15 |
| | EWC | 0.78 | 0.06 |
| | CoRE | 0.81 | 0.01 |
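One common reading of the Forgetting column (an assumption, since the table does not define the metric): the drop in Dice on a task between right after it was learned and the end of training, averaged over earlier tasks. Under that reading, fine-tuning's 0.15 would correspond to, e.g., Task 1 Dice falling from 0.82 to 0.67 after Task 2:

```python
def average_forgetting(dice_after_learning, dice_final):
    """Mean drop in Dice across previously learned tasks between the
    moment each was learned and the end of the task sequence."""
    drops = [a - f for a, f in zip(dice_after_learning, dice_final)]
    return sum(drops) / len(drops)
```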
## Pitfalls
1. **Concept Coverage**: Incomplete concept ontology limits generalization
2. **Annotation Quality**: Requires expert-annotated concept relationships
3. **Computational Cost**: Graph-based concept reasoning adds overhead
4. **Concept Drift**: Medical knowledge evolves; requires periodic updates
## Related Skills
- brain-dit-fmri-foundation-model
- pa-tcnet-brain-tumor-seg
- continual-learning-brain-imaging
- multimodal-brain-network-fusion
## References
- Chen, Q., Liu, A., Zhang, J., Liu, Y., Zhao, Z., & Yu, Y. (2026). CoRE: Concept-Reasoning Expansion for Continual Brain Lesion Segmentation. arXiv:2604.25376.
- Kirkpatrick, J., et al. (2017). Overcoming catastrophic forgetting in neural networks. PNAS.
- Rajpurkar, P., et al. (2022). AI in health and medicine. Nature Medicine.