---
name: affective-neuroscience-training
description: "Dual-model training paradigm inspired by affective neuroscience SEEKING motivational state. Uses smaller base model trained continuously with larger motivated model activated intermittently during motivation conditions. Activation: motivation training, seeking state, affective training, emotion-cognition AI, dual model motivation."
---
# Affective Neuroscience Training Paradigm (SEEKING-Motivated Dual Model)
> Novel training paradigm drawing from affective neuroscience's SEEKING motivational state, using dual-model architecture where a smaller base model trains continuously while a larger motivated model activates intermittently during predefined motivation conditions.
## Metadata
- **Source**: arXiv:2602.21064
- **Title**: "Motivation is Something You Need"
- **Authors**: Mehdi Acheli, Walid Gaaloul
- **Published**: 2026-02-24
- **Categories**: cs.AI, cs.CV, cs.LG
## Core Methodology
### Key Innovation
The paradigm mimics the brain's SEEKING state of high curiosity and reward anticipation, during which broader brain regions are recruited and cognitive performance improves. It translates this into a dual-model framework that matches or exceeds a standalone larger model while reducing total training cost.
### Technical Framework
1. **Dual-Model Architecture**
- **Base model**: Smaller network trained continuously throughout all epochs
- **Motivated model**: Larger network that extends the base model, activated only during predefined "motivation conditions"
- **Scalable design**: Larger model shares weights with the base model and adds extra capacity on top
2. **Motivation Conditions**
- Predefined triggers for activating the motivated model (e.g., loss spikes, validation plateaus, curiosity thresholds)
- Intermittent activation mimics emotional SEEKING state bursts
- During motivation: broader network capacity recruited for enhanced learning
3. **Shared Weight Updates**
- Base model weights are always updated
- Motivated model shares base weights + adds extension layers
- Selective expansion during noteworthy training steps
- Efficient parameter reuse between models
4. **Training Schedule**
```
For each epoch:
  1. Train base model on batch (always)
  2. Check motivation condition
  3. If triggered:
     a. Activate motivated model (base + extensions)
     b. Forward pass through expanded architecture
     c. Compute gradients for extended layers
     d. Update shared + extension weights
  4. Else:
     a. Continue with base model only
```
5. **Dual Deployment Benefits**
- Produces two models simultaneously:
- Base model: lightweight for resource-constrained deployment
- Motivated model: full capacity for high-performance deployment
- Total training cost is lower than training the large model standalone
- Motivated model can surpass its standalone counterpart despite seeing less data (a checkpointing sketch follows this list)
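Because the motivated model contains the base model's weights, a single run yields both deployable checkpoints. A minimal sketch, assuming the `BaseModel`/`MotivatedModel` classes from the Implementation Guide below (file names are illustrative):
```python
import torch

# base_model / motivated_model come from the training loop below
torch.save(base_model.state_dict(), "base_edge.pt")             # lightweight deployment
torch.save(motivated_model.state_dict(), "motivated_cloud.pt")  # full capacity (includes shared weights)
```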
## Implementation Guide
### Prerequisites
- PyTorch or similar deep learning framework
- Scalable model architecture (larger model extends smaller)
- Motivation condition definition (loss threshold, validation metric, etc.)
### Step-by-Step
1. **Define scalable architecture**
```python
import torch.nn as nn

class BaseModel(nn.Module):
    # Standard architecture, e.g. a small MLP trunk
    def __init__(self, in_features=784, hidden=128):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(in_features, hidden), nn.ReLU())
    def forward(self, x):
        return self.net(x)

class MotivatedModel(nn.Module):
    def __init__(self, base_model, hidden=128, extended=256):
        super().__init__()
        self.base = base_model  # shared weights, not a copy
        # Additional layers recruited only during motivation
        self.extension = nn.Sequential(nn.Linear(hidden, extended), nn.ReLU())
    def forward(self, x):
        return self.extension(self.base(x))
```
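A quick sanity check with illustrative sizes confirms the base network is shared by reference, not copied:
```python
base = BaseModel(in_features=784, hidden=128)
motivated = MotivatedModel(base, hidden=128, extended=256)
assert motivated.base is base  # updating either model touches the same base weights
```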
2. **Define motivation conditions**
```python
def check_motivation(loss_history, window=10, spike_ratio=1.1):
    # Trigger on a loss spike; a plateau test could be added similarly
    if len(loss_history) < window:
        return False
    recent_mean = sum(loss_history[-window:]) / window
    return loss_history[-1] > spike_ratio * recent_mean
```
3. **Implement dual training loop**
```python
import torch.optim as optim

base_optimizer = optim.Adam(base_model.parameters())
# Cover shared base weights plus extension, per step 3d of the schedule
motivated_optimizer = optim.Adam(motivated_model.parameters())

loss_history = []
for epoch in range(num_epochs):
    for batch in dataloader:
        # Always train the base model
        base_optimizer.zero_grad()
        base_loss = train_base(base_model, batch)  # assumed helper returning the batch loss
        base_loss.backward()
        base_optimizer.step()
        loss_history.append(base_loss.item())

        # Check the motivation condition
        if check_motivation(loss_history):
            # Activate the motivated model (base + extensions)
            motivated_optimizer.zero_grad()
            motivated_loss = train_motivated(motivated_model, batch)
            motivated_loss.backward()
            motivated_optimizer.step()
```
### Code Example
```python
import torch.nn as nn

class ScalableClassifier(nn.Module):
    """Base + motivated model with shared weights."""
    def __init__(self, base_features, extended_features, num_classes):
        super().__init__()
        self.base = nn.Sequential(
            nn.Linear(base_features, 256),
            nn.ReLU(),
            nn.Linear(256, 128),
        )
        # Extension layers (only active during motivation)
        self.extension = nn.Sequential(
            nn.Linear(128, extended_features),
            nn.ReLU(),
            nn.Linear(extended_features, 256),
        )
        self.classifier = nn.Linear(128, num_classes)           # head for the base path
        self.extended_classifier = nn.Linear(256, num_classes)  # head for the motivated path

    def forward_base(self, x):
        x = self.base(x)  # 128-dim shared features
        return self.classifier(x)

    def forward_motivated(self, x):
        x = self.base(x)       # shared features
        x = self.extension(x)  # extra capacity recruited during motivation
        return self.extended_classifier(x)
```
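A usage sketch with illustrative dimensions, showing that both forward paths yield class logits from the same shared trunk:
```python
import torch

model = ScalableClassifier(base_features=784, extended_features=512, num_classes=10)
x = torch.randn(32, 784)  # dummy batch of flattened 28x28 images

logits_base = model.forward_base(x)            # lightweight path
logits_motivated = model.forward_motivated(x)  # expanded path
print(logits_base.shape, logits_motivated.shape)  # both torch.Size([32, 10])
```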
## Applications
- **Efficient model training**: Train two models (lightweight + full) simultaneously at lower cost
- **Curriculum learning**: Motivation conditions as natural curriculum triggers
- **Resource-constrained deployment**: Base model for edge, motivated model for cloud
- **Emotion-inspired AI**: Brain-inspired training schedules for enhanced learning
- **Transfer learning**: Base model as foundation, motivated model as a specialized adapter (see the sketch below)
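For the transfer-learning use, one plausible pattern (an assumption here, not a procedure from the paper) is to load a pretrained base, freeze it, and train only the extension and heads as an adapter, reusing `ScalableClassifier` from above:
```python
import torch

model = ScalableClassifier(base_features=784, extended_features=512, num_classes=10)
# model.base.load_state_dict(torch.load("pretrained_base.pt"))  # hypothetical checkpoint

for p in model.base.parameters():
    p.requires_grad = False  # freeze the shared foundation

trainable = [p for p in model.parameters() if p.requires_grad]  # extension + both heads
adapter_optimizer = torch.optim.Adam(trainable, lr=1e-4)
```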
## Pitfalls
- **Motivation condition tuning**: Poor thresholds lead to over/under-activation
- **Architecture compatibility**: Requires scalable design where larger extends smaller
- **Gradient interference**: Shared weights may receive conflicting gradients from the two objectives (one mitigation is sketched after this list)
- **Evaluation complexity**: Need to evaluate both models separately
- **Task specificity**: Best demonstrated on image classification; generalization to other tasks needs validation
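One way to contain gradient interference, offered as an assumption rather than the paper's method, is to detach the shared features during motivated updates so the motivated loss trains only the extension (at the cost of skipping the shared-weight update in step 3d of the schedule):
```python
def motivated_forward_detached(model, x):
    feats = model.base(x).detach()  # block motivated-loss gradients into shared base weights
    return model.extended_classifier(model.extension(feats))
```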
## Related Skills
- ember-autonomous-cognitive-behaviour-learned-spiking
- neuromimetic-perceptual-compression
- minaction-energy-first-neural-architecture