---
name: benediff-behavior-neural-diffusion
description: BeNeDiff - Behavior-Relevant Neural Dynamics with Diffusion Models. Source paper: arXiv:2410.09614 - Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models
---
# BeNeDiff - Behavior-Relevant Neural Dynamics with Diffusion Models
**Source paper:** arXiv:2410.09614 - Exploring Behavior-Relevant and Disentangled Neural Dynamics with Generative Diffusion Models
**Utility score:** 0.99
**Created:** 2026-03-24 06:04
---
## Overview
BeNeDiff is a method for exploring behavior-relevant neural dynamics. It identifies fine-grained, disentangled neural subspaces with a behavior-informed latent variable model, then uses a generative diffusion model to synthesize behavior videos that interpret the neural dynamics of each latent factor.
## Activation Keywords
- BeNeDiff
- behavior-relevant neural dynamics
- disentangled neural subspace
- generative diffusion neural
- behavior video synthesis
- neural behavior decoding
## Core Architecture
```
Neural data (calcium imaging / fMRI)
                │
                ▼
Behavior-informed latent variable model (LVM)
                │
                ▼
Disentangled neural subspace
                │
                ▼
Generative diffusion model
                │
                ▼
Behavior video synthesis
```
## Core Components
### 1. Behavior-Informed Latent Variable Model
```python
import torch
import torch.nn as nn


class BehaviorInformedLVM(nn.Module):
    """Identifies behavior-relevant, disentangled neural subspaces."""

    def __init__(self, neural_dim, latent_dim, behavior_dim):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(neural_dim, 512),
            nn.ReLU(),
            nn.Linear(512, latent_dim * 2),  # mean + logvar
        )
        self.behavior_encoder = nn.Linear(behavior_dim, latent_dim)

    def reparameterize(self, mean, logvar):
        # z = mean + sigma * eps (reparameterization trick)
        std = torch.exp(0.5 * logvar)
        return mean + std * torch.randn_like(std)

    def forward(self, neural_data, behavior_data):
        # Encode neural data into posterior parameters
        params = self.encoder(neural_data)
        mean, logvar = params.chunk(2, dim=-1)
        # Behavior-informed guidance
        behavior_latent = self.behavior_encoder(behavior_data)
        # Reparameterized sampling
        z = self.reparameterize(mean, logvar)
        return z, mean, logvar, behavior_latent
```
### 2. Disentanglement Constraint Loss
```python
import torch
import torch.nn.functional as F


def disentanglement_loss(z, mean, logvar, behavior_latent):
    """Ties each latent factor to a specific behavior."""
    # KL divergence to the standard-normal prior
    kl_loss = -0.5 * torch.sum(1 + logvar - mean.pow(2) - logvar.exp())
    # Behavior-relevance loss
    behavior_loss = F.mse_loss(z, behavior_latent)
    # Disentanglement regularizer:
    # each latent factor should relate to only one behavior
    disentangle_reg = 0.0
    for i in range(z.size(-1)):
        # Similarity of factor i to each behavior latent dimension
        correlations = torch.abs(F.cosine_similarity(
            z[:, i:i + 1],
            behavior_latent,
            dim=0,
        ))
        # Encourage sparse (peaked) correlation profiles
        disentangle_reg += correlations.var()
    return kl_loss + behavior_loss + 0.1 * disentangle_reg
```
### 3. Generative Diffusion Model
```python
import torch
import torch.nn as nn


class BehaviorDiffusionModel(nn.Module):
    """Generates behavior videos from neural latent factors."""

    def __init__(self, latent_dim, video_shape):
        super().__init__()
        self.video_shape = video_shape
        self.unet = UNetModel(  # any conditional UNet backbone
            in_channels=3,
            model_channels=128,
            out_channels=3,
            num_res_blocks=2,
            attention_resolutions=(8, 16),
        )
        self.latent_proj = nn.Linear(latent_dim, 128 * 8 * 8)

    def forward(self, x_t, t, neural_latent):
        # Inject the neural latent factors into the UNet as a condition
        latent_emb = self.latent_proj(neural_latent)
        latent_emb = latent_emb.view(-1, 128, 8, 8)
        # Conditional denoising prediction
        return self.unet(x_t, t, latent_emb)

    def generate(self, neural_latent, num_steps=1000):
        """Generates a behavior video by iterative denoising from noise."""
        x = torch.randn(1, 3, *self.video_shape)
        for t in reversed(range(num_steps)):
            # Predict the noise component
            noise_pred = self.forward(x, t, neural_latent)
            # Reverse-diffusion (denoising) step
            x = self.denoise_step(x, noise_pred, t)
        return x
```
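The `generate` loop above relies on a `denoise_step` helper, and the training code later calls `add_noise`; neither is defined in this document. A minimal sketch under standard DDPM assumptions (linear beta schedule, epsilon-prediction parameterization; the paper may make different choices):

```python
import torch

# Standard DDPM schedule (assumption: the paper may use another schedule)
T = 1000
betas = torch.linspace(1e-4, 0.02, T)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)


def add_noise(x0, noise, t):
    """Forward process: x_t = sqrt(a_bar_t) * x_0 + sqrt(1 - a_bar_t) * noise."""
    a_bar = alpha_bars[t].view(-1, *([1] * (x0.dim() - 1)))
    return a_bar.sqrt() * x0 + (1.0 - a_bar).sqrt() * noise


def denoise_step(x_t, noise_pred, t):
    """One reverse (ancestral) DDPM sampling step at integer timestep t."""
    beta, alpha, a_bar = betas[t], alphas[t], alpha_bars[t]
    # Posterior mean of x_{t-1} given x_t and the predicted noise
    mean = (x_t - beta / (1.0 - a_bar).sqrt() * noise_pred) / alpha.sqrt()
    if t > 0:
        # Add sampling noise except at the final step
        mean = mean + beta.sqrt() * torch.randn_like(x_t)
    return mean
```

`denoise_step` is written here as a free function; since the class calls `self.denoise_step`, a real implementation would bind it as a method.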
## Implementation Steps
### Step 1: Data Preparation
```python
def prepare_neuro_behavior_data(neural_recordings, behavior_videos):
    """Prepares paired neural-behavior data.

    Args:
        neural_recordings: [n_trials, n_neurons, n_timepoints]
        behavior_videos: [n_trials, C, H, W, n_frames]
    """
    # Align neural activity with behavior videos trial by trial
    aligned_data = []
    for trial in range(len(neural_recordings)):
        neural = neural_recordings[trial]
        behavior = behavior_videos[trial]
        # Extract downsampled behavior labels from the video
        behavior_labels = extract_behavior_features(behavior)
        aligned_data.append({
            'neural': neural,
            'behavior': behavior,
            'labels': behavior_labels,
        })
    return aligned_data
```
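`extract_behavior_features` above is left undefined. One hypothetical stand-in is frame-difference motion energy; the paper's actual behavior labels (e.g. tracked keypoints) may well differ:

```python
import torch


def extract_behavior_features(behavior_video):
    """Hypothetical feature extractor: per-frame motion energy.

    Assumes behavior_video has shape [C, H, W, n_frames]; this is an
    illustrative choice, not the paper's feature definition.
    """
    # Frame-to-frame absolute difference, averaged over channels and pixels
    diffs = (behavior_video[..., 1:] - behavior_video[..., :-1]).abs()
    motion_energy = diffs.mean(dim=(0, 1, 2))  # [n_frames - 1]
    # Repeat the first value so the label length matches n_frames
    return torch.cat([motion_energy[:1], motion_energy])
```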
### Step 2: Train the Latent Variable Model
```python
def train_lvm(model, dataloader, epochs=100):
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
    for epoch in range(epochs):
        for batch in dataloader:
            neural = batch['neural']
            behavior = batch['labels']
            # Forward pass
            z, mean, logvar, behavior_latent = model(neural, behavior)
            # Compute the loss
            loss = disentanglement_loss(z, mean, logvar, behavior_latent)
            # Backward pass
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```
### Step 3: Train the Diffusion Model
```python
def train_diffusion(diffusion_model, lvm, dataloader, epochs=100):
    optimizer = torch.optim.Adam(diffusion_model.parameters(), lr=1e-4)
    for epoch in range(epochs):
        for batch in dataloader:
            neural = batch['neural']
            behavior_video = batch['behavior']
            # Get neural latent factors from the frozen LVM
            with torch.no_grad():
                z, _, _, _ = lvm(neural, batch['labels'])
            # Sample random diffusion timesteps
            t = torch.randint(0, 1000, (behavior_video.size(0),))
            # Add noise (forward diffusion process)
            noise = torch.randn_like(behavior_video)
            x_t = add_noise(behavior_video, noise, t)
            # Predict the noise
            noise_pred = diffusion_model(x_t, t, z)
            # Denoising score-matching loss
            loss = F.mse_loss(noise_pred, noise)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
```
### Step 4: Generation and Interpretation
```python
def interpret_neural_dynamics(lvm, diffusion_model, neural_data,
                              behavior_labels, factor_idx):
    """Interprets the behavioral meaning of a given neural latent factor."""
    # Get the latent factors
    z, _, _, _ = lvm(neural_data, behavior_labels)
    # Amplify the chosen factor
    z_modified = z.clone()
    z_modified[:, factor_idx] = z[:, factor_idx] + 2.0
    # Generate the corresponding behavior video
    generated_video = diffusion_model.generate(z_modified)
    return {
        'original_latent': z,
        'modified_latent': z_modified,
        'generated_behavior': generated_video,
        'factor_importance': torch.abs(z[:, factor_idx]).mean(),
    }
```
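Step 4 shifts one factor by a single fixed offset (+2.0). Sweeping a range of offsets, as is common in latent traversals, gives a richer picture of what the factor encodes; a small hypothetical helper:

```python
import torch


def latent_traversal(z, factor_idx, offsets=(-2.0, -1.0, 0.0, 1.0, 2.0)):
    """Returns copies of z with one factor shifted by each offset.

    Feeding each copy to the diffusion model's generate() yields one
    video per offset, visualizing the factor's behavioral meaning.
    """
    traversed = []
    for offset in offsets:
        z_mod = z.clone()
        z_mod[:, factor_idx] = z[:, factor_idx] + offset
        traversed.append(z_mod)
    return torch.stack(traversed)  # [n_offsets, batch, latent_dim]
```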
## Application Scenarios
1. **Neural behavior decoding** - predict behavior from neural activity
2. **Latent factor interpretation** - understand the behavioral meaning of neural codes
3. **Behavior generation** - synthesize plausible behavior videos
4. **Neural representation analysis** - analyze behavior encoding across brain regions
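Scenario 1 can be sketched as a linear readout trained on precomputed LVM latents. All names, dimensions, and the synthetic data below are hypothetical, for illustration only:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical setup: behavior is a linear function of a few latent factors
latent_dim, behavior_dim = 16, 2
z = torch.randn(256, latent_dim)          # stand-in for precomputed LVM latents
behavior = z[:, :behavior_dim] @ torch.randn(behavior_dim, behavior_dim)

# Linear decoder: predict behavior from latents
decoder = nn.Linear(latent_dim, behavior_dim)
optimizer = torch.optim.Adam(decoder.parameters(), lr=1e-2)
for _ in range(300):
    loss = nn.functional.mse_loss(decoder(z), behavior)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
```

A linear readout is a common baseline for this kind of decoding; nonlinear decoders can be swapped in the same way.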
## Validation Dataset
The paper validates the method on a widefield calcium imaging dataset:
- multi-session recordings
- dorsal cortex coverage
- multiple behavioral tasks
## Key Advantages
- **Interpretability**: disentangled latent factors correspond to specific behaviors
- **Generative power**: the diffusion model synthesizes behavior videos
- **Fine-grained analysis**: identifies behavior-relevant neural subspaces
- **End-to-end training**: the latent variable model and diffusion model are jointly optimized
## Related Skills
- `blend-behavior-guided-neural` - behavior-guided neural networks
- `task-aware-brain-connectivity` - task-aware brain connectivity
- `brainstratify-speech-decoding` - stratified brain speech decoding
- `eeg-brain-connectivity-bci` - EEG brain-connectivity BCI
---
_This skill is based on the BeNeDiff method for exploring behavior-relevant, disentangled neural dynamics._
## Description
BeNeDiff - Behavior-Relevant Neural Dynamics with Diffusion Models
## Activation Keywords
- benediff-behavior-neural-diffusion
- benediff-behavior-neural-diffusion skill
## Tools Used
- `read` - Read documentation and references
- `web_search` - Search for related information
- `web_fetch` - Fetch paper or documentation
## Instructions for Agents
Follow these steps when applying this skill:
### Step 1: Understand the Request
### Step 2: Neural Behavior Decoding
### Step 3: Latent Factor Interpretation
### Step 4: Behavior Generation
### Step 5: Neural Representation Analysis
## Examples
### Example 1: Basic Application
**User:** I need to apply BeNeDiff - Behavior-Relevant Neural Dynamics with Diffusion Models to my analysis.
**Agent:** I'll help you apply benediff-behavior-neural-diffusion. First, let me understand your specific use case...
**Context:** Apply the methodology
### Example 2: Advanced Scenario
**User:** I have a complex analysis scenario.
**Agent:** Based on the methodology, I'll guide you through the advanced application...
### Example 3: Key Considerations
**User:** What are the key considerations for benediff-behavior-neural-diffusion?
**Agent:** Let me search for the latest research and best practices...