---
name: integrative-neurocybernetic-modeling
description: "Integrative neurocybernetic modeling framework for large-scale neuroscience. Treats brain as controller pursuing latent objectives in closed-loop coupling with body and environment. Keywords: neurocybernetics, closed-loop modeling, large-scale neuroscience, brain-body-environment coupling, nonlinear state-space models"
---
# Integrative Neurocybernetic Modeling in Large-Scale Neuroscience
> Framework for integrative neurocybernetic models that capture closed-loop coupling of brain, body, and environment, treating the brain as a controller pursuing latent objectives across heterogeneous datasets.
## Metadata
- **Source**: arXiv:2604.23903v1
- **Authors**: Il Memming Park, Ayesha Vermani, Gonzalo G. de Polavieja, et al.
- **Published**: 2026-04-26
## Core Methodology
### Key Innovation
Traditional neuroscience modeling remains fragmented across isolated experiments. This framework proposes integrative neurocybernetic models built around five properties:
1. **Understandable**: Dynamical models with interpretable structure
2. **Closed-Loop**: Capture brain-body-environment coupling
3. **Controller**: Treat brain as controller pursuing latent objectives
4. **Structured**: Represent variation across scales and contexts
5. **Scalable**: Scale to heterogeneous, multi-animal datasets
### Framework Components
**1. Brain-Body-Environment Coupling**
```
┌─────────────────┐ ┌─────────────┐ ┌─────────────────┐
│ Brain (N) │◄────│ Body (B) │◄────│ Environment (E) │
│ Neural State │────►│ Motor Output│────►│ Sensory Input │
└─────────────────┘ └─────────────┘ └─────────────────┘
│ │
└──────────────────────────────────────────┘
Latent Objectives (O)
```
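Read as pseudocode, the diagram is a single simulation loop: the brain maps sensory input to motor commands, the body turns commands into actions, and the environment returns new sensory input. A minimal sketch with hypothetical placeholder objects (`brain`, `body`, and `env` are stand-ins, not classes defined by the paper; the concrete model appears in the Implementation Guide):
```python
def run_closed_loop(brain, body, env, n_steps=100):
    """Abstract brain-body-environment loop; all three objects are hypothetical stubs."""
    sensory = env.reset()                # initial sensory input
    for _ in range(n_steps):
        motor = brain.step(sensory)      # controller: neural state -> motor command
        action = body.step(motor)        # body: motor command -> action on the world
        sensory = env.step(action)       # environment: action -> next sensory input
    return brain, env
```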
**2. Nonlinear State-Space Model (SSM)**
```
State dynamics: x_t = f(x_{t-1}, u_t, θ) + ε_t
Observation: y_t = g(x_t, u_t, θ) + ν_t
Controller: u_t = π(x_t, o_t, θ)
where:
- x_t: latent neural state
- y_t: neural observations (spikes, LFP, calcium, etc.)
- u_t: control inputs (motor commands, attention, etc.)
- o_t: latent objectives
- θ: model parameters
- ε_t, ν_t: noise terms
```
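To make the three equations concrete, here is a toy linear-Gaussian instance rolled forward for a few steps. The matrices `A`, `B`, `C`, `K` and all dimensions are arbitrary illustrations, not values from the paper; the framework itself allows `f`, `g`, and `π` to be arbitrary nonlinear maps.
```python
import numpy as np
rng = np.random.default_rng(0)
dx, dy, du, do = 4, 6, 2, 1                       # dims of x_t, y_t, u_t, o_t
A = 0.9 * np.eye(dx)                              # dynamics:    f(x, u) = A x + B u
B = 0.1 * rng.standard_normal((dx, du))
C = 0.1 * rng.standard_normal((dy, dx))           # observation: g(x) = C x
K = 0.1 * rng.standard_normal((du, dx + do))      # controller:  pi(x, o) = K [x; o]
x, o = np.zeros(dx), np.ones(do)                  # initial state and a fixed objective
for t in range(5):
    u = K @ np.concatenate([x, o])                        # u_t = pi(x_t, o_t)
    x = A @ x + B @ u + 0.01 * rng.standard_normal(dx)    # x_t = f(x_{t-1}, u_t) + eps_t
    y = C @ x + 0.01 * rng.standard_normal(dy)            # y_t = g(x_t) + nu_t
    print(t, np.round(y, 3))
```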
**3. Meta-Dynamical Extensions**
- **Across animals**: Learn shared latent structure with animal-specific variations
- **Across brain areas**: Modular structure with inter-area connections
- **Across contexts**: Context-dependent dynamics with shared base (a minimal sketch follows this list)
- **Across scales**: Multi-scale models from spikes to behavior
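One way to realize "shared base plus context-dependent variation" is to modulate a shared dynamics function with a low-rank, context-indexed perturbation. The class below is a sketch under that assumption; its dimensions, rank, and 0.1 scale are arbitrary choices rather than values from the paper.
```python
import torch
import torch.nn as nn
class ContextConditionedDynamics(nn.Module):
    """Shared base dynamics plus a low-rank, context-specific modulation (sketch)."""
    def __init__(self, state_dim: int = 50, n_contexts: int = 3, rank: int = 4):
        super().__init__()
        self.state_dim, self.rank = state_dim, rank
        self.base = nn.Linear(state_dim, state_dim)               # shared across contexts
        self.ctx_u = nn.Embedding(n_contexts, state_dim * rank)   # context-specific factors
        self.ctx_v = nn.Embedding(n_contexts, state_dim * rank)
    def forward(self, x: torch.Tensor, context_id: torch.Tensor) -> torch.Tensor:
        # x: (batch, state_dim), context_id: (batch,) integer context labels
        U = self.ctx_u(context_id).view(-1, self.state_dim, self.rank)
        V = self.ctx_v(context_id).view(-1, self.rank, self.state_dim)
        delta = 0.1 * (U @ (V @ x.unsqueeze(-1))).squeeze(-1)     # low-rank context term
        return x + torch.tanh(self.base(x) + delta)               # shared base + variation
```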
## Implementation Guide
### Prerequisites
```bash
pip install torch numpy scipy
pip install dynamax # For state-space models
pip install ssm # For switching state-space models
```
### Step-by-Step Implementation
**Step 1: Define Neurocybernetic SSM**
```python
import torch
import torch.nn as nn
from typing import Tuple, Optional
class NeurocyberneticSSM(nn.Module):
"""
Nonlinear state-space model for brain-body-environment coupling.
Treats brain as controller pursuing latent objectives.
"""
def __init__(
self,
state_dim: int = 50,
obs_dim: int = 100,
control_dim: int = 10,
objective_dim: int = 5,
        n_areas: int = 5  # state_dim and obs_dim must divide evenly across areas
):
super().__init__()
self.state_dim = state_dim
self.obs_dim = obs_dim
self.control_dim = control_dim
self.objective_dim = objective_dim
self.n_areas = n_areas
        # Per-area state dimensions (state_dim and obs_dim must be divisible by n_areas)
        assert state_dim % n_areas == 0 and obs_dim % n_areas == 0
        self.area_state_dim = state_dim // n_areas
# State dynamics: x_t = f(x_{t-1}, u_t) + noise
self.dynamics_net = nn.ModuleList([
nn.Sequential(
nn.Linear(self.area_state_dim + control_dim, 128),
nn.ReLU(),
nn.Linear(128, self.area_state_dim)
) for _ in range(n_areas)
])
# Inter-area connections
self.inter_area_weights = nn.Parameter(
torch.randn(n_areas, n_areas) * 0.1
)
# Observation model: y_t = g(x_t) + noise
self.observation_net = nn.ModuleList([
nn.Sequential(
nn.Linear(self.area_state_dim, 64),
nn.ReLU(),
nn.Linear(64, obs_dim // n_areas)
) for _ in range(n_areas)
])
# Controller: u_t = π(x_t, o_t)
self.controller = nn.Sequential(
nn.Linear(state_dim + objective_dim, 128),
nn.ReLU(),
nn.Linear(128, control_dim),
nn.Tanh()
)
# Objective encoder (from observations/context)
self.objective_encoder = nn.Sequential(
nn.Linear(obs_dim + control_dim, 64),
nn.ReLU(),
nn.Linear(64, objective_dim)
)
# Learnable dynamics noise covariance
self.log_dynamics_noise = nn.Parameter(
torch.zeros(state_dim)
)
# Learnable observation noise covariance
self.log_obs_noise = nn.Parameter(
torch.zeros(obs_dim)
)
def forward(
self,
observations: torch.Tensor,
initial_state: Optional[torch.Tensor] = None
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Forward pass through the neurocybernetic model.
Args:
observations: (batch, time, obs_dim) neural observations
initial_state: (batch, state_dim) initial latent state
Returns:
states: (batch, time, state_dim) latent states
controls: (batch, time, control_dim) control outputs
objectives: (batch, time, objective_dim) latent objectives
"""
batch_size, seq_len, _ = observations.shape
if initial_state is None:
            initial_state = torch.zeros(batch_size, self.state_dim, device=observations.device)
states = []
controls = []
objectives = []
x = initial_state
        u = torch.zeros(batch_size, self.control_dim, device=observations.device)
for t in range(seq_len):
# Encode current observation to objective
obj_input = torch.cat([observations[:, t], u], dim=-1)
o = self.objective_encoder(obj_input)
objectives.append(o)
# Controller: compute control from state and objective
u = self.controller(torch.cat([x, o], dim=-1))
controls.append(u)
# State dynamics per area with inter-area coupling
area_states = x.chunk(self.n_areas, dim=-1)
new_area_states = []
for i, (area_x, dyn_net) in enumerate(zip(area_states, self.dynamics_net)):
# Intrinsic dynamics
dyn_input = torch.cat([area_x, u], dim=-1)
area_new = area_x + dyn_net(dyn_input)
# Inter-area coupling
coupling = torch.zeros_like(area_new)
for j, other_x in enumerate(area_states):
if i != j:
coupling += self.inter_area_weights[i, j] * other_x
new_area_states.append(area_new + 0.1 * coupling)
x = torch.cat(new_area_states, dim=-1)
# Add dynamics noise
noise = torch.randn_like(x) * torch.exp(0.5 * self.log_dynamics_noise)
x = x + noise
states.append(x)
states = torch.stack(states, dim=1)
controls = torch.stack(controls, dim=1)
objectives = torch.stack(objectives, dim=1)
return states, controls, objectives
def decode_observations(self, states: torch.Tensor) -> torch.Tensor:
"""
Decode latent states to observations.
"""
batch_size, seq_len, _ = states.shape
area_states = states.view(batch_size * seq_len, self.n_areas, self.area_state_dim)
obs_list = []
for i, obs_net in enumerate(self.observation_net):
area_obs = obs_net(area_states[:, i])
obs_list.append(area_obs)
observations = torch.cat(obs_list, dim=-1)
# Add observation noise
noise = torch.randn_like(observations) * torch.exp(0.5 * self.log_obs_noise)
observations = observations + noise
return observations.view(batch_size, seq_len, self.obs_dim)
```
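A quick smoke test on synthetic data, purely to confirm tensor shapes; the random observations carry no scientific content, and the printed shapes follow from the default dimensions above.
```python
model = NeurocyberneticSSM()                      # defaults: 50-dim state, 100-dim obs, 5 areas
obs = torch.randn(8, 200, 100)                    # (batch, time, obs_dim) synthetic "recordings"
states, controls, objectives = model(obs)
recon = model.decode_observations(states)
print(states.shape, controls.shape, objectives.shape, recon.shape)
# (8, 200, 50), (8, 200, 10), (8, 200, 5), (8, 200, 100)
```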
**Step 2: Multi-Animal Meta-Learning**
```python
class MultiAnimalNeurocybernetic(nn.Module):
"""
Meta-learning framework for multi-animal neurocybernetic models.
Learns shared latent structure with animal-specific adaptations.
"""
def __init__(
self,
n_animals: int,
shared_state_dim: int = 30,
animal_specific_dim: int = 20,
**ssm_kwargs
):
super().__init__()
self.n_animals = n_animals
self.shared_state_dim = shared_state_dim
self.animal_specific_dim = animal_specific_dim
total_state_dim = shared_state_dim + animal_specific_dim
# Shared neurocybernetic model
self.ssm = NeurocyberneticSSM(
state_dim=total_state_dim,
**ssm_kwargs
)
# Animal-specific embeddings
self.animal_embeddings = nn.Embedding(n_animals, animal_specific_dim)
def forward(
self,
observations: torch.Tensor,
animal_ids: torch.Tensor
) -> Tuple[torch.Tensor, torch.Tensor, torch.Tensor]:
"""
Forward pass with animal-specific initialization.
"""
batch_size = observations.shape[0]
# Get animal-specific initial state
animal_specific = self.animal_embeddings(animal_ids)
# Initialize with shared state (zeros) + animal-specific
        shared_init = torch.zeros(batch_size, self.shared_state_dim, device=animal_specific.device)
initial_state = torch.cat([shared_init, animal_specific], dim=-1)
return self.ssm(observations, initial_state)
```
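A hedged usage example: trials from two animals pooled into one batch, with integer animal IDs indexing the embedding table. All numbers are illustrative.
```python
multi = MultiAnimalNeurocybernetic(n_animals=2)   # defaults: 30 shared + 20 animal-specific dims
obs = torch.randn(6, 150, 100)                    # pooled trials, (batch, time, obs_dim)
animal_ids = torch.tensor([0, 0, 0, 1, 1, 1])     # which animal produced each trial
states, controls, objectives = multi(obs, animal_ids)
```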
**Step 3: Training with ELBO**
```python
def compute_elbo(
model: NeurocyberneticSSM,
observations: torch.Tensor,
n_particles: int = 10
) -> torch.Tensor:
"""
    Compute a Monte Carlo surrogate for the ELBO: reconstruction likelihood plus simple priors on states and controls; returns the negative value as a loss.
"""
batch_size, seq_len, obs_dim = observations.shape
    # Draw multiple stochastic forward passes and average (Monte Carlo; no resampling)
log_likes = []
for _ in range(n_particles):
states, controls, objectives = model(observations)
# Reconstruction likelihood
pred_obs = model.decode_observations(states)
recon_loss = ((pred_obs - observations) ** 2).sum(dim=-1)
# Prior regularization on states
state_prior = (states ** 2).sum(dim=-1)
# Control cost (regularization)
control_cost = (controls ** 2).sum(dim=-1)
# Total negative ELBO
particle_loss = recon_loss + 0.1 * state_prior + 0.01 * control_cost
log_likes.append(-particle_loss.sum(dim=1))
# Average over particles
elbo = torch.stack(log_likes, dim=1).mean(dim=1)
return -elbo.mean()
# Training loop
def train_neurocybernetic(
model,
train_loader,
epochs=100,
lr=1e-3
):
optimizer = torch.optim.Adam(model.parameters(), lr=lr)
for epoch in range(epochs):
total_loss = 0
for batch in train_loader:
observations = batch['observations']
optimizer.zero_grad()
loss = compute_elbo(model, observations)
loss.backward()
optimizer.step()
total_loss += loss.item()
print(f"Epoch {epoch}: Loss = {total_loss / len(train_loader):.4f}")
```
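To exercise the loop end to end, the sketch below builds a synthetic dataset whose batches carry the `'observations'` key that `train_neurocybernetic` expects. The data are random, so the run only checks the training plumbing, not modeling quality.
```python
from torch.utils.data import DataLoader, Dataset
class SyntheticObsDataset(Dataset):
    """Random stand-in for neural recordings: one (time, obs_dim) trial per item."""
    def __init__(self, n_trials: int = 32, seq_len: int = 50, obs_dim: int = 100):
        self.data = torch.randn(n_trials, seq_len, obs_dim)
    def __len__(self):
        return len(self.data)
    def __getitem__(self, idx):
        return {'observations': self.data[idx]}
loader = DataLoader(SyntheticObsDataset(), batch_size=8, shuffle=True)
model = NeurocyberneticSSM()
train_neurocybernetic(model, loader, epochs=5, lr=1e-3)
```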
**Step 4: Closed-Loop Simulation**
```python
def simulate_closed_loop(
model: NeurocyberneticSSM,
environment,
n_steps: int = 1000,
initial_state: Optional[torch.Tensor] = None
):
"""
Simulate closed-loop brain-body-environment interaction.
"""
if initial_state is None:
x = torch.zeros(1, model.state_dim)
else:
x = initial_state
states = [x]
observations = []
controls = []
rewards = []
for t in range(n_steps):
# Decode observation from state
obs_pred = model.decode_observations(x.unsqueeze(0))
observations.append(obs_pred)
# Get objective from environment
objective = environment.get_objective(obs_pred)
# Controller computes action
u = model.controller(torch.cat([x, objective], dim=-1))
controls.append(u)
# Environment step
next_obs, reward = environment.step(u)
rewards.append(reward)
# Update state with new observation
with torch.no_grad():
            x_new, _, _ = model(next_obs.unsqueeze(0).unsqueeze(0), x)
x = x_new[:, -1]
states.append(x)
return {
'states': torch.stack(states),
'observations': torch.stack(observations),
'controls': torch.stack(controls),
'rewards': torch.stack(rewards)
}
```
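`simulate_closed_loop` assumes an environment object exposing `get_objective(obs)` and `step(action)`, neither of which is defined above. The stub below is a placeholder with arbitrary dynamics, included only so the loop runs end to end.
```python
class ToyEnvironment:
    """Placeholder environment: random sensory responses, reward for small actions."""
    def __init__(self, obs_dim: int = 100, objective_dim: int = 5):
        self.obs_dim = obs_dim
        self.objective_dim = objective_dim
    def get_objective(self, obs_pred: torch.Tensor) -> torch.Tensor:
        # A real task would derive the objective from the sensory state;
        # here it is a fixed zero vector per batch element.
        return torch.zeros(obs_pred.shape[0], self.objective_dim)
    def step(self, action: torch.Tensor):
        next_obs = torch.randn(self.obs_dim)      # (obs_dim,) next sensory input
        reward = -action.pow(2).sum()             # penalize large motor commands
        return next_obs, reward
env = ToyEnvironment()
model = NeurocyberneticSSM()
rollout = simulate_closed_loop(model, env, n_steps=50)
print(rollout['states'].shape, rollout['rewards'].shape)
```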
## Applications
### 1. Multi-Animal Neural Data Analysis
```python
def analyze_cross_animal_data(datasets, animal_ids):
"""
Fit model to multiple animals simultaneously.
Args:
datasets: list of (T, obs_dim) arrays
animal_ids: list of animal IDs
"""
n_animals = len(set(animal_ids))
model = MultiAnimalNeurocybernetic(n_animals=n_animals)
# Train with meta-learning
train_meta_learning(model, datasets, animal_ids)
    # The shared structure lives in the first `model.shared_state_dim` latent dimensions
    shared_dim = model.shared_state_dim
return model
```
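`train_meta_learning` is referenced above but never defined. One way to fill it in, assuming each dataset is a single `(T, obs_dim)` trial matching the model's `obs_dim` and that animal IDs are integers in `[0, n_animals)`, is joint optimization of a reconstruction plus control-cost loss; the weights and epoch count below are arbitrary.
```python
def train_meta_learning(model, datasets, animal_ids, epochs=50, lr=1e-3):
    """Joint training across animals with a simple reconstruction + control-cost loss."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    for epoch in range(epochs):
        total = 0.0
        for data, animal_id in zip(datasets, animal_ids):
            obs = torch.as_tensor(data, dtype=torch.float32).unsqueeze(0)  # (1, T, obs_dim)
            ids = torch.tensor([animal_id])
            optimizer.zero_grad()
            states, controls, _ = model(obs, ids)
            recon = model.ssm.decode_observations(states)
            loss = ((recon - obs) ** 2).mean() + 0.01 * (controls ** 2).mean()
            loss.backward()
            optimizer.step()
            total += loss.item()
        print(f"Epoch {epoch}: loss = {total / len(datasets):.4f}")
```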
### 2. Latent Objective Inference
```python
def infer_latent_objectives(model, observations):
"""
Infer what objectives the brain is pursuing from neural data.
"""
with torch.no_grad():
_, _, objectives = model(observations)
# Cluster objectives to find discrete task types
from sklearn.cluster import KMeans
kmeans = KMeans(n_clusters=5)
objective_clusters = kmeans.fit_predict(objectives.reshape(-1, objectives.shape[-1]))
return objectives, objective_clusters
```
### 3. Closed-Loop Control Design
```python
def design_controller_for_task(model, observations, task_objective):
"""
Design control policy for specific task.
"""
# Fine-tune controller network for task
controller = model.controller
optimizer = torch.optim.Adam(controller.parameters(), lr=1e-4)
for epoch in range(100):
        states, controls, _ = model(observations)
# Task-specific loss
task_loss = compute_task_loss(states, controls, task_objective)
optimizer.zero_grad()
task_loss.backward()
optimizer.step()
```
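`compute_task_loss` is also left undefined in the snippet above. A hedged example for a reaching-style task: drive the leading latent dimensions toward the task objective at the final time step while keeping control effort small; the weighting is arbitrary.
```python
def compute_task_loss(states, controls, task_objective, effort_weight=0.01):
    """Illustrative task loss: terminal error on the leading latent dims plus control effort."""
    goal_dim = task_objective.shape[-1]
    final_state = states[:, -1, :goal_dim]                    # (batch, goal_dim)
    goal_error = ((final_state - task_objective) ** 2).mean()
    effort = (controls ** 2).mean()
    return goal_error + effort_weight * effort
```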
## Pitfalls
1. **Identifiability**: Latent states and objectives may not be uniquely identifiable without constraints (simple soft constraints are sketched after this list)
2. **Scalability**: Training on large-scale datasets requires efficient approximate inference (e.g., SVI)
3. **Model Misspecification**: Overly simple assumptions (e.g., Gaussian noise, locally linear dynamics) may fail for complex neural dynamics
4. **Computational Cost**: Particle filtering scales poorly with sequence length
5. **Cross-Animal Transfer**: Shared structure may not exist if animals have fundamentally different strategies
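For pitfall 1, simple soft constraints can shrink (though not remove) the rotation and scale ambiguity of the latents. A sketch of two such regularizers, with arbitrary weights, that could be added to the training loss in Step 3:
```python
def identifiability_regularizers(states, smooth_weight=0.1, decorr_weight=0.01):
    """Soft constraints on latent trajectories: temporal smoothness and decorrelation."""
    smoothness = ((states[:, 1:] - states[:, :-1]) ** 2).mean()
    flat = states.reshape(-1, states.shape[-1])
    flat = flat - flat.mean(dim=0, keepdim=True)
    cov = (flat.T @ flat) / flat.shape[0]
    off_diag = cov - torch.diag(torch.diag(cov))
    decorrelation = (off_diag ** 2).mean()
    return smooth_weight * smoothness + decorr_weight * decorrelation
```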
## Related Skills
- brain-digital-twins-execution-semantics
- brain-state-transition-network-control
- brain-network-controllability
## References
```bibtex
@article{park2026neurocybernetic,
title={Integrative neurocybernetic modeling in the era of large-scale neuroscience},
author={Park, Il Memming and Vermani, Ayesha and de Polavieja, Gonzalo G. and Gallego, Juan Álvaro and Esfahany, Kathleen and Saxena, Shreya and Orger, Michael and Ijspeert, Auke and Dowling, Matthew and McNamee, Daniel and Turaga, Srinivas C. and Mainen, Zachary and Paton, Joseph J. and Renart, Alfonso},
journal={arXiv preprint arXiv:2604.23903},
year={2026}
}
```