---
name: connectivity-distributions-neural-populations
version: v1.0.0
last_updated: 2026-04-19
description: Identifying neural connectivity distributions from population recordings using low-rank recurrent neural networks (lrRNNs). Addresses the degeneracy problem where multiple connectivity structures generate identical dynamics. Provides mechanistic interpretation of neural dynamics through inferred connectivity. Applicable to neural population analysis, connectivity inference, circuit architecture mapping. Trigger: connectivity inference neural populations, lrRNN connectivity, neural dynamics degeneracy, circuit architecture inference, population recording analysis
---
# Identifying Connectivity Distributions from Neural Population Recordings
## Description
A methodology for inferring neural connectivity distributions from population activity recordings, addressing the fundamental degeneracy problem: multiple connectivity structures can generate identical neural dynamics. Uses low-rank recurrent neural networks (lrRNNs) to infer low-dimensional latent dynamics and connectivity structure, enabling mechanistic interpretation.
Based on: "Identifying Connectivity Distributions from Neural Population Recordings" (arXiv:2603.26506, March 2026)
## Problem
- Connectivity structure shapes neural computation
- Inferring connectivity from population recordings is **degenerate**: many structures produce identical dynamics (a toy demonstration follows this list)
- Standard inference methods cannot distinguish between equivalent connectivity patterns
- Need probabilistic approach that characterizes the distribution of possible connectivities
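A minimal linear toy example (an illustration, not the paper's construction) makes the degeneracy concrete: when activity is confined to a low-dimensional subspace, any connectivity component that vanishes on that subspace is invisible in the recordings.
```python
import torch

torch.manual_seed(0)
N = 50
m = torch.randn(N, dtype=torch.float64)
k = torch.randn(N, dtype=torch.float64)
W = torch.outer(m, k) / N                  # rank-1 connectivity

# A perturbation that vanishes on span(m), the subspace the activity occupies
q = torch.randn(N, dtype=torch.float64)
q = q - (q @ m) / (m @ m) * m              # make q orthogonal to m
D = torch.outer(torch.randn(N, dtype=torch.float64), q)

xa, xb = m.clone(), m.clone()
for _ in range(20):
    xa = W @ xa                            # trajectory under W
    xb = (W + D) @ xb                      # trajectory under a different matrix
print(torch.allclose(xa, xb))              # True: two connectivities, one trajectory
```
The two matrices differ by an arbitrarily large perturbation `D`, yet produce the same trajectory, so no method that only sees the activity can tell them apart.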
## lrRNN Framework
### Low-Rank Recurrent Neural Network
The key insight: neural population dynamics can be captured by a low-rank connectivity matrix:
```
W = M K^T / N + W_0
```
Where:
- W: Full connectivity matrix (N x N)
- M, K: Low-rank factors (N x R, where R << N)
- W_0: Fixed background connectivity
- R: Rank (number of latent dimensions)
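A minimal sketch of this construction (the dimensions and random background are illustrative):
```python
import torch

N, R = 100, 3
M = torch.randn(N, R)
K = torch.randn(N, R)
W0 = torch.randn(N, N) / N ** 0.5          # illustrative random background
W = M @ K.T / N + W0

# The structured part has rank R, far below the full dimension N
print(torch.linalg.matrix_rank(M @ K.T / N).item())   # 3
```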
### Connectivity Distribution Inference
```python
import torch


class ConnectivityDistribution:
    """Infer a distribution over possible connectivity structures."""

    def __init__(self, n_neurons, rank):
        self.n_neurons = n_neurons
        self.rank = rank
        # Gaussian distribution over the low-rank factors
        self.M_mean = torch.zeros(n_neurons, rank)
        self.M_cov = torch.eye(n_neurons * rank)
        self.K_mean = torch.zeros(n_neurons, rank)
        self.K_cov = torch.eye(n_neurons * rank)

    def sample_connectivity(self, n_samples=100):
        """Sample possible connectivity matrices from the inferred distribution."""
        # Sample each factor element-wise, using the diagonal of its covariance
        M_std = self.M_cov.diagonal().sqrt().view(self.n_neurons, self.rank)
        K_std = self.K_cov.diagonal().sqrt().view(self.n_neurons, self.rank)
        samples = []
        for _ in range(n_samples):
            M = torch.distributions.Normal(self.M_mean, M_std).sample()
            K = torch.distributions.Normal(self.K_mean, K_std).sample()
            samples.append(M @ K.T / self.n_neurons)
        return torch.stack(samples)

    def connectivity_statistics(self):
        """Compute summary statistics of the connectivity distribution."""
        return {
            "mean_connectivity": self.M_mean @ self.K_mean.T / self.n_neurons,
            # Coarse scalar summary of the posterior spread
            "variance": torch.trace(self.M_cov) * torch.trace(self.K_cov) / self.n_neurons ** 2,
            "effective_rank": self._effective_rank(),
        }

    def _effective_rank(self, eps=1e-12):
        """Participation ratio of the singular values of the mean connectivity."""
        s = torch.linalg.svdvals(self.M_mean @ self.K_mean.T / self.n_neurons)
        return (s.sum() ** 2 / ((s ** 2).sum() + eps)).item()
```
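For example (hypothetical usage, before any fitting has constrained the distribution):
```python
dist = ConnectivityDistribution(n_neurons=100, rank=5)
W_samples = dist.sample_connectivity(n_samples=50)
print(W_samples.shape)   # torch.Size([50, 100, 100])
```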
## Key Methodology Steps
### 1. Fit lrRNN to Population Data
```python
import torch
import torch.nn.functional as F


def forward_rnn(activity, W, dt=0.1):
    """One-step prediction of each next time point from the current one.
    Assumed leaky-tanh rate dynamics; substitute the paper's exact model if it differs."""
    x = activity[:-1]
    return x + dt * (-x + torch.tanh(x @ W.T))


def fit_lrrnn(neural_data, rank=10, n_steps=10000, lr=1e-2):
    """
    Fit a low-rank RNN to neural population recordings.
    Args:
        neural_data: Neural activity [time x neurons]
        rank: Target rank of the connectivity matrix
    """
    n_neurons = neural_data.shape[1]
    # Initialize the low-rank factors as trainable parameters
    M = torch.randn(n_neurons, rank, requires_grad=True)
    K = torch.randn(n_neurons, rank, requires_grad=True)
    # Optimize the factors so the predicted dynamics match the recordings
    optimizer = torch.optim.Adam([M, K], lr=lr)
    for step in range(n_steps):
        optimizer.zero_grad()
        W = M @ K.T / n_neurons
        predicted = forward_rnn(neural_data, W)
        # Match each one-step prediction to the next observed time point
        loss = F.mse_loss(predicted, neural_data[1:])
        loss.backward()
        optimizer.step()
    return M.detach(), K.detach()
```
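Hypothetical usage on synthetic data (the recordings here are random and purely illustrative):
```python
# 500 time points, 80 neurons
data = torch.randn(500, 80)
M, K = fit_lrrnn(data, rank=5, n_steps=2000)
W_hat = M @ K.T / data.shape[1]   # point estimate of the inferred connectivity
```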
### 2. Characterize Connectivity Distribution
After fitting, characterize the posterior distribution over connectivity matrices:
```python
def characterize_connectivity_distribution(M, K, neural_data, lambda_reg=1e-3):
    """
    Characterize the distribution of connectivity matrices
    consistent with the observed neural dynamics.
    """
    # Fisher information quantifies how strongly the data constrain each parameter
    fisher_info = compute_fisher_information(M, K, neural_data)
    # Laplace approximation: Gaussian posterior whose covariance is the
    # regularized inverse of the Fisher information
    posterior_cov = torch.inverse(fisher_info + lambda_reg * torch.eye(fisher_info.shape[0]))
    return {
        "posterior_mean": torch.cat([M.flatten(), K.flatten()]),
        "posterior_covariance": posterior_cov,
        # compute_credible_sets is assumed to be defined elsewhere
        "credible_sets": compute_credible_sets(posterior_cov),
    }
```
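`compute_fisher_information` is referenced above but not defined; one hedged sketch, assuming a Gauss-Newton approximation F = J^T J / sigma^2 with J the Jacobian of the one-step predictions with respect to the flattened factors:
```python
def compute_fisher_information(M, K, neural_data, sigma=1.0, dt=0.1):
    """Gauss-Newton approximation to the Fisher information of the flattened
    factors (M, K) under a Gaussian one-step prediction model."""
    n, r = M.shape
    x = neural_data[:-1]

    def predict(theta):
        Mt, Kt = theta[:n * r].view(n, r), theta[n * r:].view(n, r)
        W = Mt @ Kt.T / n
        # Same assumed leaky-tanh forward model as forward_rnn above
        return (x + dt * (-x + torch.tanh(x @ W.T))).flatten()

    theta0 = torch.cat([M.flatten(), K.flatten()])
    J = torch.autograd.functional.jacobian(predict, theta0)  # [(T-1)*n, 2*n*r]
    return J.T @ J / sigma ** 2
```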
### 3. Identify Degenerate Solutions
```python
def identify_degeneracy(connectivity_dist, threshold=1.0):
    """
    Identify which aspects of connectivity are constrained vs. degenerate.
    Eigendirections of the posterior covariance with variance above the
    threshold are underdetermined by the data; the rest are well determined.
    """
    eigenvalues = torch.linalg.eigvalsh(connectivity_dist["posterior_covariance"])
    # High-variance eigendirections = degenerate (unconstrained) directions
    degenerate_directions = eigenvalues > threshold
    return {
        "constrained": (~degenerate_directions).sum().item(),
        "degenerate": degenerate_directions.sum().item(),
        "total": len(eigenvalues),
    }
```
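A quick self-contained check with a synthetic posterior covariance (values chosen by hand for illustration):
```python
# 5 tightly constrained directions, 95 broad (degenerate) ones
cov = torch.diag(torch.cat([torch.full((5,), 1e-3), torch.full((95,), 10.0)]))
report = identify_degeneracy({"posterior_covariance": cov})
print(report)   # {'constrained': 5, 'degenerate': 95, 'total': 100}
```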
## Applications
- **Circuit mechanism discovery**: Identify computational motifs in neural circuits
- **Connectivity perturbation analysis**: Predict effects of targeted manipulations
- **Cross-condition comparison**: Compare connectivity distributions across task conditions (one concrete metric is sketched after this list)
- **Model validation**: Test if inferred connectivity matches known anatomy
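For cross-condition comparison, one concrete option (an assumption, not something the paper prescribes) is the closed-form KL divergence between the Gaussian posteriors inferred under each condition:
```python
def gaussian_kl(mu0, cov0, mu1, cov1):
    """KL( N(mu0, cov0) || N(mu1, cov1) ), in closed form for Gaussians."""
    d = mu0.numel()
    cov1_inv = torch.inverse(cov1)
    diff = mu1 - mu0
    return 0.5 * (torch.logdet(cov1) - torch.logdet(cov0) - d
                  + torch.trace(cov1_inv @ cov0)
                  + diff @ cov1_inv @ diff)
```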
## Key Insights
1. **Degeneracy is fundamental**: Many connectivity structures can produce identical dynamics, so point estimates are inherently ambiguous
2. **Low-rank structure is key**: lrRNNs capture the computationally relevant subspace
3. **Distribution, not point estimate**: Characterize uncertainty in connectivity inference
4. **Mechanistic interpretation**: Low-rank factors reveal computational primitives