---
name: cornn-convex-rnn-optimization
description: CORNN, a convex-optimization methodology for recurrent neural networks. Recasts RNN training as a convex optimization problem, trains roughly 100x faster than traditional methods, and fits million-parameter RNNs in under a minute on a standard computer. Suited to real-time modeling of large-scale neural recordings, neural dynamics inference, and attractor-structure recovery. Trigger terms: RNN training, convex optimization, neural dynamics, real-time modeling, data-constrained RNN, recurrent neural network, neural dynamics inference.
user-invocable: true
---
# CORNN: Convex Optimization of Recurrent Neural Networks
Rapid inference of neural dynamics via convex-optimized recurrent neural networks
## Core Methodology
**Source:** arXiv:2311.10200 (NeurIPS 2023)
**Utility:** 0.92
### Key Innovation
Recasts RNN training as a convex optimization problem:
- Roughly **100x faster** training than traditional methods
- Million-parameter RNNs trained in **under a minute** on a standard computer
- Supports real-time network reconstruction
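The convexity hinges on the network states being pinned to the recorded data. A sketch of the reasoning (notation mine; the paper's objective handles noise and saturation more carefully): with recorded activity $r_t$ and tanh units, the one-step dynamics can be inverted through the activation, so weight fitting reduces to ridge regression, which is convex in $W$:

```latex
r_{t+1} = \tanh(W r_t)
\;\Longrightarrow\;
\operatorname{arctanh}(r_{t+1}) = W r_t,
\qquad
\min_{W} \; \sum_{t=1}^{T-1}
  \bigl\| \operatorname{arctanh}(r_{t+1}) - W r_t \bigr\|_2^2
  + \lambda \, \| W \|_F^2 .
```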
### Implementation Framework
```python
import numpy as np
import cvxpy as cp
from typing import Optional, Tuple


class CORNNTrainer:
    """
    CORNN: convex-optimization trainer for data-constrained RNNs.

    Core idea: because every network state is pinned to the recorded
    activity, the one-step dynamics r_{t+1} = tanh(W_rec r_t + W_in u_t)
    can be inverted through the activation, turning weight fitting into
    a convex (ridge-regression) problem.
    """

    def __init__(self, n_neurons: int, regularization: float = 0.01):
        self.n_neurons = n_neurons
        self.reg = regularization

    def train(
        self,
        neural_data: np.ndarray,
        inputs: Optional[np.ndarray] = None,
    ) -> Tuple[np.ndarray, Optional[np.ndarray]]:
        """
        Fit recurrent (and optional input) weights by convex optimization.

        Parameters:
            neural_data: recorded activity, shape (n_timepoints, n_neurons),
                         assumed to lie in (-1, 1) (tanh units)
            inputs: optional external inputs, shape (n_timepoints, n_inputs)

        Returns:
            W_rec: recurrent weights, (n_neurons, n_neurons)
            W_in: input weights, (n_neurons, n_inputs), or None
        """
        r = np.clip(neural_data, -1 + 1e-6, 1 - 1e-6)
        states = r[:-1]              # regressors: r_t
        targets = np.arctanh(r[1:])  # inverse activation of r_{t+1}

        W_rec = cp.Variable((self.n_neurons, self.n_neurons))
        pred = states @ W_rec.T
        variables = [W_rec]
        if inputs is not None:
            W_in = cp.Variable((self.n_neurons, inputs.shape[1]))
            pred = pred + inputs[:-1] @ W_in.T
            variables.append(W_in)

        # Convex objective: squared prediction error plus ridge penalty
        loss = cp.sum_squares(targets - pred)
        loss += self.reg * sum(cp.sum_squares(v) for v in variables)
        cp.Problem(cp.Minimize(loss)).solve()

        return W_rec.value, (W_in.value if inputs is not None else None)

    def infer_dynamics(
        self,
        neural_data: np.ndarray,
        W_rec: np.ndarray,
        W_in: Optional[np.ndarray] = None,
        inputs: Optional[np.ndarray] = None,
    ) -> np.ndarray:
        """
        One-step-ahead predictions of the fitted dynamics.

        Returns:
            predicted activity trajectory, same shape as neural_data
        """
        T = neural_data.shape[0]
        r = np.zeros((T, self.n_neurons))
        r[0] = neural_data[0]
        for t in range(1, T):
            drive = W_rec @ neural_data[t - 1]
            if W_in is not None and inputs is not None:
                drive = drive + W_in @ inputs[t - 1]
            r[t] = np.tanh(drive)
        return r


def cornn_train(
    neural_data: np.ndarray,
    hidden_dim: Optional[int] = None,
    regularization: float = 0.01,
) -> dict:
    """
    CORNN training interface.

    Parameters:
        neural_data: recorded activity, (n_timepoints, n_neurons)
        hidden_dim: unused; the data-constrained network has one unit
                    per recorded neuron (kept for API compatibility)
        regularization: ridge penalty strength

    Returns:
        dict with fitted weights and the inferred dynamics
    """
    trainer = CORNNTrainer(
        n_neurons=neural_data.shape[1],
        regularization=regularization,
    )
    W_rec, W_in = trainer.train(neural_data)
    dynamics = trainer.infer_dynamics(neural_data, W_rec, W_in)
    return {'W_rec': W_rec, 'W_in': W_in, 'dynamics': dynamics}
```
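Because the objective above is ridge regression, it also admits a closed-form solution, which is what makes sub-minute training plausible: one linear solve per network instead of thousands of gradient steps. A minimal numpy sketch (the function name, clipping constant, and inverse-activation formulation are illustrative assumptions, not the paper's exact solver):

```python
import numpy as np


def cornn_closed_form(neural_data: np.ndarray, reg: float = 0.01) -> np.ndarray:
    """Fit recurrent weights by closed-form ridge regression.

    Solves min_W ||arctanh(r[1:]) - r[:-1] @ W.T||^2 + reg * ||W||^2.
    """
    r = np.clip(neural_data, -1 + 1e-6, 1 - 1e-6)
    R = r[:-1]             # regressors r_t, shape (T-1, N)
    Z = np.arctanh(r[1:])  # targets, shape (T-1, N)
    n = R.shape[1]
    # Normal equations with a ridge term; one (N x N) solve in total
    gram = R.T @ R + reg * np.eye(n)
    W = np.linalg.solve(gram, R.T @ Z).T
    return W


# Example: 1000 timesteps of 50 tanh units fit in milliseconds
rng = np.random.default_rng(0)
data = np.tanh(rng.standard_normal((1000, 50)))
W_rec = cornn_closed_form(data)
print(W_rec.shape)  # (50, 50)
```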
## Application Scenarios
1. Real-time modeling of large-scale neural recordings
2. Neural dynamics inference
3. Attractor-structure recovery
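Connectivity (and hence attractor-structure) recovery can be sanity-checked on synthetic data: simulate a ground-truth tanh network, fit weights with the same convex ridge regression, and compare the recovered matrix to the true one. A hedged sketch (network size, gain, noise level, and seed are illustrative, and the closed-form fit stands in for a full CORNN solver):

```python
import numpy as np

rng = np.random.default_rng(1)
N, T, g, noise, reg = 20, 2000, 1.5, 0.05, 1e-6

# Ground-truth recurrent weights, scaled for rich dynamics
W_true = g * rng.standard_normal((N, N)) / np.sqrt(N)

# Simulate r_{t+1} = tanh(W_true r_t + noise)
r = np.zeros((T, N))
r[0] = 0.1 * rng.standard_normal(N)
for t in range(1, T):
    r[t] = np.tanh(W_true @ r[t - 1] + noise * rng.standard_normal(N))

# Convex fit: ridge regression on inverse-activation targets
R = np.clip(r[:-1], -1 + 1e-6, 1 - 1e-6)
Z = np.arctanh(np.clip(r[1:], -1 + 1e-6, 1 - 1e-6))
W_hat = np.linalg.solve(R.T @ R + reg * np.eye(N), R.T @ Z).T

corr = np.corrcoef(W_true.ravel(), W_hat.ravel())[0, 1]
print(f"weight recovery correlation: {corr:.3f}")
```

Because the simulated dynamics exactly match the fitted model class, the recovered weights correlate almost perfectly with the ground truth; with model mismatch or heavier noise, the correlation degrades gracefully.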
## Key Advantages
| Metric | Traditional Methods | CORNN |
|--------|--------------------|-------|
| Training speed | baseline | **~100x faster** |
| Parameter count | limited | **millions** |
| Latency | offline only | **near real-time** |
## Activation Keywords
- RNN training
- convex optimization
- neural dynamics
- real-time modeling
- data-constrained RNN
- recurrent neural network
- neural dynamics inference
## Tools Used
- numpy
- cvxpy
## Instructions for Agents
1. Prepare the neural data: ensure the shape is (timesteps, neurons)
2. Set the regularization strength
3. Call the cornn_train function to fit the weights
4. Inspect the returned weight matrices and the inferred dynamics
5. Validate training speed and accuracy
## Examples
```python
# Usage example
import numpy as np
from cornn_convex_rnn_optimization import cornn_train

# 1. Generate simulated neural data (tanh units, values in (-1, 1))
T, n_neurons = 1000, 50
neural_data = np.tanh(np.random.randn(T, n_neurons))

# 2. Fit the data-constrained RNN
result = cornn_train(neural_data, regularization=0.01)

# 3. Inspect the results
print(f"Recurrent weights: {result['W_rec'].shape}")
print(f"Inferred dynamics: {result['dynamics'].shape}")
```
## References
- Dinc, F., et al. (2023). "CORNN: Convex optimization of recurrent neural networks for rapid inference of neural dynamics." NeurIPS 2023. arXiv:2311.10200.