---
name: energy-based-dynamical-models-neurocomputation-learning
description: "Recent advances at the intersection of control theory, neuroscience, and machine learning have revealed novel mechanisms by which dynamical systems perform computation, spanning a wide range of conceptual, mathematical, and computational ideas. Activation: energy-based models, dynamical systems, ODE complexity"
version: 1.0.0
metadata:
  hermes:
    source_paper: "Energy-Based Dynamical Models for Neurocomputation, Learning, and Optimization (arXiv:2604.05042v1)"
    tags: [computational, dynamics, energy, learning, memory, network]
---
# Energy-Based Dynamical Models for Neurocomputation, Learning, and Optimization
## Paper Reference
- **Title**: Energy-Based Dynamical Models for Neurocomputation, Learning, and Optimization
- **Authors**: Arthur N. Montanari, Francesco Bullo, Dmitry Krotov et al.
- **arXiv**: 2604.05042v1
- **Published**: 2026-04-06
- **Categories**: cs.LG, cond-mat.dis-nn, eess.SY, math.DS, q-bio.NC
- **PDF**: https://arxiv.org/abs/2604.05042
## Overview
Recent advances at the intersection of control theory, neuroscience, and machine learning have revealed novel mechanisms by which dynamical systems perform computation. These advances encompass a wide range of conceptual, mathematical, and computational ideas, with applications for model learning and training, memory retrieval, data-driven control, and optimization. This tutorial focuses on neuro-inspired approaches to computation that aim to improve scalability, robustness, and energy efficiency across such tasks, bridging the gap between artificial and biological systems. Particular emphasis is placed on energy-based dynamical models that encode information through gradient flows and energy landscapes. We begin by reviewing classical formulations, such as continuous-time Hopfield networks.
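The gradient flows mentioned above can be illustrated with a minimal sketch (not taken from the paper): integrating dx/dt = -∇E(x) for a quadratic energy with forward Euler, so the energy decreases monotonically toward the minimum. The matrix `A` and step size `dt` are illustrative choices.

```python
import numpy as np

# Gradient flow dx/dt = -grad E(x) on the quadratic energy E(x) = 0.5 * x @ A @ x,
# integrated with forward Euler. A is symmetric positive definite, so the
# unique minimum is x = 0 and the energy is a Lyapunov function of the flow.
A = np.array([[2.0, 0.5],
              [0.5, 1.0]])

def E(x):
    return 0.5 * x @ A @ x

def grad_E(x):
    return A @ x

x = np.array([1.0, -1.0])
energies = [E(x)]
dt = 0.05  # small enough that each Euler step decreases the energy
for _ in range(200):
    x = x - dt * grad_E(x)
    energies.append(E(x))
```

Along the trajectory, `energies` is non-increasing and converges toward zero, the computational signature of energy-based dynamics.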
## Core Concepts
1. **Energy-Based Models (EBMs)**: Formulating computation and learning as energy minimization processes
2. **Neurocomputation Framework**: Mapping neural dynamics to energy landscapes
3. **Optimization Principles**: Deriving learning rules from energy function gradients
4. **Biological Plausibility**: Connecting energy-based computation to neural mechanisms
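Concepts 1, 2, and 4 come together in the classical associative-memory setting. The following is a hedged sketch, assuming a discrete Hopfield-style network with Hebbian outer-product storage and asynchronous sign updates; the helper names are illustrative, not from the paper.

```python
import numpy as np

def hebbian_weights(patterns):
    """Hebbian storage: average of outer products, with zero self-coupling."""
    P = np.asarray(patterns, dtype=float)
    W = P.T @ P / len(P)
    np.fill_diagonal(W, 0.0)
    return W

def energy(W, s):
    """Hopfield energy E(s) = -0.5 * s' W s; updates never increase it."""
    return -0.5 * s @ W @ s

def retrieve(W, s, sweeps=5):
    """Asynchronous sign updates descend the energy landscape to a memory."""
    s = s.copy()
    for _ in range(sweeps):
        for i in range(len(s)):
            s[i] = 1.0 if W[i] @ s >= 0 else -1.0
    return s

pattern = np.array([1, -1, 1, -1, 1, -1], dtype=float)
W = hebbian_weights([pattern])
noisy = pattern.copy()
noisy[0] *= -1  # corrupt one bit
recovered = retrieve(W, noisy)
```

Here memory retrieval is literally energy minimization: the corrupted state sits higher on the landscape, and the dynamics slide it into the stored pattern's basin of attraction.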
## Mathematical Framework
The energy function E defines the optimization landscape:
```
E(x, y; theta) = -log p(y|x; theta) + R(theta)
```
where R is a regularization term encoding prior knowledge.
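As an illustration only, the energy above can be instantiated for binary logistic regression, with R(theta) taken as an L2 penalty. The parameter names (`theta`, `lam`) are assumptions for this sketch, not notation from the paper.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def energy(x, y, theta, lam=0.1):
    """E(x, y; theta) = -log p(y | x; theta) + R(theta) for logistic p
    with L2 regularizer R(theta) = lam * ||theta||^2."""
    p = sigmoid(x @ theta)  # p(y = 1 | x; theta)
    nll = -(y * np.log(p) + (1 - y) * np.log(1 - p))
    return nll + lam * theta @ theta

x = np.array([1.0, 2.0])
theta = np.zeros(2)
# With theta = 0: p = 0.5, so -log p(y|x) = log 2 and R(theta) = 0.
e = energy(x, 1, theta)
```

Learning then amounts to descending this landscape in `theta`, which recovers regularized maximum likelihood as a special case of energy minimization.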
## Implementation Pattern
```python
import numpy as np

class EnergyBasedNeuralModel:
    """Energy-based model (RBM-style) for neural computation."""

    def __init__(self, n_visible, n_hidden):
        self.W = np.random.randn(n_visible, n_hidden) * 0.1
        self.b_visible = np.zeros(n_visible)
        self.b_hidden = np.zeros(n_hidden)

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def energy(self, v, h):
        """Compute the energy of a joint (visible, hidden) configuration."""
        return -v @ self.W @ h - v @ self.b_visible - h @ self.b_hidden

    def hidden_probs(self, v):
        """P(h = 1 | v)."""
        return self._sigmoid(v @ self.W + self.b_hidden)

    def sample_hidden(self, v):
        """Sample binary hidden units given visible units."""
        return np.random.binomial(1, self.hidden_probs(v))

    def contrastive_divergence(self, data, lr=0.01, steps=1):
        """CD-k learning rule: approximate the negative phase with k Gibbs steps."""
        # Positive phase: use hidden probabilities (lower variance than samples).
        p_h0 = self.hidden_probs(data)
        pos_grad = data.T @ p_h0
        # Negative phase: k steps of alternating Gibbs sampling.
        h = np.random.binomial(1, p_h0)
        for _ in range(steps):
            v = self._sigmoid(h @ self.W.T + self.b_visible)
            p_h = self.hidden_probs(v)
            h = np.random.binomial(1, p_h)
        neg_grad = v.T @ p_h
        # Parameter updates (weights and biases), averaged over the batch.
        n = len(data)
        self.W += lr * (pos_grad - neg_grad) / n
        self.b_visible += lr * (data - v).mean(axis=0)
        self.b_hidden += lr * (p_h0 - p_h).mean(axis=0)
```
## Applications
- Unsupervised representation learning
- Neural population dynamics modeling
- Associative memory systems
- Energy-efficient neuromorphic computing
## Limitations
- This skill is derived from the paper's abstract; the full paper may contain additional details
- Implementations are illustrative; refer to paper for production code
- Domain-specific parameters need empirical tuning
## References
- Arthur N. Montanari, Francesco Bullo, Dmitry Krotov et al. (2026). "Energy-Based Dynamical Models for Neurocomputation, Learning, and Optimization." arXiv:2604.05042v1.
- Full paper: https://arxiv.org/pdf/2604.05042.pdf
## Activation Keywords
- computational, dynamics, energy, learning, memory, network, neuroscience, optimization