---
name: event-driven-hopfield-retrieval
description: "Event-driven asynchronous retrieval in high-capacity kernel Hopfield networks. KLR Hopfield networks achieve P/N ≈ 30 storage capacity with asynchronous updates, enabling energy-efficient neuromorphic deployment. Event count matches initial Hamming distance — minimal spurious oscillations. Activation: Hopfield network, kernel associative memory, event-driven computation, asynchronous retrieval, neuromorphic memory, storage capacity, KLR Hopfield, margin maximization."
---
# Efficient Event-Driven Retrieval in High-Capacity Kernel Hopfield Networks
**arXiv:** 2605.05978 [cs.NE] (May 2026)
**Author:** Akira Tamamori (Aichi Institute of Technology, Japan)
**Source:** https://arxiv.org/abs/2605.05978
**Published:** NOLTA, IEICE, vol. 17, no. 1, pp. 1–10
## Core Problem
High-capacity associative memory models like Kernel Logistic Regression (KLR) Hopfield networks rely on computationally expensive **synchronous updates** — evaluating all N neurons simultaneously at each step. For large-scale applications where stored patterns P >> N, this incurs substantial computational and memory access costs, blocking deployment on energy-efficient neuromorphic hardware.
## Key Finding: Asynchronous Dynamics Match Synchronous Performance
Under appropriately tuned kernel parameters, **asynchronous sequential updates** in KLR Hopfield networks exhibit trajectories that are **statistically indistinguishable** from synchronous dynamics, while maintaining high recall accuracy. This enables efficient neuromorphic deployment without performance degradation.
## Technical Framework
### KLR Hopfield Network Model
A network of N neurons storing P patterns {ξ^μ}_{μ=1}^{P}:
**Local field at neuron i:**
```
h_i(s) = Σ(μ=1→P) α_i^μ · K(s, ξ^μ)
```
Where:
- α ∈ R^(P×N): dual variables learned via KLR (optimized independently per neuron)
- K(·,·): RBF kernel = exp(-γ‖x-y‖²)
- γ: kernel locality parameter (critical hyperparameter)
**KLR Learning Objective (per neuron i):**
```
L(α_i) = -Σ(ν=1→P)[y_ν,i·log(σ(h_i(ξ^ν))) + (1-y_ν,i)·log(1-σ(h_i(ξ^ν)))]
+ (λ/2)·α_i^T · K · α_i
```
Where y_ν,i = (ξ_i^ν + 1)/2 ∈ {0,1} is the target bit, σ is the logistic sigmoid, K is the Gram matrix, and λ is weight decay. This optimization yields **large-margin attractors**.
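Since h_i(ξ^ν) = (K·α_i)_ν, the gradient used to minimize this objective follows directly (y_i denotes the vector of target bits for neuron i):
```
∇L(α_i) = K·(σ(K·α_i) - y_i) + λ·K·α_i
```
This is the update rule implemented by the gradient-descent code in the Implementation Guide below.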
### Pseudo-Energy Function
```
V(s) = -Σ(i=1→N) s_i · h_i(s) = -Σ(i=1→N) s_i · Σ(μ=1→P) α_i^μ · K(s, ξ^μ)
```
Note: V(s) is a pseudo-energy (Lyapunov candidate), not the strict Ising energy E(s) of classical Hopfield networks.
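The pseudo-energy is straightforward to monitor during retrieval. A minimal sketch, assuming dual variables `alpha` of shape (P, N) and stored `patterns` of shape (P, N) as defined above:
```python
import numpy as np

def pseudo_energy(s, alpha, patterns, gamma):
    """Pseudo-energy V(s) = -sum_i s_i * h_i(s) of a KLR Hopfield network."""
    # Kernel vector k[mu] = K(s, xi^mu) = exp(-gamma * ||s - xi^mu||^2)
    k = np.exp(-gamma * np.sum((patterns - s) ** 2, axis=1))
    h = alpha.T @ k  # local fields h_i(s), shape (N,)
    return -np.dot(s, h)
```
Tracking V(s) along a retrieval trajectory is one way to check that individual bit flips tend to decrease it in practice.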
### Two Update Schemes
#### Synchronous (Parallel) Update
```
s_i(t+1) = sign(h_i(s(t))) for all i = 1,...,N
```
- Computationally efficient on GPUs
- No monotonic energy decrease guarantee
- Susceptible to oscillations/limit cycles at high loads
#### Asynchronous (Sequential) Update
```
s_i^new = sign(h_i(s_current)) for single neuron i
```
- One epoch = N sequential updates in random permutation order
- Suppresses macroscopic oscillations
- Large margins drive local field alignment → convergence to fixed-point attractor
- **Key insight**: Margin-induced smoothness of attractor landscape prevents spurious oscillations
### Kernel Parameter Regimes
| Regime | γ | Properties |
|--------|------|------------|
| Ridge (static memory) | 0.02 | Sharp/deep attractors, optimal for minimal noise |
| **Robust** (this work) | **0.1** | Wider attractor basins, robust to noise, high capacity |
The broader kernel (γ=0.1) provides wider attractor basins necessary for stable retrieval under high noise (10-20%) and large storage loads.
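A quick way to see the trade-off empirically is to sweep γ and compare recall. A sketch assuming the `KLRHopfieldNetwork` class from the Implementation Guide below (illustrative, not the paper's exact protocol):
```python
import numpy as np

# Compare recall in the two kernel regimes at 20% noise.
N, P = 50, 150
patterns = np.random.choice([-1, 1], size=(P, N))
noisy = patterns[0].copy()
flips = np.random.choice(N, size=int(0.2 * N), replace=False)
noisy[flips] *= -1

for gamma in (0.02, 0.1):  # ridge vs. robust regime
    net = KLRHopfieldNetwork(N, gamma=gamma)
    net.learn(patterns)
    result, _, events = net.retrieve_asynchronous(noisy)
    acc = np.mean(result == patterns[0])
    print(f"gamma={gamma}: accuracy={acc:.2%}, events={events}")
```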
## Empirical Results
### 1. Trajectory Similarity (Sync vs Async)
- Starting from 20% noisy initial state (N=50, P/N=3.0, γ=0.1)
- Both schemes converge to overlap > 0.95 within a few steps
- Trajectories nearly indistinguishable within statistical variation
- Confirmed at larger scales (N=100, N=200)
### 2. Storage Capacity
| Network Size (N) | Max Tested P/N | Accuracy |
|-------------------|----------------|----------|
| 50 | ~20 | 1.0 (degrades beyond P/N ≈ 20) |
| 100 | 30 | 1.0 |
| 200 | 30 | 1.0 |
- **P/N ≈ 30** maintained with perfect recall, roughly two orders of magnitude above the classical Hebbian limit (P ≈ 0.14N)
- Capacity benefits from increasing orthogonality of random patterns in high-dimensional kernel feature space
- Sync and async performance consistent across all sizes and loads
### 3. Event-Driven Efficiency
- Total bit flips required ≈ initial Hamming distance, i.e., the theoretical minimum (measured in the sketch after this list)
- At 20% noise (~10 initial errors): ~10 events needed, 95% recall success
- At 40% noise (~20 initial errors): ~15 events observed, but recall success drops below 10%
- **No spurious oscillations observed**: the network corrects erroneous bits directly
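A sketch of the event-count measurement referenced in the first bullet, assuming the `KLRHopfieldNetwork` class from the Implementation Guide below:
```python
import numpy as np

# Measure event count vs. initial Hamming distance across noise levels.
N, P = 50, 150
patterns = np.random.choice([-1, 1], size=(P, N))
net = KLRHopfieldNetwork(N, gamma=0.1)
net.learn(patterns)

for noise in (0.1, 0.2, 0.3, 0.4):
    n_errors = int(noise * N)  # initial Hamming distance
    noisy = patterns[0].copy()
    idx = np.random.choice(N, size=n_errors, replace=False)
    noisy[idx] *= -1
    result, _, events = net.retrieve_asynchronous(noisy)
    perfect = np.array_equal(result, patterns[0])
    print(f"noise={noise:.0%}: errors={n_errors}, events={events}, "
          f"perfect recall={perfect}")
```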
### Computational Cost Comparison
For a 50-neuron network at 20% initial noise:
- **Synchronous**: ~3 steps × 50 evaluations = **150 evaluations**
- **Asynchronous (event-driven)**: ~10 events = **10 evaluations**
- **Speedup**: ~15× fewer computations
## Margin-Induced Smoothness
The key mechanism enabling efficient asynchronous retrieval:
1. **Large classification margins** from KLR optimization ensure the local field h_i(s) aligns strongly with the target state (these margins can be inspected with the sketch after this list)
2. This suppresses noise from update order permutations
3. Energy landscape characterized by **smooth, wide basins of attraction** largely free of rugged local minima
4. Individual bit flips tend to decrease pseudo-energy in practice
5. Result: direct convergence path with minimal secondary bit flips
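The margins themselves are easy to inspect. A rough check, assuming a trained `KLRHopfieldNetwork` from the Implementation Guide below:
```python
import numpy as np

def stored_pattern_margins(net):
    """Margins xi_i^nu * h_i(xi^nu) for all stored patterns and neurons.

    Uniformly large positive values indicate that every stored bit is
    reproduced by its own local field with room to spare, consistent
    with the smooth, wide basins described above.
    """
    K = net.gram_matrix()    # K[nu, mu] = K(xi^nu, xi^mu)
    H = K @ net.alpha        # H[nu, i] = h_i(xi^nu)
    return net.patterns * H  # elementwise margins, shape (P, N)
```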
### Why No Spurious Oscillations?
Unlike classical Hopfield networks where asynchronous updates often traverse rugged landscapes and fall into spurious states, KLR's margin maximization creates attractor basins that:
- Are deep enough to capture noisy initial states
- Are wide enough to accommodate update order variation
- Are smooth enough to prevent oscillatory behavior
## Trade-off: Capacity vs Locality
- **γ = 0.02 (Ridge)**: Maximizes attractor sharpness, optimal for static memory with minimal noise
- **γ = 0.1 (Robust)**: Required for P/N > 20 with 10-20% noise robustness
- The broader kernel trades some per-pattern specificity for wider basins that accommodate more patterns and more noise
## Implementation Guide
### KLR Hopfield Network
```python
import numpy as np
from scipy.special import expit # sigmoid
class KLRHopfieldNetwork:
def __init__(self, N, gamma=0.1, lr=0.1, weight_decay=0.01):
self.N = N
self.gamma = gamma # kernel locality parameter
self.lr = lr
self.weight_decay = weight_decay
self.alpha = None # dual variables, shape (P, N)
self.patterns = None # stored patterns, shape (P, N)
def rbf_kernel(self, x, y):
"""RBF kernel: K(x,y) = exp(-gamma * ||x-y||^2)"""
return np.exp(-self.gamma * np.sum((x - y) ** 2))
    def gram_matrix(self):
        """Compute Gram matrix K[mu, nu] = K(xi^mu, xi^nu) via broadcasting."""
        diffs = self.patterns[:, None, :] - self.patterns[None, :, :]
        sq_dists = np.sum(diffs ** 2, axis=-1)
        return np.exp(-self.gamma * sq_dists)
    def learn(self, patterns, iterations=500):
        """Learn dual variables via KLR gradient descent.

        Each column alpha[:, i] is an independent kernel logistic
        regression (one per neuron); the update below handles all
        neurons at once.
        """
        self.patterns = patterns.copy()
        P, N = patterns.shape
        self.alpha = np.zeros((P, N))
        K = self.gram_matrix()
        Y = (patterns + 1) / 2  # target bits in {0, 1}, shape (P, N)
        for _ in range(iterations):
            # Local fields h_i(xi^nu) for every pattern/neuron: H = K @ alpha
            H = K @ self.alpha
            # Gradient of the regularized negative log-likelihood:
            # grad(alpha_i) = K (sigma(h_i) - y_i) + lambda * K alpha_i
            grad = K @ (expit(H) - Y) + self.weight_decay * (K @ self.alpha)
            self.alpha -= self.lr * grad
    def local_field(self, s, neuron_idx):
        """Compute local field h_i(s) = sum_mu alpha_i^mu K(s, xi^mu)."""
        k = np.exp(-self.gamma * np.sum((self.patterns - s) ** 2, axis=1))
        return self.alpha[:, neuron_idx] @ k
    def retrieve_synchronous(self, s_init, max_steps=20):
        """Synchronous (parallel) retrieval: update all neurons at once."""
        s = s_init.copy()
        trajectory = [s.copy()]
        for _ in range(max_steps):
            # One kernel vector gives all N local fields
            k = np.exp(-self.gamma * np.sum((self.patterns - s) ** 2, axis=1))
            h = self.alpha.T @ k
            s = np.sign(h).astype(int)
            s[s == 0] = 1  # convention: sign(0) = +1
            trajectory.append(s.copy())
        return s, trajectory
def retrieve_asynchronous(self, s_init, max_epochs=20):
"""
Asynchronous (sequential) retrieval.
One epoch = N sequential updates in random order.
"""
s = s_init.copy()
trajectory = [s.copy()]
event_count = 0 # total bit flips
for epoch in range(max_epochs):
order = np.random.permutation(self.N)
epoch_changed = False
for i in order:
h_i = self.local_field(s, i)
s_new = 1 if h_i >= 0 else -1
if s_new != s[i]:
s[i] = s_new
event_count += 1
epoch_changed = True
trajectory.append(s.copy())
if not epoch_changed:
break
return s, trajectory, event_count
```
### Usage Example
```python
# Store 150 random patterns in 50-neuron network
N, P = 50, 150 # P/N = 3.0
patterns = np.random.choice([-1, 1], size=(P, N))
net = KLRHopfieldNetwork(N, gamma=0.1)
net.learn(patterns, iterations=500)
# Retrieve from noisy initial state (20% noise)
noisy = patterns[0].copy()
flip_idx = np.random.choice(N, size=int(0.2 * N), replace=False)
noisy[flip_idx] *= -1
# Asynchronous retrieval
result, trajectory, events = net.retrieve_asynchronous(noisy)
print(f"Converged in {events} events (theoretical min: {int(0.2 * N)})")
print(f"Recall accuracy: {np.mean(result == patterns[0]):.2%}")
```
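Continuing the example above, the trajectory-similarity result from the Empirical Results section can be checked by running both schemes from the same noisy state:
```python
# Compare synchronous and asynchronous retrieval trajectories.
sync_result, sync_traj = net.retrieve_synchronous(noisy)
async_result, async_traj, _ = net.retrieve_asynchronous(noisy)

# Bitwise agreement with the stored pattern at each recorded step
sync_overlap = [np.mean(s == patterns[0]) for s in sync_traj]
async_overlap = [np.mean(s == patterns[0]) for s in async_traj]
print("sync: ", [f"{o:.2f}" for o in sync_overlap])
print("async:", [f"{o:.2f}" for o in async_overlap])
```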
## Applications to Neuromorphic Hardware
1. **Event-driven associative memory**: each bit flip triggers computation only for affected neurons (incremental kernel update sketched after this list)
2. **Low-power pattern retrieval**: 15× fewer computations vs. synchronous evaluation
3. **Scalable storage**: P/N ≈ 30 capacity enables large pattern libraries
4. **No oscillation overhead**: Clean convergence eliminates need for oscillation detection/termination logic
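The incremental update behind item 1: flipping one ±1 bit changes each squared distance ‖s − ξ^μ‖² by exactly ±4, so the kernel vector can be rescaled multiplicatively instead of recomputed. A hedged sketch of this bookkeeping (not code from the paper):
```python
import numpy as np

def event_driven_retrieve(s_init, alpha, patterns, gamma, max_epochs=20):
    """Asynchronous retrieval with O(P) incremental kernel updates per flip.

    Flipping bit j changes (s_j - xi_j^mu)^2 from 0 to 4 (or 4 to 0), so
    each kernel value is rescaled by exp(-4*gamma) or exp(+4*gamma).
    """
    s = s_init.copy()
    k = np.exp(-gamma * np.sum((patterns - s) ** 2, axis=1))  # K(s, xi^mu)
    for _ in range(max_epochs):
        changed = False
        for j in np.random.permutation(s.size):
            h_j = alpha[:, j] @ k  # local field of neuron j, O(P)
            s_new = 1 if h_j >= 0 else -1
            if s_new != s[j]:
                # Patterns matching the old bit move further away;
                # the rest move closer. Rescale kernel values accordingly.
                was_match = patterns[:, j] == s[j]
                k = np.where(was_match, k * np.exp(-4 * gamma),
                             k * np.exp(4 * gamma))
                s[j] = s_new
                changed = True
        if not changed:
            break
    return s
```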
## Limitations
1. Kernel matrix computation is O(P²); scaling to very large P requires approximation methods (one standard option sketched after this list)
2. Results validated on random patterns; structured/real-world patterns may behave differently
3. Finite-size analysis only (N up to 200); asymptotic behavior unknown
4. RBF kernel chosen; other kernel families may offer different trade-offs
5. No theoretical proof of convergence for asynchronous updates — empirical evidence only
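For limitation 1, one standard approximation (not evaluated in the paper) is random Fourier features, which replace the P×P Gram matrix with an explicit low-dimensional feature map:
```python
import numpy as np

def rff_features(X, gamma, D=256, seed=None):
    """Random Fourier features z(x) with z(x) @ z(y) ~= exp(-gamma*||x-y||^2).

    Standard Rahimi-Recht construction: frequencies drawn from N(0, 2*gamma),
    uniform phases. Illustrative only.
    """
    rng = np.random.default_rng(seed)
    n_features = X.shape[1]
    W = rng.normal(0.0, np.sqrt(2 * gamma), size=(n_features, D))
    b = rng.uniform(0.0, 2 * np.pi, size=D)
    return np.sqrt(2.0 / D) * np.cos(X @ W + b)
```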
## Comparison with Modern Hopfield Networks (MHNs)
| Property | KLR Hopfield | Modern Hopfield (MHN) |
|----------|-------------|----------------------|
| Architecture | Binary-state, kernel pseudo-energy | Complex nonlinearities (e.g., softmax) |
| Hardware compatibility | High (simple operations) | Low (requires softmax) |
| Asynchronous support | Yes (this work) | Not explored |
| Storage capacity | P/N ≈ 30 | Higher, at greater hardware cost |
| Energy landscape | Smooth basins (margin-induced) | Complex, potentially rugged |
## Activation Keywords
kernel Hopfield network, asynchronous retrieval, event-driven computation, associative memory, neuromorphic hardware, storage capacity, KLR, margin maximization, RBF kernel, attractor landscape, pseudo-energy, pattern retrieval, error correction, energy-efficient memory, binary-state network
## Related Skills
- kernel-hopfield-event-driven-retrieval
- kernel-hopfield-associative-memory
- kernel-hopfield-attractor-geometry
- neuromorphic-continual-nuclear-ics
- snn-learning-survey