---
name: klr-hopfield-event-driven-retrieval
description: >
  Kernel Logistic Regression (KLR) Hopfield Network with asynchronous event-driven retrieval methodology.
  Enables high-capacity associative memory (P/N ≈ 30, vs. the classical 0.14N) with
  neuromorphic-compatible sparse computation. Use when: designing associative memory systems,
  event-driven neuromorphic hardware deployment, comparing Hopfield variants (KLR vs. MHN),
  analyzing attractor landscapes, kernel-based neural networks, or studying asynchronous vs.
  synchronous retrieval dynamics in neural computing.
  Triggers: KLR Hopfield, kernel associative memory, event-driven retrieval, asynchronous Hopfield,
  neuromorphic associative memory, kernel logistic regression memory, high-capacity Hopfield,
  margin-induced attractor, asynchronous neural dynamics, kernel logistic regression Hopfield.
---
# KLR Hopfield Network: Asynchronous Event-Driven Retrieval
## Overview
Kernel Logistic Regression (KLR) Hopfield networks achieve high storage capacity (P/N ≈ 30)
via margin maximization in kernel feature space, while supporting efficient asynchronous,
event-driven retrieval — making them suitable for neuromorphic hardware (Loihi, TrueNorth).
Source: Akira Tamamori, arXiv:2605.05978v1 [cs.NE] (May 2026).
## Core Mechanism
### KLR Hopfield Model
- Neurons: bipolar state s ∈ {-1, 1}^N, storing P patterns {ξ^μ}_{μ=1}^P
- Local field: h_i(s) = Σ_{μ} α^μ_i · K(s, ξ^μ)
- Kernel: RBF kernel K(x, y) = exp(-γ‖x - y‖²), locality parameter γ
- Learning: L2-regularized negative log-likelihood per neuron, yielding dual variables α ∈ R^{P×N}
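Written out, a plausible form of the per-neuron objective, chosen here to be consistent with the gradient step used in the implementation below (labels mapped as y^μ_i = (ξ^μ_i + 1)/2, plain L2 penalty on the dual variables; the paper may regularize with the RKHS norm instead):
```latex
% Per-neuron KLR loss for neuron i (sketch; its gradient is K(sigma - y) + lambda * alpha_i)
\mathcal{L}_i(\boldsymbol{\alpha}_i)
  = -\sum_{\mu=1}^{P} \Big[ y_i^{\mu} \log \sigma\!\big((K\boldsymbol{\alpha}_i)_{\mu}\big)
    + (1 - y_i^{\mu}) \log\big(1 - \sigma\!\big((K\boldsymbol{\alpha}_i)_{\mu}\big)\big) \Big]
  + \frac{\lambda}{2}\,\lVert \boldsymbol{\alpha}_i \rVert^2,
  \qquad K_{\mu\nu} = K(\xi^{\mu}, \xi^{\nu}).
```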
### Pseudo-Energy Function
V(s) = -Σ_i s_i · h_i(s) = -Σ_i s_i · Σ_μ α^μ_i · K(s, ξ^μ)
KLR optimization creates **large-margin attractors** → smooth energy landscape.
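The pseudo-energy transcribes directly into code. A minimal standalone sketch (array shapes follow the implementation below: `alpha` is (P, N), `patterns` is (P, N)):
```python
import numpy as np

def pseudo_energy(alpha, patterns, s, gamma):
    """V(s) = -sum_i s_i * sum_mu alpha[mu, i] * K(s, xi^mu).

    alpha: (P, N) dual variables; patterns: (P, N) stored patterns;
    s: (N,) bipolar state; gamma: RBF locality parameter.
    """
    # Kernel vector K(s, xi^mu) for all stored patterns, shape (P,)
    K_s = np.exp(-gamma * np.sum((patterns - s) ** 2, axis=1))
    # Local fields h_i(s) = sum_mu alpha[mu, i] * K_s[mu], shape (N,)
    h = alpha.T @ K_s
    return -float(np.dot(s, h))
```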
### Two Update Schemes
| Scheme | Description | Pros | Cons |
|--------|-------------|------|------|
| Synchronous | All N neurons update simultaneously | GPU efficient | May oscillate; O(PN) per step |
| Asynchronous | Single neuron updates sequentially | Guaranteed fixed-point convergence; sparse events | Sequential order dependency |
Key finding: under optimal γ, trajectories are **statistically indistinguishable** between schemes.
## Key Results
### Storage Capacity
- Classical Hopfield: P ≈ 0.14N
- KLR Hopfield (async): P/N ≈ 30 with 100% recall (N=100, 200 tested)
- Capacity scales positively with network size
### Event Efficiency
- Async bit flips ≈ initial Hamming distance (near-optimal: each corrupted bit flips roughly once)
- At 20% noise (N = 50): ~10 events vs. ~150 state evaluations for synchronous retrieval (3 steps × 50 neurons)
- **15× reduction** in state evaluations (see the usage sketch after the implementation below)
### Kernel Parameter Trade-off
- γ = 0.02 (Ridge regime): sharp attractors, optimal for static memory, minimal noise
- γ = 0.1 (Robust regime): wider basins, robust to 10-20% noise, high storage load
- Smaller γ isolates patterns; larger γ enlarges basins but risks crosstalk
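A sketch of how one might probe this trade-off empirically, assuming the `KLRHopfieldNetwork` class from the Implementation Pattern below (γ values, sizes, and noise level are illustrative, not the paper's protocol):
```python
import numpy as np

# Assumes KLRHopfieldNetwork as defined in the Implementation Pattern below.
N, P, noise = 50, 10, 0.2
rng = np.random.default_rng(0)
patterns = rng.choice([-1, 1], size=(P, N))

for gamma in (0.02, 0.05, 0.1):
    net = KLRHopfieldNetwork(N, gamma=gamma)
    net.fit(patterns)
    correct = 0
    for xi in patterns:
        cue = xi.copy()
        flip = rng.choice(N, size=int(noise * N), replace=False)
        cue[flip] *= -1                      # corrupt 20% of the bits
        s, _ = net.retrieve_async(cue)
        correct += int(np.array_equal(s, xi))
    print(f"gamma={gamma}: recall {correct}/{P}")
```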
## Implementation Pattern
```python
import numpy as np
from scipy.special import expit  # logistic sigmoid

class KLRHopfieldNetwork:
    def __init__(self, N, gamma=0.1, lr=0.1, weight_decay=0.01):
        self.N = N
        self.gamma = gamma
        self.lr = lr
        self.lam = weight_decay
        self.alpha = None     # dual variables, shape (P, N)
        self.patterns = None  # stored patterns, shape (P, N)
        self.K = None         # Gram matrix, shape (P, P)

    def rbf_kernel(self, x, y):
        return np.exp(-self.gamma * np.sum((x - y) ** 2))

    def kernel_vector(self, s):
        """K(s, xi^mu) for all stored patterns at once, shape (P,)."""
        return np.exp(-self.gamma * np.sum((self.patterns - s) ** 2, axis=1))

    def gram_matrix(self, patterns):
        P = len(patterns)
        K = np.zeros((P, P))
        for i in range(P):
            for j in range(P):
                K[i, j] = self.rbf_kernel(patterns[i], patterns[j])
        return K

    def fit(self, patterns, epochs=500):
        """Learn dual variables via per-neuron KLR (gradient descent)."""
        self.patterns = np.asarray(patterns)
        P, N = self.patterns.shape
        self.alpha = np.zeros((P, N))
        self.K = self.gram_matrix(self.patterns)
        for _ in range(epochs):
            for i in range(N):
                h = self.K @ self.alpha[:, i]       # logits, shape (P,)
                sigma = expit(h)
                y = (self.patterns[:, i] + 1) / 2   # bipolar {-1,1} -> {0,1}
                grad = self.K @ (sigma - y) + self.lam * self.alpha[:, i]
                self.alpha[:, i] -= self.lr * grad

    def local_field(self, s, i):
        """Local field h_i(s) = sum_mu alpha[mu, i] * K(s, xi^mu)."""
        return np.dot(self.alpha[:, i], self.kernel_vector(s))

    def retrieve_sync(self, s_init, max_steps=50):
        """Synchronous retrieval: all neurons update from the same state."""
        s = s_init.copy()
        for _ in range(max_steps):
            K_s = self.kernel_vector(s)      # shared by all neurons: O(PN)
            h = self.alpha.T @ K_s           # all local fields at once
            s_new = np.where(h >= 0, 1, -1)  # avoid np.sign(0) == 0
            if np.array_equal(s_new, s):     # fixed point reached
                break
            s = s_new
        return s

    def retrieve_async(self, s_init, max_epochs=50):
        """Asynchronous event-driven retrieval; returns (state, flip count)."""
        s = s_init.copy()
        events = 0
        for _ in range(max_epochs):
            flipped = False
            order = np.random.permutation(self.N)
            for i in order:
                new_s_i = 1 if self.local_field(s, i) >= 0 else -1
                if new_s_i != s[i]:
                    s[i] = new_s_i
                    events += 1              # one event per actual bit flip
                    flipped = True
            if not flipped:                  # full sweep, no events: converged
                break
        return s, events
```
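A usage sketch (illustrative sizes and seed, not the paper's experimental protocol) exercising both retrieval modes; note how the async event count tracks the initial Hamming distance:
```python
import numpy as np

rng = np.random.default_rng(42)
N, P = 50, 10
patterns = rng.choice([-1, 1], size=(P, N))

net = KLRHopfieldNetwork(N, gamma=0.1)
net.fit(patterns)

# Corrupt a stored pattern with 20% bit flips (Hamming distance 10) and retrieve.
cue = patterns[0].copy()
flip = rng.choice(N, size=10, replace=False)
cue[flip] *= -1

s_sync = net.retrieve_sync(cue)
s_async, events = net.retrieve_async(cue)
print("sync recall :", np.array_equal(s_sync, patterns[0]))
print("async recall:", np.array_equal(s_async, patterns[0]), f"({events} events)")
```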
## Hardware Mapping
| Component | Software | Neuromorphic Hardware |
|-----------|----------|----------------------|
| Neuron state | Binary variable | SRAM cell / memristor |
| Kernel computation | Dense matrix-vector | Event-triggered MAC |
| Async update | Sequential loop | Spike-triggered neuron |
| Storage | α matrix in RAM | On-chip weight memory |
- Complexity: O(PN) synchronous vs. O(events × P) asynchronous
- Event count ≈ Hamming distance → substantial power savings on Loihi/TrueNorth
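The O(events × P) figure is concrete because one bit flip changes each squared distance in closed form: for bipolar vectors, flipping s_i changes ‖s − ξ^μ‖² by 4·s_i·ξ^μ_i (with s_i the pre-flip value), so the whole kernel vector can be rescaled in O(P) per event instead of recomputed in O(PN). A minimal sketch of this incremental update (derived from the RBF definition above; not code from the paper):
```python
import numpy as np

def flip_and_update(s, i, K_s, patterns, gamma):
    """Flip neuron i and update the kernel vector in O(P).

    For bipolar s, xi: flipping s[i] changes ||s - xi||^2 by 4 * s[i] * xi[i]
    (s[i] taken before the flip), so each K(s, xi^mu) is rescaled by
    exp(-4 * gamma * s[i] * patterns[mu, i]) -- one multiply per pattern.
    """
    K_s = K_s * np.exp(-4.0 * gamma * s[i] * patterns[:, i])
    s[i] = -s[i]
    # Sanity check: K_s now equals np.exp(-gamma * np.sum((patterns - s)**2, axis=1))
    return s, K_s
```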
## When to Use
- **Need high capacity**: KLR Hopfield stores far more patterns than the classical Hopfield network (P/N ≈ 30 vs. ≈ 0.14)
- **Deploy on neuromorphic hardware**: Async event-driven is native fit
- **Memory-constrained**: KLR stores patterns implicitly via dual variables
- **Compare with MHN**: KLR uses a simpler binary architecture with margin optimization, vs. the Modern Hopfield Network's exponential energy function
## Pitfalls
- **Kernel γ tuning is critical**: Too small → narrow basins, poor noise robustness; too large → crosstalk at high load
- **Limited to tested regime**: Results validated on random patterns; structured/correlated patterns may differ
- **No theoretical capacity bound**: P/N ≈ 30 is an empirical lower bound; the true capacity is unknown
- **Learning is per-neuron**: α_i learned independently, no inter-neuron weight sharing during training
## Related Skills
- `kernel-hopfield-attractor-geometry` — attractor boundaries and storage capacity analysis
- `kernel-hopfield-event-driven-retrieval` — event-driven retrieval in KLR Hopfield networks
- `klr-hopfield-associative-memory` — KLR Hopfield networks overview