---
name: kernel-hopfield-event-driven-retrieval
description: >
  Event-driven asynchronous retrieval in Kernel Logistic Regression (KLR)
  Hopfield networks for neuromorphic associative memory. Covers asynchronous
  update dynamics, large-margin attractor energy landscapes, and sparse
  event-driven computation. Use when: (1) building neuromorphic associative
  memory systems, (2) optimizing Hopfield network retrieval for energy
  efficiency, (3) analyzing event-driven neural computation, (4) studying
  kernel-based associative memories, or (5) comparing synchronous vs
  asynchronous neural dynamics. Activation: kernel hopfield, event-driven
  retrieval, KLR Hopfield, asynchronous associative memory, neuromorphic
  memory, kernel logistic regression hopfield, sparse event computation,
  attractor energy landscape.
---

# Event-Driven Retrieval in Kernel Hopfield Networks

Based on Tamamori (2026), arXiv:2605.05978.

## Core Concept

Kernel Logistic Regression (KLR) Hopfield networks combine the high storage capacity of kernel methods with associative-memory attractor dynamics. The paper demonstrates that **asynchronous sequential updates** produce retrieval trajectories statistically indistinguishable from synchronous dynamics while enabling energy-efficient, event-driven computation.

## Key Findings

### 1. Asynchronous ≈ Synchronous Retrieval

Under appropriately tuned kernel parameters:

- Asynchronous update trajectories statistically match synchronous dynamics
- Recall accuracy remains high for random patterns within the tested regimes
- No spurious oscillations were observed during convergence

### 2. Storage Capacity

- Empirical capacity scales as **O(N)** for static random patterns (N = number of neurons)
- Exceeds the classical Hopfield limit of ~0.138N
- Large-margin attractors from KLR learning create smooth energy landscapes

### 3. Event-Driven Efficiency

- The network converges using ~H bit flips (H = initial Hamming distance from the target)
- Near-optimal event count: each flip corrects one error on average
- No wasted transitions, which suits sparse neuromorphic hardware

## Mathematical Framework

### KLR Hopfield Energy Function

The KLR Hopfield network uses a kernel-based energy function:

```
E(x) = Σᵢ log(1 + exp(-yᵢ f(xᵢ))) + λ||f||²_H
```

where f is learned in a reproducing kernel Hilbert space (RKHS), enabling nonlinear decision boundaries between stored patterns.

### Asynchronous Update Rule

```
xᵢ(t+1) = sign( Σⱼ K(xᵢ, xⱼ) · αⱼ )
```

where K is the kernel function and α are the learned KLR coefficients. Neurons are visited one at a time (random selection is assumed), and an update event, i.e. a bit flip, occurs only when the sign of the local field disagrees with the neuron's current state.
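To make the dynamics concrete, here is a minimal NumPy sketch of KLR storage and event-driven asynchronous retrieval. It is an illustration under stated assumptions, not the paper's implementation: the RBF kernel choice, the plain gradient-descent KLR fit, and all names (`rbf_kernel`, `fit_klr`, `retrieve_async`) are this document's own didactic inventions.

```python
import numpy as np

rng = np.random.default_rng(0)

def rbf_kernel(X, Y, sigma):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def fit_klr(patterns, sigma, lam=1e-3, lr=0.5, steps=200):
    """Fit one kernel logistic regression per neuron.

    Neuron i learns coefficients alpha[:, i] so that
    f_i(x) = sum_mu alpha[mu, i] * K(x, xi_mu) predicts bit i of each
    stored pattern. Plain full-batch gradient descent on the regularized
    logistic loss; a didactic stand-in for a proper KLR solver.
    """
    P, N = patterns.shape
    K = rbf_kernel(patterns, patterns, sigma)   # (P, P) Gram matrix
    Y = patterns.astype(float)                  # targets in {-1, +1}
    alpha = np.zeros((P, N))
    for _ in range(steps):
        F = K @ alpha                           # scores f_i(xi_mu)
        margins = np.clip(Y * F, -30.0, 30.0)   # avoid exp overflow
        grad = -K @ (Y / (1.0 + np.exp(margins))) + lam * (K @ alpha)
        alpha -= lr * grad / P
    return alpha

def retrieve_async(x0, patterns, alpha, sigma, max_sweeps=50):
    """Event-driven asynchronous retrieval.

    Visits neurons in random order; a neuron emits an 'event' (bit flip)
    only when sign(f_i(x)) disagrees with its current state. Returns the
    final state and the total number of events.
    """
    x = x0.astype(float).copy()
    events = 0
    for _ in range(max_sweeps):
        any_flip = False
        for i in rng.permutation(x.size):
            k = rbf_kernel(x[None, :], patterns, sigma)[0]  # K(x, xi_mu)
            s = 1.0 if k @ alpha[:, i] >= 0 else -1.0
            if s != x[i]:
                x[i] = s
                events += 1
                any_flip = True
        if not any_flip:        # a full sweep with no events => fixed point
            break
    return x, events

# Usage: store random patterns, corrupt one by H bits, retrieve it.
N, P, H = 64, 10, 8
patterns = rng.choice([-1.0, 1.0], size=(P, N))
sigma = np.sqrt(N) / 2          # the sigma ~ sqrt(d)/2 heuristic given below
alpha = fit_klr(patterns, sigma)

cue = patterns[0].copy()
cue[rng.choice(N, size=H, replace=False)] *= -1   # Hamming distance H

state, events = retrieve_async(cue, patterns, alpha, sigma)
print("recovered:", bool(np.array_equal(state, patterns[0])), "events:", events)
```

If the dynamics behave as the paper describes, the printed event count should land near H; tracking the ratio of events to initial Hamming distance is a cheap convergence diagnostic.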
### Energy Landscape Properties

- **Large-margin attractors**: KLR learning maximizes the separation between the basins of attraction of stored patterns
- **Smooth basins**: no spurious local minima between a noisy cue and its target pattern
- **Event-efficient**: convergence requires roughly as many events as the initial Hamming distance

## Implementation Guidelines

### When to Use Asynchronous vs Synchronous

| Scenario | Recommended Mode |
|----------|------------------|
| Neuromorphic hardware deployment | Asynchronous (event-driven) |
| GPU/parallel batch processing | Synchronous |
| Energy-constrained edge devices | Asynchronous |
| Maximum throughput | Synchronous |
| Online/incremental learning | Asynchronous |

### Kernel Parameter Tuning

- **RBF kernel bandwidth σ**: controls attractor basin size
  - Too small → fragmented basins, poor generalization
  - Too large → merged basins, pattern interference
  - Sweet spot: σ ≈ √d / 2 (d = pattern dimension)
- **Regularization λ**: controls the capacity vs. robustness trade-off
  - Higher λ → fewer spurious attractors, lower capacity
  - Lower λ → higher capacity, more false memories

### Hardware Mapping

For neuromorphic deployment:

1. Map each neuron to an event-driven processing element
2. Precompute kernel values for common input patterns
3. Implement the asynchronous update as spike-based threshold crossing
4. Monitor convergence by tracking event count against initial Hamming distance

## Relationship to Existing Work

| Model | Capacity | Update Mode | Energy Efficiency |
|-------|----------|-------------|-------------------|
| Classical Hopfield | ~0.138N | Synchronous | Low |
| Modern Hopfield (Demircigil) | ~2^(N/2) | Synchronous | Low |
| **KLR Hopfield (this work)** | ~O(N) | **Asynchronous** | **High** |

## Common Pitfalls

- **Kernel parameter sensitivity**: capacity degrades sharply outside the optimal parameter range; always validate on held-out noisy cues (see the validation sketch at the end of this document)
- **Pattern correlations**: results are shown for random patterns; correlated patterns may reduce effective capacity
- **Asynchronous scheduling**: random neuron selection is assumed; biased scheduling may affect convergence properties
- **Scalability**: kernel computation is O(N²) per pattern; consider the Nyström approximation for large N

## Activation Keywords

- kernel hopfield, KLR Hopfield, event-driven retrieval, asynchronous associative memory, neuromorphic memory, large-margin attractor, sparse event computation, kernel logistic regression memory
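
## Appendix: Bandwidth Validation Sketch

The parameter-sensitivity pitfall above is easy to check empirically. The sketch below sweeps the RBF bandwidth around the σ ≈ √d / 2 heuristic and scores one-shot bitwise recall from noisy cues. It uses a kernel ridge readout as a cheap stand-in for full KLR training (a deliberate simplification, not the paper's method), and `bitwise_recall` and all constants are illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(1)

def rbf_kernel(X, Y, sigma):
    """Gaussian (RBF) kernel matrix between the rows of X and the rows of Y."""
    sq_dists = ((X[:, None, :] - Y[None, :, :]) ** 2).sum(axis=-1)
    return np.exp(-sq_dists / (2.0 * sigma**2))

def bitwise_recall(stored, cues, targets, sigma, lam=1e-3):
    """One-shot recall score: solve a kernel ridge readout (a cheap
    stand-in for KLR), apply it to noisy cues, and count matching bits."""
    K = rbf_kernel(stored, stored, sigma)
    alpha = np.linalg.solve(K + lam * np.eye(len(stored)), stored)  # (P, N)
    pred = np.sign(rbf_kernel(cues, stored, sigma) @ alpha)
    return float((pred == targets).mean())

N, P, H = 64, 10, 8
stored = rng.choice([-1.0, 1.0], size=(P, N))
cues = stored.copy()
for c in cues:                                   # corrupt H bits per cue
    c[rng.choice(N, size=H, replace=False)] *= -1

for scale in (0.125, 0.25, 0.5, 1.0, 2.0):       # sweep around sqrt(d)/2
    sigma = scale * np.sqrt(N)
    acc = bitwise_recall(stored, cues, stored, sigma)
    print(f"sigma = {sigma:5.2f}  bitwise recall = {acc:.3f}")
```

Recall near 1.0 only inside a band of σ values would confirm the fragmented-basin / merged-basin failure modes described under Kernel Parameter Tuning; rerun the sweep whenever N, the pattern statistics, or λ change.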