---
name: clp-snn-loihi2-continual-learning
description: >
Online Continual Learning on Intel Loihi 2 via a Co-designed Spiking Neural Network (CLP-SNN).
Covers self-normalizing local learning rules, spike-driven neural state machines for autonomous
on-chip learning, and breakthrough efficiency gains on neuromorphic hardware. Achieves 113x
lower latency and 6,600x lower energy than edge-GPU baselines while matching the accuracy
of replay-based methods without rehearsal. Use when: implementing continual learning on neuromorphic hardware,
designing SNNs for edge AI deployment, developing local learning rules for on-chip adaptation,
optimizing spiking networks for Intel Loihi 2, or studying catastrophic forgetting in SNNs.
Activation: CLP-SNN, Loihi 2 continual learning, neuromorphic edge AI, spike-driven learning,
self-normalizing SNN, local learning rule, rehearsal-free continual learning, Loihi 2,
神经形态持续学习, 脉冲驱动学习, Loihi 2芯片
---
# CLP-SNN: Online Continual Learning on Intel Loihi 2
Based on: Hajizada et al. (2026), arXiv:2511.01553
## Problem
Edge AI systems require **online continual learning** — adapting to non-stationary data
streams and unfamiliar classes without catastrophic forgetting — under strict power
constraints. Traditional approaches rely on replay buffers or cloud-based training, both
impractical for edge deployment.
## Solution: CLP-SNN
A co-designed spiking neural network with two key innovations:
### 1. Self-Normalizing Local Learning Rule
- Learning based on **local synaptic plasticity** (no backpropagation)
- **Self-normalizing** to prevent weight divergence during continual adaptation
- Compatible with Loihi 2's on-chip learning infrastructure
- Rehearsal-free: no need to store or replay past examples
### 2. Spike-Driven Neural State Machine
- Autonomous on-chip learning triggered by spike patterns
- State machine controls learning phases without external intervention
- Exploits Loihi 2's event-driven architecture for efficiency
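
The summary above does not spell out the state machine itself, so the following is an illustrative Python sketch, assuming a simple three-phase controller (infer → novelty → learn) whose transitions are driven only by observed spike counts; the phase names, thresholds, and step counts are hypothetical, not the paper's exact machine.

```python
from enum import Enum, auto
import numpy as np

class Phase(Enum):
    INFER = auto()    # normal inference on incoming spikes
    NOVELTY = auto()  # weak/ambiguous output response -> candidate new class
    LEARN = auto()    # local plasticity enabled for a fixed number of steps

class SpikeDrivenStateMachine:
    """Illustrative controller: phase transitions depend only on spike activity."""

    def __init__(self, novelty_thresh=2, learn_steps=20):
        self.phase = Phase.INFER
        self.novelty_thresh = novelty_thresh  # min output spikes to call an input "known"
        self.learn_steps = learn_steps        # how long plasticity stays enabled
        self._steps_left = 0

    def step(self, output_spikes: np.ndarray) -> Phase:
        if self.phase is Phase.INFER:
            # Too few output spikes: no class responded strongly, treat input as novel
            if output_spikes.sum() < self.novelty_thresh:
                self.phase = Phase.NOVELTY
        elif self.phase is Phase.NOVELTY:
            # Recruit capacity for the new class, then switch local learning on
            self.phase = Phase.LEARN
            self._steps_left = self.learn_steps
        elif self.phase is Phase.LEARN:
            self._steps_left -= 1
            if self._steps_left <= 0:
                self.phase = Phase.INFER
        return self.phase
```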
## Performance Results
### Accuracy (OpenLORIS few-shot)
- CLP-SNN matches replay-based methods **without rehearsal**
- Competes with strong edge-GPU baselines
### Efficiency Gains vs Edge-GPU
| Metric | Edge-GPU Baseline | CLP-SNN on Loihi 2 | Improvement |
|--------|-------------------|---------------------|---------|
| Latency | 37.3 ms | 0.33 ms | **113×** |
| Energy | 333 mJ | 0.05 mJ | **6,600×** |
### Efficiency Decomposition
The gains come from two sources:
- **Algorithmic efficiency** (SNN + local learning): ~14.5× latency, ~22.6× energy
- **Neuromorphic hardware co-design**: ~7.8× latency, ~295× energy
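
The two sources compose multiplicatively into the end-to-end gains in the table above; a quick check with the reported per-source factors:

```python
# Reported per-source factors (approximate)
algo_latency, algo_energy = 14.5, 22.6   # SNN + local learning
hw_latency, hw_energy = 7.8, 295.0       # Loihi 2 hardware co-design

print(algo_latency * hw_latency)  # ~113   -> matches the 113x latency gain
print(algo_energy * hw_energy)    # ~6,667 -> matches the ~6,600x energy gain
```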
Key hardware advantages:
- **Event-driven learning**: Only active synapses consume energy
- **Sparse graded-spike communication**: Efficient information encoding
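
A small numpy sketch of why event-driven, sparse graded-spike communication is cheap (illustrative only, not Loihi 2 microcode): synaptic work scales with the number of spike events, not with layer size.

```python
import numpy as np

rng = np.random.default_rng(0)
n_pre, n_post = 1024, 256
W = rng.normal(size=(n_pre, n_post))

# Sparse graded spikes: most neurons are silent; active ones carry a small integer payload
graded_spikes = np.zeros(n_pre)
active = rng.choice(n_pre, size=20, replace=False)            # ~2% activity
graded_spikes[active] = rng.integers(1, 8, size=active.size)  # graded (multi-valued) payloads

# Event-driven accumulation: only rows of W belonging to active neurons are touched
post_current = graded_spikes[active] @ W[active]              # 20 x 256 synaptic ops

dense_ops = n_pre * n_post       # what a dense matrix-vector product would cost
event_ops = active.size * n_post
print(dense_ops / event_ops)     # ~51x fewer synaptic operations at 2% activity
```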
## Architecture
```
CLP-SNN on Loihi 2
├── Spiking neural network layers
│ ├── Event-driven spike propagation
│ └── Sparse graded-spike communication
├── Self-normalizing local learning rule
│ ├── Local synaptic plasticity
│ └── Weight normalization to prevent divergence
└── Spike-driven neural state machine
├── Autonomous learning phase control
└── No external intervention needed
```
## Local Learning Rule
The self-normalizing local learning rule updates synapses based on:
```
Δw_ij = f(pre_spike_i, post_spike_j, w_ij)
```
where the function f incorporates:
- Pre- and post-synaptic spike timing
- Current weight value (state-dependent)
- Normalization factor to prevent divergence
Key property: **local** — each synapse update depends only on locally available
information (pre/post spikes, current weight), making it implementable on Loihi 2.
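
The paper's exact update is not reproduced in this summary; the sketch below is a generic self-normalizing local rule under the same constraints (pre/post spikes and the current weight only), with an Oja-like decay plus per-neuron weight-vector normalization standing in for the paper's self-normalization.

```python
import numpy as np

def local_update(W, pre_spikes, post_spikes, lr=0.01):
    """Illustrative self-normalizing local plasticity step.

    W           : (n_pre, n_post) synaptic weights
    pre_spikes  : (n_pre,)  binary or graded pre-synaptic activity
    post_spikes : (n_post,) binary or graded post-synaptic activity
    """
    # Hebbian-style local term: correlated pre/post activity strengthens a synapse;
    # the -W * post term decays weights of active post neurons (keeps norms bounded)
    dW = lr * (np.outer(pre_spikes, post_spikes) - W * post_spikes[None, :])
    W = W + dW

    # Explicit self-normalization of each post-synaptic weight vector
    norms = np.linalg.norm(W, axis=0, keepdims=True)
    return W / np.maximum(norms, 1e-8)
```

Every quantity a synapse ij needs is either its own state (w_ij), its pre-synaptic spike, or a per-post-neuron quantity (post spike, incoming-weight norm), which is what keeps the rule local enough to map onto on-chip plasticity.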
## Loihi 2 Implementation
Loihi 2 features used:
- **On-chip learning**: Direct weight updates without host CPU
- **Event-driven execution**: Only active neurons consume power
- **Graded spikes**: Multi-valued spike communication for richer encoding
- **Sparse connectivity**: Exploits hardware sparsity support
### Deployment Workflow
1. Design SNN architecture with local learning rules
2. Compile to Loihi 2 using NxSDK/Lava framework
3. Configure on-chip learning parameters
4. Deploy to Loihi 2 chip
5. Network learns autonomously from streaming data
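
A minimal sketch of steps 1-4 using the open-source Lava framework (lava-nc), assuming its `LIF` and `Dense` processes and the simulation run config; class, port, and config names follow the public Lava tutorials and should be checked against your installed version, and the on-chip plasticity configuration for the CLP rule itself is not shown.

```python
import numpy as np
from lava.proc.lif.process import LIF
from lava.proc.dense.process import Dense
from lava.magma.core.run_conditions import RunSteps
from lava.magma.core.run_configs import Loihi2SimCfg  # CPU simulation; a hardware config targets the chip

# 1. A tiny two-layer SNN (feature layer -> readout layer); neuron params left at defaults for brevity
feature = LIF(shape=(64,))
readout = LIF(shape=(10,))
conn = Dense(weights=np.random.rand(10, 64))  # weights shaped (n_out, n_in) per Lava's Dense docs

# 2. Wire spike ports: LIF.s_out -> Dense.s_in, Dense.a_out -> LIF.a_in
feature.s_out.connect(conn.s_in)
conn.a_out.connect(readout.a_in)

# 3./4. Compile and run; swap Loihi2SimCfg for a hardware run config when deploying to a real chip
readout.run(condition=RunSteps(num_steps=100), run_cfg=Loihi2SimCfg())
readout.stop()
```

Step 5 (autonomous learning from streaming data) additionally requires configuring Loihi 2's on-chip learning for the local rule, which this sketch omits.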
## Key Insights
1. **Co-design is essential**: Algorithmic efficiency + hardware efficiency → breakthrough gains
2. **Local learning enables autonomy**: No backprop needed; learning is truly on-device
3. **Rehearsal-free is achievable**: Self-normalizing rules prevent catastrophic forgetting
4. **Event-driven + sparse = ultra-low energy**: Only active components consume power
5. **Graded spikes bridge accuracy-efficiency gap**: More expressive than binary spikes
## When to Use
| Use Case | Approach |
|----------|----------|
| Cloud-based training with abundant compute | Standard backprop + replay |
| Edge deployment with strict power | CLP-SNN on Loihi 2 |
| Online adaptation to new classes | CLP-SNN local learning |
| Catastrophic forgetting prevention | Self-normalizing rules |
| Ultra-low latency inference | Event-driven Loihi 2 |
## Pitfalls
- **Loihi 2 availability**: Hardware access is limited; cloud-hosted Loihi 2 systems (e.g., through Intel's Neuromorphic Research Community) are the typical route
- **Learning rule constraints**: Must be locally computable; no global gradient signals
- **Capacity limits**: On-chip learning has memory constraints for weight storage
- **Task complexity**: Best suited for classification tasks; complex tasks may need hybrid approaches
- **Calibration**: Device-specific calibration needed for optimal performance
## Related Skills
- snn-learning-survey
- snn-performance-analysis
- snn-microcontroller-simulation
- edgespike-edge-iot-snn
- neuromorphic-continual-nuclear-ics
## Reference
Hajizada, E., Rager, D., Shea, T., Campos-Macias, L., Wild, A., Hüllermeier, E.,
Sandamirskaya, Y., & Davies, M. (2026). "Online Continual Learning on Intel Loihi 2
via a Co-designed Spiking Neural Network." arXiv:2511.01553 [cs.LG, cs.AI, cs.NE].