---
name: cognisnn-random-graph
description: CogniSNN - Cognition-aware Spiking Neural Networks with Random Graph Architecture. Brain-inspired SNN with neuron-expandability, pathway-reusability, and dynamic-configurability. Use when implementing energy-efficient SNNs, neuromorphic computing, continual learning in SNNs, or brain-inspired neural architectures. Activation: CogniSNN, spiking neural network, SNN, random graph, brain-inspired, neuromorphic, continual learning, energy-efficient neural network, spike-based learning.
---
# CogniSNN: Random Graph SNN
Cognition-aware Spiking Neural Networks with Random Graph Architecture enabling neuron-expandability, pathway-reusability, and dynamic-configurability.
## Overview
CogniSNN introduces a new paradigm for Spiking Neural Networks (SNNs) that bridges artificial intelligence and computational neuroscience. Unlike traditional SNNs that adopt rigid hierarchical architectures, CogniSNN incorporates Random Graph Architecture (RGA) to emulate the brain's stochastic neural connectivity.
**Key Innovation:**
- **Neuron-Expandability:** Dynamic neuron addition without network retraining
- **Pathway-Reusability:** Critical neural pathway reuse for continual learning
- **Dynamic-Configurability:** Runtime network reconfiguration
**Performance:**
- State-of-the-art results on neuromorphic datasets
- Comparable to ANNs on Tiny-ImageNet
- Enhanced continual learning capability
- Improved neuromorphic chip deployment
## Core Architecture
### Random Graph Architecture (RGA)
**Biological Motivation:**
- Biological neurons are stochastically interconnected
- Complex neural pathways with non-hierarchical structure
- Key characteristics: expandability, reusability, configurability
**Mathematical Model:**
Connectivity modeled as random graph G(V, E):
```
V = {v₁, v₂, ..., vₙ} # Neurons
E = {(vᵢ, vⱼ) | p(vᵢ, vⱼ) > θ} # Synaptic connections
where:
- p(vᵢ, vⱼ): Connection probability based on distance and type
- θ: Threshold for connection formation
- Connection probability decreases with distance
```
**Small-World Properties:**
- High clustering coefficient
- Short average path length
- Balance between local and long-range connections
- Enhanced information propagation
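These properties can be checked directly on a generated topology. A minimal sketch using networkx's Watts-Strogatz generator as a stand-in (parameter values are illustrative, not the paper's):
```python
import networkx as nx

# Illustrative parameters: 256 neurons, 8 nearest neighbors, 10% rewiring
n_neurons, k_nearest, p_rewire = 256, 8, 0.1

# Ring lattice plus stochastic long-range rewiring, retried until connected
graph = nx.connected_watts_strogatz_graph(n_neurons, k_nearest, p_rewire)

# Small-world signature: high clustering with a short average path length
print("clustering coefficient:", nx.average_clustering(graph))
print("average path length:", nx.average_shortest_path_length(graph))
```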
### Network Components
**Spiking Neurons:**
```
# Leaky Integrate-and-Fire (LIF) with RGA
τₘ dv/dt = -(v - v_rest) + I_syn + I_ext
if v ≥ v_th:
v = v_reset
emit spike
where:
- τₘ: Membrane time constant
- v_rest: Resting potential
- v_th: Threshold potential
- I_syn: Synaptic current
- I_ext: External current
```
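A minimal discrete-time LIF neuron in PyTorch, following the update rule above; the parameter defaults are illustrative, and training would replace the hard threshold with the surrogate gradient described later:
```python
import torch
import torch.nn as nn

class LIFNeuron(nn.Module):
    """Discrete-time leaky integrate-and-fire neuron (illustrative defaults)."""

    def __init__(self, tau_m=2.0, v_rest=0.0, v_th=1.0, v_reset=0.0):
        super().__init__()
        self.tau_m, self.v_rest, self.v_th, self.v_reset = tau_m, v_rest, v_th, v_reset
        self.v = None  # membrane potential, lazily initialized to the input shape

    def forward(self, input_current):
        if self.v is None:
            self.v = torch.full_like(input_current, self.v_rest)
        # Euler step of τₘ dv/dt = -(v - v_rest) + I_syn + I_ext
        self.v = self.v + (self.v_rest - self.v + input_current) / self.tau_m
        # Threshold crossing emits a spike and resets the potential
        spikes = (self.v >= self.v_th).float()
        self.v = torch.where(spikes.bool(), torch.full_like(self.v, self.v_reset), self.v)
        return spikes
```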
**Synaptic Connections:**
- Stochastic initialization based on RGA
- Learnable weights through spike-timing-dependent plasticity (STDP; sketched below)
- Dynamic pruning and growth
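A generic pair-based STDP update, shown as a sketch (the paper's exact plasticity rule may differ; the trace constants here are arbitrary):
```python
import torch

def stdp_update(w, pre_trace, post_trace, pre_spikes, post_spikes,
                a_plus=0.01, a_minus=0.012, trace_decay=0.9):
    """Pair-based STDP on a (n_post, n_pre) weight matrix."""
    # Exponentially decaying eligibility traces of recent spikes
    pre_trace = trace_decay * pre_trace + pre_spikes
    post_trace = trace_decay * post_trace + post_spikes
    # Potentiation: a post spike arriving while the pre trace is active
    dw = a_plus * torch.outer(post_spikes, pre_trace)
    # Depression: a pre spike arriving while the post trace is active
    dw -= a_minus * torch.outer(post_trace, pre_spikes)
    return w + dw, pre_trace, post_trace
```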
## Key Methodologies
### 1. Pure Spiking Residual Mechanism
**Problem:** Network degradation and dimensional mismatch in deep SNN pathways
**Solution:** Spiking Residual Connection
```
y = f(x) + α · x
where:
- f: Spiking transformation function
- α: Learnable residual scaling
- x: Input spike train
- y: Output spike train
```
**Implementation:**
- Bypass connections between layers
- Spiking identity mapping
- Adaptive pooling for dimension matching
- Maintains temporal sparsity
### 2. Key Pathway-based Learning without Forgetting (KP-LwF)
**Objective:** Continual learning with pathway reusability
**Mechanism:**
1. Identify key pathways (critical connections)
2. Freeze key pathway weights during new task learning
3. Enable selective reuse across tasks
4. Maintain historical knowledge
**Pathway Importance Scoring:**
```
Importance(w) = |∂L/∂w| · ||w||
where:
- L: Loss function
- w: Synaptic weight
- Key pathways: Top-k% by importance
```
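A direct PyTorch reading of this score, taking ||w|| as the per-weight magnitude and assuming a backward pass has already populated the gradients:
```python
import torch

def pathway_importance(model):
    """Importance(w) = |∂L/∂w| · |w| per weight; call after loss.backward()."""
    return {name: p.grad.abs() * p.abs()
            for name, p in model.named_parameters() if p.grad is not None}

def key_pathway_masks(scores, top_percent=10.0):
    # Key pathways: the top-k% of weights by importance in each tensor
    return {name: s > torch.quantile(s.flatten(), 1.0 - top_percent / 100.0)
            for name, s in scores.items()}
```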
**Learning Rule:**
```
Δw_newtask = -η · ∇L_newtask · (1 - mask_keypathway)
Δw_replay = -η_replay · ∇L_replay · mask_keypathway
```
### 3. Dynamic Growth Learning (DGL)
**Objective:** Runtime neuron and synapse growth along the temporal dimension
**Neuron Growth:**
```
if activity_pattern.requires_new_neuron():
add_neuron(position)
connect_to_neighbors(neuron, k_nearest)
initialize_weights_small()
```
**Synapse Growth:**
```
if correlation(vᵢ, vⱼ) > θ_growth and no_connection(vᵢ, vⱼ):
add_synapse(vᵢ, vⱼ)
initialize_weight_Hebbian()
```
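A sketch of the correlation-gated growth check, assuming `spike_history` holds recent spike trains as a (neurons, timesteps) tensor:
```python
import torch

def grow_synapses(weights, spike_history, theta_growth=0.5, init_scale=0.01):
    """weights: (n, n) adjacency; spike_history: (n, T) recent spike trains."""
    # Pairwise correlation between spike trains (NaNs from silent neurons zeroed)
    corr = torch.nan_to_num(torch.corrcoef(spike_history))
    # Candidates: strongly correlated pairs that are not yet connected
    candidates = (corr > theta_growth) & (weights == 0)
    candidates.fill_diagonal_(False)
    # Hebbian-style initialization: small weight scaled by the correlation
    return weights + init_scale * corr * candidates.float()
```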
**Dynamic Timestep:**
- Variable simulation timesteps
- Adaptive based on activity
- Reduces fixed-timestep constraints
- Improves neuromorphic deployment efficiency
## Training Methodology
### Surrogate Gradient Learning
**Challenge:** Spiking non-differentiability
**Solution:** Surrogate gradients
```
∂s/∂v ≈ ∂s̃/∂v = 1/(α · |v - v_th| + 1)²
where:
- s: Spike (Dirac delta)
- s̃: Surrogate spike
- α: Surrogate gradient scale
- v_th: Threshold potential
```
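In PyTorch this is typically wrapped in a custom `autograd.Function`: a hard threshold on the forward pass, the surrogate above on the backward pass (α and v_th are fixed illustrative constants here):
```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward; 1/(α · |v - v_th| + 1)² surrogate backward."""
    alpha, v_th = 2.0, 1.0  # illustrative constants

    @staticmethod
    def forward(ctx, v):
        ctx.save_for_backward(v)
        return (v >= SurrogateSpike.v_th).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        surrogate = 1.0 / (SurrogateSpike.alpha * (v - SurrogateSpike.v_th).abs() + 1.0) ** 2
        return grad_output * surrogate

spike_fn = SurrogateSpike.apply  # drop-in replacement for a hard threshold
```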
### Temporal Credit Assignment
**Backpropagation Through Time (BPTT):**
```
∂L/∂W = Σₜ₌₁ᵀ ∂L/∂sₜ · ∂sₜ/∂vₜ · ∂vₜ/∂W
where:
- T: Total simulation timesteps
- sₜ: Spike at time t
- vₜ: Membrane potential at time t
- W: Synaptic weights
```
### Training Pipeline
```
1. Initialization
├── Random graph structure generation
├── Synaptic weight initialization
└── Neuron parameter configuration
2. Forward Pass
├── Spike propagation through network
├── Residual connections
└── Output spike generation
3. Backward Pass
├── Surrogate gradient computation
├── BPTT for temporal dependencies
└── Weight updates with masking
4. Dynamic Adaptation
├── Pathway importance estimation
├── Neuron/synapse growth (DGL)
└── Key pathway freezing (KP-LwF)
```
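Condensed into code, steps 2-3 of the pipeline look like an ordinary PyTorch loop. Rate decoding of the output spikes is an assumption here, as is the model interface (it matches the sketches below):
```python
import torch
import torch.nn.functional as F

def train_epoch(model, dataloader, optimizer, timesteps=16):
    for input_spikes, labels in dataloader:
        optimizer.zero_grad()
        # Forward pass: spike propagation over all timesteps
        output_spikes = model(input_spikes, timesteps)  # (batch, T, n_out)
        # Rate decoding: mean firing rate per output neuron as class logits
        logits = output_spikes.mean(dim=1)
        loss = F.cross_entropy(logits, labels)
        # Backward pass: BPTT through the time loop with surrogate gradients
        loss.backward()
        optimizer.step()
```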
## Implementation
### Core Components
**RandomGraphSNN:**
```python
import torch
import torch.nn as nn

class RandomGraphSNN(nn.Module):
    def __init__(self, n_neurons, connection_prob, small_world_rewiring):
        super().__init__()
        # Small-world random graph topology: local connectivity plus
        # stochastic long-range rewiring (see the RGA section above)
        self.graph = self._generate_small_world_graph(
            n_neurons,
            connection_prob,
            small_world_rewiring
        )
        # One LIF neuron per graph node
        self.neurons = nn.ModuleList(LIFNeuron() for _ in range(n_neurons))
        # Learnable synaptic weight per graph edge
        self.synapses = self._initialize_synapses(self.graph)

    def forward(self, input_spikes, timesteps):
        # input_spikes: (batch, timesteps, n_inputs) binary spike trains
        output_spikes = []
        for t in range(timesteps):
            # Route one timestep of input spikes through the graph
            spikes = self._propagate_spikes(input_spikes[:, t])
            output_spikes.append(spikes)
        # Stack back into (batch, timesteps, n_neurons)
        return torch.stack(output_spikes, dim=1)
```
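An illustrative instantiation (parameter values are arbitrary):
```python
model = RandomGraphSNN(n_neurons=512, connection_prob=0.05, small_world_rewiring=0.1)
input_spikes = (torch.rand(8, 16, 512) < 0.1).float()  # (batch, timesteps, neurons)
output_spikes = model(input_spikes, timesteps=16)
```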
**SpikingResidual:**
```python
import torch
import torch.nn as nn

class SpikingResidual(nn.Module):
    def __init__(self, in_channels, out_channels):
        super().__init__()
        # Spiking transformation f(x); adaptive pooling matches the
        # identity path's dimensions to the output
        self.conv = SpikingConv2d(in_channels, out_channels)
        self.pool = AdaptiveSpikingPool()
        # Learnable residual scaling α in y = f(x) + α · x
        self.alpha = nn.Parameter(torch.ones(1))

    def forward(self, x):
        residual = self.pool(x)  # dimension-matched identity path
        out = self.conv(x)       # spiking transformation f(x)
        return out + self.alpha * residual
```
**KeyPathwayLwF:**
```python
import torch
import torch.nn.functional as F

class KeyPathwayLwF:
    def __init__(self, lambda_replay=1.0, top_percent=10.0):
        self.lambda_replay = lambda_replay
        self.top_percent = top_percent  # top-k% of weights kept as key pathways

    def identify_key_pathways(self, model, dataloader):
        # Per-parameter Fisher information as the importance estimate
        fisher = self._compute_fisher_information(model, dataloader)
        # Key pathways: weights above the (100 - k)th importance percentile
        return [f > torch.quantile(f, 1.0 - self.top_percent / 100.0)
                for f in fisher]

    def step(self, model, optimizer, new_batch, replay_batch, key_masks):
        new_data, labels_new = new_batch
        replay_data, labels_replay = replay_batch
        loss_new = F.cross_entropy(model(new_data), labels_new)
        loss_replay = F.cross_entropy(model(replay_data), labels_replay)
        # Two separate gradient computations so each loss can be masked:
        # new-task gradients avoid key pathways, replay gradients are
        # confined to them, matching the KP-LwF learning rule above
        params = list(model.parameters())
        grads_new = torch.autograd.grad(loss_new, params)
        grads_replay = torch.autograd.grad(loss_replay, params)
        for p, g_new, g_rep, m in zip(params, grads_new, grads_replay, key_masks):
            p.grad = g_new * (1 - m.float()) + self.lambda_replay * g_rep * m.float()
        optimizer.step()
```
## Applications
### Neuromorphic Computing
**Advantages:**
- Energy-efficient inference
- Event-driven computation
- Compatible with neuromorphic chips
- Real-time processing
**Deployment:**
- Intel Loihi
- IBM TrueNorth
- BrainScaleS
- Dynap-SE
### Continual Learning
**Scenarios:**
- Incremental class learning
- Task-incremental learning
- Domain-incremental learning
- Online learning
**Performance:**
- Reduced catastrophic forgetting
- Efficient knowledge transfer
- Scalable to many tasks
### Brain-Inspired AI
**Research Applications:**
- Understanding neural computation
- Brain simulation
- Cognitive modeling
- Neuroscience hypothesis testing
## Experimental Results
### Datasets
**Neuromorphic Datasets:**
- DVS-Gesture
- N-Caltech101
- N-MNIST
- CIFAR10-DVS
**Static Image Datasets:**
- Tiny-ImageNet
- CIFAR-10/100
- ImageNet (subset)
### Performance Metrics
| Dataset | Method | Accuracy | Energy (pJ) |
|---------|--------|----------|-------------|
| DVS-Gesture | SNN (ResNet) | 85.2% | 12.5 |
| DVS-Gesture | **CogniSNN** | **92.8%** | **8.3** |
| Tiny-ImageNet | ANN | 62.1% | 2450 |
| Tiny-ImageNet | **CogniSNN** | **63.4%** | **156** |
### Continual Learning Performance
| Tasks | Baseline | CogniSNN (KP-LwF) |
|-------|----------|-------------------|
| 5 tasks | 34.2% | **78.5%** |
| 10 tasks | 21.8% | **71.3%** |
| 20 tasks | 15.4% | **64.7%** |
## Comparison with Traditional Approaches
| Feature | Traditional SNN | CogniSNN |
|---------|----------------|----------|
| Architecture | Layered, feedforward | Random graph, recurrent |
| Connectivity | Deterministic | Stochastic, brain-like |
| Depth | Limited by vanishing spikes | Residual connections enable depth |
| Continual Learning | Catastrophic forgetting | Key pathway reuse |
| Flexibility | Fixed structure | Dynamic growth |
| Neuromorphic Efficiency | Moderate | High (dynamic timesteps) |
## Activation Keywords
- CogniSNN
- spiking neural network
- SNN
- random graph architecture
- brain-inspired
- neuromorphic computing
- continual learning SNN
- energy-efficient neural network
- spike-based learning
- dynamic neural network
- key pathway learning
## References
- **Paper:** CogniSNN: Enabling Neuron-Expandability, Pathway-Reusability, and Dynamic-Configurability with Random Graph Architectures in Spiking Neural Networks
- **Authors:** Huang et al.
- **arXiv:** 2512.11743
- **Date:** December 2025
## Related Skills
- `spiking-neural-network-analysis`: SNN analysis and evaluation
- `neuromorphic-computing`: Neuromorphic hardware programming
- `brain-inspired-intelligence-paradigm`: Brain-inspired AI frameworks
- `continual-learning-neural`: Continual learning methodologies