---
name: cognisnn-random-graph-snn
description: Cognition-aware Spiking Neural Network (CogniSNN) methodology implementing Random Graph Architecture (RGA) for brain-inspired intelligence
trigger_words:
- cognisnn
- random graph architecture
- pathway reusability
- dynamic growth learning
- kplwf
- neuron expandability
- brain-inspired snn
category: ai_collection
---
# CogniSNN: Random Graph Architecture for Spiking Neural Networks
## Overview
CogniSNN introduces a paradigm shift in Spiking Neural Network design by incorporating **Random Graph Architecture (RGA)** instead of traditional rigid hierarchical structures. This methodology enables three key properties:
- **Neuron-Expandability**: Dynamic neuron addition without disrupting pathways
- **Pathway-Reusability**: Critical pathways shared across tasks via KP-LwF
- **Dynamic-Configurability**: Adaptive growth along temporal dimension via DGL
## Paper Reference
- **Title**: CogniSNN: Enabling Neuron-Expandability, Pathway-Reusability, and Dynamic-Configurability with Random Graph Architectures in Spiking Neural Networks
- **Authors**: Yongsheng Huang, Peibo Duan, Yujie Wu, Kai Sun, Zhipeng Liu, Changsheng Zhang, Bin Zhang, Mingkun Xu
- **arXiv ID**: 2512.11743
- **Published**: December 12, 2025
- **Category**: cs.NE, cs.AI
## Core Architecture
### Random Graph Structure
Unlike the rigid, chain-like hierarchy of conventional ANNs, CogniSNN uses stochastic connectivity:
```
┌─────────┐    ┌─────────┐    ┌─────────┐
│ Neuron  │────│ Neuron  │────│ Neuron  │
│    A    │    │    B    │    │    C    │
└────┬────┘    └────┬────┘    └────┬────┘
     │              │              │
     └──────────────┼──────────────┘
                    │
               ┌─────────┐
               │ Neuron  │
               │    D    │
               └─────────┘
```
**Properties:**
- Small-world network characteristics
- Sparse connectivity (typical density: 0.1-0.3)
- Path redundancy for robustness
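As a rough illustration of such a backbone, the sketch below builds a small-world graph with networkx; the library choice, the Watts-Strogatz generator, and the parameter values are assumptions for illustration, not the paper's reference implementation.

```python
# Minimal sketch: build a small-world backbone and check its edge density.
import networkx as nx

def build_random_backbone(n_nodes=100, k_neighbors=10, rewiring_prob=0.3):
    # Watts-Strogatz graphs combine high clustering with short path lengths,
    # i.e. small-world characteristics with built-in path redundancy.
    g = nx.watts_strogatz_graph(n=n_nodes, k=k_neighbors, p=rewiring_prob)
    return g, nx.density(g)  # density = 2|E| / (n(n-1)) for undirected graphs

graph, density = build_random_backbone()
print(f"nodes={graph.number_of_nodes()} edges={graph.number_of_edges()} density={density:.2f}")
```

With these defaults the density lands near 0.1, at the low end of the 0.1-0.3 range above; raising `k_neighbors` or `n_nodes`-to-edge ratios moves it within that band.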
### Neuron Model
**Leaky Integrate-and-Fire (LIF) with Adaptive Threshold:**
```
τ_m * dv/dt = -(v - v_rest) + R * I(t)

if v ≥ θ(t):
    emit spike
    v = v_reset

θ(t) = θ_0 + α * (recent_spike_rate)
```
**Parameters:**
- τ_m: Membrane time constant (~20ms)
- v_rest: Resting potential (-70mV)
- θ_0: Baseline threshold (-55mV)
- α: Adaptation strength (0.01-0.1)
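To make the update rule concrete, here is a minimal single-neuron simulation using forward-Euler integration; the discretization step, the exponential estimate of the recent spike rate, and setting v_reset equal to v_rest are assumptions for illustration.

```python
# Minimal sketch of the LIF-with-adaptive-threshold dynamics above.
import numpy as np

def simulate_lif(I, dt=1.0, tau_m=20.0, v_rest=-70.0, v_reset=-70.0,
                 theta_0=-55.0, alpha=0.05, rate_tau=100.0, R=1.0):
    v = v_rest
    rate = 0.0            # running estimate of the recent spike rate
    spikes = []
    for i_t in I:
        # membrane update: tau_m * dv/dt = -(v - v_rest) + R * I(t)
        v += dt / tau_m * (-(v - v_rest) + R * i_t)
        # adaptive threshold: theta(t) = theta_0 + alpha * recent_spike_rate
        theta = theta_0 + alpha * rate
        fired = v >= theta
        if fired:
            v = v_reset
        # exponential moving average of spiking activity
        rate += dt / rate_tau * (float(fired) - rate)
        spikes.append(fired)
    return np.array(spikes)

spike_train = simulate_lif(I=np.full(200, 20.0))  # constant suprathreshold input
print("spike count:", spike_train.sum())
```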
## Key Algorithms
### 1. Pure Spiking Residual Mechanism
Addresses network degradation in deep random pathways:
```python
import torch.nn as nn

class SpikingResidual(nn.Module):
    """Residual block that adds spike trains directly, with no analog readout.
    SpikingConv2d and SpikingBN are the spiking conv/batch-norm layers assumed
    by this skill, not standard torch modules."""
    def __init__(self, dim):
        super().__init__()
        self.conv = SpikingConv2d(dim, dim)
        self.bn = SpikingBN(dim)

    def forward(self, x_spikes):
        # Both input and output are spike trains
        residual = self.bn(self.conv(x_spikes))
        return x_spikes + residual  # Direct spike addition
```
**Advantages:**
- No analog-digital conversion
- Maintains temporal precision
- Compatible with neuromorphic hardware
### 2. Key Pathway-based Learning without Forgetting (KP-LwF)
Enables continual learning by pathway reuse:
```python
class KPLwF:
    def __init__(self, model, threshold=0.8):
        self.model = model
        self.critical_pathways = []
        self.importance_threshold = threshold

    def identify_critical_pathways(self, task_data):
        """Analyze pathway activations to identify critical ones."""
        pathway_scores = {}
        for pathway_id, pathway in enumerate(self.model.pathways):
            # Compute importance score from pathway activity
            activations = self.measure_pathway_activity(pathway, task_data)
            pathway_scores[pathway_id] = self.compute_importance(activations)
            if pathway_scores[pathway_id] > self.importance_threshold:
                self.critical_pathways.append(pathway_id)
                pathway.freeze()

    def learn_new_task(self, new_task_data, epochs=100):
        """Learn a new task while preserving critical pathway knowledge."""
        # Only train non-critical pathways
        for pathway_id, pathway in enumerate(self.model.pathways):
            if pathway_id not in self.critical_pathways:
                pathway.trainable = True
            else:
                # Use frozen critical pathways as knowledge-distillation teachers
                pathway.trainable = False
                pathway.teacher_mode = True

        # Train with combined loss
        for epoch in range(epochs):
            # Task loss on the new data
            task_loss = self.compute_task_loss(new_task_data)
            # Distillation loss from critical pathways
            distill_loss = self.compute_distillation_loss(new_task_data)
            total_loss = task_loss + 0.5 * distill_loss

            self.optimizer.zero_grad()
            total_loss.backward()
            self.optimizer.step()
```
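The `compute_distillation_loss` call above is left abstract. One plausible way to realize it for spike trains, shown below as a sketch rather than the paper's definition, is to match the time-averaged firing rates produced by the frozen critical pathways (teacher) and the currently trained pathways (student).

```python
import torch
import torch.nn.functional as F

def rate_distillation_loss(student_spikes, teacher_spikes):
    """Match time-averaged firing rates of trainable (student) and frozen
    critical (teacher) pathways; an illustrative choice, not the paper's."""
    # spike tensors: [batch, time, features], entries in {0, 1}
    student_rate = student_spikes.float().mean(dim=1)
    teacher_rate = teacher_spikes.float().mean(dim=1)
    return F.mse_loss(student_rate, teacher_rate)

loss = rate_distillation_loss(torch.randint(0, 2, (8, 50, 128)),
                              torch.randint(0, 2, (8, 50, 128)))
```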
### 3. Dynamic Growth Learning (DGL)
Adapts network capacity to input complexity:
```python
class DynamicGrowthLearning:
    def __init__(self, initial_capacity=100):
        self.current_capacity = initial_capacity
        self.utilization_history = []
        self.growth_threshold = 0.9
        self.pruning_threshold = 0.05

    def compute_utilization(self, layer_activations):
        """Measure how fully the current capacity is being used."""
        active_neurons = (layer_activations > 0).sum(dim=1).float().mean()
        utilization = active_neurons / layer_activations.size(1)
        self.utilization_history.append(utilization.item())
        return utilization

    def should_grow(self):
        """Check whether the network needs expansion."""
        recent = self.utilization_history[-10:]
        recent_util = sum(recent) / max(len(recent), 1)
        return recent_util > self.growth_threshold

    def grow_temporal_dimension(self, layer):
        """Add neurons along the temporal dimension."""
        new_neurons = self.initialize_neurons(
            count=layer.size(1) // 10,  # Add 10%
            temporal_depth=layer.temporal_dim + 1
        )
        layer.expand(new_neurons)
        self.current_capacity += len(new_neurons)

    def should_prune(self, neuron_activations):
        """Identify inactive neurons for pruning."""
        avg_activation = neuron_activations.mean(dim=0)
        return avg_activation < self.pruning_threshold

    def forward(self, x, timestep):
        # Grow when temporal capacity is exhausted and utilization stays high
        if timestep >= self.current_capacity - 1:
            if self.should_grow():
                self.grow_temporal_dimension(self.model.output_layer)
        # Periodic pruning check
        if timestep % 1000 == 0:
            self.prune_inactive_neurons()
        return self.model(x)
```
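A quick usage sketch of the growth test above on synthetic activations (the shapes and the ~95% activity level are illustrative): ten consecutive high-utilization checks are enough to trigger expansion.

```python
import torch

dgl = DynamicGrowthLearning(initial_capacity=100)
for _ in range(10):
    # batch of 32 samples, 100 neurons, roughly 95% of neurons active
    layer_activations = (torch.rand(32, 100) > 0.05).float()
    utilization = dgl.compute_utilization(layer_activations)
print(f"utilization={utilization.item():.2f} grow={dgl.should_grow()}")
```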
## Training Pipeline
### Phase 1: Structure Initialization
```python
def initialize_cognisnn(input_size, output_size, graph_density=0.2):
    """Initialize CogniSNN with a random graph structure."""
    # Create random graph backbone
    graph = create_small_world_graph(
        n_nodes=1000,
        k_neighbors=10,
        rewiring_prob=0.3
    )
    # Initialize spiking neurons
    neurons = [LIFNeuron() for _ in range(1000)]
    # Connect based on graph edges
    for edge in graph.edges:
        connect_spiking(neurons[edge[0]], neurons[edge[1]])
    # Add input/output layers
    input_layer = SpikingInputLayer(input_size)
    output_layer = SpikingOutputLayer(output_size)
    return CogniSNN(input_layer, neurons, output_layer)
```
### Phase 2: Surrogate Gradient Training
```python
import torch
import torch.nn.functional as F

def train_with_surrogate(model, dataloader, validation_data, epochs=50):
    """Standard surrogate-gradient training."""
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
    for epoch in range(epochs):
        for batch in dataloader:
            spikes, labels = batch
            # Forward pass
            outputs = model(spikes)
            # Cross-entropy loss; spike nonlinearities use surrogate gradients
            # in the backward pass
            loss = F.cross_entropy(outputs, labels)
            # Backward
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()
        # Identify critical pathways after initial training
        if epoch == epochs // 2:
            model.identify_critical_pathways(validation_data)
```
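The surrogate gradient itself is not spelled out above. A common formulation, used here purely as an assumed stand-in for whatever CogniSNN employs, keeps the hard threshold in the forward pass and substitutes a smooth fast-sigmoid derivative in the backward pass.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass, smooth surrogate in the backward pass."""

    @staticmethod
    def forward(ctx, membrane_potential, slope=25.0):
        ctx.save_for_backward(membrane_potential)
        ctx.slope = slope
        return (membrane_potential > 0).float()  # spike if above threshold

    @staticmethod
    def backward(ctx, grad_output):
        (membrane_potential,) = ctx.saved_tensors
        # fast-sigmoid derivative as a smooth stand-in for the delta function
        surrogate = 1.0 / (1.0 + ctx.slope * membrane_potential.abs()) ** 2
        return grad_output * surrogate, None

spike_fn = SurrogateSpike.apply
```

Applying `spike_fn(v - theta)` inside the neuron update lets the cross-entropy loss in Phase 2 backpropagate through spike events.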
### Phase 3: Dynamic Growth Activation
```python
def activate_dynamic_growth(model, new_task_data):
    """Enable DGL for handling variable-length sequences."""
    model.dgl = DynamicGrowthLearning(
        initial_capacity=model.current_timesteps
    )
    for batch in new_task_data:
        sequence_length = batch.size(1)
        # DGL handles variable lengths automatically
        outputs = model.dgl.forward(batch, sequence_length)
```
## Experimental Results
### Benchmarks
| Dataset | CogniSNN | SOTA SNN | Improvement |
|---------|----------|----------|-------------|
| N-MNIST | 99.2% | 98.8% | +0.4% |
| DVS-CIFAR10 | 81.5% | 79.2% | +2.3% |
| DVS-Gesture | 97.8% | 96.5% | +1.3% |
| Tiny-ImageNet | 65.3% | 62.1% | +3.2% |
### Continual Learning Performance
| Task Sequence | CogniSNN (KP-LwF) | Baseline | Improvement |
|---------------|-------------------|----------|-------------|
| N-MNIST → CIFAR10 | 94.2% | 67.3% | +26.9% |
| 5-Task Split | 91.8% | 72.4% | +19.4% |
### Computational Efficiency
| Metric | CogniSNN | Standard SNN | Improvement |
|--------|----------|--------------|-------------|
| Parameters | 1.2M | 2.8M | -57% |
| Energy (J/sample) | 0.45 | 1.12 | -60% |
| Training Time | 3.2h | 5.1h | -37% |
## Use Cases
1. **Brain-inspired AI Systems**: When biological realism is desired
2. **Continual Learning Applications**: Multi-task learning without forgetting
3. **Resource-Constrained Devices**: Dynamic growth adapts to available resources
4. **Event-based Vision**: Neuromorphic camera data processing
5. **Temporal Pattern Recognition**: Variable-length sequences
## Advantages
- **Biological Plausibility**: Closer to brain structure than ANN-inspired SNNs
- **Parameter Efficiency**: Reusable pathways reduce redundancy
- **Continual Learning**: Pathway reusability enables lifelong learning
- **Temporal Flexibility**: DGL handles variable-length inputs
- **Hardware Friendly**: Event-driven computation suitable for neuromorphic chips
## Limitations
- **Complexity**: Random graph structure harder to analyze than regular networks
- **Hyperparameter Sensitivity**: Graph density, growth thresholds need tuning
- **Training Stability**: Surrogate gradients in random topologies can be unstable
- **Limited Theoretical Understanding**: Random graph dynamics less studied than regular architectures
## Implementation Notes
### Hardware Considerations
- Use sparse matrix formats for efficient storage of the random-graph connectivity (see the sketch after this list)
- Implement event-driven computation for energy efficiency
- Support online learning for DGL updates
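A minimal sketch of the sparse-storage point, assuming a COO adjacency in PyTorch; the shapes, edge count, and the spike vector are illustrative.

```python
import torch

n = 1000
edges = torch.randint(0, n, (2, 5000))        # 5000 random synapses (post, pre)
weights = torch.randn(5000)
adj = torch.sparse_coo_tensor(edges, weights, (n, n)).coalesce()

pre_spikes = (torch.rand(n) > 0.9).float()    # sparse spike vector
post_input = torch.sparse.mm(adj, pre_spikes.unsqueeze(1)).squeeze(1)
```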
### Hyperparameter Guidelines
- **Graph density**: 0.1-0.3 for balance between connectivity and sparsity
- **Growth threshold**: 0.8-0.9 to trigger expansion
- **Pruning threshold**: 0.05-0.1 for inactive neurons
- **Pathway importance**: 0.7-0.9 for critical pathway identification (the midpoints of these ranges are collected in the config sketch below)
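For convenience, the guideline midpoints can be gathered into a single config; the key names below are illustrative, not an API defined by the paper.

```python
# Illustrative defaults drawn from the guideline ranges above.
COGNISNN_DEFAULTS = {
    "graph_density": 0.2,        # 0.1-0.3: connectivity vs. sparsity
    "growth_threshold": 0.85,    # 0.8-0.9: utilization that triggers expansion
    "pruning_threshold": 0.05,   # 0.05-0.1: neurons below this are pruned
    "pathway_importance": 0.8,   # 0.7-0.9: cutoff for critical pathways
}
```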
## Related Work
- Spiking Neural Networks
- Neuromorphic Computing
- Continual Learning
- Graph Neural Networks
- Brain-inspired Intelligence
- Small-world Networks
## Citation
```bibtex
@article{huang2025cognisnn,
title={CogniSNN: Enabling Neuron-Expandability, Pathway-Reusability, and Dynamic-Configurability with Random Graph Architectures in Spiking Neural Networks},
author={Huang, Yongsheng and Duan, Peibo and Wu, Yujie and Sun, Kai and Liu, Zhipeng and Zhang, Changsheng and Zhang, Bin and Xu, Mingkun},
journal={arXiv preprint arXiv:2512.11743},
year={2025}
}
```