---
name: congestion-aware-delay-snn
description: "Congestion-Aware Dynamic Axonal Delay mechanism for Spiking Neural Networks. Decomposes delay into channel-wise static base delay + global activity-conditioned shift. Reduces delay parameters by ~50% while improving accuracy on temporal tasks. Source: arXiv:2605.01291 (Bai et al., May 2026)."
category: ai_collection
---
# Congestion-Aware Dynamic Axonal Delay for SNNs
## Description
This skill covers the Congestion-Aware Dynamic Axonal Delay mechanism for Spiking Neural Networks (SNNs). Instead of assigning a static delay to each individual synapse (which inflates parameter counts and limits adaptability), this approach decomposes delay into:
1. **Channel-wise static base delay** for temporal structuring
2. **Global activity-conditioned shift** that dynamically regulates state update rate under varying spike intensities
**Source Paper:** [arXiv:2605.01291](https://arxiv.org/abs/2605.01291) - "Congestion-Aware Dynamic Axonal Delay for Spiking Neural Networks" (Dewei Bai, Hongxiang Peng, Yunyun Zeng, Ziyu Zhang, Hong Qu, May 2, 2026)
## Activation Keywords
- congestion-aware delay
- dynamic axonal delay SNN
- delay learning SNN
- spike congestion
- temporal SNN
- SNN delay mechanism
- activity-conditioned delay
- 动态轴突延迟
- 脉冲神经网络延迟
- 拥塞感知延迟
## Core Problem
### Static Delay Limitations
Traditional delay learning in SNNs assigns a **fixed delay to each synapse**:
```
delay_ij = learnable_parameter # per-synapse
```
**Problems:**
- **Parameter explosion:** O(N²) delay parameters for N neurons
- **Limited adaptability:** Cannot adjust to input-dependent activity dynamics
- **Static behavior:** Same delay regardless of network congestion state
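To make the O(N²) scaling concrete, a back-of-the-envelope count (the layer sizes below are hypothetical, chosen purely for scale):
```python
# Hypothetical layer sizes, for scale only
n_pre = n_post = 256
per_synapse = n_pre * n_post  # one delay per synapse: 65,536 parameters
channel_wise = n_pre          # one delay per input channel: 256 parameters
print(per_synapse, channel_wise)  # 65536 256
```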
## Solution: Two-Level Delay Decomposition
### Level 1: Channel-Wise Static Base Delay
Instead of per-synapse delays, assign delays per **channel** (group of synapses):
```python
import math

# Channel-wise base delay (dramatically fewer parameters):
# one learnable scalar per channel, O(C) where C << N²

def continuous_delay(x, base_delay, t):
    """Read x at time t - base_delay, linearly interpolating between
    the two nearest integer time steps so gradients reach base_delay."""
    floor_d = math.floor(base_delay)
    ceil_d = math.ceil(base_delay)
    frac = base_delay - floor_d
    return (1 - frac) * x[t - floor_d] + frac * x[t - ceil_d]
```
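As a quick sanity check (the numbers are illustrative), a fractional delay of 2.3 read at step t = 5 blends the inputs from steps 3 and 2:
```python
x = [0.0, 1.0, 0.0, 1.0, 0.0, 1.0]  # toy input trace indexed by time step
y = continuous_delay(x, base_delay=2.3, t=5)
# floor = 2, ceil = 3, frac = 0.3  ->  0.7 * x[3] + 0.3 * x[2]
print(y)  # ≈ 0.7 (up to float rounding)
```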
### Level 2: Global Activity-Conditioned Shift
A **global shift** that dynamically adjusts based on current spike intensity:
```python
# Measure network congestion (spike intensity)
spike_intensity = mean(recent_spike_counts)
# Activity-conditioned shift
shift = shift_network(spike_intensity) # learned function
# Total effective delay
effective_delay = base_delay + shift
```
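The pseudocode leaves `recent_spike_counts` abstract. One concrete choice, sketched below, is a mean over a sliding window of recent time steps; the window length is a tunable assumption (see Pitfalls), not something the summary above fixes:
```python
import torch

def estimate_spike_intensity(spikes, t, window=8):
    """Mean firing rate over the last `window` steps ending at t.
    spikes: (batch, channel, time) binary tensor; returns (batch,)."""
    start = max(0, t - window)
    if start == t:  # no history yet at t = 0
        return spikes.new_zeros(spikes.shape[0])
    return spikes[:, :, start:t].mean(dim=(1, 2))
```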
### Key Insight
- **Base delay** captures the structural temporal relationships (what to align)
- **Congestion shift** captures the dynamic processing conditions (when to process)
- Together they model both **what** to align and **when** to process it
### Discretization at Inference
During training, delays are continuous (differentiable via linear interpolation). At inference, they are discretized:
```python
# Training: continuous (differentiable)
delay_continuous = base_delay + shift
# Inference: discretized (no interpolation overhead)
delay_discrete = round(delay_continuous)
```
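A minimal sketch of the inference-time path, assuming the clamped-at-start convention used during training; rounding replaces the interpolation blend with a single integer lookup (the helper name is mine, not from the paper):
```python
import torch

def apply_discrete_delay(x, delay_int):
    """Shift each channel back by a rounded integer delay, no interpolation.
    x: (batch, channel, time); delay_int: (batch, channel) long tensor."""
    time = x.shape[-1]
    t = torch.arange(time, device=x.device)
    idx = torch.clamp(t - delay_int.unsqueeze(-1), 0, time - 1)
    return torch.gather(x, 2, idx)

# delay_int = torch.round(base_delay + shift).long(), computed once per input
```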
## Implementation Workflow
### Step 1: Define Channel-Wise Base Delays
```python
import torch
import torch.nn as nn

class CongestionAwareDelay(nn.Module):
    def __init__(self, num_channels, max_delay=10):
        super().__init__()
        # Channel-wise base delays, initialized around the middle of the range
        self.base_delay = nn.Parameter(
            torch.randn(num_channels) * 0.5 + max_delay / 2
        )
        # Small MLP mapping scalar spike intensity to a global shift
        self.shift_mlp = nn.Sequential(
            nn.Linear(1, 16),
            nn.ReLU(),
            nn.Linear(16, 1)
        )
        self.max_delay = max_delay

    def forward(self, spikes, spike_intensity):
        """
        spikes: (batch, channel, time)
        spike_intensity: (batch,) current network congestion level
        """
        # Activity-conditioned shift: (batch, 1)
        shift = self.shift_mlp(spike_intensity.unsqueeze(-1))
        # Total delay per (batch, channel); the base broadcasts over the batch
        total_delay = self.base_delay.unsqueeze(0) + shift
        total_delay = torch.clamp(total_delay, 0, self.max_delay)
        # Apply delay with differentiable interpolation (Step 2)
        return self._apply_delay(spikes, total_delay)
```
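Once the `_apply_delay` method from Step 2 below is attached to the class, a quick shape check with random data (sizes chosen for illustration) looks like:
```python
delay_layer = CongestionAwareDelay(num_channels=32, max_delay=10)
spikes = (torch.rand(4, 32, 100) > 0.9).float()  # (batch, channel, time)
intensity = spikes.mean(dim=(1, 2))              # crude congestion proxy
delayed = delay_layer(spikes, intensity)
print(delayed.shape)  # torch.Size([4, 32, 100])
```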
### Step 2: Differentiable Delay Application
```python
# Method of CongestionAwareDelay, continuing Step 1
def _apply_delay(self, x, delay):
    """Apply per-(batch, channel) delays with linear interpolation so
    that gradients reach the delay parameters through `frac`.
    x: (batch, channel, time); delay: (batch, channel)."""
    time = x.shape[-1]
    floor_d = torch.floor(delay).long()             # (batch, channel)
    ceil_d = torch.ceil(delay).long()
    frac = (delay - floor_d.float()).unsqueeze(-1)  # (batch, channel, 1)
    # Time indices shifted back by the delay, clamped at the sequence start
    t = torch.arange(time, device=x.device)
    idx_floor = torch.clamp(t - floor_d.unsqueeze(-1), 0, time - 1)
    idx_ceil = torch.clamp(t - ceil_d.unsqueeze(-1), 0, time - 1)
    # Linear interpolation between the two nearest discrete time steps
    x_floor = torch.gather(x, 2, idx_floor)
    x_ceil = torch.gather(x, 2, idx_ceil)
    return (1 - frac) * x_floor + frac * x_ceil
```
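Continuing the smoke test above, a quick way to confirm that the interpolation keeps the delays trainable (the loss here is arbitrary):
```python
out = delay_layer(spikes, intensity)
out.sum().backward()
# Gradient reaches base_delay through the `frac` term of the interpolation
print(delay_layer.base_delay.grad is not None)  # True
```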
### Step 3: Integration with SNN Layer
```python
class DelayedSNNLayer(nn.Module):
    def __init__(self, in_channels, out_channels, max_delay=10):
        super().__init__()
        self.delay = CongestionAwareDelay(in_channels, max_delay)
        self.synaptic_weights = nn.Linear(in_channels, out_channels)
        self.lif = LIFNeuron(out_channels)  # any LIF model; see the sketch below

    def forward(self, spikes, spike_intensity):
        # Apply congestion-aware delay
        delayed_spikes = self.delay(spikes, spike_intensity)
        # Synaptic integration: (batch, time, in) -> (batch, time, out)
        membrane_input = self.synaptic_weights(delayed_spikes.transpose(-1, -2))
        # LIF neuron dynamics
        output_spikes = self.lif(membrane_input)
        return output_spikes
```
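`LIFNeuron` is not defined above; in practice you would take it from an SNN library such as snnTorch or SpikingJelly. A minimal self-contained placeholder, assuming a soft reset and a straight-through surrogate gradient, could look like:
```python
class LIFNeuron(nn.Module):
    """Minimal leaky integrate-and-fire over a (batch, time, features) input."""
    def __init__(self, num_features, tau=2.0, threshold=1.0):
        super().__init__()
        # num_features kept for interface parity; shapes are inferred from input
        self.decay = 1.0 - 1.0 / tau  # membrane leak factor
        self.threshold = threshold

    def forward(self, x):
        batch, time, feat = x.shape
        v = x.new_zeros(batch, feat)  # membrane potential
        spikes = []
        for t in range(time):
            v = self.decay * v + x[:, t]
            hard = (v >= self.threshold).float()
            # straight-through estimator: hard spikes forward, identity backward
            spike = hard.detach() + v - v.detach()
            v = v - hard * self.threshold  # soft reset
            spikes.append(spike)
        return torch.stack(spikes, dim=1)
```
The identity surrogate is the crudest choice; published SNN work typically uses smoother surrogates (e.g., fast sigmoid), so treat this purely as a stand-in to make the layer runnable.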
## Results
| Dataset | Metric | SOTA Delay Methods | Congestion-Aware | Improvement |
|---------|--------|--------------------|-------------------|-------------|
| SHD | Accuracy | ~92% | **93.75%** | +1.75% |
| SSC | Accuracy | ~78% | **80.49%** | +2.49% |
| GSC-35 | Accuracy | ~94% | **95.53%** | +1.53% |
| All | Delay parameters | Baseline | **~50% fewer** | Half the delay parameters |
## When to Use
- **Temporal/spatiotemporal tasks** where spike timing matters
- **Speech recognition** with SNNs
- **Event-based sensor processing** (DVS, audio)
- **Resource-constrained deployment** (need to reduce parameters)
- **Dynamic environments** where input statistics change
## Advantages
1. **Parameter efficiency:** ~50% fewer delay parameters vs. per-synapse delays
2. **Dynamic adaptability:** Adjusts to network congestion in real-time
3. **Differentiable training:** Linear interpolation enables gradient-based optimization
4. **Zero inference overhead:** Discretization removes interpolation cost at deployment
5. **Improved accuracy:** Better spike alignment on temporal tasks
## Pitfalls
- **Shift network capacity:** Too small → cannot model congestion dynamics; too large → defeats efficiency goal
- **Delay range:** Clamp delays to reasonable bounds to avoid temporal misalignment
- **Intensity measurement:** Use appropriate window size for spike intensity estimation
- **Task sensitivity:** Works best for tasks where temporal alignment is critical
## Related Skills
- `spiking-neural-network-analysis`
- `delay-adaptive-snn-classifier`
- `stdp-synaptic-delay-learning`
- `multiplication-free-spike-time-fpga`
## Reference
```bibtex
@article{bai2026congestion,
title = {Congestion-Aware Dynamic Axonal Delay for Spiking Neural Networks},
author = {Bai, Dewei and Peng, Hongxiang and Zeng, Yunyun and Zhang, Ziyu and Qu, Hong},
journal = {arXiv preprint},
year = {2026},
eprint = {2605.01291},
primaryClass = {cs.LG},
date = {2026-05-02}
}
```