---
name: adaptive-spiking-neuron-asn
description: "Adaptive Spiking Neuron (ASN) methodology for vision and language modeling. Implements trainable membrane potential dynamics with adaptive firing mechanisms for efficient Spiking Neural Networks (SNNs). Activation: adaptive spiking neuron, ASN, spiking neural network vision language, SNN adaptive neuron, neuromorphic vision language model."
---
# Adaptive Spiking Neurons for Vision and Language Modeling
## Overview
This skill provides implementation guidance for the Adaptive Spiking Neuron (ASN) and its normalized variant (NASN): a spiking neuron design that achieves strong performance on both vision and language tasks through trainable membrane potential dynamics and adaptive firing mechanisms.
## Key Features
- **Adaptive Spiking Neuron (ASN)**: Trainable parameters for learning membrane potential dynamics
- **Normalized Adaptive Spiking Neuron (NASN)**: Enhanced variant with integrated normalization for training stability
- **Integer Training & Spike Inference**: Efficient training paradigm for neuromorphic deployment
- **Cross-Modal Performance**: Validated on 19 datasets spanning vision and language tasks
## Methodology
### Core Concept
Traditional spiking neurons use fixed dynamics (e.g., exponential decay with a constant time constant). ASN instead makes the membrane potential evolution trainable:
```
τ_m * dV_m/dt = -(V_m - V_rest) + I_syn
```
With ASN, the time constant τ_m and the firing threshold become learned parameters rather than fixed constants.
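In discrete time this ODE is typically integrated with a forward-Euler step; the implementation below additionally assumes V_rest = 0, giving:
```
V[t] = V[t-1] + (I_syn[t] - V[t-1]) / τ_eff
```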
### Architecture Components
1. **Trainable Membrane Dynamics** (trained end-to-end through the spike via a surrogate gradient; see the sketch after this list)
- Learnable time constants τ
- Adaptive threshold mechanism
- Activity-dependent reset
2. **Integer Training Paradigm**
- Floating-point during training
- Integer inference for hardware deployment
- Quantization-aware optimization
3. **Normalization Integration (NASN)**
- Layer/batch normalization compatibility
- Stabilized training dynamics
- Better convergence properties
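Because the spike function is a hard threshold with zero gradient almost everywhere, training the membrane parameters requires a surrogate gradient. The paper's exact surrogate is not specified here; the sketch below uses a common rectangular window (the `SurrogateSpike` name and window width are assumptions):
```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike forward, rectangular surrogate gradient backward."""

    @staticmethod
    def forward(ctx, v, threshold):
        ctx.save_for_backward(v)
        ctx.threshold = threshold
        return (v >= threshold).float()

    @staticmethod
    def backward(ctx, grad_output):
        (v,) = ctx.saved_tensors
        # Pass gradients only within +/-0.5 of the threshold
        window = ((v - ctx.threshold).abs() < 0.5).float()
        return grad_output * window, None
```
During training, `spike = SurrogateSpike.apply(v, self.threshold)` would replace the hard threshold used in the neuron code below.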
## Implementation Guidelines
### Basic ASN Neuron
```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class AdaptiveSpikingNeuron(nn.Module):
    """
    Adaptive Spiking Neuron with trainable dynamics.

    Args:
        input_size: Input feature dimension
        hidden_size: Hidden state dimension
        tau_init: Initial time constant (default: 2.0)
        threshold: Firing threshold (default: 1.0)
    """
    def __init__(self, input_size, hidden_size, tau_init=2.0, threshold=1.0):
        super().__init__()
        self.input_proj = nn.Linear(input_size, hidden_size)
        # One learnable time constant per hidden channel
        self.tau = nn.Parameter(torch.ones(hidden_size) * tau_init)
        self.threshold = threshold
        self.hidden_size = hidden_size

    def forward(self, x_t, v_prev):
        # Effective time constant, kept positive via softplus
        tau_eff = F.softplus(self.tau) + 1.0
        # Synaptic input current
        i_syn = self.input_proj(x_t)
        # Leaky membrane update (forward Euler, V_rest = 0)
        dv = (i_syn - v_prev) / tau_eff
        v = v_prev + dv
        # Spike generation: hard threshold here; substitute a surrogate
        # gradient (e.g., SurrogateSpike above) when training
        spike = (v >= self.threshold).float()
        # Hard reset after a spike
        v = v * (1 - spike)
        return spike, v
```
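A minimal single-step usage sketch (all shapes are illustrative):
```python
neuron = AdaptiveSpikingNeuron(input_size=32, hidden_size=64)
x_t = torch.randn(8, 32)      # one timestep of input: (batch, features)
v = torch.zeros(8, 64)        # initial membrane potential
spike, v = neuron(x_t, v)     # spike: binary {0, 1} tensor of shape (8, 64)
```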
### NASN with Normalization
```python
class NormalizedASN(nn.Module):
    """
    Normalized Adaptive Spiking Neuron (NASN).

    Integrates normalization for stable training.
    """
    def __init__(self, input_size, hidden_size, tau_init=2.0):
        super().__init__()
        self.asn = AdaptiveSpikingNeuron(input_size, hidden_size, tau_init)
        self.norm = nn.LayerNorm(hidden_size)

    def forward(self, x_t, v_prev):
        spike, v = self.asn(x_t, v_prev)
        # Normalize the membrane potential before it is carried to the next step
        v_normalized = self.norm(v)
        return spike, v_normalized
```
### Multi-Layer SNN with ASN
```python
class ASNSNN(nn.Module):
    """
    Multi-layer SNN using ASN neurons.

    Suitable for both vision and language tasks.
    """
    def __init__(self, input_size, hidden_sizes, output_size, num_steps):
        super().__init__()
        self.num_steps = num_steps
        layers = []
        prev_size = input_size
        for hidden_size in hidden_sizes:
            layers.append(NormalizedASN(prev_size, hidden_size))
            prev_size = hidden_size
        self.layers = nn.ModuleList(layers)
        self.readout = nn.Linear(prev_size, output_size)

    def forward(self, x):
        # x shape: (batch, time_steps, features); requires time_steps >= num_steps
        batch_size = x.size(0)
        # Initialize membrane potentials to zero for every layer
        states = [torch.zeros(batch_size, layer.asn.hidden_size, device=x.device)
                  for layer in self.layers]
        spikes = []  # collects the final layer's spike train
        for t in range(self.num_steps):
            x_t = x[:, t, :]
            # Spikes from each layer drive the next layer at the same timestep
            for i, layer in enumerate(self.layers):
                spike, states[i] = layer(x_t, states[i])
                x_t = spike
            spikes.append(spike)
        # Readout from the final layer's membrane potential at the last timestep
        output = self.readout(states[-1])
        return output, torch.stack(spikes, dim=1)  # spikes: (batch, T, hidden)
```
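For example, a two-layer network unrolled over 8 timesteps (sizes illustrative):
```python
model = ASNSNN(input_size=32, hidden_sizes=[256, 128], output_size=10, num_steps=8)
x = torch.randn(4, 8, 32)     # (batch, time_steps, features)
logits, spikes = model(x)     # logits: (4, 10), spikes: (4, 8, 128)
```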
## Training Configuration
### Hyperparameters
| Parameter | Vision Tasks | Language Tasks |
|-----------|--------------|----------------|
| Time steps (T) | 4-8 | 8-16 |
| Initial τ | 2.0 | 3.0 |
| Learning rate | 1e-3 | 5e-4 |
| Batch size | 128 | 64 |
| Optimizer | AdamW | AdamW |
| Weight decay | 1e-4 | 1e-4 |
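A minimal sketch wiring the vision-column settings into an optimizer (hyperparameter values are taken from the table; the model sizes are illustrative):
```python
model = ASNSNN(input_size=3 * 32 * 32, hidden_sizes=[256, 128],
               output_size=10, num_steps=4)
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-3, weight_decay=1e-4)
```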
### Loss Function
```python
def asn_loss(output, target, spikes, lambda_reg=1e-5):
"""
Combined task loss and firing rate regularization
"""
task_loss = F.cross_entropy(output, target)
# Regularization: encourage sparse firing
firing_rate = spikes.mean()
reg_loss = lambda_reg * firing_rate
return task_loss + reg_loss
```
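One training step combining this loss with the optimizer configured above (tensor shapes are illustrative, with the table's batch size of 128):
```python
x = torch.randn(128, 4, 3 * 32 * 32)      # (batch, time_steps, features)
target = torch.randint(0, 10, (128,))     # class labels

logits, spikes = model(x)
loss = asn_loss(logits, target, spikes)

optimizer.zero_grad()
loss.backward()
optimizer.step()
```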
## Task-Specific Adaptations
### Vision Tasks (Image Classification)
```python
# Vision encoder with ASN.
# NOTE: ConvASNBlock (a spiking convolutional encoder) is assumed to be
# defined elsewhere and to emit features of shape (batch, T, 64, 8, 8).
class VisionASN(nn.Module):
    def __init__(self, num_classes, num_steps=4):
        super().__init__()
        # Use fewer time steps for vision (faster inference)
        self.encoder = ConvASNBlock(3, 64, num_steps)
        self.classifier = ASNSNN(64 * 8 * 8, [256, 128], num_classes, num_steps)

    def forward(self, x):
        features = self.encoder(x)
        # Flatten spatial dims but keep the time axis: (batch, T, 64*8*8)
        logits, _ = self.classifier(features.flatten(2))
        return logits
```
### Language Tasks
```python
# Language modeling with ASN
class LanguageASN(nn.Module):
    def __init__(self, vocab_size, embed_dim, hidden_dim, num_steps=8):
        super().__init__()
        self.embedding = nn.Embedding(vocab_size, embed_dim)
        # Token positions double as SNN time steps, so sequences must
        # contain at least num_steps tokens
        self.snn = ASNSNN(embed_dim, [hidden_dim, hidden_dim], vocab_size, num_steps)

    def forward(self, tokens):
        # tokens: (batch, seq_len) -> embedded: (batch, seq_len, embed_dim)
        embedded = self.embedding(tokens)
        output, _ = self.snn(embedded)
        return output
```
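Hypothetical usage, with sequence positions serving as the SNN timesteps:
```python
lm = LanguageASN(vocab_size=10000, embed_dim=128, hidden_dim=256, num_steps=8)
tokens = torch.randint(0, 10000, (4, 8))   # (batch, seq_len), seq_len >= num_steps
logits = lm(tokens)                        # (batch, vocab_size)
```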
## Deployment Optimization
### Integer Conversion
```python
def quantize(tensor, bit_width):
    """Minimal symmetric uniform quantizer (a sketch; the paper's
    exact quantization scheme may differ)."""
    qmax = 2 ** (bit_width - 1) - 1
    scale = tensor.abs().max() / qmax
    return (tensor / scale).round().clamp(-qmax, qmax) * scale


def convert_to_integer_model(model, bit_width=8):
    """
    Convert a trained ASN model for integer-only inference.
    """
    for module in model.modules():
        # Quantize linear-layer weights
        if isinstance(module, nn.Linear):
            module.weight.data = quantize(module.weight.data, bit_width)
        # Round time constants to Q8 fixed point (8 fractional bits)
        elif isinstance(module, AdaptiveSpikingNeuron):
            module.tau.data = (module.tau.data * 256).round() / 256
    return model
```
### Hardware Deployment (Neuromorphic Chips)
Key considerations for deploying ASN on neuromorphic hardware:
- Pre-compute τ values as lookup tables
- Implement the membrane update with fixed-point arithmetic (sketched below)
- Optimize spike generation for event-driven processing
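A minimal sketch of the second point, assuming a Q8 fixed-point format (matching the τ rounding in `convert_to_integer_model`); the function name and format are illustrative:
```python
FRAC_BITS = 8  # Q8 fixed point: real value = integer / 256

def fixed_point_membrane_update(v_q, i_q, inv_tau_q):
    """All arguments are integers in Q8 fixed point.

    inv_tau_q holds a precomputed 1/tau value, e.g. read from a
    per-channel lookup table (first bullet above).
    """
    # (i_q - v_q) * inv_tau_q is Q16; shift back down to Q8
    dv_q = ((i_q - v_q) * inv_tau_q) >> FRAC_BITS
    return v_q + dv_q
```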
## Evaluation Benchmarks
The ASN family has been validated on:
**Vision Tasks:**
- CIFAR-10/100
- ImageNet
- Tiny-ImageNet
- MNIST variants
**Language Tasks:**
- Language modeling (PTB, WikiText)
- Text classification
- Question answering
## Advantages Over Standard SNNs
1. **Adaptive Dynamics**: Learns optimal time constants per neuron/channel
2. **Training Efficiency**: Integer training paradigm reduces memory overhead
3. **Stability**: NASN variant with normalization enables deeper networks
4. **Cross-Modal**: Single neuron design works across vision and language
## References
- Paper: "Adaptive Spiking Neurons for Vision and Language Modeling" (arXiv:2604.12365)
- Authors: Zhou et al., 2026
- Categories: cs.NE (Neural and Evolutionary Computing)
## Related Skills
- `spiking-neural-network-analysis`: Analysis framework for SNN papers
- `snn-learning-survey`: Comprehensive survey of SNN learning rules
- `neuromorphic-computing`: Hardware deployment guidelines
_Last updated: 2026-04-27_