---
name: circuit-level-spiking-neuron-robustness
description: "Circuit-level spiking neuron model for hardware robustness analysis. Studies how transistor-level variations affect SNN reliability on neuromorphic chips. Activation: circuit-level SNN, neuromorphic hardware reliability, transistor variation spiking, hardware spiking neuron, CMOS spiking, SNN fault tolerance"
version: 1.0.0
metadata:
  hermes:
    tags: [spiking-neural-networks, neuromorphic-hardware, robustness, circuit-design]
    source_paper: "arXiv:2502.09876"
---
# Circuit-Level Spiking Neuron Models for Robustness Analysis
## Overview
Analyzes how circuit-level hardware variations (transistor threshold voltage shifts, process variations, temperature drift) affect the computational reliability of spiking neuron implementations on neuromorphic chips, bridging device-level non-idealities to network-level accuracy degradation.
## Core Concepts
### Hardware-Aware LIF Neuron
```python
import torch
import torch.nn as nn

class HardwareAwareLIF(nn.Module):
    """LIF neuron whose time constants and threshold carry per-device mismatch."""
    def __init__(self, tau_mem=10.0, v_threshold=1.0, tau_syn=5.0, variation_std=0.05):
        super().__init__()
        # Sample a fixed multiplicative variation per parameter (process mismatch)
        self.tau_mem = tau_mem * (1 + torch.randn(1) * variation_std)
        self.v_threshold = v_threshold * (1 + torch.randn(1) * variation_std)
        self.tau_syn = tau_syn * (1 + torch.randn(1) * variation_std)

    def forward(self, x, time_steps=10):
        v = torch.zeros_like(x)      # membrane potential
        i_syn = torch.zeros_like(x)  # synaptic current
        spikes = []
        for _ in range(time_steps):
            i_syn = i_syn * torch.exp(-1 / self.tau_syn) + x  # synaptic decay + input
            v = v * torch.exp(-1 / self.tau_mem) + i_syn      # membrane integration
            spike = (v >= self.v_threshold).float()
            v = v * (1 - spike)                               # hard reset on spike
            spikes.append(spike)
        return torch.stack(spikes).sum(0)  # spike count per input element
```
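The effect of threshold mismatch on firing can be seen in a minimal pure-Python sketch. This is not from the paper: the input current, time constant, and 10% shift are illustrative values, and the dynamics are a scalar version of the update rule in the module above.

```python
import math

def lif_spike_count(i_in=0.2, tau_mem=10.0, v_th=1.0, steps=50):
    """Spike count of a discrete-time LIF driven by a constant input current."""
    decay = math.exp(-1.0 / tau_mem)
    v, count = 0.0, 0
    for _ in range(steps):
        v = v * decay + i_in       # leaky integration, as in the module above
        if v >= v_th:
            count += 1
            v = 0.0                # hard reset on spike
    return count

nominal = lif_spike_count(v_th=1.0)   # 7 spikes over 50 steps
shifted = lif_spike_count(v_th=1.1)   # 10% higher threshold -> 6 spikes
```

A 10% threshold shift stretches the inter-spike interval from 7 to 8 steps here, dropping one spike in the window; rate-coded readouts downstream see this as a systematic bias.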
### Variation-Aware Simulation
```python
import numpy as np

def simulate_variation(neuron_params, n_trials=100):
    """Monte Carlo sweep over 5% Gaussian parameter variation.

    `evaluate_network` is assumed to be supplied elsewhere: it builds and
    tests a network from the perturbed parameters and returns its accuracy.
    """
    results = []
    for _ in range(n_trials):
        varied = {k: v * (1 + np.random.normal(0, 0.05)) for k, v in neuron_params.items()}
        accuracy = evaluate_network(varied)  # user-supplied evaluation
        results.append(accuracy)
    return np.mean(results), np.std(results)
```
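Since `simulate_variation` depends on an external `evaluate_network`, here is a self-contained sketch of the same Monte Carlo loop with a hypothetical accuracy proxy standing in for network evaluation. The 0.95 baseline and the drift penalty are illustrative, not figures from the paper.

```python
import random
import statistics

NOMINAL = {"tau_mem": 10.0, "v_threshold": 1.0, "tau_syn": 5.0}

def toy_accuracy(params):
    """Hypothetical proxy: baseline accuracy minus a penalty for mean relative drift."""
    drift = sum(abs(params[k] - NOMINAL[k]) / NOMINAL[k] for k in NOMINAL) / len(NOMINAL)
    return max(0.0, 0.95 - 2.0 * drift)

random.seed(42)
accs = []
for _ in range(200):
    # Same perturbation scheme as simulate_variation: 5% Gaussian per parameter
    varied = {k: v * (1 + random.gauss(0, 0.05)) for k, v in NOMINAL.items()}
    accs.append(toy_accuracy(varied))

mean_acc = statistics.mean(accs)
std_acc = statistics.stdev(accs)
```

The mean lands below the nominal baseline because parameter drift only ever costs accuracy in this proxy; the standard deviation is the quantity a robustness analysis would report alongside it.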
## Key Findings
- The membrane time constant is the most variation-sensitive parameter
- Threshold voltage mismatch causes a 3-8% accuracy drop at 5% variation
- Synaptic weight quantization has less impact than neuron parameter drift
- Redundant neuron ensembles reduce variation sensitivity by 40-60%
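The ensemble finding can be illustrated with a toy majority-vote model: each redundant neuron compares the input against its own mismatched threshold, and voting suppresses individual mismatch errors. The 10% mismatch sigma, input value, and ensemble size are illustrative assumptions, not values from the paper.

```python
import random

def noisy_decision(v, v_th=1.0, sigma=0.1):
    """Single neuron: fires iff v exceeds its own mismatched threshold."""
    return v >= v_th * (1 + random.gauss(0, sigma))

def ensemble_decision(v, n=5, v_th=1.0, sigma=0.1):
    """Majority vote over n redundant neurons with independent mismatch."""
    votes = sum(noisy_decision(v, v_th, sigma) for _ in range(n))
    return votes > n // 2

random.seed(0)
trials = 2000
v = 1.05  # input just above the nominal threshold: should fire
single_err = sum(not noisy_decision(v) for _ in range(trials)) / trials
ens_err = sum(not ensemble_decision(v) for _ in range(trials)) / trials
```

With these numbers the single-neuron miss rate is roughly 0.31 (the input sits half a mismatch sigma above threshold) while the 5-neuron vote cuts it to roughly 0.18, in the same ballpark as the 40-60% sensitivity reduction reported above.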
## Pitfalls
1. Variation models must match foundry PDK data
2. Temperature effects compound process variations
3. Monte Carlo estimates need 100+ trials for statistical significance
4. Analog vs digital implementations have different failure modes
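For pitfall 3, the required trial count follows from the standard error of the mean, SE = sigma / sqrt(n): halving the error bar on mean accuracy costs 4x the trials. A quick helper (the example sigma and target are illustrative):

```python
import math

def required_trials(sigma, target_se):
    """Trials needed so the standard error of the mean falls below target_se."""
    return math.ceil((sigma / target_se) ** 2)

# e.g. a 0.02 accuracy spread, error bar of 0.002 -> ~100 trials
n = required_trials(0.02, 0.002)
```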
## References
- arXiv:2502.09876
- Related: snn-internal-noise-analysis, quantized-snn-hardware-optimization
## Activation Keywords
- circuit-level SNN, neuromorphic hardware reliability, transistor variation spiking, hardware spiking neuron, CMOS spiking, SNN fault tolerance