Free SKILL.md scraped from GitHub. Install it with the `versuz` CLI below, or clone the repo and copy the file directly into your Claude Code skills directory:
```
npx versuz@latest install hiyenwong-ai-collection-collection-skills-globally-optimal-snn-parameter-reconstruction
```

Or clone and copy manually:

```
git clone https://github.com/hiyenwong/ai_collection.git
cp ai_collection/SKILL.MD ~/.claude/skills/hiyenwong-ai-collection-collection-skills-globally-optimal-snn-parameter-reconstruction/SKILL.md
```

---
name: globally-optimal-snn-parameter-reconstruction
description: "Globally optimal Spiking Neural Network (SNN) training via a parameter reconstruction methodology. Extends convexification of parallel feedforward threshold networks to parallel recurrent threshold networks, enabling a parameter reconstruction algorithm that avoids surrogate gradient approximation errors. Applicable to SNN training, optimization, energy-efficient neural networks. Triggers: SNN training, surrogate gradient, spiking neural network optimization, convex training, globally optimal SNN."
---

# Globally Optimal SNN Training via Parameter Reconstruction

## Overview

Methodology from arXiv:2605.08022 (Udupi, Yang, Zhai, 2026-05-08) for training Spiking Neural Networks without surrogate gradient approximation errors.

## Core Problem

SNN training typically relies on **surrogate gradients** because the spike function is non-differentiable. The surrogate introduces approximation errors that accumulate across layers and limit performance.

## Key Innovation

**Parameter Reconstruction Algorithm**: extends convexification theory from parallel feedforward threshold networks to **parallel recurrent threshold networks**, which subsume parallel SNNs as a structured special case.

## Methodology

### 1. Convexification Framework

- Extend convexification from feedforward to recurrent threshold networks
- Parallel SNNs are a structured special case of parallel recurrent threshold networks
- The theoretical framework provides global optimality guarantees

A minimal formalization of these network classes is sketched at the end of this document.

### 2. Parameter Reconstruction

```
ANN Training -> Parameter Reconstruction -> SNN Parameters
```

- Train an equivalent threshold network with convex optimization
- Reconstruct the SNN parameters from the trained threshold network
- Avoids surrogate gradients entirely

A toy code sketch of the weight-to-SNN mapping appears at the end of this document.

### 3. Hybrid Approach

- Parameter reconstruction can be combined with surrogate-gradient training
- Reconstruction provides a better initialization
- Surrogate gradients fine-tune the solution (see the fine-tuning sketch at the end of this document)

## Key Advantages

1. **No approximation errors**: avoids the surrogate-gradient approximation
2. **Data scalability**: demonstrated consistent improvement with larger datasets
3. **Robust to configuration**: works across different model architectures
4. **Standalone or hybrid**: can be used alone or combined with existing methods

## Implementation Considerations

- Extend parallel feedforward threshold network convexification to the recurrent case
- Parameter reconstruction maps threshold network weights to SNN parameters
- Suitable for both classification and temporal sequence tasks

## Applications

- Energy-efficient SNN deployment
- Large-scale SNN training
- Neuromorphic hardware optimization
- Biological neural network simulation

## arXiv Reference

- **Paper**: Globally Optimal Training of Spiking Neural Networks via Parameter Reconstruction
- **ID**: 2605.08022
- **URL**: https://arxiv.org/abs/2605.08022
- **PDF**: https://arxiv.org/pdf/2605.08022v1
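## Illustrative Sketches

The sketches below are illustrative only; they are not reproduced from the paper.

### Threshold Network Classes

To make the network classes concrete, here is a minimal, generic formalization of a parallel feedforward threshold network and of a recurrent threshold unit. The symbols (weights `w_j`, thresholds `b_j`, readout weights `v_j`, leak `alpha`, state `s_t`) are shorthand assumed here, not notation taken from arXiv:2605.08022.

```
% Parallel feedforward threshold network with m parallel units:
f(x) = \sum_{j=1}^{m} v_j \, \mathbf{1}\!\left[ w_j^\top x \ge b_j \right]

% Recurrent threshold unit: the state s_t integrates input over time
% and is hard-reset after each output spike o_t. With this reset the
% unit is a leaky integrate-and-fire neuron, which is one way parallel
% SNNs arise as a structured special case of the recurrent class:
s_t = \alpha \, s_{t-1} \left( 1 - o_{t-1} \right) + w^\top x_t
o_t = \mathbf{1}\!\left[ s_t \ge b \right]
```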
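### Parameter Reconstruction Sketch

A toy instantiation of the `ANN Training -> Parameter Reconstruction -> SNN Parameters` pipeline, assuming the common mapping in which trained weights carry over unchanged and each unit's threshold becomes the firing threshold of an integrate-and-fire neuron. `ThresholdUnit`, `reconstruct_snn`, and `run_if_neuron` are hypothetical names; the paper's actual reconstruction algorithm is not reproduced here.

```python
import numpy as np

class ThresholdUnit:
    """One trained parallel threshold unit: output = v * 1[w.x >= b]."""
    def __init__(self, w, b, v):
        self.w, self.b, self.v = np.asarray(w, dtype=float), float(b), float(v)

def reconstruct_snn(units):
    """Map threshold-network parameters to integrate-and-fire parameters:
    weights copy over, the unit threshold b becomes the firing threshold,
    and the readout weight v becomes the output weight."""
    return [{"w": u.w, "threshold": u.b, "readout": u.v} for u in units]

def run_if_neuron(params, x_seq, leak=1.0):
    """Simulate one integrate-and-fire neuron with a hard reset."""
    s, spikes = 0.0, []
    for x in x_seq:
        s = leak * s + params["w"] @ x       # integrate input current
        fired = bool(s >= params["threshold"])
        spikes.append(int(fired))
        if fired:
            s = 0.0                          # hard reset after a spike
    return spikes

# Toy usage: one unit driven by a short sequence of 2-D inputs.
unit = ThresholdUnit(w=[0.8, -0.2], b=1.0, v=1.0)
snn = reconstruct_snn([unit])
print(run_if_neuron(snn[0], [np.array([1.0, 0.5]), np.array([0.9, 0.1])]))  # [0, 1]
```

Because no gradient of the spike function is ever taken in this step, reconstruction is where the surrogate-gradient approximation errors are avoided.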
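### Surrogate-Gradient Fine-Tuning Sketch

For the hybrid approach, reconstruction supplies the initialization and a surrogate gradient is used only for fine-tuning. A minimal PyTorch sketch, assuming the widely used "fast sigmoid" surrogate derivative (one common choice; not necessarily the paper's). `SurrogateSpike` and the sharpness constant `beta` are names assumed here.

```python
import torch

class SurrogateSpike(torch.autograd.Function):
    """Heaviside spike in the forward pass; a smooth 'fast sigmoid'
    derivative in the backward pass stands in for the undefined
    gradient of the spike function."""
    @staticmethod
    def forward(ctx, v, beta):
        ctx.save_for_backward(v)
        ctx.beta = beta
        return (v >= 0).float()

    @staticmethod
    def backward(ctx, grad_out):
        (v,) = ctx.saved_tensors
        # d/dv of the fast sigmoid: beta / (1 + beta*|v|)^2
        surrogate = ctx.beta / (1.0 + ctx.beta * v.abs()) ** 2
        return grad_out * surrogate, None    # no gradient for beta

# Fine-tune weights initialized by parameter reconstruction.
w = torch.tensor([0.8, -0.2], requires_grad=True)   # reconstructed init
x, target = torch.tensor([1.0, 0.5]), torch.tensor(1.0)
spike = SurrogateSpike.apply(w @ x - 1.0, 10.0)     # firing threshold 1.0
loss = (spike - target) ** 2
loss.backward()                                     # flows through the surrogate
print(w.grad)
```

The mismatch between the Heaviside forward pass and the smoothed backward pass is the approximation error described under Core Problem; in the hybrid scheme it only has to correct a small residual around an already good initialization.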