---
name: encrypted-computation-snn-tfhe
description: "Efficient encrypted computation in Convolutional Spiking Neural Networks using TFHE (Fully Homomorphic Encryption). Exploits discrete spike signals to avoid continuous non-polynomial function limitations of FHE on neural networks. Activation: homomorphic encryption SNN, privacy-preserving neural network, TFHE, encrypted inference, FHE spiking."
---
# Efficient Encrypted Computation in ConvSNNs with TFHE
> Privacy-preserving spiking neural network inference using Fully Homomorphic Encryption (TFHE scheme), exploiting the discrete nature of spike signals to bypass the continuous non-polynomial function limitation that plagues FHE for traditional neural networks.
## Metadata
- **Source**: arXiv:2603.26781
- **Authors**: Seung-Hoon Paeng, Daewon Shon, Sung-Ho Bae et al.
- **Published**: 2026-03-31
- **Category**: cs.CR
## Core Methodology
### Key Innovation
Traditional neural networks require evaluating non-polynomial activation functions (ReLU, sigmoid) on encrypted data, which is computationally prohibitive in FHE. Spiking neural networks naturally produce binary spike events (fire/no-fire), making them inherently compatible with boolean/arithmetic circuits in FHE schemes like TFHE.
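To make the contrast concrete, here is a minimal plaintext sketch (not from the paper; the polynomial coefficients are illustrative): a ReLU must be replaced by a polynomial approximation before it can be evaluated under most FHE schemes, introducing error, whereas a spike is a single threshold comparison that TFHE supports exactly via bootstrapped gates.

```python
# Illustrative only: why a spike nonlinearity is FHE-friendly.
# ReLU must be approximated by a polynomial under most FHE schemes,
# while a spike is one exact threshold comparison.

def relu_poly_approx(x: float) -> float:
    """Degree-2 polynomial approximation of ReLU on [-1, 1] (illustrative fit)."""
    return 0.25 + 0.5 * x + 0.25 * x * x  # approximation error is unavoidable

def spike(membrane: float, threshold: float = 1.0) -> int:
    """Exact binary spike: a single comparison, no approximation error."""
    return 1 if membrane > threshold else 0

print(spike(1.3))  # 1 (fires)
print(spike(0.4))  # 0 (silent)
print(abs(relu_poly_approx(0.5) - max(0.0, 0.5)))  # nonzero approximation error
```

The spike function's output is already a boolean, so it composes directly with TFHE's gate-level operations; no approximation step exists to introduce error.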
### Technical Framework
1. **TFHE (Fast Fully Homomorphic Encryption over the Torus)**:
- Supports arbitrary boolean gates on encrypted data via bootstrapping
- Gate-by-gate bootstrapping enables flexible circuit evaluation

- Key-switching allows arithmetic operations on encrypted integers
2. **ConvSNN Architecture for FHE**:
- Convolutional layers with binary spike outputs
- Leaky integrate-and-fire (LIF) neurons discretized to binary decisions
- Membrane potential accumulation via homomorphic additions
- Spike generation via homomorphic comparison (threshold check)
3. **Efficiency Gains**:
- Binary spikes eliminate the need for polynomial approximations of activations
- Convolution reduces to integer matrix multiplication on encrypted data
- Sparse spike activity further reduces computation (many zeros)
- Quantized weights minimize bootstrapping operations
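The framework above can be sketched as a plaintext model of the per-timestep operations an FHE backend would perform on ciphertexts: integer additions for membrane integration, one comparison for the threshold check, and a linear soft reset. The threshold value and input sequence are illustrative, not from the paper.

```python
# Plaintext model of one LIF timestep using only operations with direct
# homomorphic counterparts: addition, comparison, and subtraction.

def lif_step(membrane: int, weighted_input: int, threshold: int = 8):
    """One LIF timestep with integer arithmetic, as used after quantization."""
    membrane = membrane + weighted_input       # homomorphic addition in FHE
    spike = 1 if membrane >= threshold else 0  # homomorphic comparison in FHE
    membrane = membrane - spike * threshold    # soft reset, still linear ops
    return membrane, spike

membrane = 0
spikes = []
for weighted_input in [3, 4, 2, 6, 1]:  # quantized synaptic input per timestep
    membrane, s = lif_step(membrane, weighted_input)
    spikes.append(s)

print(spikes)  # [0, 0, 1, 0, 1]
```

Because the reset is expressed as `membrane - spike * threshold` rather than a branch, the whole timestep stays within additions, multiplications, and one comparison, which is what keeps the circuit TFHE-evaluable.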
### Key Results
- Demonstrated encrypted inference on image classification tasks
- ConvSNN+FHE achieves practical latency relative to ANN+FHE baselines
- Binary spike signals reduce bootstrapping calls by an order of magnitude
- Maintains competitive accuracy despite quantization constraints
## Implementation Guide
### Prerequisites
- TFHE library (e.g., TFHE-rs, Concrete-ML)
- Spiking neural network framework (spikingjelly, Norse)
- Understanding of homomorphic encryption basics
### Step-by-Step
1. Design ConvSNN with quantized weights and binary spike outputs
2. Train the network in plaintext (standard SNN training)
3. Map each operation to FHE gates:
- Convolution → encrypted integer multiply-accumulate
- Membrane integration → encrypted addition
- Threshold comparison → encrypted comparison gate
- Spike output → encrypted boolean output
4. Generate TFHE key pair (secret key, evaluation key)
5. Encrypt input data client-side
6. Execute encrypted inference server-side
7. Decrypt output client-side
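The operation mapping in step 3 can be sketched in plain integer Python (toy shapes and values, not from the paper): every operation below, multiply-accumulate, addition, and comparison, has a direct homomorphic counterpart in TFHE.

```python
# Illustrative sketch of the step-3 operation mapping, using only
# integer operations that map one-to-one onto TFHE primitives.

def conv1d_mac(inputs, kernel):
    """Valid 1-D convolution as integer multiply-accumulate."""
    k = len(kernel)
    return [
        sum(inputs[i + j] * kernel[j] for j in range(k))  # encrypted MAC in FHE
        for i in range(len(inputs) - k + 1)
    ]

def threshold_spikes(membrane, threshold=4):
    """Threshold comparison producing binary spikes (comparison gate in FHE)."""
    return [1 if m >= threshold else 0 for m in membrane]

binary_spikes_in = [1, 0, 1, 1, 0, 1]  # spike outputs from the previous layer
quantized_kernel = [2, 1, 3]           # quantized integer weights

membrane = conv1d_mac(binary_spikes_in, quantized_kernel)
print(membrane)                    # [5, 4, 3, 5]
print(threshold_spikes(membrane))  # [1, 1, 0, 1]
```

Note that because the inputs are binary spikes, each multiply in the MAC degenerates to a conditional add, which is one source of the efficiency gains claimed above.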
### Code Example
```python
# Conceptual: encrypted ConvSNN inference with TFHE
# Using a Concrete-ML style API
from concrete.ml.torch.compile import compile_torch_model
import torch
import torch.nn as nn


class EncryptedConvSNNLayer(nn.Module):
    """ConvSNN layer compatible with FHE compilation."""

    def __init__(self, in_channels, out_channels, threshold=1.0):
        super().__init__()
        self.conv = nn.Conv2d(in_channels, out_channels, 3, padding=1)
        self.threshold = threshold

    def forward(self, x):
        # Membrane potential (integer accumulation after quantization)
        membrane = self.conv(x)
        # Binary spike: 1 if above threshold, 0 otherwise
        spike = (membrane > self.threshold).float()
        return spike


# Compile for FHE execution
model = EncryptedConvSNNLayer(1, 16)
# Quantization-aware compilation for integer FHE arithmetic
q_module = compile_torch_model(
    model,
    torch.randn(1, 1, 28, 28),
    n_bits=6,  # Weight/input quantization bits
)

# Encrypted inference (client encrypts, server evaluates, client decrypts)
# encrypted_input = q_module.encrypt(cleartext_input)
# encrypted_output = q_module.forward(encrypted_input)
# result = q_module.decrypt(encrypted_output)
```
## Applications
- Privacy-preserving medical diagnosis on encrypted EEG/fMRI data
- Confidential brain-computer interface processing
- Secure neuromorphic cloud computing
- Privacy-preserving edge AI with encrypted sensor data
## Pitfalls
- FHE bootstrapping remains computationally expensive even with binary spikes
- Key generation and management overhead for client-server deployment
- Quantization of weights and membrane potentials reduces accuracy
- Deep ConvSNNs with many layers require many sequential bootstrapping operations
- TFHE noise growth limits depth of computable circuits
## Related Skills
- snn-learning-neuromorphic
- privacy-aware-networked-control
- cross-layer-crypto-analysis
- neuromorphic-low-power-ai