---
name: cmosx-mtj-neuron-nonlinear-classification
description: "CMOS+X spiking neuron nonlinear classification methodology using Magnetic Tunnel Junction (MTJ) in series with NMOS transistor. Three intrinsic neuronal properties (threshold activation, response latency, absolute refraction) enable nonlinear computation in compact neuromorphic hardware. Activation: cmos+x, mtj neuron, magnetic tunnel junction, neuromorphic hardware, nonlinear classification, compact spiking neuron."
---
# CMOS+X MTJ Neuron for Nonlinear Classification
> Biologically realistic spiking neuron using CMOS+X technology (MTJ + NMOS) achieves nonlinear classification (XOR) through three intrinsic neuronal dynamics: threshold activation, response latency, and absolute refraction — no additional circuit complexity required.
## Metadata
- **Source**: arXiv:2604.03187
- **Authors**: Steven Louis, Hannah Bradley, Artem Litvinenko, Cody Trevillian, Darrin Hanna
- **Published**: 2026-04-03
- **Category**: cs.NE (Neural and Evolutionary Computing)
## Core Methodology
### Key Innovation
A CMOS+X spiking neuron architecture combining a **Magnetic Tunnel Junction (MTJ)** in series with an **NMOS transistor** achieves nonlinear computation through intrinsic magnetization dynamics rather than additional circuit elements. This is the first demonstration that MTJ-based neurons can solve nonlinearly separable problems (XOR) in a multilayer network configuration.
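That XOR genuinely requires nonlinearity can be checked by exhaustive search: no single linear threshold unit reproduces its truth table, while a linearly separable function like AND is found immediately. A minimal sketch (the weight/bias grid is arbitrary, chosen for illustration):

```python
import itertools

import numpy as np

X = [(0, 0), (0, 1), (1, 0), (1, 1)]
y_xor = [0, 1, 1, 0]  # XOR truth table

def linearly_separable(X, y, grid):
    """Brute-force search for a single linear threshold unit w1*x1 + w2*x2 >= b."""
    for w1, w2, b in itertools.product(grid, repeat=3):
        if all(int(w1 * x1 + w2 * x2 >= b) == t for (x1, x2), t in zip(X, y)):
            return True
    return False

grid = np.linspace(-2, 2, 41)  # weights and bias in [-2, 2], step 0.1
print("XOR separable by one linear unit:", linearly_separable(X, y_xor, grid))   # False
print("AND separable by one linear unit:", linearly_separable(X, [0, 0, 0, 1], grid))  # True
```

This is why a multilayer configuration (or an intrinsically nonlinear neuron) is needed for the XOR benchmark.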
### Three Intrinsic Nonlinear Properties
1. **Threshold Activation**
- Determines which neurons participate in computation
- MTJ magnetization dynamics create a natural activation threshold
- Analogous in function to the firing threshold of a biological neuron
2. **Response Latency**
- Shifts spike timing based on input magnitude
- MTJ switching delay encodes temporal information
- Enables temporal coding for classification decisions
3. **Absolute Refraction**
- Suppresses subsequent spikes after firing
- Refractory period from MTJ magnetization recovery
- Provides natural spike regularization
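How the three properties interact over time can be sketched with a toy event loop: a sub-threshold input is ignored, a supra-threshold input fires after a current-dependent latency, and an input arriving inside the refractory window is suppressed. All parameter values here are illustrative, not device values:

```python
# Toy phenomenological parameters (illustrative, not from the paper)
THRESHOLD = 0.5   # minimum input drive to switch the MTJ
LATENCY_K = 1.0   # latency = LATENCY_K / (1 + drive above threshold)
REFRACTORY = 2.0  # absolute refractory window after a spike

last_spike = float("-inf")
events = []
# (time, input current): sub-threshold, weak, inside refraction, strong
stimuli = [(0.0, 0.3), (1.0, 0.6), (1.5, 1.5), (5.0, 1.5)]
for t, i_in in stimuli:
    if t - last_spike < REFRACTORY:
        events.append((t, "suppressed (refractory)"))          # absolute refraction
    elif i_in <= THRESHOLD:
        events.append((t, "no spike (below threshold)"))       # threshold activation
    else:
        latency = LATENCY_K / (1 + (i_in - THRESHOLD))         # response latency:
        last_spike = t + latency                               # stronger drive -> earlier spike
        events.append((t, f"spike at t = {t + latency:.2f}"))

for t, msg in events:
    print(f"input at t={t:.1f}: {msg}")
```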
### Technical Framework
**Hardware Architecture:**
- MTJ device connected in series with NMOS transistor
- CMOS+X paradigm: conventional CMOS + emerging device technology
- Compact footprint compared to traditional analog neuron circuits
**Network Configuration:**
- Multilayer network topology
- XOR classification as benchmark (requires nonlinear separation)
- Circuit-level simulations validate the approach
**Magnetization Dynamics:**
- MTJ free-layer magnetization governs neuron state
- Spin-transfer torque (STT) or voltage-controlled switching
- Nonlinear transfer function emerges from device physics
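The threshold and latency both fall out of the magnetization dynamics themselves. A minimal macrospin sketch in reduced units (gamma = H_k = 1, so the critical antidamping torque is a_c = alpha * gamma * H_k): a Landau-Lifshitz equation with a Slonczewski-style antidamping term, easy axis and spin polarization along +z. This is an illustrative toy, not the paper's calibrated device model:

```python
import numpy as np

def llg_step(m, h_k, alpha, a_j, dt, gamma=1.0):
    """One explicit-Euler step of the Landau-Lifshitz equation with an
    antidamping spin-transfer-torque term (reduced units)."""
    h_eff = np.array([0.0, 0.0, h_k * m[2]])                   # uniaxial anisotropy field
    p = np.array([0.0, 0.0, 1.0])                              # spin polarization axis
    prec = -gamma * np.cross(m, h_eff)                         # precession
    damp = -alpha * gamma * np.cross(m, np.cross(m, h_eff))    # Gilbert-like damping
    stt = -a_j * np.cross(m, np.cross(m, p))                   # antidamping STT
    m = m + dt * (prec + damp + stt)
    return m / np.linalg.norm(m)                               # keep |m| = 1

def switching_time(a_j, h_k=1.0, alpha=0.05, dt=0.01, t_max=800.0):
    """Time at which m_z crosses zero (anti-parallel -> parallel), or None."""
    theta0 = np.pi - 0.05                                      # small initial tilt from -z
    m = np.array([np.sin(theta0), 0.0, np.cos(theta0)])
    t = 0.0
    while t < t_max:
        m = llg_step(m, h_k, alpha, a_j, dt)
        t += dt
        if m[2] > 0.0:
            return t
    return None

# Critical torque in these units: a_c = alpha * gamma * h_k = 0.05
for a_j in [0.02, 0.06, 0.10, 0.20]:
    t_sw = switching_time(a_j)
    print(f"a_J = {a_j:.2f}: " + (f"switched at t = {t_sw:.1f}" if t_sw else "no switching"))
```

Below the critical torque the free layer relaxes back (threshold activation); above it, the switching time shortens as the drive grows (response latency), which is exactly the nonlinear transfer behavior the neuron exploits.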
## Implementation Guide
### Prerequisites
- SPICE circuit simulator (e.g., Cadence Virtuoso, LTspice)
- MTJ compact model (e.g., from Stanford/Purdue MTJ models)
- CMOS process design kit (PDK)
### Step-by-Step
1. **MTJ Device Modeling**
- Define MTJ parameters: TMR ratio, resistance-area product, critical switching current
- Calibrate switching dynamics (precession vs. ballistic regime)
- Set device geometry for target threshold voltage
2. **Neuron Circuit Design**
- Connect MTJ in series with NMOS transistor
- Size NMOS for desired current range
- Add input integration capacitor (if membrane potential emulation needed)
3. **Threshold Calibration**
- Sweep input voltage to characterize activation threshold
- Map input current to spike probability
- Tune MTJ anisotropy for target threshold
4. **Response Latency Configuration**
- Measure MTJ switching delay vs. input current
- Adjust pulse width to exploit temporal coding
- Calibrate delay-based weight encoding
5. **Refractory Period Setting**
- Characterize MTJ recovery time after switching
- Tune for desired absolute refractory period
- Ensure spike suppression during recovery window
6. **Multilayer Network Simulation**
- Define network topology (e.g., 2-2-1 for XOR)
- Map synaptic weights to transistor sizing or pulse amplitude
- Run classification simulations
### Code Example (Conceptual MTJ Neuron Model)
```python
import numpy as np

class MTJNeuron:
    """Simplified phenomenological model of an MTJ-based spiking neuron."""

    def __init__(self, threshold=0.5, latency_scale=1.0, refractory_period=2.0):
        self.threshold = threshold                  # MTJ switching threshold
        self.latency_scale = latency_scale          # response-latency scaling
        self.refractory_period = refractory_period  # absolute refractory time
        self.last_spike_time = -np.inf
        self.state = 0  # 0: anti-parallel (rest), 1: parallel (fired)

    def step(self, input_current, current_time):
        """Process input and potentially generate a spike.

        Returns (spike, spike_time); spike_time is the latency-shifted
        firing time, or None when no spike occurred.
        """
        # Absolute refraction: suppress spikes inside the recovery window
        if current_time - self.last_spike_time < self.refractory_period:
            return 0, None

        # Threshold activation
        if input_current > self.threshold:
            # Response latency: stronger drive above threshold -> earlier spike
            excess = input_current - self.threshold
            latency = self.latency_scale / (1 + excess)
            spike_time = current_time + latency
            self.state = 1  # MTJ switches to the parallel state
            self.last_spike_time = spike_time
            self.state = 0  # magnetization relaxes back to rest after the pulse
            return 1, spike_time

        return 0, None


class MTJNetwork:
    """Multilayer (2-2-1) network of MTJ neurons for XOR classification."""

    def __init__(self):
        # Hidden layer: an OR-like and an AND-like MTJ neuron
        self.hidden = [
            MTJNeuron(threshold=0.5, latency_scale=0.5, refractory_period=1.0),
            MTJNeuron(threshold=1.5, latency_scale=0.8, refractory_period=1.0),
        ]
        # Output layer: 1 MTJ neuron
        self.output = MTJNeuron(threshold=0.4, latency_scale=0.3, refractory_period=1.5)
        # Synaptic weights (encoded as current scaling)
        self.w_hidden = np.array([[1.0, 1.0], [1.0, 1.0]])  # input -> hidden
        self.w_output = np.array([1.0, -1.0])  # excitatory OR, inhibitory AND

    def forward(self, x1, x2):
        """XOR classification forward pass (one trial, starting at t = 0)."""
        inputs = np.array([x1, x2])
        # Hidden layer
        hidden_spikes = []
        for i, neuron in enumerate(self.hidden):
            current = np.dot(self.w_hidden[i], inputs)
            spike, _ = neuron.step(current, current_time=0.0)
            hidden_spikes.append(spike)
        # Output layer
        hidden_out = np.array(hidden_spikes)
        output_current = np.dot(self.w_output, hidden_out)
        output_spike, _ = self.output.step(output_current, current_time=0.0)
        return output_spike


# Test XOR: use a fresh network per trial, since neuron state persists across calls
xor_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
for x1, x2 in xor_inputs:
    result = MTJNetwork().forward(x1, x2)
    print(f"XOR({x1}, {x2}) = {result} (expected: {x1 ^ x2})")
```
## Applications
- **Ultra-compact neuromorphic processors**: MTJ neurons at nanoscale footprint
- **Edge AI inference**: Energy-efficient classification in resource-constrained devices
- **Beyond-CMOS computing**: Leveraging emerging device physics for computation
- **Hardware neural networks**: Direct implementation of multilayer networks in CMOS+X
- **Temporal coding systems**: Exploiting spike timing for information processing
## Key Findings
- XOR (nonlinearly separable) successfully solved using only 3 intrinsic neuron properties
- No additional circuit elements needed beyond MTJ + NMOS
- Magnetization dynamics provide natural nonlinear transfer function
- Response latency enables temporal coding for classification
## Pitfalls
- MTJ device variability may affect threshold consistency across neurons
- Temperature sensitivity of MTJ switching characteristics
- Limited fan-in compared to CMOS-only neuron designs
- Speed limited by MTJ switching time (nanosecond range)
- Endurance concerns for training applications with frequent switching
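The first pitfall (device-to-device threshold variability) can be quantified with a toy Monte Carlo yield sweep. The sketch below perturbs the thresholds of a 2-2-1 threshold network (an OR/AND decomposition of XOR) with Gaussian noise; the nominal thresholds and noise model are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

def xor_net(x1, x2, th_or, th_and, th_out):
    """2-2-1 threshold network: hidden = (OR-like, AND-like), output = OR minus AND."""
    s = x1 + x2
    h_or = int(s >= th_or)                # nominal threshold 0.5
    h_and = int(s >= th_and)              # nominal threshold 1.5
    return int(h_or - h_and >= th_out)    # nominal threshold 0.5

def yield_at_sigma(sigma, n_trials=10_000):
    """Fraction of simulated devices whose perturbed thresholds still compute XOR."""
    ok = 0
    for _ in range(n_trials):
        th = np.array([0.5, 1.5, 0.5]) + rng.normal(0.0, sigma, 3)
        if all(xor_net(a, b, *th) == (a ^ b)
               for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]):
            ok += 1
    return ok / n_trials

for sigma in [0.0, 0.1, 0.3, 0.6]:
    print(f"sigma = {sigma:.1f}: functional yield = {yield_at_sigma(sigma):.3f}")
```

Yield stays near 100% for small spreads and collapses once the threshold variation approaches the separation between decision levels, which is why threshold calibration (step 3 above) matters in practice.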
## Related Skills
- circuit-level-spiking-neuron-robustness
- memristive-neuron-multiple-spiking
- intrinsic-neurosynaptic-memristive-spiking
- neuromorphic-low-power-ai
- spiking-neural-network-training