---
name: kirchhoff-neural-network-high-order-perception
description: "Kirchhoff-inspired Neural Network combining circuit theory with high-order topological structures for perceptual computation. Models neural information flow using Kirchhoff's laws on graph circuits with simplicial complexes, enabling higher-order interaction processing beyond pairwise connections. Activation: Kirchhoff, high-order, simplicial complex, neural circuit, perception, topological neural network."
---
# Kirchhoff-Inspired Neural Network for High-Order Perception
> Combines electrical circuit theory (Kirchhoff's laws) with high-order topological structures (simplicial complexes) to build neural networks that process higher-order interactions in perceptual tasks.
## Metadata
- **Source**: arXiv:2603.23977
- **Authors**: L. Zhang, Y. Li, X. Yang
- **Published**: 2026-03-30
- **Category**: cs.NE
## Core Methodology
### Key Innovation
Standard graph neural networks operate on pairwise (edge) interactions. This work introduces **Kirchhoff Neural Networks (KNN)** that:
1. **Kirchhoff's Current Law (KCL)**: Conservation of information flow at nodes — total input = total output
2. **Kirchhoff's Voltage Law (KVL)**: Sum of voltage drops around any closed loop = 0
3. **Higher-order extension**: Apply Kirchhoff principles to simplicial complexes (triangles, tetrahedra), not just edges
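KVL can be sanity-checked in two lines of plain Python (an illustrative sketch; the node potentials are arbitrary made-up values):

```python
# KVL sketch: with node potentials v, the drop across edge (i, j) is v[i] - v[j];
# around any closed loop those drops telescope to zero.
v = [3.0, 1.5, -0.5]               # arbitrary node potentials
loop = [(0, 1), (1, 2), (2, 0)]    # closed loop 0 -> 1 -> 2 -> 0
total = sum(v[i] - v[j] for i, j in loop)
print(total)  # 0.0
```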
### Technical Framework
**Circuit-Theoretic Neural Model**
- **Nodes as neurons**: Membrane potential = voltage at node
- **Edges as synapses**: Synaptic weight = conductance (1/resistance)
- **KCL at each node**: Σᵢₙ I = Σₒᵤₜ I → Σⱼ wⱼ(vⱼ - vᵢ) + bᵢ = 0
- **Solving for voltages**: V = L⁺b with L = D − W the graph Laplacian (L is singular — constant vectors lie in its kernel — so in practice use the pseudo-inverse, a grounded reference node, or a small regularizer)
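The node-level solve can be sketched on a toy circuit (a minimal example, not the paper's code; a dense pseudo-inverse stands in for a proper sparse solver):

```python
import torch

# Toy circuit: triangle graph with unit conductances.
# Oriented incidence matrix B1 (rows = nodes, cols = edges (0,1), (1,2), (0,2)).
B1 = torch.tensor([[ 1.,  0.,  1.],
                   [-1.,  1.,  0.],
                   [ 0., -1., -1.]])
L = B1 @ B1.T                      # graph Laplacian L = D - W
b = torch.tensor([1., 0., -1.])    # injected currents; KCL requires sum(b) == 0
v = torch.linalg.pinv(L) @ b       # L is singular, so use the pseudo-inverse
print(L @ v)                       # recovers b up to floating-point error
```

The pseudo-inverse picks the minimum-norm solution, which fixes the gauge freedom (adding a constant to all voltages changes nothing physical).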
**Higher-Order Extension via Simplicial Complexes**
- **k-simplex**: Set of (k+1) interacting nodes (edge=1-simplex, triangle=2-simplex, tetrahedron=3-simplex)
- **Hodge Laplacian**: Lₖ = BₖᵀBₖ + Bₖ₊₁Bₖ₊₁ᵀ generalizes graph Laplacian to k-simplices
- **Boundary operators**: Bₖ maps k-simplices to (k-1)-simplices
- **Kirchhoff on simplices**: KCL generalized — flow conservation at each simplex level
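For a single filled triangle, the boundary operators and Hodge 1-Laplacian can be written out by hand (a sketch; orientations are chosen arbitrarily but consistently):

```python
import torch

# Filled triangle on nodes {0,1,2}; edges e0=(0,1), e1=(1,2), e2=(0,2).
B1 = torch.tensor([[ 1.,  0.,  1.],
                   [-1.,  1.,  0.],
                   [ 0., -1., -1.]])
# One 2-simplex (0,1,2); its boundary is e0 + e1 - e2 with these orientations.
B2 = torch.tensor([[ 1.],
                   [ 1.],
                   [-1.]])
assert torch.all(B1 @ B2 == 0)  # boundary-of-boundary vanishes
L1 = B1.T @ B1 + B2 @ B2.T      # Hodge 1-Laplacian
print(L1)                       # equals 3*I: the filled triangle has no harmonic 1-forms
```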
**Network Architecture**
1. Input features → node voltages initialization
2. Multi-layer Kirchhoff message passing:
- Level 0 (nodes): Standard KCL
- Level 1 (edges): Flow conservation along edges
- Level 2 (triangles): Higher-order interaction terms
3. Readout: Aggregate voltages with attention over simplex levels
### Key Results
- Captures higher-order interactions missed by standard GNNs
- Kirchhoff constraints act as physics-informed regularization
- Effective on perceptual tasks requiring relational reasoning (visual scene understanding, 3D shape perception)
- Theoretically grounded: Hodge decomposition separates gradient vs. curl vs. harmonic components
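The Hodge decomposition mentioned above can be demonstrated directly (an illustrative sketch using dense pseudo-inverses; the complex and the flow values are made up):

```python
import torch

# Decompose an edge flow f on a filled triangle into gradient + curl + harmonic.
B1 = torch.tensor([[1., 0., 1.], [-1., 1., 0.], [0., -1., -1.]])
B2 = torch.tensor([[1.], [1.], [-1.]])
f = torch.tensor([2., 1., 1.])
grad = B1.T @ torch.linalg.pinv(B1.T) @ f  # projection onto gradient flows (im B1^T)
curl = B2 @ torch.linalg.pinv(B2) @ f      # projection onto curl flows (im B2)
harmonic = f - grad - curl                 # zero here: ker L1 is trivial on this complex
```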
## Implementation Guide
### Prerequisites
- Python 3.9+, PyTorch, torch_geometric
- Basic understanding of algebraic topology (simplicial complexes, Hodge theory)
- Graph data with potential higher-order interactions
### Step-by-Step
1. **Build simplicial complex**: From graph edges, enumerate triangles/tetrahedra
2. **Construct boundary operators**: B₀, B₁, B₂ matrices
3. **Compute Hodge Laplacians**: L₀, L₁, L₂ for each simplex level
4. **Kirchhoff message passing**: Solve V = L⁻¹b at each level
5. **Multi-level aggregation**: Combine features across simplex dimensions
6. **Perceptual readout**: Task-specific prediction head
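Step 1 can be done with a naive clique scan (a hypothetical helper, not from the paper; real pipelines would intersect sorted adjacency lists instead of testing all node triples):

```python
import itertools

def enumerate_triangles(edges):
    """Return all 3-cliques (2-simplices) implied by an undirected edge list."""
    adj = {}
    for u, v in edges:
        adj.setdefault(u, set()).add(v)
        adj.setdefault(v, set()).add(u)
    return [
        (a, b, c)
        for a, b, c in itertools.combinations(sorted(adj), 3)
        if b in adj[a] and c in adj[a] and c in adj[b]
    ]

print(enumerate_triangles([(0, 1), (1, 2), (0, 2), (2, 3)]))  # [(0, 1, 2)]
```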
### Code Example
A minimal dense sketch of the architecture (illustrative, not the paper's reference implementation; boundary matrices are passed in dense for clarity):

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


class KirchhoffNeuralLayer(nn.Module):
    """Single Kirchhoff message-passing layer over nodes, edges, and triangles."""

    def __init__(self, n_edges, n_triangles, hidden_dim):
        super().__init__()
        # Learnable conductances (synaptic weights), kept positive via softplus
        self.edge_conductance = nn.Parameter(torch.zeros(n_edges))
        self.tri_conductance = nn.Parameter(torch.zeros(n_triangles))
        # Per-level feature transforms (the injected "currents" b)
        self.node_transform = nn.Linear(hidden_dim, hidden_dim)
        self.edge_transform = nn.Linear(hidden_dim, hidden_dim)
        self.triangle_transform = nn.Linear(hidden_dim, hidden_dim)

    @staticmethod
    def kirchhoff_solve(L, b, eps=1e-3):
        # Hodge Laplacians are singular (harmonic kernel), so regularize before
        # solving (L + eps*I) V = b. Use conjugate gradient / sparse solvers at scale.
        eye = torch.eye(L.shape[0], dtype=L.dtype, device=L.device)
        return torch.linalg.solve(L + eps * eye, b)

    def forward(self, node_feat, edge_feat, tri_feat, B1, B2):
        g_e = F.softplus(self.edge_conductance)  # (n_edges,)
        g_t = F.softplus(self.tri_conductance)   # (n_triangles,)
        # Level 0 (KCL at nodes): weighted graph Laplacian L0 = B1 diag(g_e) B1^T
        L0 = (B1 * g_e) @ B1.T
        node_feat = self.kirchhoff_solve(L0, self.node_transform(node_feat))
        # Level 1 (flow conservation on edges): Hodge 1-Laplacian
        # L1 = B1^T B1 + B2 diag(g_t) B2^T
        L1 = B1.T @ B1 + (B2 * g_t) @ B2.T
        edge_feat = self.kirchhoff_solve(L1, self.edge_transform(edge_feat))
        # Level 2 (triangles): L2 = B2^T B2 (no tetrahedra tracked here)
        L2 = B2.T @ B2
        tri_feat = self.kirchhoff_solve(L2, self.triangle_transform(tri_feat))
        return node_feat, edge_feat, tri_feat


class KirchhoffPerceptionNet(nn.Module):
    """Full Kirchhoff Neural Network for perceptual tasks."""

    def __init__(self, in_dim, hidden_dim, out_dim, n_edges, n_triangles,
                 n_layers=3, max_simplex_dim=2):
        super().__init__()
        self.encoder = nn.Linear(in_dim, hidden_dim)
        self.layers = nn.ModuleList([
            KirchhoffNeuralLayer(n_edges, n_triangles, hidden_dim)
            for _ in range(n_layers)
        ])
        self.readout = nn.Sequential(
            nn.Linear(hidden_dim * (max_simplex_dim + 1), hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, out_dim),
        )

    def forward(self, x, B1, B2):
        # Initialize node voltages from input features, then lift to edges and
        # triangles by averaging over incident lower-order simplices.
        h_node = self.encoder(x)
        h_edge = B1.abs().T @ h_node / 2.0
        h_tri = B2.abs().T @ h_edge / 3.0
        for layer in self.layers:
            h_node, h_edge, h_tri = layer(h_node, h_edge, h_tri, B1, B2)
        # Multi-level aggregation across simplex dimensions
        pooled = torch.cat([h_node.mean(0), h_edge.mean(0), h_tri.mean(0)])
        return self.readout(pooled)


if __name__ == "__main__":
    # Smoke test on a single filled triangle: 3 nodes, 3 edges, 1 triangle.
    B1 = torch.tensor([[1., 0., 1.], [-1., 1., 0.], [0., -1., -1.]])
    B2 = torch.tensor([[1.], [1.], [-1.]])
    model = KirchhoffPerceptionNet(in_dim=8, hidden_dim=16, out_dim=4,
                                   n_edges=3, n_triangles=1)
    out = model(torch.randn(3, 8), B1, B2)
    print(out.shape)  # torch.Size([4])
```
## Applications
- **Visual perception**: Higher-order relational reasoning in visual scenes
- **3D shape understanding**: Process mesh data with inherent higher-order structure
- **Brain network analysis**: Simplicial complexes model multi-region interactions
- **Molecular property prediction**: Atoms, bonds, angles as simplicial hierarchy
- **Social network analysis**: Group interactions beyond pairwise connections
## Pitfalls
- Simplicial complex enumeration is O(n^k) for k-simplices — may be intractable for large graphs
- Hodge Laplacian inversion requires iterative solvers for large systems
- Higher-order simplices can be sparse → potential overfitting
- Choice of maximum simplex dimension is a critical hyperparameter
- Boundary operator construction requires careful bookkeeping of orientations
## Related Skills
- brain-higher-order-structures
- higher-order-brain-networks
- multimodal-higher-order-brain-networks
- discrete-heat-kernels-simplicial
- motif-based-filtrations-persistent-homology-framework-graph