---
name: brainfuse-unified-biological-ai-infrastructure
description: "BrainFuse - unified infrastructure integrating realistic biological neural modeling and core AI methodology. Supports differentiable biophysical neuron simulation, 3000x GPU acceleration for ion-channel dynamics, and neuromorphic hardware deployment."
category: "ai_collection"
tags: ["biophysical modeling", "Hodgkin-Huxley", "neuromorphic computing", "differentiable simulation", "AI-neuroscience bridge", "spiking neural networks", "neuron simulation"]
activation: ["BrainFuse", "biological neuron simulation", "Hodgkin-Huxley AI", "differentiable neuroscience", "neuromorphic deployment", "biophysical SNN"]
papers:
  - arxiv: "2601.21407"
    title: "BrainFuse: a unified infrastructure integrating realistic biological modeling and core AI methodology"
    authors: ["Baiyu Chen", "Yujie Wu", "Siyuan Xu", "Peng Qu", "Dehua Wu", "Xu Chu", "Haodong Bian", "Shuo Zhang", "Bo Xu", "Youhui Zhang", "Zhengyu Ma", "Guoqi Li"]
    date: "2026-01-29"
---
# BrainFuse: Unified Biological-AI Infrastructure
BrainFuse is a unified infrastructure that bridges neuroscience and artificial intelligence by providing comprehensive support for biophysical neural simulation and gradient-based learning. It enables the integration of detailed neuronal dynamics into differentiable learning frameworks with scalable deployment to neuromorphic hardware.
## The Problem
Neuroscience and AI represent distinct yet complementary pathways to general intelligence, but their translational synergy has become increasingly elusive due to infrastructural incompatibility:
- **Modern AI frameworks** lack native support for biophysical realism
- **Neural simulation tools** are poorly suited for gradient-based optimization
- **Deployment gap** between simulation and neuromorphic hardware
## BrainFuse Solution
### Three Core Capabilities
```
┌─────────────────────────────────────────────────────────────────┐
│ BrainFuse Architecture │
├─────────────────────────────────────────────────────────────────┤
│ │
│ ┌──────────────────┐ ┌──────────────────┐ ┌──────────────┐ │
│ │ 1. Algorithmic │ │ 2. System-Level │ │ 3. Scalable │ │
│ │ Integration │ │ Optimization │ │ Compute │ │
│ │ │ │ │ │ │ │
│ │ Differentia-ble │ │ 3000x GPU │ │ Neuromorphic │ │
│ │ biophysical │ │ acceleration │ │ deployment │ │
│ │ neuron models │ │ customizable │ │ pipelines │ │
│ │ in AI framework │ │ ion-channel │ │ │ │
│ │ │ │ dynamics │ │ │ │
│ └────────┬─────────┘ └────────┬─────────┘ └──────┬───────┘ │
│ │ │ │ │
│ └────────────────────┼───────────────────┘ │
│ │ │
│ ┌──────────┴──────────┐ │
│ │ Full-Stack Design │ │
│ └─────────────────────┘ │
└─────────────────────────────────────────────────────────────────┘
```
## Architecture Components
### 1. Differentiable Biophysical Modeling
```python
import brainfuse
import torch
import torch.nn as nn
class HodgkinHuxleyNeuron(nn.Module):
"""
Fully differentiable Hodgkin-Huxley neuron model.
Compatible with PyTorch autograd for gradient-based learning.
"""
def __init__(self,
C_m=1.0, # Membrane capacitance (uF/cm^2)
g_Na=120.0, # Sodium conductance (mS/cm^2)
g_K=36.0, # Potassium conductance (mS/cm^2)
g_L=0.3, # Leak conductance (mS/cm^2)
E_Na=50.0, # Sodium reversal potential (mV)
E_K=-77.0, # Potassium reversal potential (mV)
E_L=-54.4): # Leak reversal potential (mV)
super().__init__()
# Membrane parameters (learnable)
self.C_m = nn.Parameter(torch.tensor(C_m))
self.g_Na = nn.Parameter(torch.tensor(g_Na))
self.g_K = nn.Parameter(torch.tensor(g_K))
self.g_L = nn.Parameter(torch.tensor(g_L))
# Reversal potentials
self.E_Na = E_Na
self.E_K = E_K
self.E_L = E_L
# State variables
self.V = None # Membrane potential
self.m = None # Na activation
self.h = None # Na inactivation
self.n = None # K activation
def alpha_m(self, V):
"""Na activation rate (forward)."""
return 0.1 * (V + 40) / (1 - torch.exp(-(V + 40) / 10))
def beta_m(self, V):
"""Na activation rate (backward)."""
return 4.0 * torch.exp(-(V + 65) / 18)
def alpha_h(self, V):
"""Na inactivation rate (forward)."""
return 0.07 * torch.exp(-(V + 65) / 20)
def beta_h(self, V):
"""Na inactivation rate (backward)."""
return 1.0 / (1 + torch.exp(-(V + 35) / 10))
def alpha_n(self, V):
"""K activation rate (forward)."""
return 0.01 * (V + 55) / (1 - torch.exp(-(V + 55) / 10))
def beta_n(self, V):
"""K activation rate (backward)."""
return 0.125 * torch.exp(-(V + 65) / 80)
def forward(self, I_ext, dt=0.01):
"""
Forward integration of HH equations.
Args:
I_ext: External current input (uA/cm^2)
dt: Time step (ms)
Returns:
V: Membrane potential (mV)
"""
# Compute gating variable derivatives
dm = self.alpha_m(self.V) * (1 - self.m) - self.beta_m(self.V) * self.m
dh = self.alpha_h(self.V) * (1 - self.h) - self.beta_h(self.V) * self.h
dn = self.alpha_n(self.V) * (1 - self.n) - self.beta_n(self.V) * self.n
# Update gating variables
self.m = self.m + dt * dm
self.h = self.h + dt * dh
self.n = self.n + dt * dn
# Compute ionic currents
I_Na = self.g_Na * (self.m ** 3) * self.h * (self.V - self.E_Na)
I_K = self.g_K * (self.n ** 4) * (self.V - self.E_K)
I_L = self.g_L * (self.V - self.E_L)
# Membrane potential derivative
dV = (I_ext - I_Na - I_K - I_L) / self.C_m
self.V = self.V + dt * dV
return self.V
def reset(self, V_init=-65.0):
"""Initialize neuron to resting state."""
self.V = torch.tensor(V_init)
# Steady-state values for gating variables
self.m = self.alpha_m(self.V) / (self.alpha_m(self.V) + self.beta_m(self.V))
self.h = self.alpha_h(self.V) / (self.alpha_h(self.V) + self.beta_h(self.V))
self.n = self.alpha_n(self.V) / (self.alpha_n(self.V) + self.beta_n(self.V))
```
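The forward-Euler gating update above can go unstable when `(alpha + beta) * dt` is large. A common alternative is the exponential-Euler update, which uses the closed-form solution of the gating ODE over one step and keeps the gate in [0, 1] for any `dt`. A minimal standalone sketch (`exp_euler_gate` is illustrative, not a BrainFuse API):

```python
import math

def exp_euler_gate(x, alpha, beta, dt):
    """Exponential-Euler update for a gating variable.

    Solves dx/dt = alpha*(1 - x) - beta*x exactly over one step,
    so the result stays in [0, 1] regardless of dt."""
    x_inf = alpha / (alpha + beta)   # steady-state value
    tau = 1.0 / (alpha + beta)       # time constant (ms)
    return x_inf + (x - x_inf) * math.exp(-dt / tau)

# Example: Na activation gate at rest (V = -65 mV, classic HH rates)
V = -65.0
alpha_m = 0.1 * (V + 40) / (1 - math.exp(-(V + 40) / 10))
beta_m = 4.0 * math.exp(-(V + 65) / 18)
m = exp_euler_gate(0.0, alpha_m, beta_m, dt=0.01)
```

The update relaxes the gate toward `x_inf` with time constant `tau`, which is why it cannot overshoot the way an explicit Euler step can.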
### 2. Customizable Ion-Channel Library
```python
class IonChannelLibrary:
"""
Library of biophysically accurate ion channel models.
All channels are differentiable and GPU-accelerated.
"""
def __init__(self):
self.channels = {}
def register_channel(self, name, channel_class):
"""Register a new channel type."""
self.channels[name] = channel_class
def get_channel(self, name, **params):
"""Instantiate a channel model."""
return self.channels[name](**params)
# Pre-defined channel types
@staticmethod
def fast_sodium(params):
"""Fast sodium channel (classic HH)."""
return SodiumChannel(
g_max=params.get('g_max', 120.0),
activation='m3h',
inactivation='h'
)
@staticmethod
def delayed_rectifier_potassium(params):
"""Delayed rectifier potassium (classic HH)."""
return PotassiumChannel(
g_max=params.get('g_max', 36.0),
activation='n4'
)
@staticmethod
def a_type_potassium(params):
"""A-type transient potassium channel."""
return ATypePotassiumChannel(
g_max=params.get('g_max', 10.0),
activation='a',
inactivation='b'
)
@staticmethod
def t_type_calcium(params):
"""T-type calcium channel (low threshold)."""
return TTypeCalciumChannel(
g_max=params.get('g_max', 0.5),
E_Ca=120.0
)
@staticmethod
def calcium_activated_potassium(params):
"""BK-type calcium-activated potassium channel."""
return BKChannel(
g_max=params.get('g_max', 5.0),
ca_dependent=True
)
class MultiCompartmentNeuron(nn.Module):
"""
Multi-compartment neuron with arbitrary morphology.
Supports dendritic tree simulation with active conductances.
"""
def __init__(self, morphology_file=None):
super().__init__()
# Load or define morphology
if morphology_file:
self.morphology = self.load_swc(morphology_file)
else:
# Default: soma + 2 dendrites
self.morphology = self.create_simple_morphology()
# Create compartments
self.compartments = nn.ModuleList()
for section in self.morphology['sections']:
self.compartments.append(
Compartment(
length=section['length'],
diameter=section['diameter'],
channels=section.get('channels', ['Na', 'K'])
)
)
# Axial resistances between compartments
self.axial_resistance = self.compute_axial_resistance()
def forward(self, inputs, dt=0.01):
"""
Simulate multi-compartment dynamics.
Args:
inputs: Dict mapping compartment indices to injected currents
dt: Time step
Returns:
potentials: Tensor of compartment voltages
"""
        potentials = []
        # Axial currents must come from the compartments' present voltages,
        # not from the still-empty output list
        # (assumes each Compartment tracks its membrane potential as .V)
        current_voltages = [c.V for c in self.compartments]
        axial_currents = self.compute_axial_currents(current_voltages)
# Update each compartment
for i, compartment in enumerate(self.compartments):
# Total current = external + axial
I_total = inputs.get(i, 0) + axial_currents[i]
V = compartment(I_total, dt)
potentials.append(V)
return torch.stack(potentials)
```
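`compute_axial_resistance` is not shown above; in standard cable theory the axial resistance of a cylindrical section is R = ρ·L / A with cross-sectional area A = πd²/4. A minimal sketch, assuming a cytoplasmic axial resistivity around 150 Ω·cm (the function and default are illustrative, not BrainFuse API):

```python
import math

def axial_resistance(length_um, diameter_um, rho_ohm_cm=150.0):
    """Axial resistance of a cylindrical compartment (cable theory).

    R = rho * L / A, with A = pi * d^2 / 4 the cross-sectional area.
    rho is the specific axial resistivity (~100-200 ohm*cm for cytoplasm).
    Returns resistance in megaohms."""
    length_cm = length_um * 1e-4
    diameter_cm = diameter_um * 1e-4
    area_cm2 = math.pi * diameter_cm ** 2 / 4.0
    return rho_ohm_cm * length_cm / area_cm2 / 1e6  # ohm -> megaohm

# A 100 um long, 2 um diameter dendritic section: ~48 megaohms
R = axial_resistance(100.0, 2.0)
```

Note the d² in the denominator: doubling a dendrite's diameter cuts its axial resistance by a factor of four, which is why thin distal branches dominate voltage attenuation.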
### 3. GPU-Accelerated Simulation
```python
import brainfuse.cuda as bf_cuda
class GPUNeuronSimulator:
"""
GPU-accelerated neuron simulation with up to 3000x speedup.
"""
def __init__(self, n_neurons=10000, device='cuda'):
self.device = device
self.n_neurons = n_neurons
# Initialize neuron states on GPU
self.V = torch.full((n_neurons,), -65.0, device=device)
self.m = torch.zeros(n_neurons, device=device)
self.h = torch.ones(n_neurons, device=device)
self.n = torch.zeros(n_neurons, device=device)
# Initialize gating variables
self._initialize_gating()
def _initialize_gating(self):
"""Set initial gating variable values."""
# Steady-state calculation
alpha_m = 0.1 * (self.V + 40) / (1 - torch.exp(-(self.V + 40) / 10))
beta_m = 4.0 * torch.exp(-(self.V + 65) / 18)
self.m = alpha_m / (alpha_m + beta_m)
alpha_h = 0.07 * torch.exp(-(self.V + 65) / 20)
beta_h = 1.0 / (1 + torch.exp(-(self.V + 35) / 10))
self.h = alpha_h / (alpha_h + beta_h)
alpha_n = 0.01 * (self.V + 55) / (1 - torch.exp(-(self.V + 55) / 10))
beta_n = 0.125 * torch.exp(-(self.V + 65) / 80)
self.n = alpha_n / (alpha_n + beta_n)
def step(self, I_ext, dt=0.01):
"""
Single simulation step for all neurons.
Fully vectorized GPU operation.
Args:
I_ext: External currents (n_neurons,)
dt: Time step in ms
Returns:
V: Updated membrane potentials
"""
# HH parameters (can be made learnable)
g_Na = 120.0
g_K = 36.0
g_L = 0.3
E_Na = 50.0
E_K = -77.0
E_L = -54.4
C_m = 1.0
# Compute rate functions (vectorized)
alpha_m = torch.where(
torch.abs(self.V + 40) > 1e-6,
0.1 * (self.V + 40) / (1 - torch.exp(-(self.V + 40) / 10)),
torch.ones_like(self.V) * 0.1
)
beta_m = 4.0 * torch.exp(-(self.V + 65) / 18)
alpha_h = 0.07 * torch.exp(-(self.V + 65) / 20)
beta_h = 1.0 / (1 + torch.exp(-(self.V + 35) / 10))
alpha_n = torch.where(
torch.abs(self.V + 55) > 1e-6,
0.01 * (self.V + 55) / (1 - torch.exp(-(self.V + 55) / 10)),
torch.ones_like(self.V) * 0.01
)
beta_n = 0.125 * torch.exp(-(self.V + 65) / 80)
# Update gating variables
self.m = self.m + dt * (alpha_m * (1 - self.m) - beta_m * self.m)
self.h = self.h + dt * (alpha_h * (1 - self.h) - beta_h * self.h)
self.n = self.n + dt * (alpha_n * (1 - self.n) - beta_n * self.n)
# Clamp gating variables
self.m = torch.clamp(self.m, 0, 1)
self.h = torch.clamp(self.h, 0, 1)
self.n = torch.clamp(self.n, 0, 1)
# Compute ionic currents
I_Na = g_Na * (self.m ** 3) * self.h * (self.V - E_Na)
I_K = g_K * (self.n ** 4) * (self.V - E_K)
I_L = g_L * (self.V - E_L)
# Update membrane potential
dV = (I_ext - I_Na - I_K - I_L) / C_m
self.V = self.V + dt * dV
return self.V
def simulate(self, I_ext_trace, dt=0.01):
"""
Run full simulation with input trace.
Args:
I_ext_trace: (n_timesteps, n_neurons) current input
dt: Time step
Returns:
V_trace: (n_timesteps, n_neurons) voltage trace
"""
n_steps = I_ext_trace.shape[0]
V_trace = torch.zeros(n_steps, self.n_neurons, device=self.device)
for t in range(n_steps):
V_trace[t] = self.step(I_ext_trace[t], dt)
return V_trace
class BenchmarkResults:
"""
Performance benchmarks from paper.
"""
def __init__(self):
self.results = {
'neuron_simulation': {
'brainfuse_gpu': 1.2, # seconds for 1000 neurons, 1s simulation
'traditional_cpu': 3600, # seconds (estimated)
'speedup': 3000 # 3000x acceleration
},
'neuromorphic_deployment': {
'neurons': 38000,
'synapses': 100000000, # 100 million
'power': 1.98 # Watts
}
}
```
### 4. Neuromorphic Deployment Pipeline
```python
class NeuromorphicDeployment:
"""
Pipeline for deploying BrainFuse models to neuromorphic hardware.
"""
def __init__(self, target_hardware='loihi'):
self.target = target_hardware
self.compilers = {
'loihi': LoihiCompiler(),
'truenorth': TrueNorthCompiler(),
'spinnaker': SpiNNakerCompiler(),
'custom_chip': CustomChipCompiler()
}
def compile(self, brainfuse_model):
"""
Compile BrainFuse model to neuromorphic hardware.
Args:
brainfuse_model: Trained BrainFuse SNN
Returns:
hardware_config: Deployment-ready configuration
"""
compiler = self.compilers[self.target]
# Extract neuron parameters
neuron_params = self.extract_neuron_params(brainfuse_model)
# Extract connectivity
synaptic_weights = self.extract_connectivity(brainfuse_model)
# Compile to hardware-specific format
hardware_config = compiler.compile(
neurons=neuron_params,
synapses=synaptic_weights
)
return hardware_config
def extract_neuron_params(self, model):
"""Extract biophysical parameters from model."""
params = []
for neuron in model.neurons:
params.append({
'type': 'HH',
'C_m': neuron.C_m.item(),
'g_Na': neuron.g_Na.item(),
'g_K': neuron.g_K.item(),
'g_L': neuron.g_L.item(),
'E_Na': neuron.E_Na,
'E_K': neuron.E_K,
'E_L': neuron.E_L
})
return params
def deploy(self, hardware_config, chip_id=None):
"""
Deploy compiled configuration to hardware.
Args:
hardware_config: Compiled configuration
chip_id: Target chip identifier
Returns:
runtime: Hardware runtime interface
"""
runtime = self.compilers[self.target].deploy(
hardware_config,
chip_id=chip_id
)
return runtime
class LoihiCompiler:
"""Intel Loihi-specific compiler."""
def compile(self, neurons, synapses):
"""Compile to Loihi neurocores."""
config = {
'neuron_cores': self.allocate_neurons(neurons),
'synaptic_map': self.map_synapses(synapses),
'time_constants': self.compute_time_constants(neurons)
}
return config
def allocate_neurons(self, neurons):
"""Map neurons to Loihi neurocores (1024 neurons/core)."""
n_cores = (len(neurons) + 1023) // 1024
cores = []
for i in range(n_cores):
start = i * 1024
end = min((i + 1) * 1024, len(neurons))
cores.append({
'core_id': i,
'neurons': neurons[start:end],
'compartment_type': 'HH' if any(n['type'] == 'HH' for n in neurons[start:end]) else 'LIF'
})
return cores
```
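The core-allocation step in `allocate_neurons` is plain ceil-division. As a standalone arithmetic check (the 1024-neurons-per-core figure is taken from the code above):

```python
def cores_needed(n_neurons, neurons_per_core=1024):
    """Ceil-division: number of neurocores required for n_neurons."""
    return (n_neurons + neurons_per_core - 1) // neurons_per_core

# The 38,000-neuron deployment described in this document
n_cores = cores_needed(38_000)                    # 38 cores
last_core_occupancy = 38_000 - (n_cores - 1) * 1024  # 112 neurons on the last core
```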
## Usage Examples
### Example 1: Basic HH Neuron Simulation
```python
import brainfuse
import torch
import matplotlib.pyplot as plt
# Create single HH neuron
neuron = brainfuse.HodgkinHuxleyNeuron()
# Simulation parameters
dt = 0.01 # ms
t_sim = 100 # ms
n_steps = int(t_sim / dt)
# Create input current (step pulse)
I_ext = torch.zeros(n_steps)
I_ext[1000:8000] = 10.0 # 10 uA/cm^2 for 70 ms
# Run simulation
neuron.reset()
voltages = []
for t in range(n_steps):
V = neuron(I_ext[t], dt)
voltages.append(V.item())
# Plot results
plt.figure(figsize=(10, 6))
plt.subplot(2, 1, 1)
plt.plot(voltages)
plt.ylabel('Membrane Potential (mV)')
plt.title('Hodgkin-Huxley Neuron Response')
plt.subplot(2, 1, 2)
plt.plot(I_ext.numpy())
plt.ylabel('Input Current (uA/cm^2)')
plt.xlabel('Time (steps)')
plt.show()
```
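A common next step is extracting spike times from the simulated trace. A minimal upward-threshold-crossing sketch (`detect_spikes` is a hypothetical helper, not part of the BrainFuse API):

```python
def detect_spikes(voltages, threshold=0.0):
    """Indices where V crosses threshold upward (one index per spike)."""
    return [
        i for i in range(1, len(voltages))
        if voltages[i - 1] < threshold <= voltages[i]
    ]

# Synthetic trace (mV) with two spikes
trace = [-65, -60, 30, 10, -70, -65, 25, -70]
spike_idx = detect_spikes(trace)   # -> [2, 6]
```

Counting only upward crossings avoids double-counting the samples on a spike's falling edge.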
### Example 2: Training Biophysical SNN
```python
import torch.nn as nn
import torch.optim as optim
class BioSNN(nn.Module):
"""Biologically realistic SNN with BrainFuse."""
def __init__(self, n_inputs=784, n_hidden=100, n_outputs=10):
super().__init__()
# Hidden layer with HH neurons
self.hidden = brainfuse.layers.HHLayer(
n_inputs, n_hidden,
neuron_params={
'g_Na': 120.0,
'g_K': 36.0,
'learnable_channels': ['g_Na', 'g_K'] # Learnable
}
)
# Readout layer
self.readout = nn.Linear(n_hidden, n_outputs)
def forward(self, x, time_steps=100):
# x: (batch, n_inputs, time_steps)
# Simulate hidden layer
spike_trains = []
for t in range(time_steps):
spikes = self.hidden(x[:, :, t])
spike_trains.append(spikes)
# Temporal pooling
hidden_activity = torch.stack(spike_trains, dim=2).mean(dim=2)
# Readout
output = self.readout(hidden_activity)
return output
# Training
model = BioSNN()
optimizer = optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()
for epoch in range(100):
for inputs, labels in dataloader:
optimizer.zero_grad()
outputs = model(inputs)
loss = criterion(outputs, labels)
loss.backward()
optimizer.step()
```
### Example 3: Large-Scale Simulation
```python
import numpy as np

# Deploy 38,000 HH neurons on neuromorphic chip
large_network = brainfuse.Network()
# Add neurons with diverse properties
for i in range(38000):
neuron = brainfuse.HodgkinHuxleyNeuron(
C_m=1.0 + np.random.normal(0, 0.1),
g_Na=np.random.uniform(100, 140),
g_K=np.random.uniform(30, 42)
)
large_network.add_neuron(neuron)
# Connect with 100M synapses
large_network.random_connect(
connection_prob=0.1,
weight_distribution='lognormal',
weight_params={'mean': -2, 'sigma': 0.5}
)
# Compile and deploy
deployment = NeuromorphicDeployment(target_hardware='loihi')
config = deployment.compile(large_network)
runtime = deployment.deploy(config, chip_id='loihi_1')
# Run on hardware
runtime.run(duration=1000) # 1 second simulation
```
## Performance Benchmarks
### Simulation Speedup
| Configuration | Traditional (CPU) | BrainFuse (GPU) | Speedup |
|--------------|-------------------|-----------------|---------|
| 1,000 neurons, 1s sim | ~3,600s | ~1.2s | **3,000x** |
| 10,000 neurons, 1s sim | ~36,000s | ~12s | **3,000x** |
| 38,000 neurons (neuromorphic chip) | N/A | 1.98 W | hardware deployment |
### Deployment Metrics
- **Neurons**: 38,000 Hodgkin-Huxley neurons
- **Synapses**: 100 million
- **Power**: 1.98 Watts (single neuromorphic chip)
- **Temporal Precision**: Sub-millisecond
## Integration with AI Frameworks
```python
# PyTorch integration
import torch
import brainfuse
class BioHybridModel(torch.nn.Module):
"""Combine biological and artificial layers."""
def __init__(self):
super().__init__()
# Biological feature extractor
self.bio_layer = brainfuse.layers.HHLayer(
784, 256,
ion_channels=['Na', 'K', 'Ca', 'A']
)
# Artificial processing
self.transformer = torch.nn.TransformerEncoder(
torch.nn.TransformerEncoderLayer(d_model=256, nhead=8),
num_layers=6
)
# Output
self.classifier = torch.nn.Linear(256, 10)
def forward(self, x):
# Biological feature extraction
bio_features = self.bio_layer(x)
# Transformer processing
transformed = self.transformer(bio_features)
# Classification
return self.classifier(transformed)
```
## Best Practices
### 1. Parameter Initialization
```python
# Initialize from empirical data
from brainfuse.data import cortical_neuron_params
neuron = brainfuse.HodgkinHuxleyNeuron(
C_m=cortical_neuron_params['C_m']['mean'],
g_Na=cortical_neuron_params['g_Na']['pyramidal']['mean'],
g_K=cortical_neuron_params['g_K']['pyramidal']['mean']
)
```
### 2. Numerical Stability
```python
# Use stable rate function formulations
class StableHHNeuron(brainfuse.HodgkinHuxleyNeuron):
def alpha_m(self, V):
# L'Hôpital's rule for numerical stability
return torch.where(
torch.abs(V + 40) > 1e-6,
0.1 * (V + 40) / (1 - torch.exp(-(V + 40) / 10)),
torch.ones_like(V) * 0.1
)
```
### 3. Gradient Checkpointing
```python
# For memory efficiency during training
neuron = brainfuse.HodgkinHuxleyNeuron()
neuron.enable_gradient_checkpointing()
# Simulate with checkpointing
voltages = neuron.simulate_checkpointed(I_ext, dt)
```
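To see why checkpointing matters here, compare the activation memory of naive backprop through time against storing only segment boundaries and recomputing within each segment. Rough arithmetic under assumed sizes (4 float32 state variables per neuron: V, m, h, n; the functions are illustrative):

```python
def bptt_memory_mb(n_steps, n_neurons, state_vars=4, bytes_per=4):
    """Approximate activation memory for full backprop through time."""
    return n_steps * n_neurons * state_vars * bytes_per / 1e6

def checkpointed_memory_mb(n_steps, n_neurons, segment=100,
                           state_vars=4, bytes_per=4):
    """Store only segment boundaries plus one in-flight segment."""
    n_checkpoints = n_steps // segment
    return (n_checkpoints + segment) * n_neurons * state_vars * bytes_per / 1e6

# 1 s of simulation at dt = 0.01 ms -> 100,000 steps, 10,000 neurons
full = bptt_memory_mb(100_000, 10_000)          # 16,000 MB
ckpt = checkpointed_memory_mb(100_000, 10_000)  # 176 MB
```

The saving is roughly a factor of segment-length at the cost of one extra forward pass per segment during the backward sweep.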
## References
- Chen, B., et al. (2026). BrainFuse: a unified infrastructure integrating realistic biological modeling and core AI methodology. arXiv:2601.21407
- Hodgkin, A. L., & Huxley, A. F. (1952). A quantitative description of membrane current and its application to conduction and excitation in nerve. The Journal of Physiology.
- Davies, M., et al. (2018). Loihi: A neuromorphic manycore processor with on-chip learning. IEEE Micro.
## Keywords
BrainFuse, biological neuron simulation, Hodgkin-Huxley AI, differentiable neuroscience, neuromorphic deployment, biophysical SNN, ion channel modeling, GPU-accelerated simulation, AI-neuroscience bridge