---
name: biotrain-edge-biosignal-tuning
description: "BioTrain framework for sub-MB, sub-50mW on-device fine-tuning of edge-AI models on biosignals (EEG, ECG, EMG). Enables full backpropagation fine-tuning on microcontrollers for BCI deployment. Use when: edge AI biosignals, on-device fine-tuning, MCU deployment, BCI edge computing, embedded neural network adaptation, low-power biosignal ML."
---
# BioTrain: Edge Biosignal Fine-Tuning Framework
## Overview
On-device fine-tuning for biosignal AI models on resource-constrained edge devices:
- **Memory footprint**: Sub-MB model parameters
- **Power consumption**: Sub-50mW operation
- **Full backpropagation**: Complete gradient-based fine-tuning on MCU
- **Domain adaptation**: Handles cross-subject and cross-session variability
## Source
**Paper:** BioTrain: Sub-MB, Sub-50mW On-Device Fine-Tuning for Edge-AI on Biosignals
**arXiv:** 2604.13359v1
## Key Challenges Addressed
1. **Domain shift**: Biosignals vary significantly across subjects and sessions
2. **Resource constraints**: MCU has limited RAM, flash, and power budget
3. **Full backprop on MCU**: Most edge frameworks only support inference, not training
## Architecture
A pre-trained model is deployed to the MCU, then fine-tuned on-device using locally collected data.
### Memory Management
- **Gradient checkpointing**: Store only essential activations
- **Layer-wise updates**: Update one layer at a time to fit in RAM
- **Quantized gradients**: Low-precision gradient storage
- **Batch size = 1**: Online learning without buffering
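The layer-wise update idea can be sketched by freezing every parameterized layer except one, so only that layer's gradients and optimizer state occupy RAM at any moment. This is a minimal illustration with a stand-in model and an assumed round-robin schedule, not the paper's exact mechanism:

```python
import torch
import torch.nn as nn

def layerwise_update_step(model, x, y, layer_idx, lr=1e-3):
    """Train only the layer at layer_idx; all other layers stay frozen,
    so only one layer's gradients are materialized at a time."""
    layers = [m for m in model.modules()
              if len(list(m.parameters(recurse=False))) > 0]
    for i, layer in enumerate(layers):
        for p in layer.parameters(recurse=False):
            p.requires_grad = (i == layer_idx)
    trainable = [p for p in model.parameters() if p.requires_grad]
    opt = torch.optim.SGD(trainable, lr=lr)
    opt.zero_grad()
    loss = nn.CrossEntropyLoss()(model(x), y)
    loss.backward()
    opt.step()
    return loss.item()

# Round-robin over the two parameterized layers of a toy model
model = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 4))
x, y = torch.randn(1, 16), torch.tensor([2])
for idx in range(2):
    layerwise_update_step(model, x, y, layer_idx=idx)
```

On an MCU the same idea would be implemented over quantized buffers rather than PyTorch tensors; the scheduling logic is what carries over.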
### Power Optimization
- **Sparse updates**: Compute gradients only for a selected subset of weights
- **Event-driven training**: Trigger fine-tuning only on performance degradation
- **Dynamic precision**: Adjust computation precision based on power budget
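Event-driven training can be sketched as a rolling-accuracy monitor that fires only when performance degrades. The window size and threshold below are illustrative assumptions, not values from the paper:

```python
from collections import deque

class DriftTrigger:
    """Fire fine-tuning only when rolling accuracy drops below a
    threshold (illustrative values, not taken from the paper)."""
    def __init__(self, window=50, threshold=0.80):
        self.recent = deque(maxlen=window)
        self.threshold = threshold

    def update(self, prediction_correct: bool) -> bool:
        self.recent.append(prediction_correct)
        if len(self.recent) < self.recent.maxlen:
            return False  # not enough evidence yet
        accuracy = sum(self.recent) / len(self.recent)
        return accuracy < self.threshold  # True => trigger fine-tuning

trigger = DriftTrigger(window=10, threshold=0.8)
fired = any(trigger.update(ok) for ok in [True] * 5 + [False] * 5)
# accuracy falls to 0.5 over the window, so fine-tuning is triggered
```

Keeping the trigger cheap (a deque and a sum) matters here: the monitor runs on every inference, while backpropagation runs only when it fires.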
## Implementation
```python
import torch
import torch.nn as nn


class EdgeBiosignalModel(nn.Module):
    """Compact CNN for multi-channel biosignals (e.g. 22-channel EEG)."""

    def __init__(self, n_channels=22, n_classes=4, hidden=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=32, stride=8),
            nn.BatchNorm1d(hidden),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden * 2, kernel_size=16, stride=4),
            nn.BatchNorm1d(hidden * 2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
            nn.Flatten(),
        )
        self.classifier = nn.Linear(hidden * 2, n_classes)

    def forward(self, x):
        return self.classifier(self.encoder(x))


def on_device_finetune(model, new_data, lr=0.001, max_steps=100, loss_target=0.01):
    """Online fine-tuning with batch size 1 and gradient clipping."""
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    criterion = nn.CrossEntropyLoss()
    for step in range(max_steps):
        for x, y in new_data:
            optimizer.zero_grad()
            output = model(x.unsqueeze(0))            # batch size = 1
            loss = criterion(output, y.unsqueeze(0))
            loss.backward()
            torch.nn.utils.clip_grad_norm_(model.parameters(), max_norm=1.0)
            optimizer.step()
            if loss.item() < loss_target:             # early stop on convergence
                return model
    return model
```
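To sanity-check the sub-MB claim, the parameter and buffer storage of the architecture above can be summed directly. This is a rough sketch in float32; the real MCU footprint also includes activations and optimizer state, and int8 quantization would shrink the figure about 4x:

```python
import torch
import torch.nn as nn

def model_memory_bytes(model: nn.Module) -> int:
    """Sum storage for parameters and buffers (e.g. BatchNorm running stats)."""
    param_bytes = sum(p.numel() * p.element_size() for p in model.parameters())
    buffer_bytes = sum(b.numel() * b.element_size() for b in model.buffers())
    return param_bytes + buffer_bytes

# Same layer shapes as the EdgeBiosignalModel defaults above
model = nn.Sequential(
    nn.Conv1d(22, 32, kernel_size=32, stride=8),
    nn.BatchNorm1d(32),
    nn.ReLU(),
    nn.Conv1d(32, 64, kernel_size=16, stride=4),
    nn.BatchNorm1d(64),
    nn.ReLU(),
    nn.AdaptiveAvgPool1d(1),
    nn.Flatten(),
    nn.Linear(64, 4),
)
kib = model_memory_bytes(model) / 1024
print(f"{kib:.1f} KiB in float32")  # a few hundred KiB, well under 1 MB
```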
## Practical Applications
- Wearable BCI devices adapting to individual users
- Personalized seizure detection on implanted devices
- Adaptive EMG control for prosthetics
- Personalized sleep staging on smartwatches
## Activation Keywords
- biotrain, edge biosignal, on-device fine-tuning, MCU BCI, embedded biosignal ML
- low-power neural network, wearable AI adaptation, sub-mW biosignal