---
name: brain-inspired-capture-evidence-driven-neuromimetic-perceptual
description: "Brain-Inspired Capture (BI-Cap) methodology for evidence-driven neuromimetic perceptual simulation. Models human perceptual processes for robust visual understanding. Activation: brain-inspired capture, neuromimetic perceptual, BI-Cap, evidence-driven perception."
---
# Brain-Inspired Capture: Evidence-Driven Neuromimetic Perceptual Simulation
> BI-Cap methodology that models human perceptual processes through evidence accumulation and neuromimetic simulation for robust visual understanding and scene interpretation.
## Metadata
- **Source**: arXiv:2604.17927
- **Title**: Brain-Inspired Capture: Evidence-Driven Neuromimetic Perceptual Simulation for Visual Decoding
- **Authors**: Feixue Shao, Guangze Shi, Xueyu Liu, Yongfei Wu, Mingqiang Wei, Jianan Zhang, Jianbo Lu, Guiying Yan, Weihua Yang
- **Published**: 2026-04-20
## Core Methodology
### Key Innovation
BI-Cap captures the **evidence accumulation dynamics** of human perception, where sensory information is integrated over time through attractor-based neural dynamics, rather than making instantaneous feedforward decisions.
### Biological Inspiration
- **Evidence Accumulation**: Based on drift-diffusion models from decision neuroscience
- **Attractor Dynamics**: Uses recurrent networks with stable states representing perceptual hypotheses
- **Temporal Integration**: Information accumulates over time until threshold reached
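The drift-diffusion idea behind evidence accumulation can be sketched in a few lines. This is an illustrative NumPy toy, not the paper's model: evidence performs a noisy random walk with a constant drift until it hits a decision bound; which bound is reached gives the choice, and the elapsed time gives a reaction time.

```python
import numpy as np

def drift_diffusion(drift=0.2, noise=1.0, threshold=2.0, dt=0.01, seed=0):
    """Simulate one drift-diffusion trial.

    Evidence x drifts toward the correct bound (+threshold) while being
    perturbed by Gaussian noise; integration stops at either bound.
    Returns (choice, reaction_time).
    """
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x > 0 else -1), t

choice, rt = drift_diffusion()
```

Positive drift biases the walk toward the upper bound, but noise means some trials still terminate at the lower bound; the distribution of `rt` across trials is what psychophysics models fit to human reaction times.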
### Technical Framework
```
Sensory Input → Feature Extraction → Evidence Integration → Attractor Dynamics → Perceptual Decision
                                             ↑                        │
                                             └────────────────────────┘
                                      (recurrent evidence accumulation)
```
1. **Feature Extraction**: Extract multi-scale visual features
2. **Evidence Nodes**: Compute evidence for competing hypotheses
3. **Attractor Network**: Recurrent dynamics stabilize on perceptual interpretation
4. **Decision Threshold**: Commit to a decision once accumulated evidence reaches the criterion
## Implementation Guide
### Prerequisites
- PyTorch
- Visual processing libraries (OpenCV, PIL)
- Neural dynamics simulation tools
### Core Implementation
```python
import torch
import torch.nn as nn


class BrainInspiredCapture(nn.Module):
    """BI-Cap: evidence-driven neuromimetic perceptual simulation."""

    def __init__(self, n_hypotheses, evidence_dim, n_attractors=5):
        super().__init__()
        self.n_hypotheses = n_hypotheses

        # Feature extraction. The paper uses a ResNet backbone; a small
        # convolutional stack (assuming 3-channel input) stands in here
        # so the module is self-contained.
        self.feature_extractor = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),
            nn.Linear(32, evidence_dim),
        )

        # Evidence integration: maps features to momentary evidence
        # for each competing perceptual hypothesis.
        self.evidence_net = nn.Sequential(
            nn.Linear(evidence_dim, 256),
            nn.ReLU(),
            nn.Linear(256, n_hypotheses),
        )

        # Recurrent attractor dynamics.
        self.attractor = AttractorNetwork(
            n_states=n_hypotheses,
            n_attractors=n_attractors,
            recurrent_strength=0.9,
        )

        # Decision criterion.
        self.decision_threshold = 0.8
        self.max_integration_time = 100  # steps

    def forward(self, visual_input, time_steps=None):
        """
        Args:
            visual_input: visual stimulus [B, C, H, W]
            time_steps: number of integration steps (None = run until threshold)

        Returns:
            perceptual_decision: index of the winning hypothesis [B]
            evidence_history: accumulated evidence over time [B, T, n_hypotheses]
        """
        features = self.feature_extractor(visual_input)

        # Initialize evidence on the same device as the features.
        evidence = torch.zeros(
            visual_input.size(0), self.n_hypotheses, device=features.device
        )
        evidence_history = []

        # Evidence accumulation loop.
        for _ in range(time_steps or self.max_integration_time):
            momentary = self.evidence_net(features)         # momentary evidence
            evidence = self.attractor(evidence, momentary)  # recurrent integration
            evidence_history.append(evidence.clone())

            # Commit early once every item in the batch crosses threshold.
            max_evidence = evidence.max(dim=1)[0]
            if time_steps is None and (max_evidence > self.decision_threshold).all():
                break

        # Commit to the hypothesis with the highest accumulated evidence.
        perceptual_decision = evidence.argmax(dim=1)
        return perceptual_decision, torch.stack(evidence_history, dim=1)


class AttractorNetwork(nn.Module):
    """Recurrent network whose stable states represent perceptual hypotheses."""

    def __init__(self, n_states, n_attractors, recurrent_strength):
        super().__init__()
        # One recurrent weight matrix per attractor, averaged during the update.
        self.recurrent_weights = nn.Parameter(
            torch.randn(n_attractors, n_states, n_states) * 0.1
        )
        self.strength = recurrent_strength

    def forward(self, current, input_signal):
        # Leaky recurrent update: blend attractor feedback with new evidence.
        recurrent = torch.tanh(current @ self.recurrent_weights.mean(0))
        return self.strength * recurrent + (1 - self.strength) * input_signal
```
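As a standalone sanity check of the attractor update rule (`new = strength · tanh(current @ W) + (1 − strength) · input`), the toy below iterates it with a fixed, hand-picked weight matrix and constant momentary evidence; the state contracts to a stable fixed point favoring the strongest hypothesis. The weights here are illustrative, not learned.

```python
import torch

# Toy attractor update with fixed weights and constant momentary evidence.
strength = 0.9
W = torch.eye(3) * 0.5                    # illustrative recurrent weights
current = torch.zeros(1, 3)               # start with no accumulated evidence
momentary = torch.tensor([[1.0, 0.2, 0.1]])  # hypothesis 0 gets the most input

for _ in range(50):
    recurrent = torch.tanh(current @ W)
    current = strength * recurrent + (1 - strength) * momentary

# The state settles to a fixed point whose largest component is hypothesis 0.
winner = current.argmax(dim=1).item()  # 0
```

Because the update is a contraction here (tanh is 1-Lipschitz and `strength < 1` blends in a constant input), the state converges regardless of initialization, which is what makes the loop in `forward` stable over many integration steps.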
### Key Parameters
- **Evidence accumulation rate**: Controls integration speed
- **Decision threshold**: Balance between speed and accuracy
- **Attractor basin width**: Determines perceptual stability
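The speed-accuracy trade-off governed by the decision threshold can be demonstrated with a toy random-walk accumulator (an illustration, not the paper's setup): raising the threshold yields more correct decisions at the cost of longer integration times.

```python
import numpy as np

def trial(threshold, drift=0.1, noise=1.0, rng=None):
    """One accumulation trial. Positive drift means hypothesis +1 is true.

    Returns (correct, n_steps): whether the walk hit the correct bound,
    and how many steps it took.
    """
    x, steps = 0.0, 0
    while abs(x) < threshold:
        x += drift + noise * rng.standard_normal()
        steps += 1
    return x > 0, steps

rng = np.random.default_rng(0)
for threshold in (2.0, 8.0):
    results = [trial(threshold, rng=rng) for _ in range(500)]
    acc = np.mean([c for c, _ in results])
    steps = np.mean([s for _, s in results])
    print(f"threshold={threshold}: accuracy={acc:.2f}, mean steps={steps:.1f}")
```

The higher threshold demands more accumulated evidence before committing, which averages out more noise (higher accuracy) but takes more integration steps (slower decisions).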
## Applications
### Robust Visual Recognition
- Handles noisy/occluded inputs
- Graceful degradation
- Uncertainty quantification
### Scene Understanding
- Temporal integration of visual information
- Attention-guided processing
- Multi-object tracking
### Psychophysics Simulation
- Model human perceptual behavior
- Predict reaction times
- Simulate perceptual illusions
## Advantages
- ✅ Biologically plausible
- ✅ Handles uncertainty
- ✅ Temporal integration
- ✅ Robust to noise
## Limitations
- Higher computational cost than feedforward
- Requires careful parameter tuning
- Slower inference than standard CNNs
## Related Skills
- brain-inspired-capture-evidence-driven
- neuromimetic-perceptual-compression
- primary-visual-cortex-v1-functions
## References
- arXiv:2604.17927