---
name: brain-inspired-capture-evidence-driven
description: "Brain-Inspired Capture (BI-Cap) methodology for evidence-driven neuromimetic perceptual simulation in visual decoding. Trigger words: BI-Cap, neuromimetic, perceptual simulation, visual decoding, HVS"
---
# Brain-Inspired Capture: Evidence-Driven Neuromimetic Perceptual Simulation
> BI-Cap aligns neural and visual modalities by emulating Human Visual System processing through neuromimetic dynamic and static transformations with Mutual Information-guided adaptive blur.
## Metadata
- **Source**: arXiv:2604.17927v1
- **Authors**: Visual decoding researchers (2026)
- **Published**: 2026-04-20
- **Domain**: Computational Neuroscience, Visual Decoding, BCIs, Neuromimetic Systems
## Core Methodology
### Key Innovation
Current visual decoding approaches suffer from systematic and stochastic gaps between neural and visual modalities because they ignore the intrinsic computational mechanisms of the Human Visual System (HVS). BI-Cap addresses this by constructing a biologically plausible pipeline that emulates HVS processing, bridging the modality gap through neuromimetic transformations.
### Technical Framework
#### Four Neuromimetic Transformations
1. **Dynamic Transformations**: Time-varying aspects of visual processing
   - Temporal integration windows matching neural dynamics (sketched after this list)
   - Emulation of motion-sensitive pathways
2. **Static Transformations**: Spatial aspects of visual processing
   - Retina-like sampling (foveated attention)
   - LGN-style contrast enhancement (sketched after this list)
   - V1-style oriented edge detection
   - Higher-level feature extraction
3. **Mutual Information (MI) Guidance**: Adaptive blur regulation
   - Dynamic adjustment of spatial resolution based on neural activity
   - Optimal information transfer between neural and visual domains
4. **Evidence-Driven Pipeline**: Probabilistic inference framework
   - Bayesian integration of neural evidence
   - Uncertainty quantification in visual reconstruction
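The temporal-integration and LGN-contrast items above have no counterpart in the step-by-step code below, so here is a minimal sketch of both, assuming frames arrive as a NumPy array of shape (T, H, W). The function names and parameter values are illustrative, not taken from the source.
```python
import cv2
import numpy as np

def temporal_integrate(frames, tau=0.7):
    """Leaky (exponential) temporal integration over a (T, H, W) frame stack,
    a crude stand-in for neural temporal integration windows."""
    state = np.zeros_like(frames[0], dtype=np.float32)
    for frame in frames.astype(np.float32):
        state = tau * state + (1.0 - tau) * frame
    return state

def lgn_contrast(image, sigma_center=1.0, sigma_surround=3.0):
    """LGN-style contrast enhancement via a difference-of-Gaussians filter."""
    img = image.astype(np.float32)
    center = cv2.GaussianBlur(img, (0, 0), sigma_center)
    surround = cv2.GaussianBlur(img, (0, 0), sigma_surround)
    return center - surround
```
A difference-of-Gaussians filter is a standard first-order model of LGN center-surround receptive fields, which is why it is used as the contrast-enhancement stand-in here.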
#### Processing Pipeline
```
Neural Signals → Neuromimetic Transformations →
MI-Guided Blur Regulation → Evidence Integration →
Visual Reconstruction
```
## Implementation Guide
### Prerequisites
- PyTorch (used by the code examples below)
- OpenCV for image processing
- MNE for neural signal processing
- NumPy for the mutual information computation
### Step-by-Step
#### 1. Retina-like Preprocessing
```python
import cv2
import numpy as np

def retina_sampling(image, fovea_center, fovea_radius=32,
                    peripheral_scale=0.25):
    """
    Apply foveated sampling mimicking the retina's structure:
    - full resolution at the fovea center
    - reduced resolution in the periphery
    """
    h, w = image.shape[:2]
    result = np.zeros_like(image)
    # Distance of every pixel from the fovea center
    y, x = np.ogrid[:h, :w]
    dist = np.sqrt((x - fovea_center[0])**2 + (y - fovea_center[1])**2)
    # Foveal region: keep full resolution
    fovea_mask = dist <= fovea_radius
    result[fovea_mask] = image[fovea_mask]
    # Peripheral region: downsample by peripheral_scale, then upsample back
    peripheral_mask = dist > fovea_radius
    small = cv2.resize(image, (int(w * peripheral_scale),
                               int(h * peripheral_scale)))
    enlarged = cv2.resize(small, (w, h))
    result[peripheral_mask] = enlarged[peripheral_mask]
    return result
```
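A quick usage sketch (the file names are illustrative):
```python
img = cv2.imread("stimulus.png")  # illustrative input image
foveated = retina_sampling(img, fovea_center=(img.shape[1] // 2,
                                              img.shape[0] // 2))
cv2.imwrite("foveated.png", foveated)
```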
#### 2. V1-like Edge Detection
```python
import numpy as np
import torch
import torch.nn as nn

class V1EdgeDetector(nn.Module):
    """V1 simple-cell inspired edge detection using a Gabor filter bank."""
    def __init__(self, num_orientations=8, kernel_size=17):
        super().__init__()
        self.num_orientations = num_orientations
        # One Gabor filter per evenly spaced orientation
        filters = []
        for i in range(num_orientations):
            theta = i * np.pi / num_orientations
            kernel = self.create_gabor_kernel(theta, size=kernel_size)
            filters.append(torch.tensor(kernel, dtype=torch.float32))
        self.conv = nn.Conv2d(1, num_orientations, kernel_size=kernel_size,
                              padding=kernel_size // 2, bias=False)
        self.conv.weight.data = torch.stack(filters).unsqueeze(1)

    @staticmethod
    def create_gabor_kernel(theta, size=17, sigma=4, lambda_=8, gamma=0.5):
        """Create a Gabor filter kernel of shape (size, size) at angle theta."""
        sigma_x = sigma
        sigma_y = sigma / gamma
        # Sampling grid sized to match the convolution kernel
        half = size // 2
        y, x = np.meshgrid(np.linspace(-half, half, size),
                           np.linspace(-half, half, size))
        # Rotate coordinates by theta
        x_theta = x * np.cos(theta) + y * np.sin(theta)
        y_theta = -x * np.sin(theta) + y * np.cos(theta)
        # Gaussian envelope modulated by a cosine carrier
        gb = np.exp(-0.5 * (x_theta**2 / sigma_x**2 + y_theta**2 / sigma_y**2))
        gb *= np.cos(2 * np.pi * x_theta / lambda_)
        return gb

    def forward(self, x):
        return torch.relu(self.conv(x))
```
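A usage sketch with a random grayscale tensor standing in for a real stimulus:
```python
detector = V1EdgeDetector(num_orientations=8)
img = torch.rand(1, 1, 128, 128)  # (batch, channel, height, width), placeholder data
edge_maps = detector(img)         # shape (1, 8, 128, 128): one map per orientation
```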
#### 3. Mutual Information-Guided Blur
```python
import cv2
import numpy as np

def mutual_information(x, y, bins=256):
    """Compute mutual information between neural signal and visual feature.

    Both inputs are flattened; they must contain the same number of samples.
    """
    hist_2d, _, _ = np.histogram2d(x.flatten(), y.flatten(), bins=bins)
    pxy = hist_2d / float(np.sum(hist_2d))
    px = np.sum(pxy, axis=1)
    py = np.sum(pxy, axis=0)
    px_py = px[:, None] * py[None, :]
    # Only non-zero bins contribute to the sum
    nzs = pxy > 0
    mi = np.sum(pxy[nzs] * np.log(pxy[nzs] / px_py[nzs]))
    return mi

def adaptive_blur_guidance(neural_activity, visual_candidate,
                           sigma_range=(0.5, 5.0)):
    """
    Find the blur level that maximizes MI between neural and visual data.
    """
    best_sigma = sigma_range[0]
    best_mi = -np.inf
    for sigma in np.linspace(sigma_range[0], sigma_range[1], 20):
        blurred = cv2.GaussianBlur(visual_candidate, (0, 0), sigma)
        mi = mutual_information(neural_activity, blurred)
        if mi > best_mi:
            best_mi = mi
            best_sigma = sigma
    return best_sigma, best_mi
```
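Note that `mutual_information` flattens both inputs into 1-D samples for `np.histogram2d`, so the neural array must carry the same number of samples as the candidate image. The placeholder data below respects that assumption:
```python
neural_activity = np.random.randn(128 * 128)             # placeholder neural samples
candidate = np.random.rand(128, 128).astype(np.float32)  # placeholder reconstruction
sigma, mi = adaptive_blur_guidance(neural_activity, candidate)
blurred = cv2.GaussianBlur(candidate, (0, 0), sigma)
```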
#### 4. Evidence Integration
```python
import numpy as np

class EvidenceIntegrator:
    """Bayesian integration of neural evidence for visual decoding."""
    def __init__(self, num_classes, likelihood_models=None):
        # Uniform prior over candidate classes
        self.priors = np.ones(num_classes) / num_classes
        # One fitted density model per class, each exposing a .score()
        # method returning a log-likelihood (e.g. sklearn's GaussianMixture)
        self.likelihood_models = likelihood_models or []

    def compute_likelihood(self, neural_features, class_idx):
        """Compute P(neural_features | class_idx) under the fitted model."""
        log_lik = self.likelihood_models[class_idx].score(neural_features)
        return np.exp(log_lik)

    def bayesian_update(self, neural_evidence, candidates):
        """
        Update posterior probabilities given neural evidence.
        Args:
            neural_evidence: Extracted neural features
            candidates: List of candidate visual reconstructions
        Returns:
            Posterior probabilities for each candidate
        """
        posteriors = []
        for idx, candidate in enumerate(candidates):
            likelihood = self.compute_likelihood(neural_evidence, idx)
            posteriors.append(likelihood * self.priors[idx])
        # Normalize so the posteriors sum to 1
        posteriors = np.array(posteriors)
        posteriors /= posteriors.sum()
        return posteriors
```
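To mirror the processing pipeline diagram above, the four steps can be chained roughly as follows. This is a minimal sketch, assuming the helpers from steps 1-4 are in scope and that `fitted_models` (one density model per candidate, trained beforehand) is a hypothetical placeholder; all data here is random stand-in data:
```python
import cv2
import numpy as np

# Placeholder inputs; real features come from MNE-preprocessed recordings
neural_signal = np.random.randn(128 * 128)   # resampled to the pixel count for MI
neural_features = np.random.randn(1, 64)     # feature vector for evidence integration
candidates = [np.random.rand(128, 128).astype(np.float32) for _ in range(3)]

# Static transformation followed by MI-guided blur for each candidate
refined = []
for cand in candidates:
    foveated = retina_sampling(cand, fovea_center=(64, 64))
    sigma, _ = adaptive_blur_guidance(neural_signal, foveated)
    refined.append(cv2.GaussianBlur(foveated, (0, 0), sigma))

# Evidence integration over the refined candidates
integrator = EvidenceIntegrator(num_classes=len(refined),
                                likelihood_models=fitted_models)  # assumed fitted
posteriors = integrator.bayesian_update(neural_features, refined)
best = refined[int(np.argmax(posteriors))]
```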
## Applications
- **Brain-Computer Interfaces**: Visual reconstruction for locked-in patients
- **Neural Prosthetics**: Artificial vision restoration
- **Cognitive Neuroscience**: Understanding visual perception mechanisms
- **AI Alignment**: Building visual systems that match human perception
## Advantages
- **Biological Plausibility**: Matches known HVS processing stages
- **Modality Alignment**: Reduces neural-visual gap through emulation
- **Interpretability**: Clear mapping between neural and visual representations
- **Adaptive Processing**: MI guidance optimizes information transfer
## Pitfalls
- **Individual Variability**: HVS parameters vary across subjects
- **Computational Cost**: Neuromimetic processing is resource-intensive
- **Incomplete HVS Model**: Current implementation covers only early visual areas
- **Training Data Requirements**: Requires paired neural-visual recordings
## Related Skills
- eeg-visual-attention-decoding
- eeg2vision-multimodal-eeg-framework-2d-visual
- neuromimetic-perceptual-compression
- visual-imagery-decoding-fmri
## References
```
@article{bicap2026,
title={Brain-Inspired Capture: Evidence-Driven Neuromimetic Perceptual Simulation for Visual Decoding},
journal={arXiv preprint arXiv:2604.17927},
year={2026}
}
```