---
name: cardiac-emboli-detection-ultrasound
description: "Convolutional Neural Network framework for detecting gaseous microemboli (GME) during cardiac procedures using transthoracic ultrasound. Activation triggers: emboli detection, cardiac ultrasound, microemboli GME, surgical safety, transcatheter monitoring"
---
# Cardiac Emboli Detection via Transthoracic Ultrasound CNN
> CNN-based system for real-time detection and quantification of gaseous microemboli (GME) during cardiac structural interventions.
## Metadata
- **Source**: arXiv:2604.22258
- **Authors**: Andrea Angino, Ken Trotti, Diego Ulisse Pizzagalli, Rolf Krause, Tiziano Torre
- **Published**: 2026-04-27
- **Categories**: q-bio.NC, cs.CV, eess.IV
## Core Methodology
### Clinical Problem
Gaseous microemboli (GME) are common complications of cardiac structural interventions (surgical and transcatheter). Current detection via transthoracic cardiac ultrasound is:
- Operator-dependent (view quality varies)
- Challenging due to high velocity of GME
- Obscured by similar-density objects (artifacts, anatomical structures)
- Not automated—requires constant expert attention
### CNN-Based Detection Framework
- **Input**: Transthoracic cardiac ultrasound video frames
- **Output**: GME detection masks + count quantification
Key challenges addressed:
1. **View Standardization**: CNN learns to normalize across operator-dependent views
2. **Velocity Handling**: Temporal modeling across frames
3. **Artifact Discrimination**: Distinguish true GME from anatomical structures
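Temporal modeling (point 2) relies on feeding the network short clips rather than single frames, so fast-moving GME leave a trace across the window. A minimal sketch of sliding-window clip construction; the window length and stride here are illustrative, not taken from the paper:

```python
import numpy as np

def make_clips(frames, window=8, stride=1):
    """Slice a (T, H, W) frame array into overlapping clips of shape
    (num_clips, window, H, W) so the CNN can model GME motion across frames."""
    t = frames.shape[0]
    starts = range(0, t - window + 1, stride)
    return np.stack([frames[s:s + window] for s in starts])

# 20 grayscale frames of 224x224 -> 13 overlapping 8-frame clips
frames = np.zeros((20, 224, 224), dtype=np.float32)
clips = make_clips(frames)
print(clips.shape)  # (13, 8, 224, 224)
```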
## Implementation Guide
### Prerequisites
- PyTorch or TensorFlow
- Medical imaging library (SimpleITK, MONAI)
- GPU with 8GB+ VRAM
- Annotated ultrasound dataset (cardiac structural procedures)
### Step-by-Step
1. **Data Preprocessing**
```python
import cv2
import numpy as np

def extract_frames(video_path):
    """Read all frames from an ultrasound video file."""
    cap = cv2.VideoCapture(video_path)
    frames = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        frames.append(frame)
    cap.release()
    return frames

def preprocess_ultrasound(video_path, target_size=(224, 224)):
    """Standardize ultrasound views: grayscale, CLAHE contrast, resize, scale to [0, 1]."""
    normalized = []
    for frame in extract_frames(video_path):
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        # CLAHE: adaptive histogram equalization to normalize echogenicity/contrast
        eq = cv2.createCLAHE(clipLimit=2.0).apply(gray)
        resized = cv2.resize(eq, target_size)
        normalized.append(resized)
    return np.array(normalized, dtype=np.float32) / 255.0
```
2. **Model Architecture**
```python
import torch
import torch.nn as nn

class GME_Detector(nn.Module):
    """U-Net style architecture with temporal attention."""
    def __init__(self, embed_dim=256):
        super().__init__()
        # Encoder (in practice, a pretrained ResNet or EfficientNet backbone)
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 64, 3, padding=1),
            nn.BatchNorm2d(64),
            nn.ReLU(),
            # ... more layers, ending in embed_dim feature channels
        )
        # Temporal attention for velocity handling (one embedding per frame)
        self.temporal_attn = nn.MultiheadAttention(embed_dim, num_heads=8, batch_first=True)
        # Decoder with skip connections
        self.decoder = nn.Sequential(
            # ... U-Net style upsampling back to input resolution
        )

    def forward(self, x):
        # x: (batch, time, channels, height, width)
        b, t, c, h, w = x.shape
        # Encode each frame independently
        features = [self.encoder(x[:, i]) for i in range(t)]  # each (b, C, h', w')
        features = torch.stack(features, dim=1)               # (b, t, C, h', w')
        # Pool spatial dims so attention sees one embedding per frame: (b, t, C)
        pooled = features.mean(dim=(3, 4))
        attn_out, _ = self.temporal_attn(pooled, pooled, pooled)
        # Broadcast temporal context back over the frame feature maps
        context = features * attn_out.unsqueeze(-1).unsqueeze(-1)
        # Decode the temporally averaged features to a segmentation mask
        mask = self.decoder(context.mean(dim=1))
        return torch.sigmoid(mask)
```
3. **Training Strategy**
```python
import torch
import torch.nn as nn
import albumentations as A

# Handle class imbalance (GME pixels are rare relative to background)
pos_weight = torch.tensor([10.0])  # up-weight the GME class
criterion = nn.BCEWithLogitsLoss(pos_weight=pos_weight)

# Data augmentation for robustness across machines and operators
augment = A.Compose([
    A.RandomBrightnessContrast(p=0.5),
    A.GaussianBlur(blur_limit=3, p=0.3),
    A.ShiftScaleRotate(shift_limit=0.1, scale_limit=0.1, p=0.5),
])
```
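The 10.0 weight above is illustrative; in practice the positive-class weight is usually derived from the actual foreground/background pixel ratio in the annotated masks. A numpy sketch of that estimate (the function name is ours, not from the paper):

```python
import numpy as np

def estimate_pos_weight(masks):
    """Estimate a BCE pos_weight as (#background pixels / #GME pixels)
    over a stack of binary annotation masks."""
    masks = np.asarray(masks)
    n_pos = masks.sum()
    n_neg = masks.size - n_pos
    return float(n_neg) / max(float(n_pos), 1.0)

# Toy example: 1% of pixels are GME -> weight of 99
masks = np.zeros((4, 100, 100), dtype=np.uint8)
masks[:, :10, :10] = 1  # 100 positive pixels out of 10,000 per mask
print(estimate_pos_weight(masks))  # 99.0
```

The result would be passed to `BCEWithLogitsLoss(pos_weight=torch.tensor([w]))` in place of the hand-picked constant.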
4. **Deployment**
```python
import numpy as np
import torch

def real_time_detection(model, ultrasound_stream, threshold=3, window=8):
    """Real-time GME detection during a procedure.

    `preprocess_frame`, `count_blobs`, and `alert_operators` are assumed
    site-specific helpers; `threshold` (max tolerated GME count) and
    `window` (temporal clip length) are illustrative defaults.
    """
    frame_buffer = []
    for frame in ultrasound_stream:
        frame_buffer.append(preprocess_frame(frame))
        if len(frame_buffer) >= window:  # full temporal window available
            # Shape (1, t, 1, H, W): batch, time, channel, height, width
            clip = torch.from_numpy(np.stack(frame_buffer[-window:])).unsqueeze(0).unsqueeze(2)
            with torch.no_grad():
                mask = model(clip)
            gme_count = count_blobs(mask)
            if gme_count > threshold:
                alert_operators(gme_count)
            frame_buffer.pop(0)  # slide the window forward
```
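`count_blobs` above is left undefined; one simple realization counts 4-connected components in the thresholded probability mask. A minimal pure-numpy sketch (a production system would more likely use `cv2.connectedComponents` and filter blobs by size to suppress speckle):

```python
import numpy as np

def count_blobs(mask, prob_threshold=0.5):
    """Count 4-connected components in a 2D probability mask via flood fill."""
    binary = np.asarray(mask) > prob_threshold
    visited = np.zeros_like(binary, dtype=bool)
    h, w = binary.shape
    count = 0
    for i in range(h):
        for j in range(w):
            if binary[i, j] and not visited[i, j]:
                count += 1
                stack = [(i, j)]
                while stack:
                    y, x = stack.pop()
                    if 0 <= y < h and 0 <= x < w and binary[y, x] and not visited[y, x]:
                        visited[y, x] = True
                        stack.extend([(y + 1, x), (y - 1, x), (y, x + 1), (y, x - 1)])
    return count

# Two separate bright regions -> 2 candidate GME
demo = np.zeros((16, 16))
demo[2:4, 2:4] = 0.9
demo[10:12, 10:12] = 0.8
print(count_blobs(demo))  # 2
```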
## Applications
- **Surgical Safety**: Real-time monitoring during cardiac surgery
- **Transcatheter Procedures**: TAVR, MitraClip, LAA closure
- **Quality Control**: Post-procedure emboli load assessment
- **Research**: Large-scale emboli incidence studies
## Pitfalls
- Requires diverse training data across different ultrasound machines
- View-dependent performance—some views are inherently low-quality
- False positives from artifacts can cause alarm fatigue
- Regulatory approval required (FDA Class II medical device)
- Latency constraints for real-time feedback during procedures
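The latency pitfall is worth checking explicitly before deployment: inference on a full temporal window must fit within the probe's frame interval. A hedged harness for that check (the 50 ms budget and function names are ours, not requirements from the paper):

```python
import time

def check_latency(infer_fn, clip, budget_ms=50.0, n_runs=20):
    """Measure mean per-clip inference latency against a real-time budget.
    The budget depends on the probe frame rate and temporal window length."""
    infer_fn(clip)  # warm-up run (caching / lazy-init effects)
    start = time.perf_counter()
    for _ in range(n_runs):
        infer_fn(clip)
    mean_ms = (time.perf_counter() - start) / n_runs * 1000.0
    return mean_ms, mean_ms <= budget_ms

# Toy stand-in for the model: a trivial function easily within budget
mean_ms, ok = check_latency(lambda clip: sum(clip), list(range(1000)))
print(ok)  # True
```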
## Related Skills
- medical-image-segmentation
- ultrasound-image-analysis
- cardiac-imaging-ai
- surgical-safety-monitoring