---
name: graph-laplacian-denoising-for-brain-connectivity-s
description: "Source paper: arXiv:2012.11240 - Improving J-divergence of brain connectivity states by graph Laplacian denoising"
---
# Graph Laplacian Denoising for Brain Connectivity States
**Source paper:** arXiv:2012.11240 - Improving J-divergence of brain connectivity states by graph Laplacian denoising
**Utility score:** 0.98
**Created:** 2026-03-24 16:03
**Journal:** IEEE Transactions on Signal and Information Processing over Networks
## Overview
Use a graph Laplacian denoising algorithm to improve the detectability of brain connectivity states, derive a new J-divergence formula, and apply it to EEG brain-network analysis in MI-BCI (motor imagery brain-computer interface) experiments.
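The denoising step below relies on a spectral low-pass filter over the graph Laplacian's eigenbasis. As a quick self-contained illustration of that idea (a toy path graph, not from the paper), the filter response h(λ) = 1/(1 + αλ) decays monotonically with graph frequency:

```python
import numpy as np
from scipy.linalg import eigh

# Path graph on 6 nodes; Laplacian eigenvectors act as graph Fourier modes
n = 6
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
lam, U = eigh(L)  # eigenvalues ascending = graph frequencies

alpha = 0.5
h = 1.0 / (1.0 + alpha * lam)   # low-pass response used by the denoiser
print(np.all(np.diff(h) <= 0))  # True: response decays with graph frequency
```

Low graph frequencies (smooth signals) pass nearly unattenuated, while high-frequency components, where measurement noise concentrates, are suppressed.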
## Activation Keywords
- graph Laplacian denoising
- J-divergence brain connectivity
- BCI connectivity states
- EEG network denoising
- motor imagery detection
- 图拉普拉斯去噪 (graph Laplacian denoising)
- 脑连接状态 (brain connectivity states)
## Core Method
```python
import numpy as np
from scipy.linalg import eigh


class GraphLaplacianDenoiser:
    """Graph Laplacian denoiser (spectral low-pass filtering)."""

    def __init__(self, alpha=0.5, n_eigenvectors=None):
        self.alpha = alpha                    # filter strength
        self.n_eigenvectors = n_eigenvectors  # optional spectral truncation

    def compute_laplacian(self, adj_matrix, normalized=True):
        """Compute the (normalized) graph Laplacian."""
        degree = np.sum(adj_matrix, axis=1)
        if normalized:
            # L = I - D^{-1/2} A D^{-1/2}, guarding against zero-degree nodes
            d_inv_sqrt = np.zeros_like(degree, dtype=float)
            nonzero = degree > 0
            d_inv_sqrt[nonzero] = degree[nonzero] ** -0.5
            D_inv_sqrt = np.diag(d_inv_sqrt)
            L = np.eye(len(adj_matrix)) - D_inv_sqrt @ adj_matrix @ D_inv_sqrt
        else:
            L = np.diag(degree) - adj_matrix
        return L

    def denoise(self, laplacian, signal):
        """
        Spectral low-pass denoising in the graph Fourier basis with
        filter h(lambda) = 1 / (1 + alpha * lambda).
        """
        # Eigendecomposition (eigenvalues in ascending order)
        eigenvalues, eigenvectors = eigh(laplacian)
        # Optional truncation to the lowest graph frequencies
        if self.n_eigenvectors:
            eigenvalues = eigenvalues[:self.n_eigenvectors]
            eigenvectors = eigenvectors[:, :self.n_eigenvectors]
        # Low-pass filtering
        filter_coeffs = 1.0 / (1.0 + self.alpha * eigenvalues)
        denoised = eigenvectors @ np.diag(filter_coeffs) @ eigenvectors.T @ signal
        return denoised


def j_divergence(laplacian1, laplacian2):
    """
    J-divergence (symmetrized KL divergence) between the Gaussian graph
    models induced by two Laplacians:
    J(L1, L2) = 0.5 * (tr(L1^+ L2) + tr(L2^+ L1) - 2n)
    """
    n = laplacian1.shape[0]
    # Moore-Penrose pseudoinverses (Laplacians are singular)
    L1_inv = np.linalg.pinv(laplacian1)
    L2_inv = np.linalg.pinv(laplacian2)
    J = 0.5 * (np.trace(L1_inv @ laplacian2) + np.trace(L2_inv @ laplacian1) - 2 * n)
    return J


def detect_connectivity_state(fc1, fc2, alpha=0.5):
    """Measure the separability of two functional-connectivity states."""
    denoiser = GraphLaplacianDenoiser(alpha=alpha)
    L1 = denoiser.compute_laplacian(fc1)
    L2 = denoiser.compute_laplacian(fc2)
    divergence = j_divergence(L1, L2)
    return divergence
```
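A minimal end-to-end usage sketch on synthetic connectivity matrices (stand-ins for real EEG functional connectivity; `j_divergence` is re-stated so the snippet runs standalone):

```python
import numpy as np

def j_divergence(L1, L2):
    # J(L1, L2) = 0.5 * (tr(L1^+ L2) + tr(L2^+ L1) - 2n), using pseudoinverses
    n = L1.shape[0]
    return 0.5 * (np.trace(np.linalg.pinv(L1) @ L2)
                  + np.trace(np.linalg.pinv(L2) @ L1) - 2 * n)

def laplacian(adj):
    # Combinatorial Laplacian L = D - A
    return np.diag(adj.sum(axis=1)) - adj

def random_fc(rng, n=8):
    # Synthetic symmetric functional-connectivity matrix, zero diagonal
    W = rng.random((n, n))
    W = (W + W.T) / 2
    np.fill_diagonal(W, 0)
    return W

rng = np.random.default_rng(0)
L1, L2 = laplacian(random_fc(rng)), laplacian(random_fc(rng))
J12 = j_divergence(L1, L2)
J21 = j_divergence(L2, L1)
print(abs(J12 - J21) < 1e-9)  # True: J is symmetric in its arguments
```

A larger J between the Laplacians of two connectivity states indicates the states are easier to discriminate, which is the quantity the denoising step is meant to improve.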
## Applications
- MI-BCI motor imagery detection
- Discriminating brain connectivity states
- Real-time BCI applications
- EEG network denoising
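As a toy demonstration of the denoising use case (a synthetic smooth signal on a path graph, illustrative only, not real EEG data), the low-pass filter recovers a smooth ground-truth signal from its noisy observation:

```python
import numpy as np
from scipy.linalg import eigh

rng = np.random.default_rng(1)
n = 20
# Path graph Laplacian
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
L = np.diag(A.sum(axis=1)) - A
lam, U = eigh(L)

# Smooth ground truth: lowest non-trivial Laplacian eigenvector, plus noise
x = U[:, 1]
noisy = x + 0.3 * rng.standard_normal(n)

# Low-pass filter h(lambda) = 1 / (1 + alpha * lambda)
alpha = 1.0
h = 1.0 / (1.0 + alpha * lam)
denoised = U @ (h * (U.T @ noisy))

# Denoising reduces the error relative to the ground-truth signal
print(np.linalg.norm(denoised - x) < np.linalg.norm(noisy - x))
```

The same mechanism applies to EEG band-power signals defined on a connectivity graph: components that vary rapidly across the network are treated as noise and attenuated.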
## Related Skills
- `eeg-brain-connectivity-bci`
- `graph-laplacian-denoising`
- `core-periphery-state-space`
---
_This skill is used to improve J-divergence-based detection of brain connectivity states._
## Description
Graph Laplacian Denoising for Brain Connectivity States
## Activation Keywords
- graph-laplacian-j-divergence
- graph-laplacian-j-divergence skill
## Tools Used
- `read` - Read documentation and references
- `web_search` - Search for related information
- `web_fetch` - Fetch paper or documentation
## Instructions for Agents
Follow these steps when applying this skill:
### Step 1: Understand the Request
Identify the user's data (e.g. functional-connectivity matrices or EEG recordings) and which connectivity states should be compared.
### Step 2: Search for Information
If needed, consult arXiv:2012.11240 and related graph signal processing references.
### Step 3: Apply the Framework
Compute the graph Laplacians, apply the low-pass denoiser, and evaluate the J-divergence between states.
### Step 4: Provide Results
Report the divergence values and explain what they imply about state separability.
### Step 5: Verify Accuracy
Sanity-check the outputs (e.g. J-divergence symmetry, filter behavior) before presenting conclusions.
## Examples
### Example 1: Basic Application
**User:** I need to apply Graph Laplacian Denoising for Brain Connectivity States to my analysis.
**Agent:** I'll help you apply graph-laplacian-j-divergence. First, let me understand your specific use case...
**Context:** Apply the methodology
### Example 2: Advanced Scenario
**User:** Complex analysis scenario
**Agent:** Based on the methodology, I'll guide you through the advanced application...
### Example 3: Advanced Application
**User:** What are the key considerations for graph-laplacian-j-divergence?
**Agent:** Let me search for the latest research and best practices...