---
name: hela-mem-hebbian-memory-llm
description: >
  HeLa-Mem architecture for LLM agent long-term memory using Hebbian learning and associative memory.
  Inspired by cognitive neuroscience mechanisms: association, consolidation, and pruning.
  Enables LLM agents to build associative memory graphs whose connections strengthen with use,
  instead of relying on flat embedding retrieval.
  Trigger: hela-mem, hebbian memory, associative memory llm, 联想记忆, 赫布学习, LLM agent memory,
  long-term memory agents, biological memory architecture.
version: 1.0.0
metadata:
  hermes:
    tags: [memory, llm-agents, hebbian, associative, neuroscience-inspired]
    source_paper: "HeLa-Mem: Hebbian Learning and Associative Memory for LLM Agents (arXiv:2604.16839)"
    paper_date: "2026-04-18"
    score: 39
---
# HeLa-Mem: Hebbian Associative Memory for LLM Agents
## Overview
HeLa-Mem addresses the long-term memory problem in LLM agents by replacing flat embedding-vector retrieval with a neuroscience-inspired associative memory system. Human memory strengthens connections between related experiences through repeated co-activation — HeLa-Mem captures this via three core mechanisms.
## Core Mechanisms
### 1. Association
When two memory nodes are co-activated or retrieved together, the connection weight between them is strengthened following the Hebbian principle: "neurons that fire together, wire together."
```python
# Hebbian weight update with passive decay
def hebbian_update(W_ij, activity_i, activity_j, learning_rate=0.01, decay=0.001):
    """Strengthen the connection between co-active memory nodes."""
    delta_W = learning_rate * activity_i * activity_j
    return W_ij + delta_W - decay * W_ij  # decay term prevents unbounded growth
```
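For example, one co-retrieval of two nodes nudges their edge weight up (the values here are illustrative):

```python
w = 0.10  # current edge weight between two co-retrieved nodes
w = hebbian_update(w, activity_i=0.9, activity_j=0.8)
print(round(w, 4))  # 0.1071: +0.0072 Hebbian term, -0.0001 decay
```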
### 2. Consolidation
Memories undergo consolidation over time, transferring from short-term to long-term storage. Important memories (high retrieval frequency, emotional salience) are prioritized.
- **Synaptic consolidation**: Fast, protein-synthesis-dependent (minutes to hours)
- **Systems consolidation**: Slow, involves hippocampal-neocortical transfer (days to months)
```python
import math

def compute_importance(retrieval_history, time_elapsed, half_life=86400.0):
    """Importance: retrieval count discounted by elapsed time (exponential forgetting)."""
    return len(retrieval_history) * math.exp(-time_elapsed / half_life)

def consolidate_memory(memory_node, retrieval_history, time_elapsed,
                       threshold=0.5, consolidation_factor=1.2):
    """Simulate the memory consolidation process."""
    importance = compute_importance(retrieval_history, time_elapsed)
    if importance > threshold:
        # Frequently retrieved memories are promoted and stabilized
        memory_node.status = 'long_term'
        memory_node.stability *= consolidation_factor
    else:
        memory_node.status = 'decaying'
    return memory_node
```
### 3. Pruning
Weak or rarely accessed connections are pruned to maintain memory efficiency and prevent interference.
```python
def prune_connections(memory_graph, min_weight=0.05):
    """Remove weak associative connections (assumes a networkx graph with 'weight' edge data)."""
    edges_to_remove = [(u, v) for u, v, w in memory_graph.edges(data='weight')
                       if w is not None and w < min_weight]
    memory_graph.remove_edges_from(edges_to_remove)
    return memory_graph
```
## Architecture Design
```
Conversation History → Memory Nodes (episodic traces)
                             ↓
                  Hebbian Weight Updates
                             ↓
                 Associative Memory Graph
                      ↙             ↘
     High-weight paths               Low-weight paths
     (fast retrieval)                (subject to pruning)
```
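The flow above can be sketched end to end. Here is a minimal example using networkx and the `hebbian_update` function from above; the library choice, binary activations, and the `add_episode` helper are illustrative assumptions, not part of the paper:

```python
import networkx as nx

def add_episode(graph, node_id, content, co_active_ids):
    """Add an episodic trace and wire it to currently co-active nodes (Hebbian step)."""
    graph.add_node(node_id, content=content, retrieval_count=0)
    for other in co_active_ids:
        old_w = graph.edges[node_id, other]['weight'] if graph.has_edge(node_id, other) else 0.0
        graph.add_edge(node_id, other, weight=hebbian_update(old_w, 1.0, 1.0))

memory = nx.Graph()
add_episode(memory, 'n1', 'user prefers dark mode', co_active_ids=[])
add_episode(memory, 'n2', 'user asked about UI themes', co_active_ids=['n1'])
print(memory['n1']['n2']['weight'])  # 0.01 after one co-activation
```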
### Memory Node Structure
| Field | Description |
|-------|-------------|
| `content` | The episodic trace (text embedding + metadata) |
| `timestamp` | When memory was formed |
| `activation` | Current activation level |
| `retrieval_count` | How often accessed |
| `connections` | Dict of connected node IDs → weights |
| `consolidation_level` | short_term / long_term |
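A direct Python rendering of this node structure, as a minimal sketch; the concrete field types and defaults are assumptions, not prescribed by the paper:

```python
from dataclasses import dataclass, field

@dataclass
class MemoryNode:
    content: str                            # episodic trace (text + metadata)
    timestamp: float                        # when the memory was formed
    activation: float = 0.0                 # current activation level
    retrieval_count: int = 0                # how often this node was accessed
    connections: dict[str, float] = field(default_factory=dict)  # node ID -> weight
    consolidation_level: str = 'short_term'                      # or 'long_term'
```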
### Retrieval Algorithm
1. **Cue matching**: Find seed nodes matching query semantics
2. **Spreading activation**: Propagate activation through the graph via Hebbian weights
3. **Ranking**: Return the top-k nodes by final activation (see the sketch below)
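A minimal sketch of steps 2 and 3, operating on the `connections` dict of the `MemoryNode` structure above; the hop count and damping factor are illustrative assumptions:

```python
def spread_activation(nodes, seeds, hops=2, damping=0.5, top_k=5):
    """Propagate activation from cue-matched seeds along Hebbian weights, then rank.

    nodes: dict mapping node ID -> MemoryNode
    seeds: dict mapping node ID -> initial activation from cue matching (step 1)
    """
    activation = dict(seeds)   # node ID -> accumulated activation
    frontier = dict(seeds)     # nodes activated in the previous hop
    for _ in range(hops):
        next_frontier = {}
        for node_id, act in frontier.items():
            for neighbor, weight in nodes[node_id].connections.items():
                spread = damping * act * weight
                activation[neighbor] = activation.get(neighbor, 0.0) + spread
                next_frontier[neighbor] = next_frontier.get(neighbor, 0.0) + spread
        frontier = next_frontier
    # Step 3: rank by final activation
    return sorted(activation.items(), key=lambda kv: kv[1], reverse=True)[:top_k]
```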
## Implementation Considerations
- **Weight normalization**: Prevent runaway strengthening (see the sketch after this list)
- **Decay schedules**: Exponential or power-law forgetting curves
- **Graph sparsity**: Maintain sparse connectivity for efficiency
- **Batch consolidation**: Periodic consolidation to avoid per-query overhead
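A sketch of the first two safeguards; the sum-cap normalization scheme and the one-week half-life are illustrative choices, not values from the paper:

```python
def normalize_weights(connections, max_total=1.0):
    """Rescale a node's outgoing weights so their sum never exceeds max_total."""
    total = sum(connections.values())
    if total > max_total:
        scale = max_total / total
        for node_id in connections:
            connections[node_id] *= scale

def decay_weights(connections, elapsed_seconds, half_life=7 * 86400):
    """Exponential forgetting curve: each weight halves every half_life seconds."""
    factor = 0.5 ** (elapsed_seconds / half_life)
    for node_id in connections:
        connections[node_id] *= factor
```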
## Applications
- LLM agents with persistent multi-session memory
- Conversational AI with contextual recall
- Knowledge-grounded dialogue systems
- Personal assistant agents with user-specific memory
## References
- HeLa-Mem paper: "HeLa-Mem: Hebbian Learning and Associative Memory for LLM Agents." arXiv:2604.16839 (2026-04-18)
- Hebbian learning: Hebb, D. O. (1949). *The Organization of Behavior*. Wiley
- Memory consolidation: Dudai, Y. (2004). "The neurobiology of consolidations, or, how stable is the engram?" *Annual Review of Psychology*, 55