---
name: cerebellar-cortex-motor-control
description: "Cerebellar cortex microcircuit architecture for motor control — granule cell expansion, Purkinje cell computation, climbing fiber error signals, and cerebellar cortical algorithms. Applies to robotic control, motor learning algorithms, cerebellar-inspired AI, neuromorphic motor systems. Trigger words: cerebellar cortex, motor control, granule cell, Purkinje cell, climbing fiber, cerebellar microcircuit, mossy fiber, cerebellar learning, internal model, forward model"
---
# Cerebellar Cortex Motor Control
## Source Paper
- **Primary:** Chen, S. et al. (2026). "Microcircuits of the Cerebellar Cortex." *Brain Research Bulletin*, arXiv:2604.06397v1
## Overview
The cerebellum contains ~80% of all neurons in the brain, yet its architecture is remarkably uniform and crystalline. The cerebellar cortex implements a universal computational algorithm: expand inputs through a massive granule cell layer, compute through Purkinje cell dendritic integration, and learn via climbing fiber error signals. This architecture implements predictive internal models — the cerebellum predicts the sensory consequences of motor commands and corrects errors before they manifest. This skill describes the cerebellar microcircuit and its computational principles.
## Core Architecture
### Cellular Components
```
Cerebellar Cortex Layers (from surface to deep):
┌─────────────────────────────────────────────────┐
│ Molecular Layer │
│ ├── Stellate cells (inhibitory interneurons) │
│ ├── Basket cells (inhibitory interneurons) │
│ └── Purkinje cell dendrites (elaborate trees) │
├─────────────────────────────────────────────────┤
│ Purkinje Cell Layer (single cell layer) │
│ ├── Purkinje cells (only output neurons) │
│ └── Bergmann glia │
├─────────────────────────────────────────────────┤
│ Granular Layer │
│ ├── Granule cells (most numerous: ~50B) │
│ ├── Golgi cells (inhibitory interneurons) │
│ └── Unipolar brush cells │
├─────────────────────────────────────────────────┤
│ White Matter │
│ ├── Deep cerebellar nuclei (output target) │
│ └── Inferior olive (climbing fiber source) │
└─────────────────────────────────────────────────┘
```
### Input-Output Pathways
```python
# Cerebellar circuit model
import numpy as np

class CerebellarCortex:
    """Model of cerebellar cortex microcircuit for motor control."""

    def __init__(self, n_mossy=50, n_granule=1000, n_purkinje=100, n_golgi=20):
        self.n_mossy = n_mossy
        self.n_granule = n_granule
        self.n_purkinje = n_purkinje
        self.n_golgi = n_golgi
        # Mossy fiber → Granule cell connections (sparse expansion)
        self.mossy_granule = self._sparse_connect(n_mossy, n_granule, sparsity=0.01)
        # Golgi cell feedback inhibition (granule → Golgi → granule loop)
        self.golgi_granule = self._sparse_connect(n_golgi, n_granule, sparsity=0.05)
        self.granule_golgi = self._sparse_connect(n_granule, n_golgi, sparsity=0.1)
        # Climbing fiber → Purkinje cell (1:1 mapping)
        self.climbing_purkinje = np.eye(n_purkinje)
        # Parallel fiber → Purkinje cell weights (the modifiable PF-PC synapses;
        # in vivo each PC receives ~200K parallel fiber inputs)
        self.parallel_fiber_weights = np.random.randn(n_granule, n_purkinje) * 0.01

    def _sparse_connect(self, n_pre, n_post, sparsity):
        """Create a sparse random (n_pre, n_post) connectivity matrix."""
        mask = np.random.random((n_pre, n_post)) < sparsity
        return np.random.randn(n_pre, n_post) * mask

    def process_motor_command(self, mossy_input, climbing_error=None):
        """Process a motor command through the cerebellar circuit.

        The cerebellar computational algorithm:
        1. Mossy fibers carry state/context info
        2. Granule cells expand the representation
        3. Parallel fibers (granule axons) drive Purkinje cells
        4. Purkinje cells integrate and output a correction
        5. Climbing fibers provide the error teaching signal
        """
        # Step 1: Mossy fiber → Granule cell (with Golgi inhibition)
        granule_input = self.mossy_granule.T @ mossy_input
        # Golgi cell feedback inhibition (gain control)
        golgi_activity = self.granule_golgi.T @ granule_input
        granule_inhibition = self.golgi_granule.T @ golgi_activity
        granule_activity = np.maximum(0, granule_input - granule_inhibition)
        granule_activity = np.tanh(granule_activity)  # bounded, sparse code
        # Step 2: Granule → Purkinje (parallel fiber computation)
        purkinje_input = self.parallel_fiber_weights.T @ granule_activity
        purkinje_output = np.tanh(purkinje_input)  # simple spike rate
        # Step 3: Climbing fiber teaching signal (if an error is available)
        if climbing_error is not None:
            self._learn_from_error(climbing_error, granule_activity)
        return purkinje_output  # motor correction signal

    def _learn_from_error(self, error_signal, granule_activity):
        """Cerebellar learning via climbing fiber error signals.

        Climbing fibers (from the inferior olive) fire complex spikes in
        Purkinje cells, inducing LTD at recently active parallel fiber
        synapses. This is the cerebellar teaching mechanism.
        """
        # Only modify synapses from recently active granule cells
        active_granule = (granule_activity > 0.1).astype(float)
        for pc_idx in range(self.n_purkinje):
            if error_signal[pc_idx] != 0:
                # LTD: reduce weights at active PF-PC synapses (error-driven)
                delta = -0.01 * error_signal[pc_idx] * granule_activity * active_granule
                self.parallel_fiber_weights[:, pc_idx] += delta
                # Keep weights bounded
                self.parallel_fiber_weights[:, pc_idx] = np.clip(
                    self.parallel_fiber_weights[:, pc_idx], -1, 1
                )
```
### Computational Principles
#### 1. Massive Expansion Recoding
```
Inputs (Mossy Fibers) ~200
↓
Granule Cells ~50 billion (in human)
↓
Expansion ratio: ~250 million : 1
```
The granule cell layer performs a **random projection** that expands the input space into a much higher dimensional space. This is mathematically equivalent to a **kernel method** — the high-dimensional representation makes previously nonlinearly-separable patterns linearly separable.
```python
# Cerebellar expansion as a random projection kernel
import numpy as np

def cerebellar_expansion(input_vector, n_granule=10000):
    """Granule cell expansion — random projection to a high dimension.

    Each granule cell receives input from only 4-5 mossy fibers,
    creating a sparse, high-dimensional representation.
    """
    # Random sparse projection matrix (like mossy → granule)
    n_inputs = len(input_vector)
    W = np.random.randn(n_granule, n_inputs)
    # Sparsify: each granule cell only responds to a subset of inputs
    mask = np.random.random((n_granule, n_inputs)) < (5.0 / n_inputs)
    W *= mask
    # Granule cell activation (threshold + rectification)
    granule_activity = np.maximum(0, W @ input_vector - 0.5)
    return granule_activity  # high-dimensional sparse representation
```
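The kernel-method claim can be checked directly: XOR is not linearly separable in two dimensions, but after a random granule-like expansion a plain linear readout fits it exactly. A minimal sketch — the layer size, bias, threshold, and seed are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)

# XOR: the four points are not linearly separable in the original 2D space
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# Random expansion to a "granule layer" of rectified units
n_granule = 200
W = rng.standard_normal((n_granule, 2))
b = rng.standard_normal(n_granule)
G = np.maximum(0, X @ W.T + b)  # (4, n_granule) high-dimensional code

# Linear readout via least squares (a stand-in for the PF-PC weights)
w, *_ = np.linalg.lstsq(G, y, rcond=None)
pred = (G @ w > 0.5).astype(float)
print(pred)  # [0. 1. 1. 0.] — XOR, solved by a purely linear readout
```

Because the expanded code `G` almost surely has full row rank, the least-squares readout reproduces the XOR labels exactly — the nonlinearity lives entirely in the random projection, as in the granule layer.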
#### 2. Predictive Internal Model
The cerebellum implements a **forward model** that predicts the sensory consequences of motor commands:
```
Motor Command (efference copy) ──→ Cerebellum ──→ Predicted sensory state
↓
Actual sensory feedback ──────────→ Error computation
↓
Climbing fiber signal
↓
Update internal model
```
```python
class CerebellarForwardModel:
    """Cerebellar forward model for motor prediction."""

    def __init__(self, state_dim, action_dim):
        # The cerebellar circuit approximates the function:
        #   next_state_prediction = f(current_state, motor_command)
        self.cerebellum = CerebellarCortex(
            n_mossy=state_dim + action_dim,  # mossy input = state + command
            n_granule=500,
            n_purkinje=state_dim,
        )

    def predict_consequence(self, state, motor_command):
        """Predict the sensory consequence of a motor command."""
        # Combine state and command as the mossy fiber input
        mossy_input = np.concatenate([state, motor_command])
        # Cerebellar processing gives the prediction
        return self.cerebellum.process_motor_command(mossy_input)

    def update_from_error(self, state, motor_command, actual_outcome):
        """Update the internal model based on the prediction error."""
        prediction = self.predict_consequence(state, motor_command)
        error = actual_outcome - prediction
        # The climbing fiber signal carries the error back to the cerebellum
        mossy_input = np.concatenate([state, motor_command])
        self.cerebellum.process_motor_command(mossy_input, climbing_error=error)
```
#### 3. Timing and Sequence Learning
The cerebellum is crucial for **temporal processing**:
- **Classical conditioning:** Predicting timing of unconditioned stimulus
- **Smooth pursuit:** Predicting moving target trajectory
- **Speech:** Precise timing of articulatory movements
- **Balance:** Predicting body dynamics for postural control
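A common way to model this temporal role treats granule cells as a bank of basis functions tiling the interval after stimulus onset, so an error-driven linear readout (the PF-PC analogue) can learn to respond at an arbitrary delay. A minimal sketch, assuming Gaussian "time cells" — the basis shape, width, and learning rate are illustrative, not from the paper:

```python
import numpy as np

n_basis, n_steps = 50, 200
t = np.arange(n_steps)

# Granule cells as Gaussian "time cells" tiling the interval after CS onset
centers = np.linspace(0, n_steps, n_basis)
basis = np.exp(-0.5 * ((t[None, :] - centers[:, None]) / 8.0) ** 2)  # (50, 200)

# Target: a precisely timed response at the US time (step 120)
us_time = 120
target = np.zeros(n_steps)
target[us_time] = 1.0

# Error-driven readout learning (delta rule, analogous to CF-driven plasticity)
w = np.zeros(n_basis)
for _ in range(200):
    out = w @ basis
    w += 0.01 * basis @ (target - out)

print(int(np.argmax(w @ basis)))  # the learned response peaks near step 120
```

The readout never sees time explicitly; precise timing emerges because different granule-like units are active in different windows, which is the standard interpretation of the granule-Golgi loop as a temporal filter.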
### Key Microcircuit Algorithms
| Algorithm | Biological Mechanism | Computational Equivalent |
|-----------|---------------------|-------------------------|
| Pattern separation | Granule cell expansion | Random projection / kernel method |
| Error-driven learning | Climbing fiber LTD | Supervised learning / backpropagation |
| Gain control | Golgi cell feedback | Normalization / attention |
| Temporal filtering | Granule-Golgi loop | Recurrent filtering |
| Motor coordination | Purkinje cell output | Inverse/forward model |
## Key Parameters
| Parameter | Biological Value | Function |
|-----------|-----------------|----------|
| Granule cells | ~50 billion (human) | Massive expansion layer |
| Purkinje cells | ~15 million (human) | Sole output neurons |
| Parallel fiber synapses | ~200K per PC | Input integration |
| Climbing fiber synapses | 1 per PC (but 1500 contacts) | Teaching signal |
| Mossy fiber inputs | ~4-5 per granule cell | Sparse coding |
| Learning rule | LTD at PF-PC synapses | Error-driven weight change |
## Applications
### Robotics
```python
# Cerebellar-inspired robot arm controller
class CerebellarRobotController:
    """Motor controller using a cerebellar forward model."""

    def __init__(self, robot):
        self.robot = robot
        self.forward_model = CerebellarForwardModel(
            state_dim=robot.state_size,
            action_dim=robot.action_size,
        )
        self.inverse_model = None  # learned through repetition

    def execute_movement(self, target_state):
        """Execute a movement with cerebellar prediction and correction."""
        current_state = self.robot.get_state()
        # Initial motor command (from the inverse model or a heuristic)
        motor_command = self._plan_movement(current_state, target_state)
        # Forward model predicts the outcome
        predicted_state = self.forward_model.predict_consequence(
            current_state, motor_command
        )
        # If the prediction differs from the target, adjust the command
        prediction_error = target_state - predicted_state
        if np.linalg.norm(prediction_error) > 0.1:
            motor_command += self._correct_command(prediction_error)
        # Execute
        self.robot.execute(motor_command)
        # After execution, update the forward model with the actual outcome
        actual_state = self.robot.get_state()
        self.forward_model.update_from_error(
            current_state, motor_command, actual_state
        )

    def _plan_movement(self, current_state, target_state):
        """Placeholder planner: step proportionally toward the target."""
        return 0.1 * (target_state - current_state)

    def _correct_command(self, prediction_error):
        """Placeholder corrective term proportional to the prediction error."""
        return 0.1 * prediction_error
```
### AI Motor Learning
- **Adaptive control:** Real-time motor adaptation using cerebellar-like internal models
- **Imitation learning:** Observing and predicting others' movements
- **Reinforcement learning:** Cerebellar forward model as world model for planning
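The world-model idea in the last bullet can be sketched as a simple shooting planner: sample candidate motor commands, score each by the forward model's predicted outcome, and execute the best one. `plan_with_forward_model` and the toy displacement dynamics below are illustrative stand-ins, not part of the paper:

```python
import numpy as np

def plan_with_forward_model(state, target, forward_model, n_candidates=64, rng=None):
    """Pick the candidate command whose predicted outcome is closest to the target.

    `forward_model(state, action)` can be any learned predictor (e.g. a
    cerebellar-style forward model); here it only needs to be callable.
    """
    rng = rng if rng is not None else np.random.default_rng()
    candidates = rng.standard_normal((n_candidates, len(target)))
    errors = [np.linalg.norm(target - forward_model(state, a)) for a in candidates]
    return candidates[int(np.argmin(errors))]

# Toy dynamics for illustration: the command simply displaces the state
def toy_model(state, action):
    return state + action

best = plan_with_forward_model(
    np.zeros(2), np.array([1.0, 1.0]), toy_model,
    n_candidates=500, rng=np.random.default_rng(1),
)
print(best)  # a command close to the required displacement [1, 1]
```

With enough candidates the planner recovers a near-optimal command without ever inverting the dynamics — the forward model alone suffices, which is the appeal of cerebellar-style prediction for planning.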
## Limitations
1. **Simplified model:** The real cerebellar circuit has many more cell types (e.g., Lugaro cells, unipolar brush cells)
2. **Learning complexity:** Real cerebellar learning involves multiple sites (PF-PC, MF-DCN, etc.)
3. **Non-motor functions:** Cerebellum also contributes to cognition, emotion, language
4. **Timescale:** Biological learning takes minutes to hours; model converges faster
## References
- Chen, S. et al. (2026). "Microcircuits of the Cerebellar Cortex." *Brain Research Bulletin*. arXiv:2604.06397v1
- Ito, M. (2008). "Control of Mental Activities by Internal Models in the Cerebellum." *Nature Reviews Neuroscience*, 9(4), 304-313.
- Marr, D. (1969). "A Theory of Cerebellar Cortex." *Journal of Physiology*, 202(2), 437-470.
- Albus, J.S. (1971). "A Theory of Cerebellar Function." *Mathematical Biosciences*, 10(1-2), 25-61.
## Related Skills
- [[advanced-control-systems-2026]]
- [[adaptive-spiking-neurons-asn]]
- [[dual-envelope-mpc-vehicle-drift]]