---
name: dnn-guided-pso-optimization
description: "Deep Neural Network-guided Particle Swarm Optimization for tracking global optimal positions in dynamic environments. DNNs predict optimal particle movements and environment changes to adapt PSO to non-stationary optimization problems. Activation: DNN-guided PSO, dynamic optimization, particle swarm optimization, neural-guided optimization, adaptive PSO, online learning optimization."
---
# DNN-Guided Particle Swarm Optimization
A Particle Swarm Optimization (PSO) variant that incorporates Deep Neural Networks (DNNs) to track globally optimal positions in dynamic environments.
## Core Concept
Particle Swarm Optimization (PSO) is a heuristic approach for solving complex optimization problems. However, canonical PSO and its variants struggle to adapt efficiently to dynamic environments where the global optimum moves over time. This methodology incorporates DNNs to guide particles in pursuing optimal positions, enabling better adaptation to environment changes.
## Background: Particle Swarm Optimization
### Standard PSO
```
Particle velocity update:
v_i(t+1) = w*v_i(t) + c1*r1*(pbest_i - x_i(t)) + c2*r2*(gbest - x_i(t))
Particle position update:
x_i(t+1) = x_i(t) + v_i(t+1)
where:
- w: inertia weight
- c1, c2: cognitive and social coefficients
- r1, r2: random numbers in [0,1]
- pbest_i: personal best of particle i
- gbest: global best
```
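As a concrete reference point, the canonical update rules above can be run end-to-end on a static landscape. The following is a minimal NumPy sketch, not code from the paper; the swarm size, coefficients, seed, and the sphere test function are illustrative choices:

```python
import numpy as np

rng = np.random.default_rng(0)
n_particles, dim = 30, 2
w, c1, c2 = 0.7, 1.5, 1.5  # inertia, cognitive, social

def sphere(x):
    """Static test function; global optimum at the origin."""
    return np.sum(x ** 2, axis=-1)

# Initialize swarm state
x = rng.uniform(-10, 10, (n_particles, dim))
v = np.zeros((n_particles, dim))
pbest = x.copy()
pbest_fitness = sphere(x)
gbest = pbest[np.argmin(pbest_fitness)].copy()

for _ in range(100):
    # Per-particle, per-dimension random factors r1, r2 in [0, 1)
    r1, r2 = rng.random((2, n_particles, dim))
    v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
    x = x + v
    f = sphere(x)
    improved = f < pbest_fitness
    pbest[improved] = x[improved]
    pbest_fitness[improved] = f[improved]
    gbest = pbest[np.argmin(pbest_fitness)].copy()
```

On a static function like this, the swarm converges quickly; the limitations listed next are about what happens when the optimum moves after convergence.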
### Limitations in Dynamic Environments
1. **Convergence**: Particles converge, losing diversity needed for tracking moving optima
2. **Memory**: Relies on outdated pbest/gbest when environment changes
3. **Adaptation**: No mechanism to detect or respond to environment changes
4. **Exploration**: Insufficient exploration for tracking dynamic targets
## DNN-Guided PSO Architecture
### Neural Network Components
```
DNN-Guided PSO Framework:
┌─────────────────────────────────────────────────────┐
│ Environment │
│ (Dynamic fitness landscape with moving optimum) │
└─────────────┬───────────────────┬───────────────────┘
│ │
▼ ▼
┌─────────────────────┐ ┌──────────────────────┐
│ Particle Swarm │ │ DNN Models │
│ │ │ │
│ - Positions (x) │◄──┤ - Movement Predictor│
│ - Velocities (v) │ │ - Change Detector │
│ - Fitness values │ │ - Position Advisor │
└─────────────────────┘ └──────────────────────┘
│ │
└─────────┬─────────┘
▼
┌───────────────────┐
│ Guided Update │
│ (Enhanced PSO) │
└───────────────────┘
```
### DNN Components
1. **Movement Predictor Network**
- Predicts the movement direction of the global optimum
- Input: Historical particle positions and fitness values
- Output: Predicted optimum movement vector
2. **Change Detection Network**
- Detects environment changes
- Input: Current fitness distribution vs historical
- Output: Change probability and magnitude
3. **Position Advisor Network**
- Advises particle movements based on learned patterns
- Input: Particle state, environment features
- Output: Suggested velocity adjustment
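For contrast, a common non-neural baseline for the change-detection role is sentinel re-evaluation: re-compute the fitness of a few fixed points each iteration and flag a change when their values shift. A minimal sketch of that baseline (the function name and tolerance are this author's illustration, not the paper's):

```python
import numpy as np

def environment_changed(fitness_func, sentinels, cached_fitness, tol=1e-9):
    """Re-evaluate fixed sentinel points; any shift in their fitness
    signals that the landscape has changed since the last check."""
    current = np.array([fitness_func(s) for s in sentinels])
    changed = bool(np.any(np.abs(current - cached_fitness) > tol))
    return changed, current

# Quadratic bowl whose optimum we move by hand to simulate a change.
optimum = np.array([0.0, 0.0])
f = lambda x: float(np.sum((x - optimum) ** 2))
sentinels = np.array([[1.0, 1.0], [-2.0, 3.0]])
cached = np.array([f(s) for s in sentinels])

changed_before, cached = environment_changed(f, sentinels, cached)
optimum = optimum + 0.5  # the environment shifts
changed_after, cached = environment_changed(f, sentinels, cached)
```

The learned Change Detection Network plays the same role but can also estimate change severity from the swarm's own history, without spending extra evaluations on sentinels.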
## Implementation
### Pseudocode
```python
import torch
import torch.nn as nn
import numpy as np


class MovementPredictor(nn.Module):
    """DNN that predicts the movement direction of the global optimum."""

    def __init__(self, input_dim, hidden_dim=128):
        super().__init__()
        self.network = nn.Sequential(
            nn.Linear(input_dim, hidden_dim),
            nn.ReLU(),
            nn.Dropout(0.2),
            nn.Linear(hidden_dim, hidden_dim),
            nn.ReLU(),
            nn.Linear(hidden_dim, 3)  # delta_x, delta_y, confidence
        )

    def forward(self, x):
        return self.network(x)


class ChangeDetector(nn.Module):
    """DNN that detects environment changes from recent swarm history."""

    def __init__(self, history_length, n_particles, dim):
        super().__init__()
        # Each history step contributes n_particles * dim position values
        # plus n_particles fitness values.
        input_dim = history_length * n_particles * (dim + 1)
        self.network = nn.Sequential(
            nn.Linear(input_dim, 256),
            nn.ReLU(),
            nn.Linear(256, 64),
            nn.ReLU(),
            nn.Linear(64, 2)  # change probability, severity
        )

    def forward(self, history):
        x = history.view(history.size(0), -1)
        return torch.sigmoid(self.network(x))


class DNNGuidedPSO:
    """DNN-guided Particle Swarm Optimization."""

    def __init__(self, n_particles, dim, bounds, dnn_models):
        self.n_particles = n_particles
        self.dim = dim
        self.bounds = bounds
        self.dnn = dnn_models  # may be None: falls back to plain PSO

        # PSO parameters
        self.w = 0.7   # inertia
        self.c1 = 1.5  # cognitive
        self.c2 = 1.5  # social

        # Initialize particles
        self.x = np.random.uniform(bounds[0], bounds[1], (n_particles, dim))
        self.v = np.random.uniform(-1, 1, (n_particles, dim))
        self.pbest = self.x.copy()
        self.pbest_fitness = np.full(n_particles, float('inf'))
        self.gbest = self.x[0].copy()
        self.gbest_fitness = float('inf')

        # History for the DNNs
        self.position_history = []
        self.fitness_history = []
        self.history_length = 10

    def update_dnn_guidance(self):
        """Get guidance from the DNN models."""
        if self.dnn is None or len(self.position_history) < self.history_length:
            return None, 0.0

        # Prepare input for the DNNs
        recent_positions = np.array(self.position_history[-self.history_length:])
        recent_fitness = np.array(self.fitness_history[-self.history_length:])

        # Change detection
        change_input = torch.FloatTensor(np.concatenate([
            recent_positions.flatten(),
            recent_fitness.flatten()
        ])).unsqueeze(0)
        with torch.no_grad():
            change_pred = self.dnn['change_detector'](change_input)
        change_prob = change_pred[0, 0].item()
        change_severity = change_pred[0, 1].item()

        # Movement prediction, only when a change is likely
        if change_prob > 0.5:
            movement_input = torch.FloatTensor(
                recent_positions[-1].flatten()).unsqueeze(0)
            with torch.no_grad():
                movement_pred = self.dnn['movement_predictor'](movement_input)
            predicted_delta = movement_pred[0, :self.dim].numpy()
            confidence = movement_pred[0, -1].item()
            return predicted_delta, confidence

        return None, 0.0

    def update_particles(self, fitness_func):
        """Update particle positions with DNN guidance."""
        # Evaluate fitness
        fitness = np.array([fitness_func(x) for x in self.x])

        # Update history
        self.position_history.append(self.x.copy())
        self.fitness_history.append(fitness.copy())
        if len(self.position_history) > self.history_length:
            self.position_history.pop(0)
            self.fitness_history.pop(0)

        # Get DNN guidance
        predicted_movement, confidence = self.update_dnn_guidance()

        # Update personal best
        improved = fitness < self.pbest_fitness
        self.pbest[improved] = self.x[improved]
        self.pbest_fitness[improved] = fitness[improved]

        # Update global best
        best_idx = np.argmin(fitness)
        if fitness[best_idx] < self.gbest_fitness:
            self.gbest = self.x[best_idx].copy()
            self.gbest_fitness = fitness[best_idx]

        # Update velocity with per-particle, per-dimension random factors
        r1 = np.random.rand(self.n_particles, self.dim)
        r2 = np.random.rand(self.n_particles, self.dim)
        cognitive = self.c1 * r1 * (self.pbest - self.x)
        social = self.c2 * r2 * (self.gbest - self.x)

        # Add DNN guidance if available and confident enough
        if predicted_movement is not None and confidence > 0.6:
            # Guide particles toward the predicted optimum position
            dnn_guidance = confidence * predicted_movement
            self.v = self.w * self.v + cognitive + social + dnn_guidance
        else:
            self.v = self.w * self.v + cognitive + social

        # Update position and apply bounds
        self.x = self.x + self.v
        self.x = np.clip(self.x, self.bounds[0], self.bounds[1])

        return self.gbest, self.gbest_fitness

    def optimize(self, fitness_func, max_iter=1000):
        """Run optimization."""
        for iteration in range(max_iter):
            gbest, gbest_fitness = self.update_particles(fitness_func)
            # Decay inertia linearly from 0.9 toward 0.4
            self.w = 0.9 - 0.5 * iteration / max_iter
        return gbest, gbest_fitness


class DynamicEnvironment:
    """Example dynamic environment with a moving optimum."""

    def __init__(self, dim, change_frequency=50, change_magnitude=1.0):
        self.dim = dim
        self.change_frequency = change_frequency  # in fitness evaluations
        self.change_magnitude = change_magnitude
        self.iteration = 0
        self.optimum = np.random.randn(dim) * 5

    def fitness(self, x):
        """Fitness with a moving optimum. Note: the counter advances once
        per evaluation, not once per PSO iteration."""
        self.iteration += 1
        # Periodically move the optimum
        if self.iteration % self.change_frequency == 0:
            self.optimum += np.random.randn(self.dim) * self.change_magnitude
        return np.sum((x - self.optimum) ** 2)

    def get_optimum(self):
        return self.optimum.copy()


# Training the DNN models
def train_dnn_models(env_sampler, n_episodes=100):
    """Train DNN models on dynamic environments (data-collection sketch)."""
    # Collect training data
    trajectories = []
    for _ in range(n_episodes):
        env = env_sampler()
        # dnn_models=None makes update_particles behave as plain PSO
        pso = DNNGuidedPSO(n_particles=30, dim=2, bounds=(-10, 10),
                           dnn_models=None)
        trajectory = {'positions': [], 'fitness': [], 'optimum_moves': []}
        for step in range(200):
            trajectory['positions'].append(pso.x.copy())
            pso.update_particles(env.fitness)
            # update_particles already evaluated the swarm this step;
            # reuse its record rather than evaluating again
            trajectory['fitness'].append(pso.fitness_history[-1].copy())
        trajectories.append(trajectory)
    # Train movement predictor
    # Train change detector
    # ... (training loop)
    return {'movement_predictor': None, 'change_detector': None}
```
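The training loop itself is elided above. One ingredient it needs is supervised windows labeled by whether an environment change occurred inside them. A possible NumPy-only data-preparation sketch; `make_change_windows` and its labeling scheme are this author's illustration, not the paper's:

```python
import numpy as np

def make_change_windows(positions, fitness, change_steps, history_length=10):
    """Slice a recorded swarm trajectory into fixed-length windows and label
    each window 1 if an environment change fell inside it, else 0; the kind
    of (input, label) pairs a ChangeDetector training loop would consume."""
    X, y = [], []
    for t in range(history_length, len(positions)):
        window_pos = np.asarray(positions[t - history_length:t])
        window_fit = np.asarray(fitness[t - history_length:t])
        X.append(np.concatenate([window_pos.ravel(), window_fit.ravel()]))
        y.append(int(any(t - history_length <= c < t for c in change_steps)))
    return np.stack(X), np.array(y)

# Toy trajectory: 20 steps, 3 particles in 2-D, one change at step 12.
rng = np.random.default_rng(0)
positions = [rng.normal(size=(3, 2)) for _ in range(20)]
fitness = [rng.random(3) for _ in range(20)]
X, y = make_change_windows(positions, fitness, change_steps=[12])
```

Each feature row flattens `history_length * n_particles * (dim + 1)` values, matching the change detector's expected input layout; the change steps would come from instrumenting the environment during data collection.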
## Key Features
1. **Dynamic Tracking**: DNN predicts optimum movement for proactive tracking
2. **Change Detection**: Neural network detects environment changes
3. **Adaptive Guidance**: Confidence-weighted DNN guidance
4. **Learned Patterns**: DNN learns from historical optimization trajectories
5. **Hybrid Approach**: Combines swarm intelligence with neural predictions
## Advantages over Standard PSO
- **Better Tracking**: Proactive movement toward predicted optimum
- **Faster Adaptation**: Quick response to environment changes
- **Reduced Stagnation**: DNN guidance prevents premature convergence
- **Transfer Learning**: DNN can transfer knowledge across similar problems
## Applications
- Dynamic optimization problems
- Real-time parameter tuning
- Tracking moving targets
- Non-stationary fitness landscapes
- Online learning systems
- Adaptive control
## Activation Keywords
- DNN-guided PSO
- dynamic optimization
- particle swarm optimization
- neural-guided optimization
- adaptive PSO
- online learning optimization
- moving optimum tracking
- dynamic environment optimization
- neural PSO
- deep learning optimization
## References
- Paper: "Deep Neural Network-guided PSO for Tracking a Global Optimal Position in Complex Dynamic Environment"
- arXiv: 2604.14064v1
- Authors: Stephen Raharja, Toshiharu Sugawara
- Category: cs.NE
- Date: 2026-04-15
## Related Skills
- particle-swarm-optimization
- neural-network-optimization
- dynamic-optimization
- evolutionary-algorithms
- reinforcement-learning