---
name: deep-neural-network-guided-pso-tracking-global
description: "Deep Neural Network-guided Particle Swarm Optimization for tracking global optima in complex dynamic environments. Uses DNN as surrogate model to predict promising search regions, accelerating PSO convergence while maintaining tracking of moving optima. Activation: PSO, particle swarm optimization, DNN-guided search, global optimization, surrogate model"
---
# DNN-Guided Particle Swarm Optimization
## Overview
A hybrid optimization framework combining Deep Neural Networks with Particle Swarm Optimization (PSO) for tracking global optimal positions in complex dynamic environments. The DNN serves as a surrogate model predicting fitness landscapes, guiding particle placement toward promising regions and significantly accelerating convergence while maintaining the ability to track moving optima.
## Source Paper
- **Title**: Deep Neural Network-guided PSO for Tracking a Global Optimal Position in Complex Dynamic Environment
- **Authors**: Stephen Raharja, Toshiharu Sugawara
- **arXiv**: 2604.14064v1
- **Published**: 2026-04-15
- **Categories**: N/A
- **PDF**: https://arxiv.org/pdf/2604.14064v1
## Core Concepts
### Key Innovation
Traditional PSO limitations addressed:
1. **Premature convergence** to local optima in complex landscapes
2. **Poor tracking** of moving optima in dynamic environments
3. **Inefficient sampling** in high-dimensional spaces
DNN-guided PSO solutions:
1. Train DNN surrogate on evaluated points to predict fitness landscape
2. Use DNN predictions plus uncertainty to guide particle placement
3. Periodically re-evaluate to update surrogate model
4. Balance exploration (DNN uncertainty) and exploitation (DNN prediction)
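Step 4 is usually expressed as an acquisition score over candidate points. As a minimal sketch (assuming a lower-confidence-bound rule for minimization; the paper may use a different acquisition function), a candidate becomes attractive either because the surrogate predicts low fitness or because the surrogate is uncertain about it:

```python
import numpy as np

def lcb_score(pred_mean, pred_var, kappa=1.0):
    """Lower confidence bound for minimization: low predicted fitness
    (exploitation) and high predicted variance (exploration) both
    lower the score and make a candidate more attractive."""
    return pred_mean - kappa * np.sqrt(pred_var)

# Toy surrogate outputs for three candidate points
mean = np.array([1.0, 0.5, 0.9])
var = np.array([0.01, 0.01, 1.0])
scores = lcb_score(mean, var)
best = int(np.argmin(scores))  # candidate 2 wins: mediocre mean, high uncertainty
```

Raising `kappa` biases the search toward unexplored regions, which matters in dynamic environments where the optimum may have moved into a region the surrogate has not sampled recently.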
### Implementation
```python
import numpy as np
import torch
import torch.nn as nn


class DNNSurrogate(nn.Module):
    """DNN surrogate predicting fitness mean and variance at query points."""

    def __init__(self, input_dim, hidden_dims=(64, 64)):
        super().__init__()
        layers = []
        prev = input_dim
        for h in hidden_dims:
            layers.extend([nn.Linear(prev, h), nn.ReLU(), nn.Dropout(0.1)])
            prev = h
        layers.append(nn.Linear(prev, 2))  # two heads: mean and log-variance
        self.network = nn.Sequential(*layers)

    def forward(self, x):
        out = self.network(x)
        return out[:, 0], out[:, 1].exp()  # exp keeps the variance positive


class DNNGuidedPSO:
    def __init__(self, n_particles, dim, bounds, w=0.7, c1=1.5, c2=1.5):
        self.n = n_particles
        self.dim = dim
        self.bounds = bounds  # (low, high)
        self.w, self.c1, self.c2 = w, c1, c2
        self.positions = np.random.uniform(*bounds, (n_particles, dim))
        self.velocities = np.zeros_like(self.positions)
        self.pbest_pos = self.positions.copy()
        self.pbest_fit = np.full(n_particles, np.inf)
        self.gbest_pos = None
        self.gbest_fit = np.inf
        self.surrogate = DNNSurrogate(dim)
        self.optimizer = torch.optim.Adam(self.surrogate.parameters(), lr=1e-3)
        self.history = {'x': [], 'y': []}

    def _train_surrogate(self, epochs=20):
        """Fit the surrogate on all evaluated (position, fitness) pairs."""
        x = torch.tensor(self.history['x'], dtype=torch.float32)
        y = torch.tensor(self.history['y'], dtype=torch.float32)
        self.surrogate.train()
        for _ in range(epochs):
            self.optimizer.zero_grad()
            mean, var = self.surrogate(x)
            # Gaussian negative log-likelihood: penalizes both prediction
            # error and overconfident (too-small) variance estimates
            loss = (0.5 * (y - mean) ** 2 / var + 0.5 * var.log()).mean()
            loss.backward()
            self.optimizer.step()

    def _guide_worst_particles(self, k=2, n_candidates=100):
        """Relocate the k worst particles to candidates the surrogate favors."""
        candidates = np.random.uniform(*self.bounds, (n_candidates, self.dim))
        self.surrogate.eval()
        with torch.no_grad():
            mean, var = self.surrogate(torch.tensor(candidates, dtype=torch.float32))
        # Lower confidence bound: exploit low predicted fitness,
        # explore high predicted uncertainty
        lcb = (mean - var.sqrt()).numpy()
        worst = np.argsort(self.pbest_fit)[-k:]
        self.positions[worst] = candidates[np.argsort(lcb)[:k]]
        self.velocities[worst] = 0.0

    def step(self, fitness_fn):
        fitness = np.array([fitness_fn(p) for p in self.positions])
        self.history['x'].extend(self.positions.tolist())
        self.history['y'].extend(fitness.tolist())
        # Update personal and global bests
        improved = fitness < self.pbest_fit
        self.pbest_fit[improved] = fitness[improved]
        self.pbest_pos[improved] = self.positions[improved]
        best_idx = np.argmin(self.pbest_fit)
        if self.pbest_fit[best_idx] < self.gbest_fit:
            self.gbest_fit = self.pbest_fit[best_idx]
            self.gbest_pos = self.pbest_pos[best_idx].copy()
        # DNN-guided exploration once enough samples have accumulated
        if len(self.history['x']) > 20:
            self._train_surrogate()
            self._guide_worst_particles()
        # Standard PSO velocity and position update
        r1, r2 = np.random.random((2, self.n, self.dim))
        self.velocities = (self.w * self.velocities
                           + self.c1 * r1 * (self.pbest_pos - self.positions)
                           + self.c2 * r2 * (self.gbest_pos - self.positions))
        self.positions = np.clip(self.positions + self.velocities, *self.bounds)
        return self.gbest_fit
```
## Applications
1. **Robot path planning** in dynamic environments
2. **Hyperparameter optimization** for ML models
3. **Real-time control** with changing objectives
4. **Sensor network optimization**
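For the hyperparameter-optimization use case, the optimizer searches a continuous box, so discrete or log-scaled hyperparameters need an encoding. A hypothetical decoding scheme (the names, ranges, and `decode` helper are illustrative, not from the paper) might map each particle coordinate in [0, 1] to one hyperparameter:

```python
import numpy as np

def decode(x):
    """Map a particle position in [0, 1]^3 to concrete hyperparameters."""
    return {
        "learning_rate": 10 ** (-4 + 3 * x[0]),         # log-uniform in [1e-4, 1e-1]
        "batch_size": int(2 ** (4 + round(3 * x[1]))),  # one of {16, 32, 64, 128}
        "dropout": 0.5 * x[2],                          # uniform in [0, 0.5]
    }

params = decode(np.array([0.0, 1.0, 0.5]))
# → {'learning_rate': 1e-4, 'batch_size': 128, 'dropout': 0.25}
```

The fitness function then trains a model with `decode(position)` and returns its validation loss, so each fitness evaluation is expensive, which is exactly the regime where a surrogate pays off.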
## Activation Keywords
- DNN-guided PSO
- particle swarm optimization
- surrogate model optimization
- neural network optimization
- dynamic optimization
## Tools Used
- `Read` - Read existing files and documentation
- `Write` - Create new files and documentation
- `Bash` - Execute commands when needed
## Instructions for Agents
1. Identify user's intent and specific requirements
2. Gather necessary context from files or user input
3. Execute appropriate actions using available tools
4. Provide clear results and suggest next steps
## Examples
### Basic usage
```
User: "Help me with deep neural network guided pso tracking global"
→ Understand requirements → Execute actions → Provide results
```
### Advanced usage
```
User: "I need detailed deep neural network guided pso tracking global assistance"
→ Clarify scope → Provide comprehensive solution → Follow up
```