---
name: taskmaster-mapper
description: >-
  Analyzes task lists (.taskmaster/tasks.json or any task/subtask structure) and, for each task and subtask,
  identifies which skills and agents are most relevant based on the task context, then adds them
  to the `details` section. Use this skill whenever the user wants to enrich a task file with
  skill/agent recommendations, wants to know "which agent should handle this task", asks to
  annotate tasks with tooling context, or says things like "add skills and agents to my tasks",
  "map agents to tasks", "which skills apply here", "enrich my task spec with agents", or
  provides a .taskmaster/tasks.json and asks what skills/agents to use. Always trigger this skill when the
  user wants to associate available skills or specialist agents with a list of tasks or subtasks,
  even if they phrase it as "who should do this" or "what tool fits each task".
---
# Task Skills Mapper
Reads a task list in any supported format (a Taskmaster `.taskmaster/tasks.json`, a markdown task list, or a plain array of task objects) and enriches each task and subtask with `suggested_skills` and `suggested_agents`, both as structured fields and as a block appended to `details`. The goal is to give the developer or orchestrator immediate clarity on which specialist capabilities each task needs before execution begins.
## Process
### 1. Ingest and Parse
Accept the task structure in whatever format is provided:
- `.taskmaster/tasks.json` (Taskmaster format with `tasks[]` and nested `subtasks[]`)
- A flat array of task objects
- A markdown list with task descriptions
- A single task description pasted as text
Extract the following fields from each task/subtask to use as analysis input:
- `title`
- `description`
- `details`
- `testStrategy` (when present)
If the input is JSON, output enriched JSON. If the input is markdown, output enriched markdown. Match the user's format.
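For the Taskmaster JSON case, the ingest step above amounts to walking `tasks[]` (and each nested `subtasks[]`) and concatenating the analysis fields. A minimal sketch, assuming the Taskmaster layout described above (field and key names other than `tasks`, `subtasks`, `title`, `description`, `details`, and `testStrategy` are not part of the format):

```python
def analysis_text(task: dict) -> str:
    """Join the fields used as analysis input for one task or subtask."""
    parts = (task.get(k, "") for k in ("title", "description", "details", "testStrategy"))
    return "\n".join(p for p in parts if p)

def iter_tasks(data: dict):
    """Yield every task, then its nested subtasks, in document order."""
    for task in data.get("tasks", []):
        yield task
        yield from task.get("subtasks", [])
```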
### 2. Analyze Each Task and Subtask
For each task and each nested subtask, reason over the combined text (title + description + details + testStrategy) and apply the mapping tables in `references/mapping.md`.
Reasoning approach:
- Identify the **primary domain** (security, testing, refactoring, documentation, architecture, etc.)
- Identify the **type of action** (implement, review, audit, test, plan, document, research, etc.)
- Match domain + action to the skills and agents in the mapping tables
- Pick only the most relevant ones — don't list everything. Aim for 1–3 skills and 1–2 agents per task. If a task is very broad, up to 4–5 is acceptable, but be selective.
When uncertain between two candidates, prefer the one whose description most closely matches the task's **primary deliverable** (what must be produced), not just what it touches.
### 3. Output Format
**For JSON input**, add two new fields inside each task and subtask. Append to `details` a clear section separator followed by the suggestions:
```json
{
"details": "<original details>\n\n---\n**Suggested Skills:** skill-a, skill-b\n**Suggested Agents:** agent-x",
"suggested_skills": ["skill-a", "skill-b"],
"suggested_agents": ["agent-x"]
}
```
Keep `suggested_skills` and `suggested_agents` as clean arrays for programmatic use, and the inline append in `details` for human readability.
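The JSON enrichment step can be sketched as a small pure function over one task object (assuming the separator and field names shown above):

```python
SEPARATOR = "\n\n---\n"

def enrich(task: dict, skills: list, agents: list) -> dict:
    """Return a copy of the task with the suggestions block and arrays added."""
    block = (f"**Suggested Skills:** {', '.join(skills)}\n"
             f"**Suggested Agents:** {', '.join(agents)}")
    enriched = dict(task)
    existing = enriched.get("details", "")
    # If details is missing or empty, the block becomes the whole field.
    enriched["details"] = (existing + SEPARATOR + block) if existing else block
    enriched["suggested_skills"] = skills
    enriched["suggested_agents"] = agents
    return enriched
```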
**For markdown/text input**, append to each task entry:
```
> **Skills:** skill-a, skill-b
> **Agents:** agent-x
```
### 4. Preserve Everything Else
Do **not** modify any field other than `details` (where you append) and the two new fields. IDs, titles, statuses, dependencies — leave them untouched.
### 5. Output the Full Structure
Return the complete enriched structure, not a diff or partial. If the input is large (>50 tasks), process in sections and make clear you're doing so.
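Sectioned processing of a large task list is just fixed-size chunking over the top-level `tasks[]` array; a minimal sketch (the 50-task threshold matches the guidance above):

```python
def chunks(tasks: list, size: int = 50):
    """Yield the task list in consecutive sections of at most `size` tasks."""
    for i in range(0, len(tasks), size):
        yield tasks[i:i + size]
```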
---
## Output Example
**Input task:**
```json
{
"title": "Implement Secure Path Normalization for SVG Assets",
"description": "Refactor local SVG load logic to use filepath.Join and filepath.Abs",
"details": "Replace string concatenation with filepath.Join() ...",
"testStrategy": "Test with traversal paths like ../etc/passwd"
}
```
**Output task:**
```json
{
"title": "Implement Secure Path Normalization for SVG Assets",
"description": "Refactor local SVG load logic to use filepath.Join and filepath.Abs",
"details": "Replace string concatenation with filepath.Join() ...\n\n---\n**Suggested Skills:** coding-standards, code-review\n**Suggested Agents:** cybersecurity-specialist, developer-specialist",
"testStrategy": "Test with traversal paths like ../etc/passwd",
"suggested_skills": ["coding-standards", "code-review"],
"suggested_agents": ["cybersecurity-specialist", "developer-specialist"]
}
```
---
## Edge Cases
- **No details field**: Add one containing only the suggestions block.
- **Already has suggestions**: Check whether they were generated by this skill (look for `---\n**Suggested Skills:**`). If yes, replace only that section. If no, append after existing content.
- **Very short tasks** (title only, no description): Base the analysis on the title alone, plus `testStrategy` if present.
- **Cross-cutting tasks** (e.g., end-to-end tests that touch everything): List the union of all relevant skills/agents but cap at 5 to avoid noise.
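The "already has suggestions" case is what makes re-runs idempotent: strip any block this skill previously appended before writing a fresh one. A sketch, keyed on the same marker described above:

```python
MARKER = "\n\n---\n**Suggested Skills:**"

def strip_previous_suggestions(details: str) -> str:
    """Drop a suggestions block added by an earlier run of this skill, if any."""
    idx = details.find(MARKER)
    return details[:idx] if idx != -1 else details
```

Suggestions written by a human or another tool will not match the marker and are left in place, with the new block appended after them.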