---
name: human-emulate
description: Emulate how a human expert would approach a complex task — decompose it into granular steps, build a task table, then execute each step with dedicated agentic teams orchestrated by coordinator agents. Use this skill when the user says "human emulate", "think like a human expert", "how would an expert do this", "decompose this properly", "break this down like a pro", "do this the way a real engineer would", "methodical approach", "expert mode", or when facing a complex multi-step task that benefits from rigorous decomposition before execution. Also triggers on "/human-emulate". This is the most thorough execution mode — use it for tasks where getting it right matters more than getting it fast.
---
# Human Emulate — Expert Decomposition + Agentic Execution
You are emulating how a top-tier human expert would approach a complex task. Not how an AI would approach it — how a HUMAN would. Humans don't try to do everything at once. They pause, think, decompose, plan dependencies, then delegate to specialists while maintaining oversight.
## Philosophy
The difference between a junior and a senior isn't speed — it's decomposition quality. A senior engineer facing a complex task will:
1. Stop and think before touching anything
2. Break the problem into pieces that can be reasoned about independently
3. Identify which pieces depend on each other and which are parallel
4. Assign the right specialist to each piece
5. Set up communication channels between specialists
6. Oversee the whole thing, catching integration issues early
That's exactly what this skill does, but with agentic teams instead of human specialists.
## Step 1: Deep Expert Thinking
Before decomposing, think through the task the way an expert in the relevant domain would. This is NOT a quick brainstorm — this is the kind of thinking a principal engineer does before a whiteboard session.
Ask yourself (and write down the answers):
**Understanding the problem:**
- What is the ACTUAL goal? (not what was literally asked, but what the user needs to achieve)
- What are the constraints? (time, quality, compatibility, existing code, user preferences)
- What are the risks? (what could go wrong, what's hard to undo, what has dependencies)
- What does "done" look like? (specific, testable criteria)
**Domain expertise:**
- If a human expert in [relevant domain] were doing this, what would their first 3 moves be?
- What would they check before starting?
- What mistakes would a novice make that the expert avoids?
- What's the non-obvious hard part? (every complex task has one — find it)
**Architecture:**
- What are the natural seams in this problem? (where does one piece end and another begin)
- Which pieces are independent? Which have ordering constraints?
- Where do pieces need to share information?
- What's the critical path? (the longest chain of dependent steps)
Write this thinking out explicitly — show it to the user. This transparency is the value.
## Step 2: Build the Decomposition Table
Break the task into granular steps. Each step should be:
- **Small enough** that a single focused agent can complete it
- **Well-defined enough** that success/failure is unambiguous
- **Independent where possible** so steps can run in parallel
Present as a numbered table:
```markdown
## Task Decomposition
| # | Step | Description | Dependencies | Parallel Group | Agent Type | Est. Complexity |
|---|------|-------------|-------------|----------------|------------|-----------------|
| 1 | Understand existing code | Read and map the relevant codebase | None | A | Explorer | Low |
| 2 | Design the data model | Define types, schemas, relationships | Step 1 | B | Architect | Medium |
| 3 | Implement core logic | Write the main business logic | Step 2 | C | Builder | High |
| 4 | Write unit tests | Write tests against the designed interfaces | Step 2 | C | Tester | Medium |
| 5 | Build the API layer | Create endpoints, validation, error handling | Step 2 | C | Builder | Medium |
| 6 | Integration tests | Test API + core logic together | Steps 3,4,5 | D | Tester | Medium |
| 7 | UI components | Build the frontend interface | Step 2 | C | Builder | Medium |
| 8 | E2E verification | Full flow test | Steps 5,6,7 | E | Verifier | Low |
```
**Column definitions:**
- **Dependencies**: Which steps must complete before this one can start
- **Parallel Group**: Steps in the same group can run simultaneously (A runs first, then all B's in parallel, then all C's, etc.)
- **Agent Type**: What kind of agent this step needs (Explorer, Architect, Builder, Tester, Verifier, Researcher)
- **Est. Complexity**: Low/Medium/High — affects how much context the agent needs
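The grouping logic implied by these columns can be sketched in code. A minimal illustration (not part of the skill itself; the data shapes are hypothetical) that derives parallel groups from the Dependencies column, and rejects circular dependencies as a side effect:

```python
def parallel_groups(deps: dict[int, set[int]]) -> list[set[int]]:
    """Derive parallel groups from a dependency map.

    deps maps each step number to the set of steps it depends on.
    Each returned group contains only steps whose dependencies are all
    satisfied by earlier groups, so its members can run simultaneously.
    Raises ValueError on circular dependencies.
    """
    remaining = {step: set(d) for step, d in deps.items()}
    done: set[int] = set()
    groups: list[set[int]] = []
    while remaining:
        # A step is ready when every dependency has already completed.
        ready = {s for s, d in remaining.items() if d <= done}
        if not ready:
            raise ValueError(f"circular dependency among {sorted(remaining)}")
        groups.append(ready)
        done |= ready
        for s in ready:
            del remaining[s]
    return groups

# A small illustrative dependency map (not the example table above):
# step 4 needs 2 and 3, which both need 1.
print(parallel_groups({1: set(), 2: {1}, 3: {1}, 4: {2, 3}}))
# → [{1}, {2, 3}, {4}]
```

This is the same leveling idea as a topological sort, except steps at the same depth are emitted together rather than in sequence.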
### Decomposition Rules
1. **No step should require more than one domain of expertise** — if a step needs both database knowledge and frontend knowledge, split it
2. **Dependencies must be acyclic** — no circular dependencies
3. **Each parallel group should be completable independently** — agents in the same group don't need to talk to each other
4. **The first group should always be research/understanding** — never start building without understanding
5. **The last group should always be verification** — never declare done without checking
Show the table to the user and ask: "Does this decomposition look right? Want to adjust any steps before I start executing?"
## Step 3: Set Up Orchestration
Before executing, establish the orchestration structure:
**Orchestrator Agent** — You (Claude Code) serve as the top-level orchestrator. You:
- Track which steps are complete, in-progress, or blocked
- Pass outputs from completed steps to dependent steps as context
- Detect when a step's output changes assumptions for other steps
- Make go/no-go decisions at each group boundary
**Group Coordinators** — For parallel groups with 3+ agents, spawn a coordinator agent that:
- Receives outputs from all agents in its group
- Checks for conflicts or inconsistencies between agent outputs
- Synthesizes a group summary for the next group
- Flags issues that need the orchestrator's (your) attention
**Communication Protocol:**
- Each agent outputs a structured result: `{status, output, files_changed, issues_found, context_for_next}`
- Group coordinators merge these into a group report
- The orchestrator (you) reviews the group report before advancing to the next group
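As a rough sketch of this protocol in Python — the field names follow the structured result above; everything else (types, merge policy) is an illustrative assumption, not something the skill prescribes:

```python
from dataclasses import dataclass, field

@dataclass
class AgentResult:
    """Structured result each agent returns (fields from the protocol above)."""
    status: str                          # "complete" | "blocked" | "needs-review"
    output: str
    files_changed: list[str] = field(default_factory=list)
    issues_found: list[str] = field(default_factory=list)
    context_for_next: str = ""

def group_report(results: list[AgentResult]) -> dict:
    """Merge agent results into the report a group coordinator produces.

    A file touched by more than one agent in the group is flagged as a
    potential conflict for the orchestrator to review.
    """
    touched: dict[str, int] = {}
    for r in results:
        for f in r.files_changed:
            touched[f] = touched.get(f, 0) + 1
    return {
        "all_complete": all(r.status == "complete" for r in results),
        "issues": [i for r in results for i in r.issues_found],
        "conflicts": [f for f, n in touched.items() if n > 1],
        "context_for_next": "\n".join(
            r.context_for_next for r in results if r.context_for_next
        ),
    }
```

The conflict check here is deliberately coarse: overlapping `files_changed` lists are a cheap early signal that two agents made decisions in the same territory.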
## Step 4: Execute Group by Group
Process each parallel group using the Agent tool (or `/swarm` for groups with 3+ agents):
### For each group:
**4a. Prepare context bundle**
Gather outputs from all completed dependency steps. Each agent in this group gets:
- The original task description
- The decomposition table (so they know where they fit)
- Outputs from their specific dependency steps
- Any issues or decisions from previous groups
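A hypothetical helper for assembling that bundle might look like this — all names and data shapes are illustrative; the skill prescribes only the bundle's contents, not this code:

```python
def context_bundle(step: int, deps: dict[int, set[int]],
                   results: dict[int, dict], task: str, table_md: str,
                   prior_issues: list[str]) -> str:
    """Assemble the context bundle for one agent (step 4a).

    Pulls only the outputs of this step's direct dependencies, so each
    agent sees what it needs without the full transcript of every step.
    """
    dep_outputs = "\n\n".join(
        f"Step {d} output:\n{results[d]['output']}" for d in sorted(deps[step])
    )
    issues = "\n".join(f"- {i}" for i in prior_issues) or "- none"
    return (
        f"OVERALL GOAL: {task}\n\n"
        f"DECOMPOSITION TABLE:\n{table_md}\n\n"
        f"CONTEXT FROM DEPENDENCY STEPS:\n{dep_outputs or 'none'}\n\n"
        f"ISSUES / DECISIONS SO FAR:\n{issues}"
    )
```

Filtering to direct dependencies is a design choice: it keeps each agent's context small, at the cost of the orchestrator having to forward anything unexpected via the issues list.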
**4b. Launch agents**
For each step in the current parallel group, spawn an agent:
```
Agent prompt template:
"You are executing Step [N] of a larger task.
OVERALL GOAL: [the user's original task]
YOUR STEP: [step description]
YOUR ROLE: [agent type — e.g., Builder, Tester]
CONTEXT FROM PREVIOUS STEPS:
[outputs from dependency steps]
DELIVERABLES:
1. Complete the step as described
2. Report: {status: complete/blocked/needs-review, output: [what you produced],
files_changed: [list], issues_found: [any problems], context_for_next: [what the
next steps need to know]}
CONSTRAINTS:
- Stay within scope — do NOT do work that belongs to other steps
- If you discover something that changes the plan, report it as an issue rather than
going off-script
- If blocked, explain exactly what you need"
```
**4c. Monitor and collect results**
As agents complete:
- Read their outputs
- Check for issues or blockers
- If an agent is blocked, decide: provide what it needs, or restructure
**4d. Group checkpoint**
After all agents in a group complete:
- Review all outputs together
- Check for conflicts (did two agents make incompatible decisions?)
- If conflicts exist, resolve them before advancing
- Update the task table with completion status
- Show the user a progress update:
```markdown
## Progress Update — Group [X] Complete
| # | Step | Status | Notes |
|---|------|--------|-------|
| 1 | Understand existing code | ✅ Done | Found 3 key files, mapped dependencies |
| 2 | Design data model | ✅ Done | 4 types defined, schema written |
| 3 | Implement core logic | 🔄 In Progress | Group C |
| 4 | Write unit tests | 🔄 In Progress | Group C |
| 5 | Build API layer | 🔄 In Progress | Group C |
| 6 | Integration tests | ⏳ Waiting | Blocked on Group C |
| 7 | UI components | 🔄 In Progress | Group C |
| 8 | E2E verification | ⏳ Waiting | Blocked on Group D |
```
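Rendering that update from tracked state can be sketched as follows (illustrative only; the status vocabulary and table layout mirror the example above):

```python
STATUS_ICONS = {
    "done": "✅ Done",
    "in_progress": "🔄 In Progress",
    "waiting": "⏳ Waiting",
}

def progress_table(steps: list[tuple[int, str]], status: dict[int, str],
                   notes: dict[int, str]) -> str:
    """Render the per-group progress update as a markdown table."""
    rows = ["| # | Step | Status | Notes |",
            "|---|------|--------|-------|"]
    for n, name in steps:
        rows.append(f"| {n} | {name} | {STATUS_ICONS[status[n]]} | {notes.get(n, '')} |")
    return "\n".join(rows)
```

Keeping the rendering separate from the status-tracking state means the same table can be re-emitted unchanged at every checkpoint.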
**4e. Advance or adapt**
- If everything looks good → advance to next group
- If issues were found → decide whether to re-run a step, add a new step, or adjust the plan
- If the user gave feedback → incorporate it before advancing
## Step 5: Final Verification
After all groups complete:
1. **Integration check** — do all the pieces fit together? Run the code, check for conflicts
2. **Against the original goal** — does the result actually achieve what the user asked for?
3. **Expert review** — if a human expert looked at this, what would they critique?
4. **Present to user** — show the final result with a summary of what was done at each step
```markdown
## Execution Complete
**Original task**: [what the user asked]
**Steps completed**: [N/N]
**Issues encountered**: [summary of problems and how they were resolved]
**Result**: [what was produced]
### Per-step summary:
[collapsed details of each step's output]
```
## Adaptation Patterns
**Task too small for full decomposition** (< 3 steps):
- Skip the table, skip the agents, just do it with expert thinking from Step 1
**Task evolves mid-execution** (user changes requirements):
- Pause, re-decompose from the current state, show the updated table, get approval
**Agent fails or produces bad output**:
- Don't retry blindly — analyze WHY it failed
- Adjust the step's scope or provide more context
- If the step was misconceived, restructure
**Critical path is too long** (too many sequential dependencies):
- Look for ways to parallelize by doing speculative work
- Or break long chains by introducing intermediate checkpoints where the user can course-correct
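Measuring the critical path is a longest-path computation over the dependency DAG. A sketch, assuming acyclic dependencies (the data shape is illustrative):

```python
def critical_path(deps: dict[int, set[int]]) -> list[int]:
    """Return the longest chain of dependent steps (the critical path).

    Memoized depth-first search over the dependency map; assumes the
    dependencies are acyclic.
    """
    memo: dict[int, list[int]] = {}

    def chain(step: int) -> list[int]:
        # Longest chain ending at this step = longest chain among its
        # dependencies, plus the step itself.
        if step not in memo:
            best: list[int] = []
            for d in deps[step]:
                c = chain(d)
                if len(c) > len(best):
                    best = c
            memo[step] = best + [step]
        return memo[step]

    return max((chain(s) for s in deps), key=len)
```

If the returned chain covers most of the table, the decomposition is effectively sequential and worth revisiting with the two tactics above.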
## When NOT to Use This Skill
- Simple tasks (< 10 minutes of work) — overkill
- Tasks with no decomposable structure (e.g., "read this file and explain it")
- When the user explicitly wants speed over thoroughness
This skill is for when getting it RIGHT matters. It's slow. It's thorough. It's how experts actually work.