---
name: plan-review
description: "Iterative engineering plan review with optional 4-persona perspective (CEO/Eng/Design/DevEx). This skill should be used when the user asks to 'review the plan', 'validate plan', 'consolidate plans', '/a-plan-review', or has a plan needing multi-pass simulation and HITL gates."
mode: [engineering]
effort: high
context: fork
agent: plan-reviewer
version: 2.0.0
---
# Iterative Plan Review (4-persona)
Review, simulate, and consolidate engineering plans through multi-pass iterative improvement. v2.0 adds an optional persona-specific perspective (CEO / Eng / Design / DevEx) loaded from cognitive-patterns reference files.
## When to Use
- Before implementation: validate plan quality and readiness (Gate G1)
- After major plan changes: re-score and re-validate
- When consolidating multiple sub-plans into a mega plan
- Periodically: every 2-3 sprints for active plans
- When user says: "review plan", "validate plan", "review this plan", "plan review --persona ceo"
## v2.0 Persona Selector (FIRST step)
When invoked, ask the user via AskUserQuestion (format v2 per `references/askuserquestion-format-v2.md`) which persona lens to apply:
```
D1 — Which persona lens for this review?
Project/branch/task: <current branch>
ELI10: A plan review can use one of 4 distinct lenses, each with its own
cognitive patterns. CEO challenges scope and ambition. Eng locks
architecture, data flow, edge cases. Design rates UI/UX dimensions
and challenges aesthetic choices. DevEx audits API/CLI/SDK
discoverability and TTHW. Pick one — or pick "all" to run sequentially.
Stakes if we pick wrong: wrong lens = miss the relevant landmines
(e.g., a CEO lens won't catch a silent failure mode an Eng review would).
Recommendation: <persona> because <one-line reason from plan content>
Pros / cons:
A) CEO — challenge scope, find 10× product, 4 modes (EXPANSION/SELECTIVE/HOLD/REDUCTION)
✅ Best for: greenfield plans, mega plans, strategic pivots
❌ Less surgical than Eng for code-level concerns
B) Eng — lock architecture, error paths, edge cases, observability
✅ Best for: code-level plans, refactors, infrastructure changes
❌ Less ambitious than CEO for scope expansion
C) Design — rate UI/UX dimensions 0-10, challenge aesthetic choices
✅ Best for: plans with UI/UX scope, customer-facing features
❌ Skip if plan is backend-only
D) DevEx — TTHW, magical moments, friction points, persona traces
✅ Best for: API/CLI/SDK/library plans, documentation reviews
❌ Skip if plan is internal-only
E) All — sequential CEO → Design → Eng → DevEx (full coverage)
✅ Maximum coverage, surfaces cross-persona insights
❌ Longer (4× single-persona), some patterns may not apply
Net: Single persona = focused depth. All = full coverage but longer cycle.
```
**Auto-detect**: if plan has `frontend/` or UI/UX mockups → suggest Design or All. If plan is API/CLI/SDK → suggest DevEx. If plan is greenfield strategic → suggest CEO. If plan is implementation-detail → suggest Eng.
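The auto-detect heuristic above can be sketched as a keyword scan over the plan text. This is a minimal illustration under stated assumptions: the keyword lists and function name are not part of the skill, and real detection would need smarter matching than raw substrings.

```python
# Hypothetical sketch of the persona auto-detect heuristic.
# Keyword lists are illustrative assumptions; substring matching
# is deliberately crude (e.g. "api" also matches "rapid").
def suggest_persona(plan_text: str) -> str:
    text = plan_text.lower()
    has_ui = any(k in text for k in ("frontend/", "mockup", "ui/ux"))
    has_devex = any(k in text for k in ("api", "cli", "sdk", "library"))
    if has_ui and has_devex:
        return "all"     # mixed scope -> full 4-persona coverage
    if has_ui:
        return "design"
    if has_devex:
        return "devex"
    if "greenfield" in text or "strategic" in text:
        return "ceo"
    return "eng"         # implementation-detail default
```

The return value would feed the D1 recommendation line, never bypass it: the user still picks the lens.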
## Persona Loading
After persona selected, **read the matching cognitive-patterns reference file**:
| Persona | Ref file | Patterns count |
|---------|----------|---------------:|
| `ceo` | `references/cognitive-patterns-ceo.md` | 18 |
| `eng` | `references/cognitive-patterns-eng.md` | 15 |
| `design` | `references/cognitive-patterns-design.md` | 12 |
| `devex` | `references/cognitive-patterns-devex.md` | 10 |
Use the Read tool. Internalize the patterns — they're instincts, not checklist items. Apply them throughout the review.
For `all` persona, sequentially load each ref file at start of each phase (CEO → Design → Eng → DevEx).
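The table and the `all` ordering can be expressed as a small mapping (a sketch; the constant and function names are assumptions, not plugin internals):

```python
# Hypothetical mapping of persona -> cognitive-patterns reference file,
# mirroring the table above.
PERSONA_REFS = {
    "ceo":    "references/cognitive-patterns-ceo.md",
    "eng":    "references/cognitive-patterns-eng.md",
    "design": "references/cognitive-patterns-design.md",
    "devex":  "references/cognitive-patterns-devex.md",
}

# For `all`, phases run in the documented order.
ALL_PHASE_ORDER = ["ceo", "design", "eng", "devex"]

def refs_to_load(persona: str) -> list[str]:
    personas = ALL_PHASE_ORDER if persona == "all" else [persona]
    return [PERSONA_REFS[p] for p in personas]
```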
## Step 0 — Scope Challenge Gate (BEFORE Pass 1)
**MANDATORY** before any review section starts.
Read `references/scope-challenge-gate.md` and run its scope challenge questions:
1. Existing solution check (grep for overlap)
2. Minimum scope check (what's deferrable?)
3. **Complexity check** (8+ files / 2+ new classes / 3+ new modules → SMELL)
4. Search check (Layer 1/2/3 + EUREKA tagging)
5. TODOS cross-reference
6. Completeness check (Boil the Lake — shortcut vs complete?)
7. Distribution check (if applicable)
**If any trigger fires** → STOP. Force AskUserQuestion (AUQ format v2). Wait for user response. Do NOT proceed to Pass 1, do NOT edit plan, do NOT call ExitPlanMode.
## Workflow: Single Plan Review
### Pass 1 — Score & Identify Weaknesses
1. Read the plan file completely
2. Score against 15 criteria (A-O sections):
| # | Criterion | 1 pt if... |
|---|-----------|------------|
| 1 | Vision (A) | explains WHY + chain impact |
| 2 | Inventory (B) | lists code + reusable hooks |
| 3 | Architecture (C) | has diagram + sourced decisions |
| 4 | DB Schema (D) | includes migrations |
| 5 | Backend Services (E) | detailed per service |
| 6 | API Endpoints (F) | endpoints table present |
| 7 | Frontend (G) | UX mockups included |
| 8 | Persona Impact (H) | assessed |
| 9 | Security (I) | RBAC defined |
| 10 | AI-native (J) | observability included |
| 11 | Infrastructure (K) | targets defined |
| 12 | Reusability (L) | multi-tenant considered |
| 13 | Traceability (M) | audit covered |
| 14 | Phases (N) | effort + deps stated |
| 15 | Verification (O) | commands listed |
3. For each section scoring 0: flag as WEAK
4. **For each finding, attach confidence (1-10)** per `references/confidence-calibration.md`:
- Format: `[SEVERITY] (confidence: N/10) section:line — description`
- Suppress confidence ≤ 4 to appendix (unless P0 severity)
5. Output: score table + WEAK sections + findings (with confidence) + persona-specific recommendations
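Pass 1's scoring and confidence-suppression rules can be sketched as follows. The data shapes (`scores` as criterion number → 0/1, findings as dicts with `confidence` and `severity` keys) are assumptions for illustration:

```python
# Hypothetical sketch of Pass 1 scoring and finding suppression.
def pass1_summary(scores: dict[int, int], findings: list[dict]):
    total = sum(scores.values())                       # out of 15
    weak = [n for n, pts in scores.items() if pts == 0]
    # Suppress confidence <= 4 to an appendix, unless P0 severity
    # (per references/confidence-calibration.md).
    main = [f for f in findings
            if f["confidence"] > 4 or f["severity"] == "P0"]
    appendix = [f for f in findings if f not in main]
    return total, weak, main, appendix
```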
### Pass 2 — Persona-Specific Enrichment
Apply the loaded persona's cognitive patterns to enrich WEAK sections.
For each WEAK section (max 3 per pass):
1. Identify which cognitive pattern from the loaded ref applies
2. Cite the pattern name + author when relevant ("Per Brooks' essential vs. accidental complexity (#10), this section needs ...")
3. Draft improved section content informed by the pattern
4. Present to user via AskUserQuestion (format v2) for HITL approval — one issue per AUQ call
5. Apply approved improvements via Edit
**STOP** between Pass 2 issues. Do NOT batch multiple issues. Do NOT proceed to Pass 3 until user responds to each Pass 2 AUQ.
### Pass 3 — Cross-Section Consistency
Verify internal consistency:
- [ ] Phases (N) reference files from Inventory (B)?
- [ ] Architecture (C) decisions align with DB Schema (D)?
- [ ] API endpoints (F) match Backend services (E)?
- [ ] Frontend mockups (G) use hooks from Inventory (B)?
- [ ] Effort totals in Phases (N) sum correctly?
- [ ] Verification commands (O) match actual file paths?
Each finding gets a confidence score per `references/confidence-calibration.md`.
### Pass 4 — Mental Simulation
Walk through each phase mentally:
1. **File touch order**: What files are created/modified in what sequence?
2. **Dependency chain**: Can phases execute in declared order?
3. **DB migration timing**: Migrations before service code?
4. **Test coverage**: Every deliverable has a test?
5. **Rollback plan**: If phase fails, what's the recovery?
6. **Effort realism**: No tasks > 12h (too vague) or < 1h (too granular)?
Output simulation report:
```
SIMULATION REPORT — SP-XX (persona: <ceo|eng|design|devex>)
Phase 1: ✅ Executable (DB migration → service → endpoint → test)
Phase 2: ⚠️ Risk — needs SP-06 P1 complete first (auth dep)
Phase 3: ✅ Executable (frontend only, no backend deps)
Overall: READY with 1 dependency to verify
```
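The effort-realism rule from step 6 is mechanical enough to sketch directly (a hypothetical helper; `efforts` maps task name to estimated hours):

```python
# Flag tasks estimated above 12h (too vague) or below 1h (too granular).
def unrealistic_tasks(efforts: dict[str, float]) -> dict[str, str]:
    flags = {}
    for task, hours in efforts.items():
        if hours > 12:
            flags[task] = "too vague (> 12h)"
        elif hours < 1:
            flags[task] = "too granular (< 1h)"
    return flags
```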
### Stop Conditions
- Score ≥ 12/15 AND simulation PASS → APPROVE (Gate G1)
- Score < 12/15 after 3 passes → ESCALATE to user via AskUserQuestion
- Simulation FAIL (blocking dependency) → ESCALATE with options
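The stop conditions above reduce to a small decision function (an illustrative encoding, not plugin code; an `APPROVE` result still requires user confirmation at Gate G1):

```python
# Hypothetical encoding of the stop conditions.
def gate_decision(score: int, simulation_pass: bool, passes_done: int) -> str:
    if score >= 12 and simulation_pass:
        return "APPROVE"      # Gate G1 - user must still confirm
    if not simulation_pass:
        return "ESCALATE"     # blocking dependency -> present options
    if passes_done >= 3:
        return "ESCALATE"     # score < 12 after 3 passes
    return "CONTINUE"         # run another pass
```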
## Workflow: Mega Plan Review (M1-M16)
When reviewing a mega plan (M1-M16 format), apply the same persona selector, then:
### Step 1 — Registry Validation
- All sub-plans listed in M3 exist as files?
- Effort totals sum correctly?
- Dependencies form a valid DAG (no cycles)?
- Bidirectional links present in each sub-plan?
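The DAG check can be done with standard depth-first cycle detection. This is a generic algorithm, not code from the plugin; `deps` is assumed to map each sub-plan to the sub-plans it depends on:

```python
def has_cycle(deps: dict[str, list[str]]) -> bool:
    # Standard DFS three-color cycle detection:
    # 0 = unvisited, 1 = on current path, 2 = done.
    state: dict[str, int] = {}

    def visit(node: str) -> bool:
        if state.get(node) == 1:
            return True               # back edge -> cycle found
        if state.get(node) == 2:
            return False              # already fully explored
        state[node] = 1
        if any(visit(d) for d in deps.get(node, [])):
            return True
        state[node] = 2
        return False

    return any(visit(n) for n in deps)
```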
### Step 2 — Cross-Plan Consistency
- Integration points (IP-1 to IP-9) addressed by responsible sub-plans?
- Shared DB tables defined consistently across sub-plans?
- API contracts compatible between sub-plans?
- Auth/RBAC model consistent?
### Step 3 — Timeline Simulation
- Critical path calculation: does timeline match effort?
- Phase gates achievable with single developer + AI co-dev?
- Parallel tracks don't exceed capacity?
### Step 4 — Gap Detection
- Any engineering chain steps not covered by any sub-plan?
- Any persona not served by any sub-plan?
- Any integration point without test strategy?
Output:
```
MEGA PLAN REVIEW — ticklish-tinkering-puppy.md (persona: <X>)
━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━━
Registry: 12/12 sub-plans exist ✅
Links: 12/12 bidirectional links ✅
DAG: No cycles, critical path OK ✅
Effort: 2,582h total (sums correct) ✅
Integration: 9/9 IPs covered ✅
Gaps: SP-08 missing sub-plan file ⚠️ (confidence: 9/10)
Timeline: Phase 0-6 feasible at 1 dev ✅
OVERALL: READY (1 minor gap)
```
## Workflow: Multi-Plan Consolidation
When consolidating N sub-plans:
1. Read all N sub-plans
2. Build dependency matrix (which plans share what)
3. Detect duplicate tasks across plans
4. Detect conflicting architecture decisions
5. Generate consolidation report (with confidence on each conflict)
6. Present to user for HITL approval (AUQ format v2 per conflict)
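Step 3's duplicate detection could be as simple as grouping normalized task titles across sub-plans (an illustrative sketch; real matching would likely be fuzzier than exact normalized titles):

```python
from collections import defaultdict

def find_duplicate_tasks(plans: dict[str, list[str]]) -> dict[str, list[str]]:
    # plans: sub-plan name -> list of task titles.
    # Returns normalized title -> sub-plans declaring it (2+ only).
    seen: dict[str, list[str]] = defaultdict(list)
    for plan, tasks in plans.items():
        for task in tasks:
            seen[" ".join(task.lower().split())].append(plan)
    return {t: ps for t, ps in seen.items() if len(ps) > 1}
```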
## Output Files
- Review results: shown in conversation (not saved to file)
- If improvements applied: update the plan file directly
- If mega plan: update registry table in mega plan
- Findings logged to `~/.atlas/runtime/findings.jsonl`
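Appending to `~/.atlas/runtime/findings.jsonl` follows the usual one-JSON-object-per-line pattern. A sketch, with record fields assumed from the finding format in Pass 1 (the skill does not specify the schema):

```python
import json
import os

def log_finding(severity: str, confidence: int, location: str,
                description: str,
                path: str = os.path.expanduser(
                    "~/.atlas/runtime/findings.jsonl")) -> None:
    record = {"severity": severity, "confidence": confidence,
              "location": location, "description": description}
    os.makedirs(os.path.dirname(path), exist_ok=True)
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(record) + "\n")   # one JSON object per line
```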
## HITL Gates (NON-NEGOTIABLE)
- **Always ask** before modifying a plan section (AUQ v2)
- **Always present** simulation results before approving
- **Never auto-approve** plans — user must confirm Gate G1
- **Step 0 Scope Challenge**: STOP and force AUQ when complexity check fires
- Use AskUserQuestion for all decisions (per `feedback_ask_user_question.md`)
## Quality Gate
| Score | Action |
|-------|--------|
| 15/15 | EXEMPLARY — approve immediately |
| 12-14/15 | PASS — approve with notes |
| 9-11/15 | NEEDS WORK — 1 more pass, then escalate |
| < 9/15 | REJECT — major rewrite needed |
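The quality gate table maps directly to score bands (a hypothetical encoding of the table above):

```python
def quality_gate(score: int) -> str:
    if score == 15:
        return "EXEMPLARY"    # approve immediately
    if score >= 12:
        return "PASS"         # approve with notes
    if score >= 9:
        return "NEEDS WORK"   # 1 more pass, then escalate
    return "REJECT"           # major rewrite needed
```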
## Commands
- `/atlas plan-review <plan-file>` — opens persona selector AUQ
- `/atlas plan-review <plan-file> --persona ceo` — CEO lens only
- `/atlas plan-review <plan-file> --persona eng` — Eng lens only
- `/atlas plan-review <plan-file> --persona design` — Design lens only
- `/atlas plan-review <plan-file> --persona devex` — DevEx lens only
- `/atlas plan-review <plan-file> --persona all` — sequential 4-persona (CEO → Design → Eng → DevEx)
- `/atlas plan-review --mega <mega-plan-file>` — mega plan + all sub-plans
- `/atlas plan-review --simulate <plan-file>` — simulation only (no scoring)
- `/atlas plan-review --consolidate <plan1> <plan2> ...` — consolidate multiple plans
## Cross-references
- `ETHOS.md` — Building Doctrine (Boil the Lake / Search / User Sovereignty / Build for Yourself)
- `references/askuserquestion-format-v2.md` — AUQ format v2 (mandatory output for all HITL gates)
- `references/confidence-calibration.md` — 1-10 scale + display rules per range
- `references/cognitive-patterns-{ceo,eng,design,devex}.md` — persona-specific patterns
- `references/scope-challenge-gate.md` — Step 0 mandatory scope gate
---
*Skill v2.0 — 2026-05-05 — adapted with attribution from gstack plan-{ceo,eng,design,devex}-review (MIT, Garry Tan)*