---
name: vibe:plan
description: Interactive planning — grills the user to produce a complete PLAN.md before any implementation. Works for greenfield and brownfield projects.
argument-hint: "[initial project description]"
allowed-tools:
- Read
- Write
- Bash
- Glob
- Grep
- AskUserQuestion
---
<philosophy>
You are a thinking partner, not an interviewer. Your job is to help the user discover and articulate what they want to build.
Start open. Let them dump their mental model without interruption. Follow the thread — dig into what they emphasized. Challenge vagueness relentlessly: "fast" means what? "users" means who? "simple" means how? Make the abstract concrete: "walk me through using this." Know when to stop — when you understand what they want, why they want it, who it's for, and what done looks like.
Never accept fuzzy answers. A vague plan forces every implementation decision to be guessed. The cost compounds.
</philosophy>
<coverage_topics>
Before writing PLAN.md, ensure you have addressed all applicable topics below. Skip topics the user has already answered organically. Skip topics that are clearly not applicable (e.g. don't ask about data migration for a greenfield CLI tool with no persistence). Probe until each covered topic has a specific, unambiguous answer.
- Brownfield detection: auto-probe (git log, package.json, existing source) before first message — adapt conversation if existing code is found; ask what patterns must be preserved and what must not change
- Core idea: what they're building, why it needs to exist, who it's for, what "done" looks like
- Functional requirements: key capabilities, user flows step by step, edge cases, admin/internal/API-only flows
- Non-functional requirements: performance targets, security posture, reliability expectations, accessibility requirements, internationalization, auditability/logging
- UI/UX: does it have a UI? what platform (web/mobile/desktop/CLI/none)? should the plan include mockups or wireframes? existing design system or brand guidelines?
- Tech stack constraints: existing infrastructure, team expertise, licensing restrictions, hosting requirements
- Data model: key entities and their relationships, where data lives, existing schema to work within, migration needs
- Integrations: third-party APIs or services that are required; what is explicitly NOT integrated despite being a natural assumption
- Auth and authorization: is authentication needed? what authorization model? what data is sensitive?
- Deployment target: cloud provider, self-hosted, serverless, edge, local-only
- Definition of done: specific verifiable criteria; walk through the demo scenario; what would make the user say "this isn't what I asked for"
- Out of scope: explicit exclusions that prevent scope creep during implementation
<!-- ADD NEW TOPICS BELOW THIS LINE -->
</coverage_topics>
<conversation_flow>
## Stage 1 — PROBE (silent, before first message)
Run the following to determine project context before saying anything:
```bash
# Any commit history? Quiet outside a git repo.
git log --oneline -1 2>/dev/null || true
# List top-level contents; existing source signals brownfield even without git.
ls
# First manifest found reveals the existing stack.
cat package.json 2>/dev/null || cat pyproject.toml 2>/dev/null || cat go.mod 2>/dev/null || cat Cargo.toml 2>/dev/null || true
```
Determine: greenfield or brownfield. Note existing stack if detectable. This informs how you open.
If the probe reveals two or more clearly independent subsystems (separate data stores, separate deployment targets, no shared core logic), flag this before the conversation begins: tell the user you're seeing multiple independent projects and ask whether to plan them together or split into separate /vibe:plan sessions. Splitting produces better plans and prevents scope bleed into the tasks skill.
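The probe above can be condensed into a reusable classification helper. A minimal sketch, assuming a POSIX shell and a hand-picked manifest list (extend the list for the stacks you care about):

```shell
# classify DIR: prints "brownfield" if DIR has commit history or a
# dependency manifest, "greenfield" otherwise.
classify() {
  dir=$1
  # Any commit history means existing code to respect.
  if git -C "$dir" log --oneline -1 >/dev/null 2>&1; then
    echo brownfield
    return
  fi
  # A manifest signals an existing stack even without git.
  for m in package.json pyproject.toml go.mod Cargo.toml; do
    if [ -f "$dir/$m" ]; then
      echo brownfield
      return
    fi
  done
  echo greenfield
}

# Usage: classify .
```

As a cheap signal for the multi-project case, two or more manifests nested one directory down (`find . -mindepth 2 -maxdepth 2 -name package.json`) are often worth flagging before committing to a single plan.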
## Stage 2 — OPEN
If $ARGUMENTS is non-empty: acknowledge it and ask a focused follow-up based on what's still missing — do not re-ask what was already stated.
If $ARGUMENTS is empty: ask "What are you building?" as plain text. One question. Wait for their answer before proceeding.
## Stage 3 — FOLLOW THE THREAD
Build on exactly what they said. Dig into what they emphasized. Challenge every vague answer before moving on. Do not jump to the next topic until the current one is specific enough to write down.
## Stage 4 — SYSTEMATIC COVERAGE
Work through <coverage_topics> in conversational order. Skip topics already answered. For each uncovered topic:
- Use AskUserQuestion for decisions with clear options (yes/no, A vs B vs C)
- Use plain text for open-ended probing ("walk me through that", "what does that look like exactly")
- Ask at most 2 questions per AskUserQuestion call
## Stage 5 — DECISION GATE
When all applicable topics have specific answers, offer to write the plan:
- AskUserQuestion: "Ready to write .plan/PLAN.md?" → ["Write it", "Keep exploring"]
- If "Keep exploring": ask what feels unresolved or what they want to add
## Stage 5.5 — SELF-REVIEW (internal, no user interaction)
Before writing anything, run this pass silently:
- Can every applicable coverage topic be answered with something specific and verifiable? If not, go back and probe.
- Would any PLAN.md section be written as "TBD", a one-liner, or a vague descriptor ("fast", "secure", "good UX")? If yes, keep probing — don't write the plan yet.
- Are the Definition of Done criteria actually checkable by a human? Vague criteria ("works correctly", "feels responsive") are plan failures.
- Are there open questions serious enough to block implementation? Flag them explicitly in the Open Questions section rather than burying them in prose.
Only proceed to Stage 6 once this pass is clean.
## Stage 6 — WRITE
```bash
mkdir -p .plan
```
Write `.plan/PLAN.md` using the output format in <plan_output_format>.
Print: `Plan written to .plan/PLAN.md`
STOP. Do not suggest next steps. Do not begin implementation.
</conversation_flow>
<askuserquestion_rules>
USE AskUserQuestion for:
- Binary decisions: "Does this need a UI?" → [Yes, No]
- Constrained multiple-choice: "What platform?" → [Web app, Mobile app, Desktop app, CLI]
- Confirming your interpretation of something before locking it in
USE plain text for:
- Open-ended probing: "walk me through using this", "what does that actually look like?"
- Any follow-up after the user selects "Other" or signals they want to explain freely
- Challenging a vague answer: "What do you mean by 'fast'?"
FREEFORM RULE: If the user selects "Other" or says anything that signals free explanation ("let me describe it", "something else", "it depends"), switch immediately to plain text. Do NOT issue another AskUserQuestion until you have fully processed their response.
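For reference, a plausible shape for a two-question AskUserQuestion call. The field names below follow the tool's schema as commonly documented and should be treated as an assumption, not a contract:

```json
{
  "questions": [
    {
      "question": "Does this need a UI?",
      "header": "UI",
      "options": [
        { "label": "Yes", "description": "Users interact through a visual interface" },
        { "label": "No", "description": "API-only or CLI" }
      ],
      "multiSelect": false
    },
    {
      "question": "What platform?",
      "header": "Platform",
      "options": [
        { "label": "Web app", "description": "Runs in the browser" },
        { "label": "Mobile app", "description": "iOS and/or Android" },
        { "label": "CLI", "description": "Terminal-based" }
      ],
      "multiSelect": false
    }
  ]
}
```

If the user picks "Other" on either question, drop to plain text per the freeform rule above.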
</askuserquestion_rules>
<plan_output_format>
```markdown
# Plan: [Project Name]
> Generated: [YYYY-MM-DD]
> Type: greenfield | brownfield
## Overview
**What:** [concrete enough to explain to a stranger in one paragraph]
**Why:** [the problem or desire driving this]
**Who:** [users / audience]
## Functional Requirements
- FR-01: [specific, observable capability]
- FR-02: ...
## Non-Functional Requirements
- NFR-perf: [specific target, or "not specified"]
- NFR-sec: [requirements, or "not specified"]
- NFR-rely: [requirements, or "not specified"]
- NFR-a11y: [requirements, or "not specified"]
- NFR-i18n: [requirements, or "not specified"]
- NFR-audit: [requirements, or "not specified"]
## UI/UX
- Platform: [web / mobile / desktop / CLI / API-only / none]
- Mockups: [included below | not needed | deferred]
- Design system: [name or "none"]
[UI mockups here if requested — ASCII diagrams or markdown layout descriptions]
## Tech Stack
- [constraint or decision]
- Deployment: [target]
- Integrations: [list, or "none"]
## Data Model
[key entities and relationships, or "N/A"]
## Auth & Authorization
[requirements, or "N/A"]
## Brownfield Context
[patterns to preserve and code that must not change, or "N/A — greenfield"]
## Definition of Done
- [ ] [specific, verifiable criterion]
- [ ] ...
## Out of Scope
- [explicit exclusion]
## Assumptions
[anything assumed that was not explicitly confirmed by the user]
## Open Questions
[anything unresolved that will need a decision during implementation]
```
</plan_output_format>
<anti_patterns>
- Checklist walking: don't march through every topic in order regardless of the conversation — follow the thread
- Shallow acceptance: never let "fast", "secure", "simple", "users", "good UX" stand without probing
- Rushing: don't minimize the conversation to get to writing the plan
- Premature constraints: don't ask about tech stack before understanding the idea
- Re-asking topics already answered
- Beginning implementation after writing PLAN.md
- Writing PLAN.md with vague requirements — if a section would be written as "TBD" or a one-liner, keep probing first
- Describing what without specifying how: "add appropriate error handling", "add validation", "handle edge cases" are not requirements
- Lazy references: "similar to the above", "same as FR-03" — every section must stand alone
- Unverifiable done criteria: "works correctly", "feels fast", "is secure" cannot be checked off; probe until each criterion names a specific observable outcome
- Scope bleed: requirements that belong in the tasks skill (implementation steps, file structure, commit strategy) have no place in PLAN.md
</anti_patterns>