---
name: plan-mvp
description: Validates a product idea before any code gets written. Produces market research, competitive analysis, and an MVP spec ending with a go/pivot/kill recommendation. Adapts depth from lean (side projects) to deep (ventures with TAM/SAM/SOM). Use when the user describes an idea and wants validation before building.
---

# Plan MVP

Validate a product idea and produce a strategic MVP plan before any code gets written. This skill adapts its depth based on the project — a quick side project gets a lean pass, while a serious venture gets thorough research.

## Philosophy

- **Validate before building.** The goal is to kill bad ideas early and sharpen good ones.
- **Be honest, not hype.** If the market is saturated or the idea has obvious problems, say so constructively.
- **Strategy first, stack later.** No technical decisions here — that's what `/execute-docs` is for.
- **Actionable output.** Every doc should help the user make a go/no-go decision and know exactly what to build first.

## Output Files

| File | Purpose |
|------|---------|
| `planning/MARKET_RESEARCH.md` | Target audience, market size, demand signals, trends |
| `planning/COMPETITIVE_ANALYSIS.md` | Competitor breakdown, positioning, differentiation |
| `planning/MVP_SPEC.md` | Feature scope, user stories, phased roadmap, success metrics |

All files go in a `planning/` directory at the project root (or wherever the user prefers).
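Concretely, the layout above corresponds to the following (a sketch from an empty project root; the skill writes these files itself as it works, this just makes the target structure explicit):

```shell
# Hypothetical scaffold of the planning/ layout; file names are the
# three output docs this skill produces.
mkdir -p planning
touch planning/MARKET_RESEARCH.md \
      planning/COMPETITIVE_ANALYSIS.md \
      planning/MVP_SPEC.md
ls planning
```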
## Workflow

### Step 1: Scope Selection

Before anything else, ask the user what they need:

- **Full planning suite** — Market Research + Competitive Analysis + MVP Spec (all three docs)
- **MVP Spec only** — Skip the research, go straight to defining what to build

If they choose MVP Spec only, skip Steps 2 and 3 entirely and go straight to Step 4 (MVP Spec). The idea intake questions in Step 1b still apply — you need to understand the idea to spec it — but you won't produce the research or competitive docs. This is useful when the user already knows their market, has done their own research, or just wants to scope out a build quickly.

### Step 1b: Idea Intake

Start with an open conversation. Let the user describe their idea naturally, then ask clarifying questions. Don't use a rigid questionnaire — adapt based on what they share.

**Core things to understand:**

- What's the idea in one sentence?
- Who is it for? (be specific — not "everyone" or "developers")
- What problem does it solve? What's the current alternative?
- Why now? Is there a trigger or trend making this timely?
- Is this a side project, a freelance offering, a startup, or an internal tool?
- What's the user's unfair advantage? (domain knowledge, existing audience, technical skill, etc.)
- What does success look like? (revenue, users, personal use, portfolio piece)

**Gauge depth from context:**

- Side project / personal tool → lean pass (quick validation, focused MVP spec)
- Freelance service / small business → moderate (local market research, competitor scan, clear scope)
- Startup / serious venture → deep dive (TAM/SAM/SOM, pricing analysis, detailed competitive matrix)

Tell the user which depth level you're planning and why. Let them override.

### Step 2: Market Research

Use web search to gather real data. Don't fabricate market stats.

**For all projects (lean pass):**

- Who is the target audience? Demographics, psychographics, behaviors
- What are they currently using to solve this problem?
- Are people actively searching for solutions? (search trend signals)
- Are there communities discussing this pain point? (Reddit, forums, social)
- What's the general market trajectory? (growing, stable, declining)

**For moderate/deep projects, also include:**

- Market sizing (TAM/SAM/SOM with sources and methodology)
- Pricing benchmarks (what do competitors charge? what's willingness to pay?)
- SEO/keyword demand signals (search volume for key terms if relevant)
- Distribution channels (where does the target audience discover new tools?)
- Regulatory or platform risks (App Store rules, API dependencies, legal)
- Adjacent trends that could amplify or threaten the idea

**Format: `planning/MARKET_RESEARCH.md`**

```markdown
# Market Research: [Project Name]

**Date:** [date]
**Depth:** [Lean / Moderate / Deep]

## Executive Summary
[2-3 sentences: is this idea worth pursuing and why]

## Target Audience

### Primary Persona
- **Who:** [specific description]
- **Pain Point:** [what frustrates them today]
- **Current Solution:** [what they're doing now]
- **Willingness to Switch:** [high/medium/low + reasoning]

### Secondary Persona (if applicable)
[same structure]

## Market Landscape

### Size & Trajectory
[market size data, growth trends, sources]

### Demand Signals
[search trends, community activity, funding in space]

### Distribution Channels
[where target audience discovers tools like this]

## Key Risks
[honest assessment of what could go wrong]

## Opportunity Assessment
[go/no-go recommendation with reasoning]
```

### Step 3: Competitive Analysis

Research actual competitors. Use web search to find them — don't just list ones the user mentions.

**For all projects:**

- Identify 3-8 direct and indirect competitors
- What do they do well?
- What do they do poorly? (check reviews, Reddit complaints, Twitter)
- How do they monetize?
- Where is the gap the user can exploit?
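A note on the market-sizing bullet in Step 2: TAM/SAM/SOM is top-down arithmetic, and the hard part is sourcing the inputs, not the math. A sketch in which every number is a hypothetical placeholder:

```shell
# TAM/SAM/SOM sketch. All inputs are hypothetical placeholders; each
# must come from a cited source (industry reports, competitor pricing).
users=2000000       # people with the problem worldwide
price=120           # plausible annual price per user, USD
reachable_pct=10    # share you can actually reach/serve (segment, geography)
capture_pct=2       # realistic share of SAM within ~3 years

tam=$((users * price))              # total addressable market
sam=$((tam * reachable_pct / 100))  # serviceable addressable market
som=$((sam * capture_pct / 100))    # serviceable obtainable market
echo "TAM=\$${tam}  SAM=\$${sam}  SOM=\$${som}"
# prints: TAM=$240000000  SAM=$24000000  SOM=$480000
```

A bottom-up check (counting reachable customers channel by channel) is usually more credible for an MVP; present both when they disagree.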
**For moderate/deep projects, also include:**

- Feature comparison matrix
- Pricing comparison table
- Positioning map (2x2 or spectrum)
- Competitor funding/team size (if available)
- Moat analysis (what makes each competitor defensible?)

**For deep projects, also include a Structural Forces assessment:**

Analyze the three forces that actually matter for software products (skip supplier/buyer power — they're rarely insightful for SaaS/apps):

- **Competitive Rivalry:** How crowded is this space? Are competitors growing fast or stagnating? Is differentiation easy or is everyone converging on the same feature set? High rivalry = need a sharper wedge to break in.
- **Threat of Substitutes:** What non-obvious alternatives exist? (e.g., spreadsheets, manual processes, "just ignore the problem"). How painful is it to switch away from the substitute? Low switching cost from substitutes = easier acquisition.
- **Barriers to Entry:** What stops someone from cloning this in a weekend? Consider: data network effects, integrations/ecosystem lock-in, regulatory requirements, brand trust, technical complexity. Low barriers = need to move fast and build a moat early.

**Format: `planning/COMPETITIVE_ANALYSIS.md`**

```markdown
# Competitive Analysis: [Project Name]

**Date:** [date]

## Landscape Overview
[1-2 paragraphs summarizing the competitive space]

## Direct Competitors

### [Competitor 1]
- **URL:** [link]
- **What they do:** [one sentence]
- **Strengths:** [what they nail]
- **Weaknesses:** [where they fall short — use real user complaints]
- **Pricing:** [model and price points]
- **Audience:** [who they serve]

### [Competitor 2]
[same structure]

## Indirect Competitors / Alternatives
[tools that aren't direct competitors but solve the same underlying problem]

## Feature Comparison Matrix

| Feature | [Competitor 1] | [Competitor 2] | [Our MVP] |
|---------|----------------|----------------|-----------|
| [Feature] | ✅ / ❌ / 🟡 | ... | ... |

## Pricing Comparison (if applicable)

| | Free Tier | Paid | Enterprise |
|---|-----------|------|------------|
| [Competitor] | ... | ... | ... |

## Positioning & Differentiation
[Where does this product fit? What's the unique angle?]

## Strategic Takeaways
[3-5 bullet points: what to learn from competitors, what to do differently]

## Structural Forces (Deep projects only)

### Competitive Rivalry
[How intense is the competition? Are competitors growing, stagnating, or consolidating? Is differentiation strong or is the space commoditizing?]

### Threat of Substitutes
[What non-obvious alternatives do people use instead? Spreadsheets, manual processes, ignoring the problem? How hard is it to pull users away from these?]

### Barriers to Entry
[What would stop someone from cloning this? Data moats, network effects, ecosystem lock-in, regulatory hurdles, technical complexity? What does this mean for your defensibility?]
```

### Step 4: MVP Spec

This is the actionable output. Define what to build first and why.

**Principles:**

- MVP = the smallest thing that validates the core hypothesis
- Every feature must tie back to a user problem identified in research
- Ruthlessly cut scope — if it's not in the core loop, it's post-MVP
- Include clear success metrics so the user knows if the MVP worked

**Format: `planning/MVP_SPEC.md`**

```markdown
# MVP Spec: [Project Name]

**Date:** [date]
**One-liner:** [what it is in one sentence]

## Core Hypothesis
[What are we trying to validate? Frame as: "We believe [target user] will [action] because [reason]"]

## Target User
[Refined from market research — one specific persona for MVP]

## Core Value Loop
[The single repeating loop that delivers value. E.g., "User logs habit → sees streak → feels motivated → comes back tomorrow"]

## MVP Feature Set

### Must Have (Launch Blockers)
- [ ] [Feature] — [why it's essential, ties to which user need]
- [ ] [Feature] — [why]
- [ ] ...
### Should Have (Week 2-4 post-launch)
- [ ] [Feature] — [why]
- [ ] ...

### Won't Have (Explicitly Deferred)
- [ ] [Feature] — [why it's tempting but not MVP]
- [ ] ...

## User Stories
[3-5 key user stories in "As a [user], I want to [action] so that [outcome]" format]

## Monetization Strategy (if applicable)
[Free? Freemium? Paid? When does money enter the picture?]

## Distribution Hypothesis
[How will the first 100 users find this? Be specific — not "social media" but "post Show HN + cross-post to r/selfhosted". Name 1-2 primary channels and explain why they'll work for this specific audience. This isn't a full GTM plan — it's a sanity check that there's a path to early users.]

## Success Metrics

| Metric | Target | Timeframe |
|--------|--------|-----------|
| [e.g., Daily active users] | [e.g., 50] | [e.g., 30 days post-launch] |
| [e.g., Retention D7] | [e.g., 30%] | [e.g., 30 days post-launch] |

## Risks & Mitigations

| Risk | Likelihood | Impact | Mitigation |
|------|-----------|--------|------------|
| [risk] | H/M/L | H/M/L | [what to do] |

## Phased Roadmap

### Phase 1: MVP (Weeks 1-X)
[Core features, launch goal]

### Phase 2: Validate (Weeks X-Y)
[Iterate based on metrics, add Should Haves]

### Phase 3: Scale (Weeks Y-Z)
[Growth features, monetization, expansion]

## Next Step
→ Run `/execute-docs` to scaffold the technical foundation.
```

### Step 5: Review & Recommendation

After generating all three docs, close out with a review.

**SWOT Synthesis (for moderate/deep projects):**

Before the final recommendation, synthesize findings from market research and competitive analysis into a quick SWOT. This isn't a standalone doc — it's the reasoning backbone for the go/pivot/kill call. Keep it tight (2-3 bullets per quadrant max):

- **Strengths:** What advantages does this specific founder/team bring? (existing skills, audience, domain knowledge, speed)
- **Weaknesses:** What's missing? (no audience yet, no domain expertise, resource constraints, solo operator limits)
- **Opportunities:** What market gaps, timing advantages, or underserved segments did the research surface?
- **Threats:** What competitive responses, platform risks, or market shifts could derail this?

Include this synthesis in the conversation (not as a separate file) as the lead-in to the recommendation.

**Then provide:**

- A clear **go / pivot / kill** recommendation with reasoning grounded in the SWOT
- The single most important thing to validate first
- Any quick validation steps they could do before writing code (e.g., landing page test, Reddit post, DM 10 potential users)
- A reminder that `/execute-docs` is the natural next step if they're ready to build
- If going ahead, a note that a `/plan-gtm` plan is the logical step after the MVP is built

## Adaptive Depth Guide

| Signal | Depth | Research Scope |
|--------|-------|----------------|
| "side project", "just for me", "weekend build" | Lean | Quick competitor scan, focused MVP spec, skip market sizing |
| "freelance offering", "local business", "client work" | Moderate | Local market research, 3-5 competitors, pricing benchmarks |
| "startup", "raising money", "serious about this" | Deep | Full TAM/SAM/SOM, 5-8 competitors, detailed positioning, pricing strategy |
| "not sure yet", "exploring" | Start lean | Expand if the idea has legs after initial research |

## Important Guidelines

- **Use real data.** Search the web. Don't invent market stats or fake competitor names.
- **Be constructively honest.** If the market is crowded, say so — then explain how to differentiate. If the idea is weak, suggest pivots.
- **Respect the user's time.** A solo dev building a side project doesn't need a 20-page market analysis. Match depth to stakes.
- **Connect the dots.** Market research should inform competitive analysis, which should inform the MVP spec. They're not independent docs.
- **No tech decisions.** Don't recommend frameworks, databases, or hosting. That's for `/execute-docs`.
- **Separate research from conclusions.** Do all web searching and data gathering before synthesizing findings. Don't interleave searching with writing — gather first, then analyze. When using subagents for research (in Claude Code), delegate research to separate contexts to keep the main conversation clean for synthesis and decision-making.
- **Research with precision, not just volume.** For moderate and deep projects, ground every claim in real data by targeting the right sources: crawl competitor pricing pages directly rather than relying on search snippets; check review sites (G2, Capterra, Product Hunt) for real user complaints; read the Reddit threads and HN discussions where the target audience talks about their pain points; look at Crunchbase for funding signals and at job postings in the space for what companies are building; search for public revenue data (Indie Hackers posts, OpenStartup pages, SimilarWeb traffic estimates). Use multiple search queries per topic when the first results are thin — refine the query rather than settling for weak data. The quality of planning docs is directly proportional to the specificity of the research.
- **Cap output files at ~500 lines.** Long markdown bloats downstream context and becomes unscannable. Sweet spot: 100–400 lines. If `MARKET_RESEARCH.md` or `COMPETITIVE_ANALYSIS.md` would exceed 500 lines, split by segment (e.g., `COMPETITIVE_ANALYSIS.md` as an overview plus per-competitor sub-files in a `competitors/` folder) and cross-link them. `MVP_SPEC.md` should stay tight — push exploratory research into the other docs and keep the spec under 400 lines.
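The line cap is easy to check mechanically. A sketch assuming a POSIX shell and the `planning/` layout used throughout this skill:

```shell
# Flag any planning doc that exceeds the ~500-line cap.
check_caps() {
  for f in "$1"/*.md; do
    [ -f "$f" ] || continue
    lines=$(wc -l < "$f" | tr -d ' ')
    if [ "$lines" -gt 500 ]; then
      echo "split: $f ($lines lines)"
    fi
  done
}
check_caps planning
```

It prints nothing when every doc is within budget, which makes it safe to run after each step.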