---
name: product-review
description: End-to-end product review workflow — test a feature, document findings, draft tweets, and write a Medium article. Use when the user says "review this product", "write a review", "let's write about our findings", "turn this into a review", "publish our results", "write up what we found", or wants to transform a testing/evaluation session into publishable content (tweets + article). Also use when the user has been testing something all session and wants to share their experience publicly.
user-invocable: true
allowed-tools:
- Read
- Write
- Bash
- Agent
- Glob
- Grep
- WebFetch
---
# /product-review — Feature Evaluation to Published Review
Transform a hands-on testing session into a structured, publishable product review with tweets and a Medium article.
Arguments passed: `$ARGUMENTS`
## Workflow
### Step 1: Gather Evidence
Reconstruct the testing session from conversation context:
- What was tested?
- What worked?
- What broke? (specific evidence: error messages, message ID gaps, process lists)
- What alternatives exist and how do they compare?
- What's the user's honest verdict?
If evidence is thin, ask the user to fill gaps. A good review needs specifics, not vibes.
### Step 2: Structure the Narrative
Every product review follows this arc:
1. **Hook** — Why you tried it, what you expected
2. **Setup** — How easy/hard was getting started
3. **The Promise** — What it claims to do
4. **Reality** — What actually happened (with evidence)
5. **Investigation** — Digging deeper into why (source code, architecture)
6. **Comparison** — How it stacks up against alternatives
7. **Verdict** — Honest assessment, who should/shouldn't use it
### Step 3: Draft the Tweets
Create 2-3 tweet drafts that capture the key finding:
**Tweet 1 (The Hook):**
- Bold claim or question
- Tag relevant people/companies
- Under 280 characters
**Tweet 2 (The Evidence):**
- Specific finding from testing
- Link to the article
**Tweet 3 (The Ask):**
- Open question to the team/community
- Invites dialogue, not just criticism
Present tweets to user for approval before posting. Adjust tone based on how spicy/diplomatic the user wants to be.
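The 280-character constraint above can be sanity-checked programmatically before presenting drafts. A minimal sketch (note: this counts raw characters, whereas Twitter/X counts every URL as 23 characters, so treat it as an approximation):

```python
# Minimal sketch: flag any draft tweet over the 280-character limit.
# Counts raw characters only -- an approximation, since Twitter/X
# normalizes URL lengths in its own counting.

TWEET_LIMIT = 280

def check_tweets(drafts):
    """Return (index, length, within_limit) for each draft tweet."""
    return [
        (i, len(text), len(text) <= TWEET_LIMIT)
        for i, text in enumerate(drafts, start=1)
    ]

drafts = [
    "Tried the new feature end to end today. The promise is big; the reality is messier. Write-up below.",
    "Evidence: reproducible message drops across three runs. Full logs in the article.",
]
for i, length, ok in check_tweets(drafts):
    print(f"Tweet {i}: {length} chars - {'OK' if ok else 'OVER LIMIT'}")
```

Run this over the final drafts after any last-minute tone edits, since tightening or spicing up wording often changes length.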
### Step 4: Write the Medium Article
Draft article content organized as:
- Title with hook
- Subtitle that summarizes the finding
- 4-6 sections following the narrative arc from Step 2
- Keep it under 1200 words — short and punchy
- Sign off with "— Claude Code (Opus 4.6) and [user name]"
Save the draft to `product-review-draft.md` in the current directory for the user to review before posting.
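The 1200-word cap can be checked with a quick sketch. This uses a naive whitespace split, which will not match Medium's own counter exactly but is close enough to catch an overlong draft:

```python
# Minimal sketch: count words in the article draft and flag it if it
# exceeds the 1200-word target. Whitespace split is an approximation
# of however Medium counts words.

WORD_LIMIT = 1200

def word_count(text):
    """Naive word count: whitespace-separated tokens."""
    return len(text.split())

def check_draft(text, limit=WORD_LIMIT):
    """Return (count, within_limit) for the draft text."""
    n = word_count(text)
    return n, n <= limit

draft = "# Title\n\nShort punchy intro. " + "word " * 50
n, ok = check_draft(draft)
print(f"{n} words - {'within limit' if ok else 'trim it down'}")
```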
### Step 5: Publish (with user approval)
Only after user approval:
- If /tweet skill is available, offer to post the tweets
- If /medium skill is available, offer to publish the article to Medium
- If Chrome MCP is available, offer to use browser automation
Never publish without explicit user confirmation.
### Step 6: Save Artifacts
Save all artifacts to the current directory:
- `product-review-draft.md` — Full article text
- `product-review-tweets.md` — Tweet drafts
- `product-review-evidence.md` — Raw evidence collected (logs, screenshots, findings)
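Before wrapping up, the presence of all three artifacts can be verified with a small sketch (file names as listed above; checking the current directory is an assumption matching Step 4's save location):

```python
# Minimal sketch: confirm the three expected artifact files exist
# before declaring the workflow complete. Directory defaults to the
# current working directory, per the save instructions above.
from pathlib import Path

EXPECTED = [
    "product-review-draft.md",
    "product-review-tweets.md",
    "product-review-evidence.md",
]

def missing_artifacts(directory="."):
    """Return the expected files not present in `directory`."""
    base = Path(directory)
    return [name for name in EXPECTED if not (base / name).exists()]

missing = missing_artifacts()
if missing:
    print("Missing artifacts:", ", ".join(missing))
else:
    print("All artifacts saved.")
```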
## Tone Guide
The default tone is **constructive-critical**:
- Lead with what works before what doesn't
- Use specific evidence, not generalizations
- Ask questions rather than make declarations ("What am I missing?" > "This is useless")
- Acknowledge effort ("Credit where it's due")
- End with an invitation for dialogue
If the user wants it spicier, dial up the directness; if they want it more diplomatic, soften the framing. But never sacrifice honesty — the review's value comes from being truthful.
## Principles
- A good review helps the builder, not just the reader. Specific, actionable feedback > vague complaints.
- Evidence over assertions. Every claim should have a backing data point.
- Compare fairly. Same tasks, same conditions.
- Disclose your relationship. If you're testing a feature of the tool you're currently using (like Claude Code reviewing a Claude Code plugin), say so.