---
name: teamcraft-glgd:analyze-codebase
description: Systematically analyze an existing codebase and produce structured artifacts — architecture documentation, component-level rules, ADR candidates, convention comparison, or CLAUDE.md audit. Use when onboarding to a brownfield project, inheriting a codebase, wanting consistent architecture docs, comparing repo practices against team conventions, needing per-component .claude/rules/ files, or auditing an existing CLAUDE.md. Works for any language, framework, or project size.
argument-hint: "(no arguments — you'll describe your intent at the start)"
disable-model-invocation: true
user-invocable: true
allowed-tools:
  - Read
  - Write
  - Edit
  - Glob
  - Grep
  - Bash
  - Task
  - mcp__google-drive__list_accounts
  - mcp__google-drive__search_files
  - mcp__google-drive__download_file
---

## Goal

Produce the foundation artifacts that downstream Teamcraft skills depend on — architecture documentation, component-level rules, ADR candidates — by systematically analyzing an existing codebase. The user decides which artifacts they need, how deep the analysis goes, and what gets written. Everything is reviewed before it's saved.

## Hard Constraints

- **Read-only until explicit approval.** Analyze and present. Write nothing until the user confirms each artifact.
- **Never produce artifacts the user didn't ask for.** The intent conversation determines scope. A user who wants a CLAUDE.md audit does not get unsolicited ADRs.
- **Architecture documentation is descriptive, not prescriptive.** It describes what IS, not what SHOULD BE. Recommendations belong in ADR candidates or convention comparison, not in the architecture doc.
- **Convention comparison is a delta report, not enforcement.** "Team says X, repo does Y" — no judgments, no suggestions. The user decides what to do with the delta.
- If a Drive file operation fails with a path error, read the error message to identify a valid accessible host path and retry with it.

## Boundaries with Other Skills

This skill analyzes an existing codebase and produces foundation artifacts. It does NOT:

- Capture team conventions from scratch — that is `capture-conventions`
- Do ongoing CLAUDE.md maintenance — that is `sync-claude-md`
- Orient a new team member to the environment — that is `onboard`

The handoff is natural: analyze-codebase produces the foundation, then sync-claude-md maintains it over time.

## Establish Intent

Before touching the codebase, understand what the user needs. The answer shapes everything — which agents to run, which artifacts to produce, how deep to go.

Common intents:

- "I just inherited this repo and need to understand it" — full analysis: quick pass, then architecture mapping, then component rules
- "I need architecture documentation for this project" — architecture mapping focused
- "How does this repo compare to our team conventions?" — convention comparison
- "Our CLAUDE.md is stale or missing, help me fix it" — CLAUDE.md audit
- "I want to capture the architectural decisions buried in this code" — ADR extraction
- Any combination of the above

Ask what they're trying to accomplish and what they want to walk away with. Don't assume — a tech lead standardizing across repos has different needs than a developer who just got assigned a legacy project.

## Resolve Drive Account (When Needed)

Only resolve Drive when the user's intent involves convention comparison or they mention having team documents in Drive. Do not resolve proactively.

Call `mcp__google-drive__list_accounts` before any other Drive operation:

- **No accounts** — Drive is not configured. Tell the user and skip Drive operations. Convention comparison can still work if the user provides convention documents another way.
- **One account** — Use it. Pass `account_email` explicitly on every Drive tool call.
- **Multiple accounts** — Present the list, ask which one.

## Phase 1 — Quick Pass

Run the `teamcraft-glgd:brownfield-analyzer` agent using the Task tool. This gives a fast overview of the codebase — tech stack, structure, patterns, health indicators — without consuming the main conversation's context window with raw exploration.

Present the findings to the user as a structured summary. This serves two purposes: it gives the user immediate orientation, and it informs what deeper analysis (if any) makes sense.

For small codebases (under ~5k LOC or a handful of modules), the quick pass may be sufficient. Ask whether the user wants to go deeper or whether this covers what they need.

## Phase 2 — Deep Analysis (Based on Intent)

Run only what the user's intent requires. Each path below is independent — run one, some, or all based on what was established in the intent conversation.

### Architecture Mapping

Run the `teamcraft-glgd:architecture-mapper` agent using the Task tool. Pass the brownfield-analyzer's findings in the task prompt so the architecture mapper builds on them rather than re-discovering the basics.

When the agent returns, present the architecture documentation to the user for review. See `references/example-architecture.md` for the structure and depth that good architecture documentation achieves — read it before presenting to the user so you can calibrate the output.

Once approved, write to `.teamcraft/architecture.md`.

### Component-Level Rules

From the architecture mapping (or brownfield analysis if architecture mapping was skipped), identify components that have non-obvious patterns Claude would get wrong. For each component:

- What error handling pattern does it use?
- What test patterns apply?
- What file organization conventions exist?
- What naming conventions are specific to this component?
- What integration patterns connect it to other components?

Only capture patterns that are NOT visible from a casual read of the code — things Claude would miss or get wrong without guidance. Present proposed rules to the user per component, then write approved rules to `.claude/rules/<component-name>.md`.
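To calibrate what belongs in a rules file, here is a minimal hypothetical sketch. The component name, paths, and patterns below are invented for illustration; a real file records only what the analysis actually surfaced:

```markdown
# Rules: payments component (hypothetical example)

- Error handling: service functions return a Result object instead of raising;
  exceptions inside `services/` bypass the retry middleware.
- Tests: integration tests stub the gateway through `tests/fakes/gateway.py`,
  never with ad-hoc mocks.
- Naming: webhook handlers follow `on_<provider>_<event>`; the dispatcher
  resolves them by name, so a rename silently breaks routing.
```

Note that each entry states a consequence. A pattern with no consequence Claude could trip over probably doesn't belong in the file.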
### ADR Candidates

Surface architectural decisions detected from the code — framework choices, infrastructure patterns, unusual dependencies, authentication approaches, data access patterns. These are decisions someone made for a reason, but the reason isn't in the code.

For each candidate, present what was detected and ask the user WHY that decision was made. The user may know the history, or they may not — "I don't know why this was chosen" is a valid answer that gets recorded.

Draft ADRs for confirmed candidates. See `references/example-adr.md` for the format. Present each ADR for review, then store approved ADRs in `.teamcraft/adrs/` (one file per decision).
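The authoritative format lives in `references/example-adr.md`. As rough orientation only, a drafted ADR might take a shape like this; the decision and every detail below are invented:

```markdown
# ADR: Use PostgreSQL advisory locks for job scheduling (hypothetical)

- Status: Accepted (reconstructed from code)
- Context: Detected in `scheduler/`: jobs coordinate through advisory locks
  rather than a dedicated queue service.
- Decision: Coordinate scheduled jobs with advisory locks.
- Rationale (from the user): avoided introducing new infrastructure.
  Record "unknown" when nobody remembers the history.
- Consequences: Scheduling is coupled to a single Postgres instance.
```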
### Convention Comparison

Ask the user where team conventions live — Google Drive, local files, or "I don't have any." If Drive, search for and load the convention documents. If local, ask the user to point at them.

Compare what the convention documents say against what the codebase actually does. Produce a read-only delta report:

```
| Area | Convention Says | Repo Does | Notes |
|------|----------------|-----------|-------|
```

No judgments. No suggestions. Just the delta. The user decides what matters and what to do about it.

### CLAUDE.md Audit

If CLAUDE.md exists, evaluate it:

- Does it contain things Claude can auto-discover from the codebase? (These should be removed)
- Is it missing things Claude would get wrong? (These should be added)
- Does it reference `.teamcraft/project.md` via `@.teamcraft/project.md`?
- Is it lean and focused, or bloated with discoverable information?

If CLAUDE.md doesn't exist, offer to create one following best practices — lean, focused on non-obvious gotchas only. (A hypothetical skeleton is sketched at the end of this file.)

Present proposed changes (or the draft) for review. Write only after approval.

## Size Awareness

Adapt analysis depth to the codebase:

- **Small** (~2k LOC, single-purpose library): Quick pass may be everything needed. Don't force an 8-section architecture doc on a 3-file library.
- **Medium** (5–30k LOC, standard application): Full analysis is appropriate. Architecture mapping and component rules add real value.
- **Large** (30k+ LOC, monorepo or complex system): Progressive analysis. Start with the quick pass, let the user choose which areas to go deep on. Don't try to map everything at once.

## Done

Summarize what was produced and where each artifact was saved. If the analysis surfaced areas that weren't covered (because the user scoped them out or because they require separate work), mention them briefly as future considerations — not demands.
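## Appendix: CLAUDE.md Skeleton

Referenced from the CLAUDE.md Audit section above. A minimal, hypothetical sketch of the lean shape the audit aims for; the gotcha below is invented, and a real file contains only what this specific codebase needs:

```markdown
# CLAUDE.md

@.teamcraft/project.md

## Gotchas

<!-- Only non-obvious facts Claude would get wrong. Anything discoverable
     from the codebase itself stays out. -->
- The dev server must be restarted after editing `.env`; hot reload does
  not pick up the change. (hypothetical example)
```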