---
name: test-coverage
description: "Test coverage analysis. Use when: checking test coverage, finding untested code, identifying gaps in test suites, scaffolding new tests, or auditing which public APIs have tests. Optionally takes a project name."
user-invocable: true
argument-hint: "[optional project or test project name]"
---
# Test Coverage Analysis
You are a C# testing specialist. Your job is to analyze test coverage, identify untested code, and help scaffold tests for gaps.
## Input
`$ARGUMENTS` is an optional project or test project name. If omitted, analyze the entire loaded workspace. If no workspace is loaded, ask for a solution path.
## Server discovery
Use **`discover_capabilities`** (category `testing` or `all`) or the MCP prompt **`review_test_coverage`**. When CI is red and failing tests are the priority, the **`test-triage`** skill is a lighter entry point.
## Connectivity precheck
Before running any `mcp__roslyn__*` tool call, probe the server once:
1. Call `mcp__roslyn__server_info` — confirm the response includes `connection.state: "ready"`.
2. If the call fails OR `connection.state` is `initializing` / `degraded` / absent, stop the skill and show the user this message:
> **Roslyn MCP is not connected.** This skill requires an active Roslyn MCP server. Run `mcp__roslyn__server_heartbeat` to confirm connection state, then re-run this skill once the server reports `connection.state: "ready"`. See the [Connection-state signals reference](https://github.com/darylmcd/Roslyn-Backed-MCP/blob/main/ai_docs/runtime.md#connection-state-signals) for the canonical probes (`server_info` / `server_heartbeat`).
3. If `connection.state` is `"ready"`, proceed with the rest of the workflow. The `server_info` call above also satisfies any server-version / capability-discovery needs — do not repeat it.
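For reference, a probe response that passes the precheck includes at least the field below (shown as JSON; any other fields vary by server version, so defer to the linked reference for the canonical schema):
```
{
  "connection": { "state": "ready" }
}
```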
## Workflow
### Step 1: Discover Tests
1. Ensure a workspace is loaded.
2. Call `test_discover` to find all test cases in the solution.
3. Summarize: total test count, test projects, test frameworks detected.
### Step 2: Run Tests with Coverage
1. Call `test_coverage` with the optional project filter.
2. If `coverlet.collector` is not installed, note this and fall back to `test_run` for pass/fail only (the usual fix is shown after this list).
3. Parse coverage results: line coverage, branch coverage, per-module and per-class breakdown.
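When that fallback triggers, the usual remedy is adding the collector package to each test project and re-running coverage. A minimal sketch, assuming a test project at a hypothetical path:
```
dotnet add tests/MyApp.Tests/MyApp.Tests.csproj package coverlet.collector
```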
### Step 3: Identify Coverage Gaps
1. From coverage data, find classes and methods with low coverage (< 50% line coverage).
2. Call `document_symbols` on key source files (non-test) to list declared symbols.
3. For key public APIs, call `test_related` to find associated tests.
4. Identify public types/methods with zero related tests.
### Step 4: Analyze Untested Code
For the top untested types:
1. Call `symbol_info` to understand the type's purpose.
2. Call `callers_callees` to see how it's used.
3. Assess testability: does it have dependencies that need mocking? Is it a pure function?
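To make the testability question concrete, here is a minimal C# sketch with hypothetical types (not drawn from any analyzed workspace): the pure function can be tested with direct input/output assertions, while the service needs a test double for its dependency first.
```
// Pure function: no state, no dependencies -- a test just asserts
// the returned value for a given input.
public static class PriceMath
{
    public static decimal ApplyDiscount(decimal price, decimal rate) =>
        price - (price * rate);
}

// Dependency-laden type: a test must supply a fake or mock
// IPaymentGateway before it can exercise Charge().
public interface IPaymentGateway
{
    bool Submit(decimal amount);
}

public class CheckoutService
{
    private readonly IPaymentGateway _gateway;

    public CheckoutService(IPaymentGateway gateway) => _gateway = gateway;

    public bool Charge(decimal amount) => _gateway.Submit(amount);
}
```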
### Step 5: Rank & Scaffold Tests
Rank the untested public APIs from Steps 3-4 using this priority rubric:
| Tier | Signal | Weight |
|------|--------|--------|
| P0 — critical gap | public method with cyclomatic complexity >= 10 AND zero related tests | 100 |
| P1 — broadly used | public method called by >= 3 other methods AND zero related tests | 50 |
| P2 — orphan type | public type with no related tests whose enclosing project has other tests | 25 |
| P3 — remaining | other untested public APIs | 5 |
Present the top 5-10 as a ranked list with tier label, type/method name, file:line, and the signal that landed it there.
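For example (hypothetical entries, shown only to illustrate the shape):
```
1. P0 (critical gap)  OrderProcessor.Process   src/Orders/OrderProcessor.cs:42      complexity 14, zero related tests
2. P1 (broadly used)  InvoiceCalculator.Total  src/Billing/InvoiceCalculator.cs:88  5 callers, zero related tests
3. P2 (orphan type)   CustomerDto              src/Models/CustomerDto.cs:7          no related tests in an otherwise-tested project
```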
Then prompt the user: "Scaffold tests for the top N?" (default N=5 if the user says yes without a number). If the user agrees OR the user invoked this skill with `--scaffold-top=N`:
1. For each target, call `scaffold_test_preview` with:
- `testProjectName`: the appropriate test project
- `targetTypeName`: the type to test
- `targetMethodName`: optionally a specific method
- `referenceTestFile`: a sibling test file when the pattern is inferable (v1.22+)
2. If `scaffold_test_batch_preview` is available and N > 1, prefer batch mode for a single preview token.
3. Show the preview(s) to the user.
4. After confirmation, call `scaffold_test_apply` for each token.
5. Call `compile_check` to verify the scaffolded tests compile.
6. Note: scaffolded tests are stubs — they need real assertions before they add value.
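As a point of reference, a scaffolded stub usually looks something like the sketch below. This is illustrative only (xUnit assumed, hypothetical names); the exact output depends on the server version and the detected test framework.
```
using System;
using Xunit;

public class OrderProcessorTests
{
    [Fact]
    public void Process_ReturnsExpectedResult()
    {
        // Scaffold only: no arrange/act/assert yet. Fail loudly so the
        // stub can't pass silently before real assertions are written.
        throw new NotImplementedException("TODO: arrange, act, assert");
    }
}
```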
### Step 6: Related Test Lookup
If the user provides changed files:
1. Call `test_related_files` with the file paths.
2. Return the filter expression for running only affected tests (example below).
3. Suggest: `test_run` with the filter to validate changes.
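Assuming the server emits standard `dotnet test` filter syntax (`~` for contains, `|` for OR), the returned expression might look like this, with hypothetical test class names:
```
FullyQualifiedName~OrderProcessorTests|FullyQualifiedName~InvoiceCalculatorTests
```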
## Output Format
```
## Test Coverage Report: {solution-name}
### Summary
- Test Projects: {count}
- Total Tests: {count}
- Overall Line Coverage: {percent}%
- Overall Branch Coverage: {percent}%
### Coverage by Module
{table: project, line%, branch%, uncovered lines}
### Coverage by Class (lowest coverage)
{table: class, file, line%, branch%, key untested methods}
### Untested Public APIs
{table: type/method, file:line, complexity, suggestion}
### Test Scaffolding Opportunities
{list of types where scaffold_test_preview can generate stubs}
### Recommendations
1. {highest-impact coverage gap}
2. {next priority}
...
```
## Guidelines
- Coverage percentages are guides, not goals. 100% coverage doesn't mean bug-free code.
- Focus on high-value coverage gaps: complex logic, error handling, edge cases.
- Note when a low-coverage class is a DTO/model that doesn't need behavioral tests (see the example after this list).
- Scaffolded tests are starting points — always note they need real assertions.
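To illustrate the DTO point above (hypothetical type): a plain data carrier has no branching logic, so its low line coverage is expected rather than a gap worth closing.
```
// Pure data shape: nothing to assert beyond what the compiler already guarantees.
public record CustomerDto(string Name, string Email);
```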