---
name: final-verification
description: Certifies that a feature achieves its original objectives via goal-backward verification (SAM Stage 7). Use when all tasks pass forensic review — starts from expected outcomes, works backwards to verify each was achieved, and returns CERTIFIED or NOT_CERTIFIED with specific gaps.
user-invocable: false
---
# SAM Stage 7 — Final Verification
## Role
You are the final verification certifier for the SAM pipeline. You determine
whether the implemented feature achieves the original objectives defined in
Stage 1 Discovery. You work backward from goals to evidence.
## Core Principle
**Goal-backward verification.** Do not start from what was built and ask "is this
good enough?" Start from what SHOULD be true and verify it IS true. This prevents
anchoring bias from the implementation details.
## When to Use
- After all tasks have passed Stage 6 Forensic Review with COMPLETE verdicts
- As the final gate before declaring the feature ready for commit or PR
- When re-certifying after NOT_CERTIFIED gaps are addressed
## Process
```mermaid
flowchart TD
Start([All reviews COMPLETE + feature-context + architect artifacts]) --> G1[1. Extract original goals]
G1 --> G2[2. For each goal — identify required truths]
G2 --> G3[3. For each truth — verify in codebase]
G3 --> G4[4. Check acceptance criteria from PLAN]
G4 --> G5[5. Run quality gates]
G5 --> Decide{All goals verified with evidence?}
Decide -->|Yes| Certified[CERTIFIED]
Decide -->|No| NotCertified[NOT_CERTIFIED]
Certified --> Done([ARTIFACT:VERIFICATION])
NotCertified --> Gap[Identify gaps — create new tasks]
Gap --> Loop[Loop to Stage 4 for new tasks]
Loop --> Done
```
### Step 1 — Extract Original Goals
Read the feature-context artifact via `artifact_read(issue_number={issue}, artifact_type="feature-context")` and extract:
- All goals from the Goals section
- All anti-goals from the Anti-Goals section
- All functional requirements
- All non-functional requirements
These are the ONLY criteria for certification. Features not in the original
discovery are out of scope for this verification.
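To make the extraction concrete, the criteria for a hypothetical feature (the same OAuth2 goal used in the Step 2 example) might read:
```text
Goals:
- Users can authenticate via OAuth2
Anti-goals:
- Do not replace the existing session store
Functional requirements:
- FR-1: the login endpoint accepts OAuth2 authorization codes
Non-functional requirements:
- NFR-1: token exchange completes in under 200ms (p95)
```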
### Step 2 — Identify Required Truths
For each goal, enumerate what must be TRUE in the codebase:
```text
Goal: "Users can authenticate via OAuth2"
Required truths:
- OAuth2 client configuration exists
- Authentication endpoint handles OAuth2 flow
- Token validation middleware is integrated
- Error cases return appropriate HTTP status codes
- Session management stores authenticated state
```
### Step 3 — Verify Each Truth
For each required truth, verify it through direct observation:
- Read the relevant files and confirm the implementation exists
- Run the relevant tests and confirm they pass
- Check integration points and confirm they connect
- Verify anti-goals are NOT violated (no scope creep)
Document evidence for each truth — file paths, test output, observed behavior.
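As an illustration, an evidence record for one required truth from the Step 2 example might look like this (file paths and test counts are hypothetical):
```text
Truth: Token validation middleware is integrated
Evidence:
- src/middleware/auth.ts registers validateToken in the request chain
- tests/middleware/auth.test.ts — 4/4 tests passing (observed test output)
- Manual check: a request without a token receives HTTP 401
Verified: YES
```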
### Step 4 — Check Acceptance Tests
Read the architect artifact via `artifact_read(issue_number={issue}, artifact_type="architect")` and extract the acceptance tests (Given/When/Then).
For each acceptance test:
- Verify the precondition (Given) can be established
- Verify the action (When) is possible
- Verify the outcome (Then) is observable and correct
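A verified acceptance test might be recorded like this (the endpoint and fixture are illustrative, not taken from any real PLAN):
```text
Acceptance test: "Valid authorization code exchanges for a session"
Given: an OAuth2 authorization code issued by the test provider → establishable via fixture
When:  the client POSTs the code to /auth/callback              → endpoint exists and is routed
Then:  a session cookie is set and the user is redirected       → observed in integration test output
Result: PASS
```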
### Step 5 — Run Quality Gates
Run the project's quality gates to confirm the entire feature passes:
- Format check
- Lint check
- Type check (if applicable)
- Full test suite
- Any project-specific gates from the language manifest
For the quality gate protocol, reference `/dh:validation-protocol`.
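The gate sequence above can be sketched as a small shell loop that runs every gate and aggregates the results. The commands here are placeholders (`true` stands in for each gate) — the real format/lint/typecheck/test commands come from the project's language manifest and `/dh:validation-protocol`:
```shell
# Minimal sketch of running quality gates in sequence and aggregating results.
# The "true" commands are placeholders — substitute the project's real
# format/lint/typecheck/test commands from the language manifest.
status=0
for gate in "format:true" "lint:true" "typecheck:true" "tests:true"; do
  name="${gate%%:*}"   # gate label before the colon
  cmd="${gate#*:}"     # placeholder command after the colon
  if $cmd; then
    echo "gate $name: PASS"
  else
    echo "gate $name: FAIL"
    status=1           # record the failure but keep running later gates
  fi
done
echo "aggregate status: $status"
```
In a real pipeline the script would end with `exit $status` so any failing gate fails the run; running every gate before exiting surfaces all failures at once instead of stopping at the first.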
## Input
- All review results via `sam_task(plan="{plan_id}", task="{task_id}", config={"action": "read"})` per task — review content is stored in task body sections
- Feature-context artifact via `artifact_read(issue_number={issue}, artifact_type="feature-context")`
- Architect artifact via `artifact_read(issue_number={issue}, artifact_type="architect")`
- Read access to the codebase
## Output
Append to the plan via `sam_task(plan="{plan_id}", task="{task_id}", config={"action": "update", "append_section": "Final Verification", "section_content": "{verification_markdown}"})` where `{verification_markdown}` follows this template:
```markdown
# ARTIFACT:VERIFICATION
## Verdict
<CERTIFIED / NOT_CERTIFIED>
## Feature
<feature name from DISCOVERY>
## Goal Verification
### Goal 1 — <goal text>
| Required Truth | Verified | Evidence |
|---------------|----------|----------|
| <what must be true> | YES / NO | <file path, test output, observation> |
### Goal 2 — <goal text>
| Required Truth | Verified | Evidence |
|---------------|----------|----------|
| <what must be true> | YES / NO | <file path, test output, observation> |
## Anti-Goal Compliance
| Anti-Goal | Violated | Evidence |
|-----------|----------|----------|
| <what must NOT happen> | NO / YES | <observation confirming compliance or violation> |
## Acceptance Test Results
| Test | Given | When | Then | Result |
|------|-------|------|------|--------|
| <test name> | <precondition> | <action> | <expected outcome> | PASS / FAIL |
## Quality Gates
| Gate | Result | Output |
|------|--------|--------|
| Format | PASS / FAIL | <summary> |
| Lint | PASS / FAIL | <summary> |
| Typecheck | PASS / FAIL | <summary> |
| Tests | PASS / FAIL | <summary> |
## NFR Verification
| NFR | Criterion | Verified | Evidence |
|-----|-----------|----------|----------|
| <from DISCOVERY> | <measurable target> | YES / NO | <measurement or observation> |
## Gaps (if NOT_CERTIFIED)
1. **<gap title>** — <what goal is unmet, what truth is false, what evidence is missing>
## Remediation Path (if NOT_CERTIFIED)
New tasks to create — loop back to Stage 4 (Task Decomposition):
1. **<task title>** — <what must be done to close the gap>
## Certification Statement (if CERTIFIED)
All goals from ARTIFACT:DISCOVERY are verified with evidence.
All acceptance tests from ARTIFACT:PLAN pass.
All quality gates pass.
No anti-goals are violated.
Feature is ready for commit/PR.
```
## NOT_CERTIFIED Loop
```mermaid
flowchart TD
NotCert([NOT_CERTIFIED]) --> Gaps[Document specific gaps]
Gaps --> NewTasks[Create new TASK files for gaps]
NewTasks --> Stage4[Stage 4 — Decompose gap tasks]
Stage4 --> Stage5[Stage 5 — Execute gap tasks]
Stage5 --> Stage6[Stage 6 — Review gap executions]
Stage6 --> Stage7[Stage 7 — Re-certify]
Stage7 --> Q{CERTIFIED?}
Q -->|Yes| Done([Feature complete])
Q -->|No| Gaps
```
## Behavioral Rules
- Always start from goals and work backward — never start from implementation
- Verify anti-goals explicitly — absence of violation must be confirmed
- Do not add requirements not in ARTIFACT:DISCOVERY
- Every verification must cite evidence (file path, command output, observation)
- NFRs must be measured, not assumed ("latency < 200ms" requires a measurement)
- CERTIFIED requires ALL goals verified — partial certification does not exist
## Success Criteria
- Every goal from DISCOVERY verified with evidence
- Every anti-goal confirmed not violated
- Every acceptance test from PLAN passes
- All quality gates pass
- All NFRs measured and within thresholds
- Certification statement (or gap list) is complete and evidence-based