---
name: hl-vision-correct
description: Use when a baseline hl-paper-* or hl-popup-paper-* skill has been run on a real paper and left journal entries unwrapped because regex matching couldn't bridge the pdftotext-to-LaTeX transformation (soft hyphens rejoined, math in equation blocks, section headers wrapped, %/& escapes, \textbf/\textit markup, em-dash conversion). Runs the baseline skill then dispatches a vision-based agentic loop that closes the remaining gaps by reading the PDF page and the .tex file together.
---
# hl-vision-correct
Second-order correction skill. Wraps any of the 8 baseline highlight skills and closes the gaps that regex matching can't.
## The gap this skill exists to close
The baseline skills (`/hl-paper-*` and `/hl-popup-paper-*`) bridge two text representations of the same source:
- **Journal excerpts** come from pdftotext → raw, unescaped, soft-hyphens preserved, math mangled
- **`.tex` file prose** comes from the reproduce-basic-paper-tex pipeline → LaTeX-cleaned, soft-hyphens rejoined, math hand-transcribed, escapes applied
Regex matching between these two cousins works on ~60-80% of entries on real papers. The remaining 20-40% need vision to locate because the transformations can't be undone by string manipulation alone. This skill closes that gap.
## Arguments
- `<baseline_skill>` (required) — one of the 8 baseline skills:
- `hl-paper-content`, `hl-paper-answers`, `hl-paper-evidence`, `hl-paper-all`
- `hl-popup-paper-content`, `hl-popup-paper-answers`, `hl-popup-paper-evidence`, `hl-popup-paper-all`
- `--fresh` OR `--exists` (required) — reconstruction state:
- `--fresh`: baseline hasn't been run yet; run it first, then close gaps
- `--exists`: baseline already ran; skip it and only close gaps
If either argument is missing, interview the user. Do NOT try to detect state from the filesystem — explicit user input is more robust.
## Pipeline
### Step 0 — Validate arguments / interview
If `<baseline_skill>` is missing or not one of the 8 valid values, ask:
> Which baseline skill are we correcting? Valid options: hl-paper-content, hl-paper-answers, hl-paper-evidence, hl-paper-all, hl-popup-paper-content, hl-popup-paper-answers, hl-popup-paper-evidence, hl-popup-paper-all.
If `--fresh` / `--exists` missing, ask:
> Has the baseline skill already been run on this paper? (fresh = not yet, exists = already run)
### Step 1 — Run baseline (only if `--fresh`)
Invoke the `<baseline_skill>` via the Skill tool. Wait for it to complete. The baseline produces partial output — some entries wrapped by regex, some gaps left.
If `--exists`, skip this step.
### Step 2 — Inventory the reconstruction directory
From session context or user prompt, identify:
- `<pdf_path>` — source PDF
- `<recon_dir>` — reconstruction directory (contains per-page `.tex` files + journals)
- `<stem>` — filename prefix
- `<highlighted_dir>` — `<recon_dir>/highlighted/` (contains the partially-wrapped per-page `.tex` files)
From the baseline's contract, identify which journals are in play:
- `hl-paper-content` / `hl-popup-paper-content` → content journal only
- `hl-paper-answers` / `hl-popup-paper-answers` → answers journal only
- `hl-paper-evidence` / `hl-popup-paper-evidence` → evidence journal only
- `hl-paper-all` / `hl-popup-paper-all` → all three journals that exist
Popup vs plain is also fixed by the baseline: `hl-popup-*` → popups + highlights; `hl-*` → highlights only.
### Step 2.5 — Preprocess journals (dedup + conflict resolution)
Before detecting gaps, scan all in-scope journal entries for structural conflicts the baseline can't handle:
- **Identical-excerpt duplicates**: two or more entries (within a journal or across journals) with the same `excerpt` and same `page_pdf`. The baseline will double-wrap and corrupt — see the Mei-GANs test evidence below. These must be COMBINED into a single `\hl{}` wrap with multiple `\pdfcomment[...]{...}` stickies attached in sequence.
- **Nested excerpts** (one's excerpt is a strict substring of another's on the same page): wrap the LONGER excerpt and attach both stickies. OR, if the shorter is meaningfully contiguous and the longer has an ellipsis (`[...]`) splitting it, treat them as two non-overlapping regions with one sticky each.
- **Cross-journal category conflicts** (same excerpt appears in content + evidence): combine stickies on a single wrap. Color priority: evidence (green/pink/orange) > answer (cyan) > content (yellow) — the more specific category owns the highlight color.
- **Cross-page excerpts**: an excerpt long enough to exceed one page's prose budget (rule of thumb: >~800 chars, or explicitly contains multiple paragraphs). The `page_pdf` in the journal points to the STARTING page — the excerpt may continue onto `page+1`. When detected, the wrap-plan produces TWO items: a wrap on `page` covering the portion that lives there, and a wrap on `page+1` covering the continuation. Stickies attach only to the first wrap (the one on `page`); the continuation wrap has no stickies. If the vision subagent discovers during Step 4 that an excerpt it's asked to wrap crosses the page boundary, it should wrap the page's portion and report the spillover so a follow-up wrap can be dispatched on `page+1`.
Produce a **wrap plan** — a list of `{page, target_excerpt, wrap_color, stickies: [{color, payload}, ...]}` records. Step 3 and Step 4 operate on this plan, not on raw journal entries. The plan is the deduplicated ground truth.
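The dedup-and-precedence logic above can be sketched in a few lines. This is a minimal illustration, not the skill's actual implementation: the entry field names (`journal`, `page_pdf`, `excerpt`, `sticky_color`, `payload`) are assumptions standing in for whatever the journal loader produces, and cross-page splitting is omitted.

```python
# Sketch of the Step 2.5 wrap-plan builder: group identical excerpts on the
# same page into one wrap, let the most specific category own the wrap color.
from collections import defaultdict

PRECEDENCE = {"evidence": 3, "answer": 2, "content": 1}  # evidence > answer > content

def build_wrap_plan(entries):
    """entries: list of {journal, page_pdf, excerpt, sticky_color, payload} dicts
    (field names assumed for illustration)."""
    groups = defaultdict(list)
    for e in entries:
        # Identical-excerpt dedup key: same text on the same page = one wrap.
        groups[(e["page_pdf"], e["excerpt"])].append(e)

    plan = []
    for (page, excerpt), group in groups.items():
        # The most specific category owns the highlight color.
        owner = max(group, key=lambda e: PRECEDENCE[e["journal"]])
        plan.append({
            "page": page,
            "target_excerpt": excerpt,
            "wrap_color": owner["sticky_color"],
            "stickies": [{"color": e["sticky_color"], "payload": e["payload"]}
                         for e in group],
        })
    return plan
```

This reproduces the Mei-2018 p26 case: an answer and an evidence entry sharing one excerpt collapse to a single green wrap carrying both stickies.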
### Step 3 — Detect unwrapped, partially-wrapped, OR damaged entries
First do a **page-level damage scan** across every `.tex` file in `<highlighted_dir>`. A page is damaged if ANY of the following is true:
1. **Nested `\hl{...}` contents**: any `\hl{...}` block whose content contains `\pdfcomment`, `\sethlcolor`, another `\hl{`, `\begin{`, `\end{`, `\subsection`, or `\section`. (Use balanced-brace matching, not regex with greedy `.*`.) This is the catastrophic `apply_hl.py` failure we saw on Mei-GANs.
2. **`\hl{` inside an open LaTeX command argument**: the wrap started between `<cmd>{` and its matching `}` for any of `\subsection*{`, `\section*{`, `\textbf{`, `\textit{`, `\emph{`, `\textsc{`. Check by scanning backward from each `\hl{` position for an unclosed command-brace. On Mei-2018 p26 the baseline produced `\textbf{Section 3.\sethlcolor{}\hl{1}, we solved...` — `\hl` started INSIDE `\textbf{}`. Rendering is undefined (soul inside `\textbf{}` has known issues).
3. **Orphaned `\pdfcomment` with no adjacent `\hl{}`** (popup mode only): a `\pdfcomment[...]{...}` on a line where no `\hl{` appears within the next ~200 characters. Means the sticky helper's wrap-locating phase failed for that entry.
4. **Wrap-color mismatch against wrap plan**: the wrap exists and is structurally clean, but its `\sethlcolor{hl<color>}` doesn't match the Step 2.5 wrap plan's expected `wrap_color`. On Mei-2018 p26 the baseline wrapped answer#1's excerpt in cyan (its own journal color), but the plan says green because evidence#1 shares the excerpt and evidence wins precedence. Always reset for color mismatches — you can't in-place re-color a wrap that also needs its sticky stack rebuilt.
5. **Sticky misattachment (popup mode only)**: a `\pdfcomment[color=X]` preceding an `\hl{}` whose wrap-plan says it should have a sticky of color Y, not X. Happens when `add_popup_stickies.py` anchors a sticky to a wrap that belongs to a different journal (because excerpts were identical but wrapped by the "wrong" journal pass first).
If any page is flagged as damaged → go to Step 3.5.
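Damage condition 1 hinges on balanced-brace matching rather than greedy regex. A minimal sketch of that check (the function names and the exact forbidden-token list are illustrative; conditions 2–5 need additional scans not shown here):

```python
# Detect nested/contaminated \hl{...} bodies by brace counting.
FORBIDDEN = (r"\pdfcomment", r"\sethlcolor", r"\hl{", r"\begin{", r"\end{",
             r"\subsection", r"\section")

def hl_spans(tex):
    """Yield (start, end) offsets of each \\hl{...} body, via brace counting."""
    i = 0
    while True:
        i = tex.find(r"\hl{", i)
        if i == -1:
            return
        depth, j = 1, i + len(r"\hl{")
        while j < len(tex) and depth:
            if tex[j] == "{":
                depth += 1
            elif tex[j] == "}":
                depth -= 1
            j += 1
        yield i + len(r"\hl{"), j - 1   # body excludes the outer braces
        i = j

def page_is_damaged(tex):
    return any(tok in tex[a:b] for a, b in hl_spans(tex) for tok in FORBIDDEN)
```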
Then, for each wrap-plan item from Step 2.5, do per-entry gap detection against the (possibly reset) `.tex` file:
1. Open the target page's `.tex` file in `<highlighted_dir>/<stem>_page<N>.tex`.
2. Locate every `\hl{...}` block using balanced-brace parsing.
3. Check if the planned wrap is in place:
- **First-5-words check**: are the first ~5 normalized words of the target excerpt inside some `\hl{...}` block? If not → **fully missing** gap.
- **Last-3-words check**: are the last ~3 normalized words of the excerpt inside the **SAME** `\hl{...}` block? If not → **partial-match** gap (baseline truncated at a sentence boundary — `apply_hl.py` Strategy 3 extends only to the next period).
- **Sticky-count check** (popup mode): does the wrap have the expected number of `\pdfcomment[...]{...}` prefixes? A wrap-plan item combining N stickies needs N `\pdfcomment` prefixes. If fewer → **sticky-missing** gap.
**Normalization for word-matching**: collapse whitespace; replace `--`, `–`, `—` with `-`; unescape `\%` → `%`, `\&` → `&`, `\_` → `_`, `\#` → `#`; strip `\textbf{...}` and `\textit{...}` wrappers around candidate match regions; treat `Fig.~3` as `Fig. 3`.
Also identify `status: null` entries — legitimate nulls (paper silent on the question); report but don't correct.
**Equation-boundary exception**: if a wrap-plan excerpt contains a display equation (e.g. `sigma = 2*mu*epsilon + p*I`), the `.tex` version will have `\begin{equation}...\end{equation}` at that spot. The last-3-words check will correctly fail. The right correction is to wrap ONLY the prose up to the colon preceding the equation, NOT to span the equation environment. The subagent prompt must tell the subagent this is acceptable — the "partial match" here is by design.
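The normalization rules and the first-5/last-3 word checks combine into a small classifier. A sketch under simplifying assumptions (the font-wrapper strip is a flat regex that ignores nesting, and `hl_bodies` is assumed to be the list of `\hl{...}` body strings already extracted from the page):

```python
import re

def normalize(s):
    # Strip \textbf{...}/\textit{...} wrappers (non-nested approximation).
    s = re.sub(r"\\text(?:bf|it)\{([^}]*)\}", r"\1", s)
    # Collapse all dash variants to "-".
    s = s.replace("---", "-").replace("--", "-")
    s = s.replace("\u2013", "-").replace("\u2014", "-")
    # Unescape LaTeX specials.
    for esc, raw in ((r"\%", "%"), (r"\&", "&"), (r"\_", "_"), (r"\#", "#")):
        s = s.replace(esc, raw)
    s = s.replace("~", " ")                 # Fig.~3 -> Fig. 3
    return " ".join(s.split()).lower()

def classify_gap(excerpt, hl_bodies):
    """Return None (fully wrapped), 'partial-match', or 'fully-missing'."""
    words = normalize(excerpt).split()
    head = " ".join(words[:5])
    tail = " ".join(words[-3:])
    for body in hl_bodies:
        norm = normalize(body)
        if head in norm:                    # first-5-words check
            return None if tail in norm else "partial-match"
    return "fully-missing"
```

The partial-match branch is exactly what catches the Strategy 3 truncation mode: the head lands inside a wrap but the tail doesn't.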
### Step 3.5 — Reset damaged pages from source
If Step 3 flagged any pages as damaged:
1. For each damaged page `<N>`, copy the clean source file over the damaged one:
```bash
cp <recon_dir>/<stem>_page<N>.tex <highlighted_dir>/<stem>_page<N>.tex
```
2. All wrap-plan items targeting those pages — including ones Step 3 thought were "already wrapped" — are now in the gap list. They'll be rewrapped fresh via vision.
Damage cannot be incrementally fixed. The baseline's broken output becomes tangled with any attempt to add more wraps on top. Reset is the only safe path.
### Step 4 — Dispatch vision-based correction subagent per wrap-plan item
For each wrap-plan item in the gap list, dispatch a `general-purpose` subagent. **One wrap-plan item = one subagent = one contiguous `\hl{...}` on the page (with potentially multiple stickies attached).**
**Parallel dispatch rule**: items on DIFFERENT pages can run in parallel (send multiple Agent tool calls in one message). Items on the SAME page must run SEQUENTIALLY (one subagent handles all wraps on that page, applying them in file-order, re-reading between edits) — otherwise concurrent Edits race and can corrupt the file again.
**Subagent prompt template:**
```
You are applying N highlight+sticky wrap(s) on one .tex page using vision. The baseline either damaged this page (in which case it has been reset to source) or left gap(s) regex couldn't close. Your job is to produce clean wraps.
Shared context:
Target page (PDF page): <page>
PDF: <pdf_path>
.tex file: <highlighted_dir>/<stem>_page<page>.tex
Popup mode: <true|false>
Wrap plan items (process in file-order):
<for each item in this page's wrap-plan, in file-order>
ITEM <k>:
target_excerpt (pdftotext form): "<excerpt>"
wrap_color: <yellow|cyan|green|pink|orange>
stickies (in attach order):
- {color: <c1>, payload: "User input: <esc_req_1>\\\\Claude comments: <esc_com_1>"}
- {color: <c2>, payload: "User input: <esc_req_2>\\\\Claude comments: <esc_com_2>"}
...
<end for>
Pipeline:
1. Read(pdf_path, pages="<page>") to see the actual passages in context.
2. Read the .tex file.
3. For each item k in order:
a. Identify the exact .tex substring corresponding to target_excerpt.
b. Write it to /tmp/target_p<page>_item<k>.txt.
c. For each sticky in the item's stickies list, generate a \pdfcomment[...]{<payload>} prefix (use /tmp/build_annotation.py OR construct manually — see payload-escape rules below).
d. Concatenate all sticky prefixes, then \sethlcolor{hl<wrap_color>}\hl{<exact substring>}.
e. Edit the .tex: old_string = exact substring, new_string = concatenated annotation.
f. Re-Read the .tex before the next item.
4. Return a per-item report.
Known pdftotext→.tex transformations (the passage may differ from target_excerpt by any of these):
- Soft hyphens rejoined: "stabi- lized" → "stabilized"
- Math display-wrapped: "sigma = 2*mu*epsilon + p*I" → "\begin{equation} \boldsymbol{\sigma} = 2\mu\boldsymbol{\epsilon} + p\mathbf{I} \end{equation}"
- Section headers: "2.1 Conventional Method" → "\subsection*{2.1\quad Conventional Method}"
- LaTeX escapes: "3%" → "3\%", "Refs. " → "Refs.\ ", "Fig. 3" → "Fig.~3"
- Font markup: reference titles wrapped in \textbf{} or \textit{}
- Em-dashes: "—" → "---"
- Ellipsis in excerpt: the journal may have "[...]" between two chunks — wrap only the second chunk (or both as separate items, per the wrap-plan)
Critical DO-NOT rules:
- DO NOT wrap text containing \begin{equation}, \end{equation}, \begin{align}, \end{align}, or any other environment boundary.
- DO NOT wrap INSIDE \subsection*{...}, \section*{...}, \textbf{...}, \textit{...}, or \pdfcomment[...]{...}. These are LaTeX commands/structures, not prose.
- DO NOT let a \hl{...} wrap contain another \hl, \sethlcolor, or \pdfcomment inside it.
- DO NOT retype prose from vision. The .tex substring is the source of truth for old_string.
- If an excerpt's .tex form crosses a math environment, wrap only the prose portion up to the boundary.
- If a passage cannot be safely wrapped, report "unable-to-locate" for that item and continue with the next.
Report format (≤100 words total):
item 1: wrapped | unable-to-locate: <reason>
item 2: wrapped | unable-to-locate: <reason>
...
```
Build one subagent call per page (each handling all wrap-plan items for that page). Send all page-level subagents in a single message for parallel execution.

### Step 5 — Re-merge, re-compile, open
1. Run `concatenate_pages.py`:
```bash
python3 ~/.claude/skills/reproduce-basic-paper-tex/helpers/concatenate_pages.py \
<highlighted_dir> <stem>
```
2. Inject the highlighting preamble into `<stem>_full.tex` (`concatenate_pages.py` strips these packages and color definitions during the merge):
```latex
\usepackage{soul}
\usepackage{pdfcomment} % only for popup variants
\definecolor{hlyellow}{RGB}{255,255,170}
\definecolor{hlcyan}{RGB}{170,255,255}
\definecolor{hlgreen}{RGB}{170,255,170}
\definecolor{hlpink}{RGB}{255,170,170}
\definecolor{hlorange}{RGB}{255,221,170}
```
3. Run `pdflatex -interaction=nonstopmode <stem>_full.tex` twice so page references resolve.
4. Rename to match baseline's output convention:
- `hl-paper-<cat>`: `<stem>_hl_<cat>.pdf`
- `hl-popup-paper-<cat>`: `<stem>_hl_popup_<cat>.pdf`
5. Open:
- Popup variants: `evince <stem>_hl_popup_<cat>.pdf` + print "hover only, don't click (Evince crashes on click)"
- Plain variants: `xdg-open <stem>_hl_<cat>.pdf`
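The preamble injection in item 2 can be sketched as a string edit after `\documentclass`. This assumes the merged file has a standard `\documentclass` line; the real structure of `concatenate_pages.py`'s output may differ.

```python
PREAMBLE = r"""\usepackage{soul}
\usepackage{pdfcomment}
\definecolor{hlyellow}{RGB}{255,255,170}
\definecolor{hlcyan}{RGB}{170,255,255}
\definecolor{hlgreen}{RGB}{170,255,170}
\definecolor{hlpink}{RGB}{255,170,170}
\definecolor{hlorange}{RGB}{255,221,170}"""

def inject_preamble(tex, popup_mode):
    # pdfcomment is only needed for popup variants.
    block = PREAMBLE if popup_mode else PREAMBLE.replace(
        "\\usepackage{pdfcomment}\n", "")
    lines = tex.splitlines()
    for i, line in enumerate(lines):
        if line.lstrip().startswith(r"\documentclass"):
            return "\n".join(lines[:i + 1] + [block] + lines[i + 1:])
    raise ValueError("no \\documentclass line found")
```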
### Step 6 — Report
Print a per-journal summary:
```
Journal: <content|answers|evidence>
Total selected entries: N
Already wrapped by baseline: M
Wrapped by vision correction: K
Legitimate nulls (status: null): L
Unable to locate: U ← should be 0 in a healthy run
```
If `U > 0`, list the failed entries so the user can intervene.
## Known failure modes (the transformations vision must bridge)
| pdftotext excerpt | .tex representation | Why regex fails |
|---|---|---|
| `stabi- lized finite elements` | `stabilized finite elements` | Soft-hyphen rejoin |
| `Babuska- Brezzi conditions` | `Babuska-Brezzi conditions` | Soft-hyphen + capitalization |
| `u = g on Γ_g` | `\begin{equation} \mathbf{u} = \mathbf{g} \quad \text{on} \quad \Gamma_g ... \end{equation}` | Math wrapping |
| `2.1 Conventional Method` | `\subsection*{2.1\quad Conventional Method}` | Section header wrapping |
| `3% white Gaussian noise` | `3\% white Gaussian noise` | Percent escape |
| `Refs. [51] and [52]` | `Refs.\ [51] and [52]` | Non-breaking space |
| `Mei, Y., et al. "Strain weighted..."` | `Mei, Y., et al. \textit{``Strain weighted...''}` | Font + quote markup |
| `gradient—based` | `gradient---based` | Em-dash conversion |
| two-sentence excerpt → only first sentence wrapped | `\hl{First sentence.} Second sentence.` | `apply_hl.py` Strategy 3 extends only to the next `.` — baseline reports "wrapped" but excerpt is truncated |
| two entries, identical excerpt, same page | `\sethlcolor{A}\hl{\sethlcolor{B}\hl{excerpt...` (nested) OR the second wrap swallows a preceding `\pdfcomment` block | `apply_hl.py` doesn't dedup — both wraps apply, producing nested `\hl` or cross-contamination with adjacent wraps |
| middle-phrase match inside `\subsection*{...}` | `\subsection*{2.\sethlcolor{}\hl{3\quad Experimental Data}...}` | Strategy 3's period-expansion reaches across the opening `{` of the subsection command — section commands aren't caught by the `\begin/\end` safety check |
| evidence comments mention "partially" in AI interpretation | wrap colored orange even though primary label is "supports" | `detect_evidence_color` keyword-searches the whole `comments` string — use the first-line "EVIDENTIAL RELATIONSHIP: <label>" header instead |
| popup baseline produces `\pdfcomment` but `apply_hl` already fuzz-matched a fragment the sticky locator disagrees with | `\pdfcomment` orphan (no adjacent `\hl{}`) OR `\hl{}` orphan (no preceding `\pdfcomment`) | The two helpers use different matching heuristics; under fuzzy conditions they can disagree on where to anchor |
| middle-phrase match inside `\textbf{...}` | `\textbf{Section 3.\sethlcolor{}\hl{1}, we solved...}` | Strategy 3 matches a distinctive word inside `\textbf{}` content and extends the wrap bounds into (or across) the textbf argument. Same category as subsection contamination; now also covers `\textit{}`, `\emph{}`, `\textsc{}` |
| wrap applied in the wrong color (cross-journal conflict) | answer #1 and evidence #1 share an excerpt; baseline wraps it in cyan (answer's color) but Step 2.5's wrap plan says green (evidence wins category) | `apply_hl.py` runs per journal, oblivious to cross-journal conflicts. Only the wrap-plan knows the intended color. Step 3's wrap-color check is the fix |
| sticky attached to a wrap from a different journal | cyan `\hl{}` from answers has a GREEN `\pdfcomment` attached — because evidence and answer share an excerpt, and the sticky helper anchored to the existing cyan wrap | `add_popup_stickies.py` finds the nearest `\hl{}` to the excerpt, regardless of color. When excerpts overlap across journals, stickies land on the "first-wrapped" journal's highlight |
| excerpt spans a page boundary | journal says `page_pdf: 28` but the paragraph continues on page 29. Baseline only wraps the page-28 portion; page 29 portion is left unhighlighted | The journal `page_pdf` field marks the STARTING page only. Long excerpts (>~800 chars, multi-paragraph) typically cross pages. Step 2.5 should detect and plan continuation wraps |
The agentic loop sees both representations simultaneously (PDF via vision, .tex via Read) and uses linguistic judgment to align them. Regex can't.
**Note on the partial-match mode:** this is the most insidious failure because the baseline's success count is misleading. `apply_hl.py` reports "1 highlighted" when it wrapped a sentence matching the middle phrase, even if the excerpt spanned multiple sentences. Step 3's last-3-words check is what catches this.
**Note on the damage modes:** nested-`\hl`, subsection-contamination, and sticky-orphaning can't be incrementally fixed — once they're in the file, any further edit risks compounding the mess. Step 3's page-level damage scan and Step 3.5's source-reset are the load-bearing safeguard.
## Colors and popup payloads
**Yellow** → content highlights (from `text-to-highlight-in-paper` journal)
**Cyan** → answer highlights (from `ask-question-about-paper` journal)
**Green / Pink / Orange** → evidence highlights (from `find-evidence-in-paper` journal). **Parse the first non-empty line of `comments` for the pattern `EVIDENTIAL RELATIONSHIP: <label>`**:
- `supports` → green
- `contradicts` → pink
- `partially supports` → orange
If the first line doesn't match that pattern (older journals), fall back to keyword search — but know that keyword search false-positives when AI interpretation text uses "partially", "supports", or "contradicts" outside the classification sense. The first-line header is the authoritative label.
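A sketch of that parse order, header first and keyword fallback second. The header pattern follows the format stated above; the fallback ordering (check "partially supports" before the bare keywords) is an assumption to reduce false positives:

```python
import re

COLOR = {"supports": "green", "contradicts": "pink", "partially supports": "orange"}

def evidence_color(comments):
    # Authoritative: first non-empty line "EVIDENTIAL RELATIONSHIP: <label>".
    first = next((ln for ln in comments.splitlines() if ln.strip()), "")
    m = re.match(r"EVIDENTIAL RELATIONSHIP:\s*(.+)", first.strip())
    if m:
        label = m.group(1).strip().lower()
        if label in COLOR:
            return COLOR[label]
    # Fallback for older journals: keyword search (known to false-positive).
    text = comments.lower()
    if "partially supports" in text:
        return "orange"
    if "contradicts" in text:
        return "pink"
    return "green"
```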
**Popup payload** (for popup variants only):
```
User input: <escaped request>\\\\Claude comments: <escaped comments>
```
Escape these LaTeX specials in the payload: `\\`, `{`, `}`, `%`, `$`, `&`, `#`, `_`, `^`, `~`. Convert `\n\n` → `\\\\\\\\` and `\n` → space.
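A sketch of that escaping, with two loudly-flagged assumptions: the backslash is rendered as `\textbackslash ` (the doc doesn't pin down the exact form, and `/tmp/build_annotation.py` may do it differently), and the `\n\n` conversion emits a literal `\\\\` line break. Order matters: backslashes first so later escapes aren't double-escaped, newline handling last.

```python
def escape_payload(s):
    # ASSUMPTION: backslash escape form; match build_annotation.py if it differs.
    s = s.replace("\\", r"\textbackslash ")
    for ch in "{}%$&#_^~":
        s = s.replace(ch, "\\" + ch)
    s = s.replace("\n\n", r"\\\\")   # paragraph break -> pdfcomment line break
    s = s.replace("\n", " ")         # single newline -> space
    return s
```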
## Common mistakes
| Mistake | Fix |
|---|---|
| Detecting filesystem state instead of using `--fresh`/`--exists` | Trust the user's flag; it's explicit by design. |
| Dispatching subagents serially across pages | Dispatch different-page subagents in parallel; same-page items must share ONE sequential subagent (see Step 4). |
| Letting the subagent decide the color or popup payload | Pre-compute these in Step 4 and hand them to the subagent as data. |
| Rewriting the entire `.tex` file | Use `Edit` with a single `old_string` → `new_string` swap. |
| Forgetting to inject `\usepackage{soul}` + color defs after concatenation | The concatenation helper strips them. Always re-inject in Step 5. |
| Forgetting `\usepackage{pdfcomment}` for popup variants | Popup baselines need it; plain baselines don't. |
| Clicking a sticky in Evince | It crashes. Hover only. Tell the user. |
## Real-world evidence (RED baseline)
Papers below hit gap-closing requirements after the baseline skill ran:
- **Mei-2016-Strain-Weighted** (10 pages): 11 highlights + 12 stickies, majority closed by vision after baseline's regex couldn't bridge math-block wrapping and `\subsection*{}` headers
- **Yue_Mei_MBT_LLM_Summary** (13 pages): 17 highlights + 15 stickies, gaps closed around `\textbf{}` titles and `\textit{}` reference-list entries
- **Mechanics_based_Tomography_paper_2017** (19 pages, 1 cyan entry + 1 evidence null): baseline reported "1 highlighted" but only the first sentence of a two-sentence excerpt was wrapped — the `[...,19–21]` unicode en-dash at the end of the second sentence couldn't be matched, so Strategy 3 fell back to the middle-phrase method and truncated at the next period. This is the **partial-match** failure mode — exposed only by the Step 3 last-3-words check, not by counting the baseline's output.
- **Mei-GANs-Elastography** (25 pages, 2 content + 5 evidence): baseline produced **ACTIVE DAMAGE** on 2 pages. (a) `apply_hl.py` Strategy 3 wrapped inside `\subsection*{2.3\quad Experimental Data}` creating `\subsection*{2.\sethlcolor{}\hl{3\quad Experimental Data}...`. (b) Two entries with near-identical excerpts on page 6 produced nested wraps; a second entry's wrap on page 3 swallowed a preceding `\pdfcomment` block. (c) Evidence entry 5 was colored orange because the word "partially" appeared in the AI-interpretation paragraph of `comments` even though the first-line classification was "supports". Vision correction reset both damaged pages from source and rewrapped all 7 entries cleanly (6 `\hl` + 7 stickies — content #1 and evidence #2 shared one `\hl` with 2 stickies because they had identical excerpts, a case the baseline can't handle).
- **Mei-2018-Linear-vs-Nonlinear** (33 pages, 2 content + 3 answers + 2 evidence): exposed three NEW failure modes beyond Mei-GANs. (a) **`\textbf{}` contamination on p26** — baseline's Strategy 3 wrapped `\hl{}` INSIDE `\textbf{Section 3.1}`, producing `\textbf{Section 3.\sethlcolor{}\hl{1}, we solved...}`. Like the subsection-contamination mode but with `\textbf` — revealed the damage scan needed to check against `\textbf{}`, `\textit{}`, `\emph{}`, `\textsc{}` as well. (b) **Wrong-color cross-journal wrap on p26** — answer#1 and evidence#1 had identical excerpts; baseline wrapped the excerpt in cyan (answer's color), but the wrap plan specifies green because evidence wins category precedence. The wrap was structurally clean but semantically wrong. Added wrap-color check to Step 3. (c) **Sticky misattachment** — `add_popup_stickies.py` attached evidence#1's (incorrectly orange-colored) sticky to answer#1's cyan `\hl{}`, because it finds the nearest `\hl{}` regardless of color/journal. (d) **Cross-page excerpt on p28→p29** — answer#2's 942-char excerpt straddled a page break. Journal `page_pdf: 28` marks the starting page only; the continuation on p29 needed a separate wrap. Vision correction reset p26 and applied the correct wrap plan (green combined wrap for ans#1+ev#1, yellow standalone for content#1); also dispatched a subagent for p28 that wrapped what it could and reported the spillover; a follow-up wrap was applied on p29 to complete coverage.
These were done manually prior to this skill. The skill codifies that manual pattern.
## Do-not rules
- Do NOT attempt to detect state from the filesystem. Use `--fresh`/`--exists`.
- Do NOT modify per-page `.tex` files outside `<highlighted_dir>`. Those are source.
- Do NOT re-wrap an already-wrapped excerpt (unless the wrap was damaged — then reset the page from source via Step 3.5, don't try to edit the damaged wrap in place).
- Do NOT retype prose from vision — only from pdftotext (for the journal) or from the `.tex` (for the Edit old_string).
- Do NOT run concurrent Agent subagents against the same `.tex` page — collapse same-page items into one subagent that handles them sequentially.
- Do NOT trust `apply_hl.py`'s "N highlighted" count. It counts WRAPS APPLIED, not excerpts FULLY CAPTURED. Step 3 is still required.
- Do NOT trust keyword-search color detection for evidence. Parse the first-line "EVIDENTIAL RELATIONSHIP: <label>" header.
- Do NOT click a sticky in Evince.