---
name: "novelty-check"
description: "Verify research idea novelty against recent literature. Use when user says \"查新\", \"novelty check\", \"有没有人做过\", \"check novelty\", or wants to verify a research idea is novel before implementing."
---
> Override for Codex users who want **Gemini**, not a second Codex agent, to act as the reviewer. Install this package **after** `skills/skills-codex/*`.
# Novelty Check Skill
Check whether a proposed method/idea has already been done in the literature: **$ARGUMENTS**
## Constants
- **REVIEWER_MODEL = `gemini-review`** — Gemini reviewer invoked through the local `gemini-review` MCP bridge. Set `GEMINI_REVIEW_MODEL` if you need to override the default Gemini model.
## Instructions
Given a method description, systematically verify its novelty:
### Phase A: Extract Key Claims
1. Read the user's method description
2. Identify 3-5 core technical claims that would need to be novel (a worked example follows this list):
- What is the method?
- What problem does it solve?
- What is the mechanism?
- What makes it different from obvious baselines?
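For concreteness, here is one hypothetical extraction; the method and claims below are invented purely for illustration:
```markdown
Method: contrastive pre-training for irregularly-sampled clinical time series

Claims to verify:
1. Method: a contrastive objective defined over irregularly-sampled sequences
2. Problem: label-scarce clinical prediction tasks
3. Mechanism: augmentations that preserve sampling-time structure
4. Difference from baselines: beats the obvious imputation-then-pretrain pipeline
```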
### Phase B: Multi-Source Literature Search
For EACH core claim, search using ALL available sources:
1. **Web Search** (via `WebSearch`):
- Search arXiv, Google Scholar, Semantic Scholar
- Use specific technical terms from the claim
- Try at least 3 different query formulations per claim (example queries follow this list)
- Include year filters for 2024-2026
2. **Known paper databases**: Check against:
- ICLR 2025/2026, NeurIPS 2025, ICML 2025/2026
- Recent arXiv preprints (2025-2026)
3. **Read abstracts**: For each potentially overlapping paper, WebFetch its abstract and related work section
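As an illustration, three formulations for one claim from the hypothetical example above might be (these are example search strings, not a fixed recipe):
```
"contrastive pre-training" "irregularly sampled" time series arXiv
contrastive self-supervised learning clinical time series irregular sampling 2025
time series representation learning missing timestamps benchmark 2024..2026
```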
### Phase C: Cross-Model Verification
Call REVIEWER_MODEL via `mcp__gemini-review__review_start` with high-rigor review:
```
mcp__gemini-review__review_start:
  prompt: |
    [Full novelty briefing + prior work list + specific novelty questions]
```
After the start call, immediately save the returned `jobId`, then poll `mcp__gemini-review__review_status` with a bounded `waitSeconds` until `done=true`. Treat the `response` field of the completed status payload as the reviewer output, and keep its `threadId` for any follow-up round.
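A minimal sketch of that start-then-poll pattern, in Python-style pseudocode. The tool names and payload fields (`jobId`, `waitSeconds`, `done`, `response`, `threadId`) come from this skill; `call_tool` is a hypothetical stand-in for however your runtime dispatches MCP tool calls:
```python
import time

def run_gemini_review(call_tool, prompt: str, timeout_s: int = 900) -> dict:
    """Start a gemini-review job, then poll until the reviewer finishes."""
    start = call_tool("mcp__gemini-review__review_start", {"prompt": prompt})
    job_id = start["jobId"]  # save immediately, per the instructions above
    deadline = time.time() + timeout_s
    while time.time() < deadline:
        status = call_tool(
            "mcp__gemini-review__review_status",
            {"jobId": job_id, "waitSeconds": 30},  # bounded wait per poll
        )
        if status.get("done"):
            # `response` is the reviewer output; keep `threadId` for a
            # possible follow-up round.
            return {"response": status["response"],
                    "threadId": status.get("threadId")}
    raise TimeoutError(f"gemini-review job {job_id} did not complete in time")
```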
The prompt should include (an illustrative skeleton follows this list):
- The proposed method description
- All papers found in Phase B
- Ask: "Is this method novel? What is the closest prior work? What is the delta?"
### Phase D: Novelty Report
Output a structured report:
```markdown
## Novelty Check Report
### Proposed Method
[1-2 sentence description]
### Core Claims
1. [Claim 1] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
2. [Claim 2] — Novelty: HIGH/MEDIUM/LOW — Closest: [paper]
...
### Closest Prior Work
| Paper | Year | Venue | Overlap | Key Difference |
|-------|------|-------|---------|----------------|
### Overall Novelty Assessment
- Score: X/10
- Recommendation: PROCEED / PROCEED WITH CAUTION / ABANDON
- Key differentiator: [what makes this unique, if anything]
- Risk: [what a reviewer would cite as prior work]
### Suggested Positioning
[How to frame the contribution to maximize novelty perception]
```
### Important Rules
- Be BRUTALLY honest — false novelty claims waste months of research time
- "Applying X to Y" is NOT novel unless the application reveals surprising insights
- Check both the method AND the experimental setting for novelty
- If the method is not novel but the FINDING would be, say so explicitly
- Always check the most recent 6 months of arXiv — the field moves fast