Free SKILL.md scraped from GitHub. Clone the repo or copy the file directly into your Claude Code skills directory.
```bash
npx versuz@latest install hiyenwong-ai-collection-collection-skills-dr-rtl-autonomous-agentic-rtl-optimization-thr
```

Or clone and copy manually:

```bash
git clone https://github.com/hiyenwong/ai_collection.git
cp ai_collection/SKILL.MD ~/.claude/skills/hiyenwong-ai-collection-collection-skills-dr-rtl-autonomous-agentic-rtl-optimization-thr/SKILL.md
```

---
name: dr.rtl-autonomous-agentic-rtl-optimization-through
description: 'Research paper: Dr. RTL: Autonomous Agentic RTL Optimization through Tool-Grounded Self-Improvement'
metadata:
  source: arXiv
  arxiv_id: 2604.14989
  published: 2026-04-16
  utility_score: 1.0
  keywords: multi-agent, agentic, tool, tools, evaluation, autonomous
---

# Dr. RTL: Autonomous Agentic RTL Optimization through Tool-Grounded Self-Improvement

**arXiv ID:** 2604.14989
**Published:** 2026-04-16
**Utility Score:** 1.0
**URL:** http://arxiv.org/abs/2604.14989

## Authors

Wenji Fang, Yao Lu, Shang Liu

## Categories

cs.AI, cs.AR

## Abstract

Recent advances in large language models (LLMs) have sparked growing interest in automatic RTL optimization for better performance, power, and area (PPA). However, existing methods are still far from realistic RTL optimization. Their evaluation settings are often unrealistic: they are tested on manually degraded, small-scale RTL designs and rely on weak open-source tools. Their optimization methods are also limited, relying on coarse design-level feedback and simple pre-defined rewriting rules. To address these limitations, we present Dr. RTL, an agentic framework for RTL timing optimization in a realistic evaluation environment, with continual self-improvement through reusable optimization skills. We establish a realistic evaluation setting with more challenging RTL designs and an industrial EDA workflow. Within this setting, Dr. RTL performs closed-loop optimization through a multi-agent framework for critical-path analysis, parallel RTL rewriting, and tool-based evaluation.
We further introduce group-relative skill learning, which compares parallel RTL rewrites and distills the optimization experience into an interpretable skill library. Currently, this library contains 47 pattern–strategy entries for cross-design reuse to improve PPA and accelerate convergence, and it can continue evolving over time. Evaluated on 20 real-world RTL designs, Dr. RTL achieves average WNS/TNS improvements of 21%/17% with a 6% area reduction over the industry-leading commercial synthesis tool.

## Matched Keywords

multi-agent, agentic, tool, tools, evaluation, autonomous

## Relevance to AI Agents

This paper is highly relevant to AI agent systems research, with a focus on:

- multi-agent, agentic, tool, tools, evaluation

## Quick Reference

```bash
# View paper
open http://arxiv.org/abs/2604.14989

# Download PDF
open http://arxiv.org/pdf/2604.14989.pdf
```

---

*Auto-generated from arXiv on 2026-04-17*

## Activation Keywords

- "dr.rtl-autonomous-agentic-rtl-optimization-through"
- "dr.rtl autonomous agentic rtl optimization through"
- "use dr.rtl autonomous agentic rtl optimization through"
- "dr.rtl autonomous agentic rtl optimization through help"
- "dr.rtl autonomous agentic rtl optimization through tool"

## Tools Used

- `Read` - Read existing files and documentation
- `Write` - Create new files and documentation
- `Bash` - Execute commands when needed

## Instructions for Agents

1. Identify the user's intent and specific requirements
2. Gather necessary context from files or user input
3. Execute appropriate actions using available tools
4. Provide clear results and suggest next steps

## Examples

### Basic usage

```
User: "Help me with dr.rtl autonomous agentic rtl optimization through"
→ Understand requirements → Execute actions → Provide results
```

### Advanced usage

```
User: "I need detailed dr.rtl autonomous agentic rtl optimization through assistance"
→ Clarify scope → Provide comprehensive solution → Follow up
```
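### Sketch: group-relative skill learning

The abstract describes group-relative skill learning: parallel RTL rewrites of the same critical path are scored against each other, and the rewrites that beat the group average are distilled into pattern–strategy entries in a skill library. A minimal illustrative sketch of that loop follows; all names (`Skill`, `SkillLibrary`, `wns_gain`) and numbers are hypothetical assumptions, not the paper's actual implementation:

```python
# Illustrative sketch only: compare parallel RTL rewrites against the group
# mean and keep the above-average strategies as reusable skill entries.
from dataclasses import dataclass, field


@dataclass
class Skill:
    pattern: str     # RTL structure the skill applies to (hypothetical field)
    strategy: str    # rewrite strategy that improved timing
    avg_gain: float  # how far the rewrite beat the group-mean WNS gain


@dataclass
class SkillLibrary:
    entries: list = field(default_factory=list)

    def distill(self, rewrites):
        """Group-relative update: keep strategies whose timing gain
        exceeds the mean gain of the parallel rewrite group."""
        mean = sum(r["wns_gain"] for r in rewrites) / len(rewrites)
        for r in rewrites:
            if r["wns_gain"] > mean:
                self.entries.append(
                    Skill(r["pattern"], r["strategy"], r["wns_gain"] - mean))
        return mean


lib = SkillLibrary()
# Three parallel rewrites of the same critical path (made-up scores).
parallel_rewrites = [
    {"pattern": "mux tree", "strategy": "balance tree depth", "wns_gain": 0.30},
    {"pattern": "mux tree", "strategy": "one-hot encode select", "wns_gain": 0.10},
    {"pattern": "mux tree", "strategy": "retime registers", "wns_gain": 0.05},
]
group_mean = lib.distill(parallel_rewrites)
print(f"group mean gain: {group_mean:.3f}, skills kept: {len(lib.entries)}")
```

Only the rewrite that beats the group mean is distilled into the library; on a later design, a matching `pattern` could be looked up to try the stored `strategy` first, which is how cross-design reuse would accelerate convergence.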