---
name: can-llms-deobfuscate-binary
description: "Deobfuscating binary code remains a fundamental challenge in reverse engineering, as obfuscation is widely used to hinder analysis and conceal program logic. Although large languag... Activation: optimization"
---

# Can LLMs Deobfuscate Binary Code? A Systematic Analysis of Large Language Models into Pseudocode Deobfuscation

## Overview

Deobfuscating binary code remains a fundamental challenge in reverse engineering, as obfuscation is widely used to hinder analysis and conceal program logic. Although large language models (LLMs) have shown promise in recovering semantics from obfuscated binaries, a systematic evaluation of their effectiveness is still lacking. In this work, we present BinDeObfBench, the first comprehensive benchmark for assessing LLM-based binary deobfuscation across diverse transformations spanning the pre-compilation, compile-time, and post-compilation stages. Our evaluation shows that deobfuscation performance depends more on reasoning capability and domain expertise than on model scale, and that task-specific supervised fine-tuning consistently outperforms broad domain pre-training. Reasoning models maintain robustness under severe obfuscation and generalize across different instruction set architectures (ISAs) and optimization levels. In-context learning benefits standard models but yields limited gains for reasoning models. Overall, our study highlights the importance of task-specific fine-tuning and reasoning-driven strategies, and positions BinDeObfBench as a basis for future work in binary deobfuscation.

## Source Paper

- **Title**: Can LLMs Deobfuscate Binary Code?
  A Systematic Analysis of Large Language Models into Pseudocode Deobfuscation
- **Authors**: Li Hu, Xiuwei Shang, Jieke Shi, Shaoyin Cheng, Junqi Zhang, Gangyang Li, Zhou Yang, Weiming Zhang, David Lo
- **arXiv**: 2604.08083v1
- **Published**: 2026-04-09
- **Categories**: cs.SE
- **Primary Category**: cs.SE

## Core Concepts

This paper studies LLM-based binary deobfuscation, with focus areas including:

- BinDeObfBench, a benchmark covering pre-compilation, compile-time, and post-compilation obfuscation transformations
- The roles of reasoning capability and domain expertise relative to model scale
- Task-specific supervised fine-tuning versus broad domain pre-training
- Generalization across ISAs and optimization levels, and the effect of in-context learning

## Technical Contributions

1. **Benchmark**: BinDeObfBench, the first comprehensive benchmark for assessing LLM-based binary deobfuscation
2. **Systematic Evaluation**: Analysis of standard and reasoning models across obfuscation stages, ISAs, and optimization levels
3. **Findings**: Task-specific fine-tuning and reasoning-driven strategies matter more than model scale or broad pre-training

## Applications

- Reverse engineering workflows where obfuscation conceals program logic
- Recovering readable pseudocode from obfuscated binaries
- Evaluating and fine-tuning LLMs for binary analysis tasks

## Implementation Guidelines

1. Review the source paper for detailed methodology
2. Understand the theoretical framework
3. Implement the proposed approach
4. Validate with appropriate experiments

## References

- Li Hu et al. (2026). "Can LLMs Deobfuscate Binary Code? A Systematic Analysis of Large Language Models into Pseudocode Deobfuscation." arXiv:2604.08083v1. https://arxiv.org/abs/2604.08083v1

## Activation Keywords

optimization
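To make the task concrete, the evaluation loop the paper describes — score a model's recovered pseudocode against a clean reference — can be sketched as follows. This is a minimal illustration, not BinDeObfBench itself: the obfuscated snippet (a toy opaque-predicate transformation), the reference function, and the character-level similarity metric are all assumptions chosen for brevity; the actual benchmark uses its own transformations and metrics.

```python
import difflib

# Clean reference pseudocode the model should ideally recover.
REFERENCE = """\
int sum_to(int n) {
    int s = 0;
    for (int i = 1; i <= n; i++)
        s += i;
    return s;
}
"""

# Toy obfuscated variant: names are stripped, the loop is rewritten,
# and an opaque predicate ((v*v) % 2 == v % 2, always true) guards a
# dead branch -- the kind of pre-compilation transformation a
# deobfuscator is expected to strip away.
OBFUSCATED = """\
int f1(int a1) {
    int v1 = 0, v2 = 1;
    while (v2 <= a1) {
        if ((v2 * v2) % 2 == v2 % 2)
            v1 += v2;
        else
            v1 -= 0xDEAD;  /* dead branch, never taken */
        v2++;
    }
    return v1;
}
"""

def similarity(candidate: str, reference: str) -> float:
    """Character-level similarity in [0, 1] via difflib's ratio()."""
    return difflib.SequenceMatcher(None, candidate, reference).ratio()

if __name__ == "__main__":
    # A model output identical to the reference scores 1.0; the raw
    # obfuscated code scores noticeably lower.
    print(f"identical : {similarity(REFERENCE, REFERENCE):.2f}")
    print(f"obfuscated: {similarity(OBFUSCATED, REFERENCE):.2f}")
```

A real evaluation would replace the string metric with something semantics-aware (e.g. compiling and testing both versions, or symbol-agnostic AST comparison), since two functionally identical programs can score low on raw text similarity.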