
recursive-decomposition

Handles large-scale tasks by automatically breaking them down into manageable, recursive sub-tasks to overcome context window limits and improve reasoning accuracy on large codebases and document sets.

Introduction

The Recursive Decomposition skill enables your agent to perform complex, long-context analysis that would typically exceed memory limits or degrade performance through context rot. Based on the RLM (Recursive Language Models) research framework, the skill systematically decomposes massive inputs into smaller, independent segments. It is designed for software engineers, data analysts, and researchers working with extensive code repositories, large technical documentation, or multi-document aggregation tasks.

Instead of forcing all data into a single prompt, the agent treats input sources as variables in its environment, applying programmatic filtering and strategic chunking to process information efficiently. By launching parallel sub-agents for specific segments and verifying synthesized outputs against the original source material, the skill ensures high-fidelity reasoning even when processing hundreds of files or deep document archives. The agent acts autonomously, managing its own search space and execution plan.
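The decompose-process-synthesize loop described above can be sketched in a few lines of Python. Everything here is illustrative rather than the skill's actual API: the function names, the 4-characters-per-token heuristic, and the fixed line-window chunking are all assumptions standing in for the agent's real tooling.

```python
def estimate_tokens(text: str) -> int:
    # Rough heuristic: ~4 characters per token (illustrative only).
    return len(text) // 4

def chunk_by_lines(text: str, lines_per_chunk: int = 500) -> list[str]:
    # Fixed-size line windows; structural or semantic chunking would go here.
    lines = text.splitlines()
    return ["\n".join(lines[i:i + lines_per_chunk])
            for i in range(0, len(lines), lines_per_chunk)]

def recursive_analyze(text, analyze, synthesize, limit_tokens=50_000):
    """Process text directly if it fits the budget, else decompose,
    recurse on each independent segment, and synthesize the partials."""
    if estimate_tokens(text) <= limit_tokens:
        return analyze(text)                  # base case: direct pass
    chunks = chunk_by_lines(text)
    if len(chunks) <= 1:
        return analyze(text)                  # cannot split further
    partials = [recursive_analyze(c, analyze, synthesize, limit_tokens)
                for c in chunks]
    return synthesize(partials)               # combine sub-results
```

In the real skill, `analyze` would be a sub-agent call and `synthesize` an aggregation-plus-verification pass; here they are plain callables so the control flow is visible.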

  • Automatically identifies the optimal processing strategy based on token count and task complexity (direct vs. recursive).

  • Implements multi-stage filtering using glob and grep to narrow the search space before deep analysis.

  • Supports intelligent chunking by line count, file structure, or semantic logic to isolate relevant content.

  • Enables parallel sub-agent invocation for independent document or code segments, significantly reducing execution time.

  • Includes built-in answer verification cycles to mitigate hallucination and ensure consistency in synthesized results.

  • Features robust large-file protocols that prevent context overflow by utilizing line-range reading, metadata checks, and head/tail viewing.

  • Activate by providing tasks that involve 10+ files, 50k+ tokens, or complex cross-file navigation.

  • Use for codebase-wide pattern analysis, massive refactoring planning, or multi-document QA sessions.

  • Follow the provided implementation patterns for effective codebase analysis and feature aggregation.

  • Always identify search space size first via grep or glob to optimize performance.

  • Note that smaller, localized tasks (under 30k tokens) are handled more efficiently by standard processing than by full recursive decomposition.
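The activation thresholds in the list above (10+ files or 50k+ tokens trigger recursion; under 30k tokens stays direct) can be read as a small dispatcher. The numbers mirror the guidance here, but the function itself and the "chunked" middle tier are hypothetical.

```python
def choose_strategy(total_tokens: int, file_count: int) -> str:
    """Pick a processing strategy from rough size heuristics.
    Thresholds mirror the guidance above; the function is illustrative."""
    if total_tokens < 30_000 and file_count < 10:
        return "direct"          # standard single-pass processing
    if total_tokens >= 50_000 or file_count >= 10:
        return "recursive"       # full recursive decomposition
    return "chunked"             # middle ground: chunk, but no recursion
```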
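The multi-stage glob-then-grep filtering mentioned above amounts to: select candidate files cheaply by name pattern, then keep only those whose contents match a regex, and only afterwards spend tokens on deep analysis. A minimal sketch, assuming plain `pathlib` and `re` rather than the agent's built-in search tools:

```python
import re
from pathlib import Path

def narrow_search_space(root: str, glob_pattern: str, regex: str) -> list[Path]:
    """Two-stage filter: glob selects candidate files by name, then a
    grep-style regex scan keeps only files whose content matches."""
    pattern = re.compile(regex)
    matches = []
    for path in Path(root).rglob(glob_pattern):
        if path.is_file():
            try:
                if pattern.search(path.read_text(errors="ignore")):
                    matches.append(path)
            except OSError:
                continue  # skip unreadable files
    return sorted(matches)
```

Running the size check first (`len(narrow_search_space(...))`) is exactly the "identify search space size via grep or glob" step recommended above.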
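Because the segments are independent, the parallel sub-agent invocation from the feature list reduces to a fan-out/fan-in map. A sketch using the standard library's `concurrent.futures`; `run_subagent` is a stand-in for whatever actually dispatches a segment to a sub-agent:

```python
from concurrent.futures import ThreadPoolExecutor

def map_segments(segments, run_subagent, max_workers: int = 4) -> list:
    """Fan independent segments out to sub-agents concurrently and
    return their results in the original segment order."""
    with ThreadPoolExecutor(max_workers=max_workers) as pool:
        return list(pool.map(run_subagent, segments))
```

`pool.map` preserves input order, so downstream synthesis can still attribute each partial result to its source segment.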
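The built-in verification cycle can be pictured as a filter over synthesized claims: re-read the source each claim cites and drop anything the source does not support. Both helpers here (`locate_source`, `supports`) are hypothetical placeholders for the agent's own lookup and consistency checks:

```python
def keep_verified(claims, locate_source, supports) -> list:
    """Answer-verification cycle: re-check each claim against its cited
    source material and keep only the claims the source supports."""
    return [claim for claim in claims if supports(claim, locate_source(claim))]
```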
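The large-file protocol (metadata checks, line-range reading, head/tail viewing) can be sketched with streaming reads so no function ever holds a whole file in memory at once. These helpers are illustrative equivalents, not the skill's actual tools:

```python
import os
from collections import deque
from itertools import islice

def file_metadata(path: str) -> dict:
    """Cheap size check before committing to any deeper read."""
    with open(path, errors="ignore") as f:
        line_count = sum(1 for _ in f)
    return {"bytes": os.path.getsize(path), "lines": line_count}

def read_line_range(path: str, start: int, end: int) -> str:
    """Stream only lines start..end (1-indexed, inclusive)."""
    with open(path, errors="ignore") as f:
        return "".join(islice(f, start - 1, end))

def head_tail(path: str, n: int = 20) -> tuple[str, str]:
    """Preview the first and last n lines without holding the middle."""
    with open(path, errors="ignore") as f:
        head = list(islice(f, n))
        tail = deque(f, maxlen=n)   # keeps only the final n lines
    return "".join(head), "".join(tail)
```

A typical flow is `file_metadata` first, then `head_tail` to orient, then targeted `read_line_range` calls for the segments that matter, which is how context overflow is avoided on multi-megabyte files.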

Repository Stats

Stars: 31
Forks: 2
Open Issues: 0
Language: Not provided
Default Branch: main
Sync Status: Idle
Last Synced: May 3, 2026, 05:22 PM