Productivity

prompt-improver

Enriches vague prompts by performing codebase research and asking targeted questions to clarify user intent before execution.

Introduction

The prompt-improver skill acts as an intelligent intermediary that intercepts vague user requests in Claude Code. Designed to minimize back-and-forth, it automatically detects ambiguity and triggers a systematic four-phase enrichment process: research, question generation, clarification, and execution. By analyzing conversation history, exploring the codebase with Grep, Glob, and Task/Explore, and searching documentation, the skill grounds subsequent actions in the specific context of your project. It is ideal for developers who want to stop wasting cycles on ill-defined tasks like 'fix the bug' or 'add tests' by transforming them into well-defined, actionable requests.

  • Automatically evaluates prompt clarity using a UserPromptSubmit hook mechanism.

  • Performs multi-source research, including conversation-history mining, codebase traversal, git log analysis, and fetching external web documentation.

  • Generates one to six grounded, multiple-choice questions via the AskUserQuestion tool to pin down user intent, specific file paths, or preferred implementation approaches.

  • Operates with zero overhead for clear prompts, ensuring only ambiguous queries require extra tokens or processing.

  • Supports manual invocation for testing prompt evaluation systems or handling complex scenarios requiring deeper context.

  • Ensure you are running Claude Code 2.0.22 or later so that the AskUserQuestion tool is available.

  • The skill assumes the hook has already flagged the prompt as vague; it focuses solely on gathering context and confirming user requirements.

  • Use the provided reference files (research-strategies.md, question-patterns.md) to fine-tune how the agent gathers data and interacts with project structures.

  • Bypass evaluation by prefixing a prompt with *, /, or #; this forces immediate execution when the automated check incorrectly flags a prompt.

  • The agent maintains a stateful flow from research findings to final execution, ensuring all chosen clarifications are integrated into the final task implementation.
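The bypass prefixes mentioned above reduce to a simple check. A minimal sketch, assuming the evaluator sees the raw prompt text (the function name is hypothetical; the prefix set is taken from this page):

```python
# Prefixes that opt a prompt out of clarity evaluation entirely.
BYPASS_PREFIXES = ("*", "/", "#")

def should_bypass(prompt: str) -> bool:
    """Execute immediately, skipping enrichment, when the user opts out."""
    return prompt.lstrip().startswith(BYPASS_PREFIXES)
```

Slash commands such as `/commit` therefore never pay the evaluation overhead, consistent with the zero-overhead behavior described above.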

Repository Stats

Stars: 1,406
Forks: 120
Open Issues: 9
Language: Python
Default Branch: main
Sync Status: Idle
Last Synced: May 3, 2026, 03:51 PM