ask-questions-if-underspecified
Minimize incorrect implementations by asking essential clarifying questions when project requirements are ambiguous.
Introduction
The ask-questions-if-underspecified skill is a critical tool for AI agents and developers aiming to improve efficiency and reduce rework. It provides a structured methodology for identifying and resolving ambiguity in project requests, ensuring that the agent does not proceed with assumptions that could lead to technical debt or incorrect outcomes. By forcing a pause before action, it encourages a deliberate approach to complex tasks where scope, constraints, or objectives are not clearly defined.
This skill is designed for software engineers, security auditors, and system architects who interact with AI agents to perform complex technical operations. It is particularly valuable when dealing with ambiguous feature requests, unclear architectural requirements, or vague performance constraints. Instead of guessing, the agent prompts the user for the minimum necessary information to proceed safely.
- Automatically assesses the clarity of a request against key criteria: objective, 'done' state, scope, constraints, environment, and safety (see the first sketch after this list).
- Implements a question-first workflow that asks 1-5 targeted, scannable, and actionable questions in the first pass.
- Supports multiple-choice formats to reduce friction for the human operator, including options for default behaviors or 'not sure' scenarios.
- Enforces a pause-before-action policy, preventing command execution or codebase modification until essential ambiguities are resolved.
- Facilitates explicit confirmation of interpretations, ensuring that the agent and the user are aligned on the intended outcome before any work begins.
- Use this skill when a request admits multiple plausible interpretations or omits critical details about system design or security requirements.
- Do not use this skill for trivial questions that can be answered via low-risk discovery reads of existing documentation or configuration files.
- Inputs typically include the user's initial request and the agent's internal analysis of the codebase context; outputs are formatted, easy-to-read clarifying questions.
- Follow the recommended template structure, numbered questions with lettered options and a clear reply path (e.g., '1a 2b'), to expedite the feedback loop (see the second sketch below).
- When forced to proceed without answers, document every assumption in a numbered list for user verification (see the final sketch below).
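To make the assessment concrete, here is a minimal Python sketch of the clarity check described in the first bullet. The criterion names mirror the list above, but `assess_clarity`, `ClarityReport`, and the input dictionary shape are hypothetical illustrations, not the skill's actual internals.

```python
from dataclasses import dataclass, field

# The six clarity criteria from the feature list; names are illustrative.
CRITERIA = ("objective", "done_state", "scope", "constraints", "environment", "safety")

@dataclass
class ClarityReport:
    """Records which criteria a request does and does not pin down."""
    resolved: set = field(default_factory=set)
    gaps: list = field(default_factory=list)

def assess_clarity(request_facts: dict) -> ClarityReport:
    """Check a parsed request against each criterion.

    `request_facts` maps a criterion name to whatever the agent
    extracted for it, or None when nothing was found.
    """
    report = ClarityReport()
    for criterion in CRITERIA:
        if request_facts.get(criterion):
            report.resolved.add(criterion)
        else:
            report.gaps.append(criterion)
    return report

# A request that names an objective but nothing else leaves five gaps,
# which would then be triaged down to the 1-5 most essential questions.
report = assess_clarity({"objective": "add rate limiting to the API"})
assert report.gaps == ["done_state", "scope", "constraints", "environment", "safety"]
```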
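The question-first workflow and the recommended template can be sketched the same way. This example renders numbered questions with lettered options and closes with the '1a 2b' reply path; `render_questions` and its input shape are invented for illustration.

```python
from string import ascii_lowercase

def render_questions(questions: list[tuple[str, list[str]]]) -> str:
    """Format clarifying questions as numbered items with lettered
    options, ending with the compact reply path the template recommends."""
    lines = []
    for number, (prompt, options) in enumerate(questions, start=1):
        lines.append(f"{number}. {prompt}")
        for letter, option in zip(ascii_lowercase, options):
            lines.append(f"   {letter}) {option}")
    lines.append("Reply with choices like '1a 2b', or answer in free text.")
    return "\n".join(lines)

print(render_questions([
    ("Should rate limiting apply per user or per IP?",
     ["Per user", "Per IP", "Both", "Not sure (use your default)"]),
    ("What should happen when the limit is hit?",
     ["Return HTTP 429", "Queue the request", "Not sure"]),
]))
```

Including a 'not sure' or default option in each question, as in the sketch, keeps the feedback loop cheap: the operator can answer every question with a few keystrokes rather than composing prose.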
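Finally, a sketch of the fallback policy: when proceeding without answers, every assumption goes into a numbered list the user can verify item by item. `document_assumptions` is again a hypothetical helper, not part of the skill itself.

```python
def document_assumptions(assumptions: list[str]) -> str:
    """Render the numbered assumption list required when proceeding
    without answers, so the user can confirm or correct each item."""
    header = "Proceeding under the following assumptions; please correct any that are wrong:"
    body = "\n".join(f"{i}. {a}" for i, a in enumerate(assumptions, start=1))
    return f"{header}\n{body}"

print(document_assumptions([
    "Rate limits apply per authenticated user, not per IP.",
    "Exceeding the limit returns HTTP 429 with a Retry-After header.",
]))
```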
Repository Stats
- Stars: 4,856
- Forks: 421
- Open Issues: 29
- Language: Python
- Default Branch: main
- Sync Status: Idle
- Last Synced: Apr 28, 2026, 12:39 PM