self-reviewer
Implements an autonomous, critical self-verification layer for AI agents to validate code quality, security, and requirement alignment before task completion.
Introduction
The Self-Reviewer is a specialized diagnostic skill designed to minimize human intervention and error in AI-driven software development workflows. It mandates that an agent scrutinize its own output against the original requirements, security protocols, and engineering best practices before declaring a task complete. Acting as an internal quality assurance gatekeeper, it is well suited to AI agents working in complex environments such as Salesforce development, where managing governor limits, Apex code, Lightning Web Components (LWC), and declarative flows is critical. It forces the agent to move beyond raw generation into a cycle of reflection, testing, and refinement, reducing the likelihood of hallucinations, security vulnerabilities, and logic errors. It is an essential component for teams that demand high-confidence autonomous software engineering.
- Performs comprehensive diff analysis as a senior engineer would, validating changes against task requirements and acceptance criteria.
- Automatically runs internal checklists covering security, code quality (no hardcoded secrets or PII), and style consistency.
- Enforces verification by running code, testing happy paths, checking error handling, and confirming regression avoidance.
- Mitigates common AI cognitive biases, including confirmation bias, the sunk cost fallacy, and optimism bias.
- Generates structured self-review reports, providing transparent documentation of the validation process, including pass/fail status and severity levels.
- Prevents common pitfalls like over-engineering, scope creep, under-testing, and fragility by forcing a systematic review of the implemented solution.
- Best applied when completing code changes, preparing Pull Requests, or finalizing complex task implementations.
- Inputs typically include the task requirements, the generated source code, the code diff, and the current session context.
- Expected output is a structured markdown report that clearly outlines requirement status, quality metrics, security assessments, and final readiness confidence.
- Designed for seamless integration with CI/CD hooks, including pre-commit hooks, verification scripts, and session logging systems.
- Prioritizes critical questions such as "Would I approve this PR?" and "Is this the minimal change necessary?" to ensure maintainable, high-quality codebases.
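The checklist-then-report flow described above can be sketched in a few lines of Python. This is a minimal illustration, not the skill's actual implementation: the `CheckResult` type, the secret-detection regex, and the 200-line "minimal change" threshold are all assumptions chosen for the example. It runs two checks over a unified diff (no hardcoded secrets in added lines; diff size as a scope-creep proxy) and renders a structured markdown report with pass/fail status and severity levels.

```python
import re
from dataclasses import dataclass


@dataclass
class CheckResult:
    name: str
    passed: bool
    severity: str  # "critical", "major", or "minor"
    detail: str = ""


# Naive credential pattern for illustration only; a real checklist
# would use a dedicated secret scanner.
SECRET_PATTERN = re.compile(
    r"(api[_-]?key|password|secret)\s*=\s*['\"][^'\"]+['\"]", re.IGNORECASE
)


def run_checklist(diff_text: str) -> list:
    """Run a minimal security/scope checklist over a unified diff."""
    added = [line[1:] for line in diff_text.splitlines() if line.startswith("+")]
    results = []

    # Security check: flag hardcoded credentials in added lines.
    leaks = [line.strip() for line in added if SECRET_PATTERN.search(line)]
    results.append(CheckResult(
        name="No hardcoded secrets",
        passed=not leaks,
        severity="critical",
        detail="; ".join(leaks),
    ))

    # Scope check: treat unusually large diffs as possible scope creep
    # (the 200-line threshold is an arbitrary example value).
    results.append(CheckResult(
        name="Minimal change necessary",
        passed=len(added) <= 200,
        severity="minor",
        detail=f"{len(added)} added line(s)",
    ))
    return results


def render_report(results: list) -> str:
    """Render checklist results as a structured markdown self-review report."""
    lines = ["# Self-Review Report", ""]
    for r in results:
        status = "PASS" if r.passed else f"FAIL ({r.severity})"
        suffix = f" ({r.detail})" if r.detail and not r.passed else ""
        lines.append(f"- **{r.name}**: {status}{suffix}")
    # Ready only if no critical or major check failed.
    ready = all(r.passed or r.severity == "minor" for r in results)
    lines += ["", f"Ready to merge: {'yes' if ready else 'no'}"]
    return "\n".join(lines)
```

A pre-commit hook could call `run_checklist` on `git diff --cached` output and block the commit whenever the rendered report contains a critical failure.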
Repository Stats
- Stars: 5
- Forks: 1
- Open Issues: 0
- Language: Python
- Default Branch: main
- Sync Status: Idle
- Last Synced: May 3, 2026, 08:28 PM