# agentic-quality-engineering
Orchestrate advanced Quality Engineering workflows using a fleet of 19+ AI agents, PACT principles, and automated test-to-deployment quality gates.
## Introduction
The Agentic Quality Engineering (AQE) skill provides a comprehensive framework for orchestrating AI-driven testing across the software development lifecycle. Designed for QE engineers, developers, and DevOps teams, it leverages a sophisticated 19-agent fleet to automate test generation, coverage analysis, security scanning, and quality decision-making. By implementing PACT principles—Proactive analysis, Autonomous operation, Collaborative feedback, and Targeted risk focus—teams can scale their testing infrastructure while ensuring human oversight on critical deployment gates. The skill integrates seamlessly with 11 different coding agent platforms including Claude Code, Cursor, Windsurf, and GitHub Copilot, allowing for automated CI/CD pipeline enhancement and TDD workflow support.
## Key Features

- Automated test generation for diverse frameworks like Jest, Vitest, Playwright, and Cypress.
- Risk-weighted coverage analysis to identify and fill untested code paths.
- ML-powered flaky test detection and root-cause analysis with stabilization recommendations.
- Intelligent coordination patterns, including hierarchical, mesh, and sequential execution, for complex testing suites.
- Persistent memory management via the `aqe/learning/*` namespace to store and reuse patterns across projects.
- Built-in quality gates that act as go/no-go triggers based on configurable thresholds.
- Cross-platform MCP server integration for unified command over specialized agents such as `qe-security-scanner`, `qe-performance-tester`, and `qe-chaos-engineer`.
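The go/no-go quality gate described above can be sketched as a simple threshold check. The `GateThresholds` shape, the metric names, and the `evaluateGate` function below are illustrative assumptions, not the skill's actual API:

```typescript
// Hypothetical sketch of a go/no-go quality gate. All names here are
// illustrative assumptions, not the skill's real configuration schema.
interface GateThresholds {
  minCoverage: number;      // e.g. 0.80 = require at least 80% coverage
  maxFlakyRate: number;     // maximum tolerated fraction of flaky runs
  maxCriticalVulns: number; // critical findings allowed from security scans
}

interface QualityMetrics {
  coverage: number;
  flakyRate: number;
  criticalVulns: number;
}

// The gate passes only when every metric clears its threshold.
function evaluateGate(m: QualityMetrics, t: GateThresholds): "go" | "no-go" {
  const pass =
    m.coverage >= t.minCoverage &&
    m.flakyRate <= t.maxFlakyRate &&
    m.criticalVulns <= t.maxCriticalVulns;
  return pass ? "go" : "no-go";
}

const thresholds: GateThresholds = {
  minCoverage: 0.8,
  maxFlakyRate: 0.02,
  maxCriticalVulns: 0,
};

console.log(evaluateGate(
  { coverage: 0.91, flakyRate: 0.01, criticalVulns: 0 },
  thresholds,
)); // "go"
```

Keeping the decision a pure function of metrics and thresholds is what lets a gate like this run unattended in CI while humans retain control by editing the thresholds.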
## Best Practices

- Always initialize with `aqe init --auto` to configure local project environments and MCP tools.
- Use the `Task` tool with specified agent types to spawn the appropriate logic; avoid manual overhead for routine verification.
- Leverage the three-phase memory protocol (STATUS, PROGRESS, COMPLETE) to manage long-running coordination tasks.
- Prioritize human-in-the-loop oversight for production releases and architectural decisions; the agents are force multipliers, not replacements for expertise.
- Maintain persistent patterns in the memory database to reduce AI costs and improve test-generation accuracy over time.
- Respect the 19-agent fleet hierarchy: start by invoking the `qe-fleet-commander` for holistic pipeline assessments before delegating to granular sub-agents.
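The three-phase memory protocol above can be sketched as namespaced writes to a key-value store. The `CoordinationMemory` class, its method names, and the exact key layout under `aqe/learning/*` are assumptions for illustration, not the skill's documented interface:

```typescript
// Hypothetical sketch of the three-phase memory protocol (STATUS, PROGRESS,
// COMPLETE) over an in-memory key-value store. The class and key layout are
// illustrative assumptions.
type Phase = "STATUS" | "PROGRESS" | "COMPLETE";

class CoordinationMemory {
  private store = new Map<string, string>();

  // Keys are namespaced per task, e.g. "aqe/learning/<taskId>/PROGRESS".
  write(taskId: string, phase: Phase, payload: string): void {
    this.store.set(`aqe/learning/${taskId}/${phase}`, payload);
  }

  read(taskId: string, phase: Phase): string | undefined {
    return this.store.get(`aqe/learning/${taskId}/${phase}`);
  }

  // A long-running task counts as finished once its COMPLETE record exists,
  // so a coordinator can poll this instead of blocking on sub-agents.
  isComplete(taskId: string): boolean {
    return this.store.has(`aqe/learning/${taskId}/COMPLETE`);
  }
}

const mem = new CoordinationMemory();
mem.write("coverage-audit", "STATUS", "spawned qe-coverage-analyzer");
mem.write("coverage-audit", "PROGRESS", "42/120 files analyzed");
mem.write("coverage-audit", "COMPLETE", "report stored");
console.log(mem.isComplete("coverage-audit")); // true
```

Because each phase is a separate record rather than a mutation of one status field, a supervising agent can reconstruct how far a task got even if the worker that wrote PROGRESS never reached COMPLETE.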
## Repository Stats
- Stars: 331
- Forks: 65
- Open Issues: 4
- Language: TypeScript
- Default Branch: main