
audit-context-building

Builds deep architectural context for codebases using ultra-granular, line-by-line analysis for advanced security auditing.

Introduction

The audit-context-building skill is a specialized framework designed to govern how an AI agent processes code during the preliminary stages of a security audit. By enforcing a disciplined, bottom-up approach, it enables developers and security researchers to transform raw source code into a comprehensive, accurate mental model of a system's architecture, state transitions, and logic invariants before beginning active vulnerability discovery.

This skill is intended for security professionals, auditors, and senior engineers performing threat modeling, architectural reviews, or manual vulnerability research. It is particularly effective for complex, high-stakes codebases where 'gist-level' comprehension leads to missed edge cases, hallucinated vulnerabilities, or context loss over long analysis sessions. By mandating a rigorous, structured inspection process, it ensures that every function, external call, and state variable is analyzed with clinical precision.

  • Performs line-by-line and block-by-block semantic analysis to capture micro-level logic.

  • Applies First Principles, 5 Whys, and 5 Hows to deconstruct assumptions and identify underlying reasoning hazards.

  • Implements full-stack context propagation, treating entire call chains (including external dependencies and library calls) as a single, continuous execution flow (see the call-graph sketch after this list).

  • Automatically builds a persistent global mental model that integrates internal function logic, storage patterns, and trust boundaries.

  • Enforces strict documentation of preconditions, inputs, side effects, and state-altering operations for every analyzed module (see the record schema after this list).

  • This skill is designed exclusively for context acquisition; it should not be used for final exploit generation, bug reporting, or severity impact scoring.

  • Users should expect the agent to be highly methodical, often pausing to verify assumptions against the actual codebase rather than relying on common patterns or heuristics.

  • When analyzing external calls or black-box libraries, the agent will default to an adversarial posture, treating external input as untrusted and modeling all possible return/revert paths (see the outcome-enumeration sketch after this list).

  • The output is optimized for high-fidelity technical recall, prioritizing accuracy and completeness over speed.
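To make "treating a call chain as a single, continuous execution flow" concrete, here is a minimal Python sketch of one way that traversal could work: build an intra-module call graph with the standard ast module, then walk it depth-first from an entry point. The function names and the restriction to direct-name calls are simplifying assumptions for illustration, not the skill's actual implementation.

import ast
from collections import defaultdict

def build_call_graph(source: str) -> dict[str, set[str]]:
    """Map each function in a module to the names it calls directly.
    Simplification: only direct-name calls like f(x), not method calls."""
    tree = ast.parse(source)
    graph: dict[str, set[str]] = defaultdict(set)
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef):
            for child in ast.walk(node):
                if isinstance(child, ast.Call) and isinstance(child.func, ast.Name):
                    graph[node.name].add(child.func.id)
    return graph

def propagate_context(graph: dict[str, set[str]], entry: str) -> list[str]:
    """Walk the call chain from an entry point so the whole chain can be
    analyzed as one continuous execution flow (depth-first, cycle-safe)."""
    seen: set[str] = set()
    order: list[str] = []
    stack = [entry]
    while stack:
        fn = stack.pop()
        if fn in seen:
            continue
        seen.add(fn)
        order.append(fn)
        stack.extend(sorted(graph.get(fn, set())))
    return order

src = """
def outer(x):
    return inner(x) + 1

def inner(y):
    return validate(y)
"""
print(propagate_context(build_call_graph(src), "outer"))
# ['outer', 'inner', 'validate']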
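The per-module documentation the skill enforces can be pictured as a structured record indexed into the persistent mental model. The following dataclass is a hypothetical schema (field names such as preconditions and state_writes are assumptions, not the skill's actual output format) showing how preconditions, inputs, side effects, and state-altering operations might be captured.

from dataclasses import dataclass, field

@dataclass
class FunctionRecord:
    """One entry in the global mental model (hypothetical schema)."""
    name: str
    preconditions: list[str] = field(default_factory=list)  # must hold on entry
    inputs: list[str] = field(default_factory=list)         # parameters + trust level
    side_effects: list[str] = field(default_factory=list)   # I/O, events, external calls
    state_writes: list[str] = field(default_factory=list)   # state variables mutated

# The persistent mental model is then just an index of these records,
# built up module by module and consulted throughout the session.
mental_model: dict[str, FunctionRecord] = {}

mental_model["transfer"] = FunctionRecord(
    name="transfer",
    preconditions=["sender balance >= amount"],
    inputs=["recipient (untrusted)", "amount (untrusted)"],
    side_effects=["emits Transfer event"],
    state_writes=["balances[sender]", "balances[recipient]"],
)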
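For the adversarial treatment of external calls, one way to force consideration of every return/revert path is to enumerate the outcomes explicitly as analysis obligations. This sketch is illustrative only: the CallOutcome categories and the example call site are assumptions, not part of the skill.

from enum import Enum, auto

class CallOutcome(Enum):
    SUCCESS = auto()             # call returns as documented
    REVERT = auto()              # call fails or raises; does the caller recover safely?
    ADVERSARIAL_RETURN = auto()  # call returns, but the value is attacker-controlled

def model_external_call(call_site: str) -> list[tuple[CallOutcome, str]]:
    """Turn one external call site into explicit analysis obligations,
    one per possible outcome, instead of assuming the happy path."""
    return [
        (CallOutcome.SUCCESS, f"{call_site}: confirm post-conditions still hold"),
        (CallOutcome.REVERT, f"{call_site}: trace the failure path for stuck state"),
        (CallOutcome.ADVERSARIAL_RETURN, f"{call_site}: treat the return value as untrusted input"),
    ]

for outcome, obligation in model_external_call("token.balanceOf(user)"):
    print(outcome.name, "->", obligation)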

Repository Stats

Stars: 4,905
Forks: 428
Open Issues: 21
Language: Python
Default Branch: main
Sync Status: Idle
Last Synced: Apr 30, 2026, 09:35 AM