guard
Epistemic safety analysis for JSON data in prompts to prevent LLM hallucinations and reasoning errors when handling incomplete or large-scale datasets.
Introduction
The Guard skill is a specialized epistemic safety tool designed to manage JSON data submission within LLM workflows. It acts as an intelligent middleware between raw input and the reasoning engine, specifically targeting the common failure mode where models attempt to perform logical operations on truncated or statistically insufficient data. By implementing a decision-based gate, the skill ensures that only contextually safe and representative data reaches the LLM, thereby reducing hallucinations and ensuring higher reliability in data-driven prompts.
- Performs lossless reduction of JSON inputs through minification, columnar transformation, and automatic removal of null or redundant values.
- Executes intelligent token counting using API-based verification or heuristic fallbacks to ensure compliance with strict token budgets.
- Features a robust decision engine that categorizes submission viability into ALLOW, SAMPLE, or BLOCK states based on safety and context requirements.
- Provides intelligent trimming capabilities, including first-and-last record preservation and evenly spaced sampling for large datasets.
- Implements forensic detection logic to identify and flag high-risk record queries that could lead to biased reasoning.
- Supports distinct semantic modes (analysis, summary, and forensics), each with tailored sampling and validation rules.
- Ideal for developers, data analysts, and LLM engineers working with complex, high-volume, or sensitive JSON datasets that require validation before ingestion.
- Used via CLI integration with standard workflows, supporting both file-path inputs and inline streaming of JSON fragments.
- Requires Python 3.8+ and an environment with ANTHROPIC_API_KEY set for precise model-assisted token analysis.
- Operates with configurable safety thresholds, allowing adjustments to token budgets, hard limits on character counts, and specific warning triggers.
- Enables forensic analysis workflows in which explicit blocking of sensitive or easily misinterpreted queries can be enforced to maintain systemic integrity.
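The decision gate at the heart of the skill can be sketched in a few lines. The thresholds, function names, and the 4-characters-per-token heuristic below are illustrative assumptions, not the skill's actual API or defaults:

```python
import json

# Illustrative thresholds -- the real values are configurable in the skill.
TOKEN_BUDGET = 8_000    # soft budget: above this, sample instead of allowing
HARD_LIMIT = 32_000     # hard limit: above this, block outright

def estimate_tokens(text: str) -> int:
    """Heuristic fallback when no API is available: ~4 characters per token."""
    return len(text) // 4

def gate(payload) -> str:
    """Categorize a JSON payload as ALLOW, SAMPLE, or BLOCK."""
    text = json.dumps(payload, separators=(",", ":"))  # minified form
    tokens = estimate_tokens(text)
    if tokens > HARD_LIMIT:
        return "BLOCK"
    if tokens > TOKEN_BUDGET:
        return "SAMPLE"
    return "ALLOW"
```

In the real skill, the heuristic would be replaced by an API-based count when ANTHROPIC_API_KEY is available, with this estimate serving only as the fallback path.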
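The SAMPLE path (first-and-last preservation plus evenly spaced interior picks) might look like the following sketch; the function name and record budget are assumptions for illustration:

```python
def trim_records(records, max_records=50):
    """Keep the first and last records, fill the middle with evenly spaced picks."""
    if len(records) <= max_records:
        return list(records)
    # Reserve two slots for the endpoints, sample the interior at a uniform stride.
    interior = max_records - 2
    step = (len(records) - 2) / interior
    middle = [records[1 + int(i * step)] for i in range(interior)]
    return [records[0]] + middle + [records[-1]]
```

Preserving the endpoints matters for epistemic safety: it lets the model see the dataset's boundaries (e.g. earliest and latest timestamps) even when the interior is heavily downsampled.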
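The lossless-reduction step (null removal, columnar transformation, minification) can be sketched as below. All names are hypothetical; "lossless" here assumes consumers treat a missing key and an explicit null the same way:

```python
import json

def strip_nulls(obj):
    """Recursively drop null-valued keys from objects."""
    if isinstance(obj, dict):
        return {k: strip_nulls(v) for k, v in obj.items() if v is not None}
    if isinstance(obj, list):
        return [strip_nulls(v) for v in obj]
    return obj

def to_columnar(records):
    """Rotate a list of similar objects into one object of column arrays,
    so each key name is emitted once instead of once per record."""
    keys = list(dict.fromkeys(k for r in records for k in r))  # preserve order
    return {k: [r.get(k) for r in records] for k in keys}

def reduce_json(records) -> str:
    """Strip nulls, go columnar, then minify by dropping all whitespace."""
    compact = to_columnar([strip_nulls(r) for r in records])
    return json.dumps(compact, separators=(",", ":"))
```

The columnar rotation is where most of the savings come from on record-shaped data: for N records with K keys, key names are serialized K times instead of N×K times.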
Repository Stats
- Stars: 13
- Forks: 1
- Open Issues: 1
- Language: JavaScript
- Default Branch: main
- Sync Status: Idle
- Last Synced: May 3, 2026, 05:20 PM