
hs

Pre-execution security guardrails for AI agents. Validates shell commands and file reads against 400+ security patterns to block destructive operations, credential theft, and unauthorized system access.

Introduction

Hardstop acts as a critical safety layer for AI coding agents, providing a mechanical brake on potentially harmful system commands. It is designed for developers who use AI assistants like Claude to automate terminal tasks, infrastructure management, or file system manipulations. By implementing a fail-closed architecture, it ensures that any command failing validation or identified as a potential risk is immediately blocked, protecting your local environment from accidental or malicious execution of dangerous operations.
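The fail-closed principle can be sketched in a few lines of Python. The names below (`validate_command`, `PATTERNS`) and the pattern subset are illustrative assumptions for this sketch, not Hardstop's actual API or ruleset:

```python
import re

# Illustrative subset of deny patterns; Hardstop ships a far larger database.
PATTERNS = [
    re.compile(r"rm\s+-[a-zA-Z]*[rf][a-zA-Z]*\s+/(\s|$)"),  # rm -rf / variants
    re.compile(r"\.(ssh|aws|env)\b"),                        # credential stores
]

def validate_command(command: str) -> bool:
    """Return True only if the command is affirmatively validated.

    Fail-closed: any matching pattern, or any error raised during
    validation, results in a block (False). Allowing is the exception,
    never the fallback.
    """
    try:
        return not any(p.search(command) for p in PATTERNS)
    except Exception:
        return False  # uncertainty means block, never allow
```

A hook would call `validate_command` before the shell ever sees the string, e.g. `validate_command("cat ~/.ssh/id_rsa")` is blocked while `validate_command("ls -la")` passes.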

  • Real-time command interception: Analyzes every shell command and pipeline (bash, sh, xargs, find -exec) before it reaches the system interpreter.

  • Threat pattern matching: Evaluates commands against a comprehensive database of 428 security patterns, covering MITRE ATT&CK techniques, reverse shells, fork bombs, and system-destroying operations (e.g., rm -rf /).

  • Sensitive credential protection: Explicitly prevents the reading of sensitive configuration files, including .ssh, .aws, .env, and other credential stores, mitigating the risk of inadvertent secret exposure.

  • Cloud infrastructure awareness: Includes guardrails for major CLI tools such as aws, gcloud, kubectl, and terraform to prevent accidental infrastructure teardown or destructive resource deletion.

  • LLM-level semantic analysis: Complements deterministic pattern matching with semantic understanding for edge cases, obfuscated commands, and complex shell wrappers.

  • Invocation and control: Users can manage the plugin via the /hs suite of commands (e.g., /hs status, /hs log, /hs skip) to check system health, audit security events, or authorize one-time bypasses for trusted but complex commands.

  • Safety protocol: The skill mandates a rigorous pre-execution checklist that requires the AI to assess risk levels (SAFE/RISKY/DANGEROUS) and request explicit user confirmation before proceeding with any action flagged as high-risk.

  • Installation and compatibility: Compatible with macOS, Linux, and Windows. It functions as a plugin for AI development agents and can be installed via npm or manual shell scripts.

  • Operational constraints: Designed with a fail-closed philosophy: if a command's risk is uncertain, the skill defaults to blocking execution until it is verified. It relies on both local hook-based monitoring and agent-level semantic analysis to bridge the gap in environments without native system hooks.
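The SAFE/RISKY/DANGEROUS triage described above might look roughly like the following. The tier names match the checklist, but the example patterns and the `classify` helper are illustrative assumptions, not Hardstop's shipped ruleset:

```python
import re
from enum import Enum

class Risk(Enum):
    SAFE = "SAFE"
    RISKY = "RISKY"
    DANGEROUS = "DANGEROUS"

# Illustrative tiers only; the real database covers 428 patterns.
DANGEROUS_PATTERNS = [
    r"rm\s+-[rf]+\s+/",        # system-destroying delete
    r":\(\)\s*\{.*\};\s*:",    # classic fork bomb
    r"terraform\s+destroy",    # infrastructure teardown
]
RISKY_PATTERNS = [
    r"\bchmod\s+777\b",        # world-writable permissions
    r"\bcurl\b.*\|\s*(ba)?sh", # pipe-to-shell install
]

def classify(command: str) -> Risk:
    """Assign a risk tier; DANGEROUS requires explicit user confirmation."""
    if any(re.search(p, command) for p in DANGEROUS_PATTERNS):
        return Risk.DANGEROUS
    if any(re.search(p, command) for p in RISKY_PATTERNS):
        return Risk.RISKY
    return Risk.SAFE
```

Under this sketch, `classify("terraform destroy")` yields DANGEROUS and triggers the confirmation step, while a pipe-to-shell install such as `curl https://example.com/install.sh | sh` lands in RISKY.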

Repository Stats

  • Stars: 29
  • Forks: 2
  • Open Issues: 1
  • Language: Python
  • Default Branch: main