ai-collaboration-standards
Prevents AI hallucination and ensures evidence-based, verifiable outputs when analyzing code, reviewing technical documents, or providing recommendations.
Introduction
The AI Collaboration Standards skill is a rigorous framework designed to maintain high accuracy and reliability in AI-assisted development environments. It mandates that every assertion made by the AI—whether analyzing code, suggesting architectural changes, or interpreting logs—be explicitly traced to evidence. By implementing a standardized certainty tagging system, the AI clearly distinguishes between confirmed facts retrieved directly from the codebase or documentation, logical inferences derived from observed patterns, and assumptions that require further user verification.
This skill is intended for software engineers, code reviewers, and system architects who need to rely on AI-generated suggestions without the risk of fabricated API calls, non-existent configuration properties, or speculative library usage. It enforces strict citation requirements, including file paths and line numbers, ensuring that all technical claims are verifiable. By shifting the AI workflow toward an evidence-based model, this tool significantly reduces debugging time and increases confidence in automated code refactoring and troubleshooting sessions.
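To make the tagging and citation rules concrete, a tagged response might look like the following sketch. The file paths, line numbers, and claims here are hypothetical, used only to illustrate the format:

```
[Confirmed] The deploy script targets bash 4+. [Source: Code, scripts/deploy.sh:12]
[Inferred] Retries are capped at 3, based on the loop bound. [Source: Code, scripts/deploy.sh:40]
[Need Confirmation] Production runs behind a proxy; please verify before merging.
```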
- Implements a mandatory certainty tagging system ([Confirmed], [Inferred], [Assumption], [Unknown], [Need Confirmation]) for every technical claim.
- Enforces strict source attribution rules for project files, documentation, external resources, and AI-generated knowledge.
- Requires explicit citation of file paths and line numbers for all code-related statements.
- Promotes a 'recommendation-first' output style, ensuring options are presented with reasoning derived from project-specific evidence.
- Supports bilingual configurations (English and Traditional Chinese) via project-level documentation, allowing teams to standardize their communication language.
- Use this skill whenever you are performing code audits, debugging complex systems, or evaluating architectural options.
- Ensure the agent reads all required files and documentation so it can apply [Source: Code] or [Source: Docs] tags accurately.
- When presenting multiple technical solutions, always include a recommended choice backed by analysis of the current project state.
- If the AI lacks sufficient evidence for a statement, it is encouraged to use the [Unknown] or [Need Confirmation] tags rather than guess.
- The skill actively discourages hallucination by making the 'no fabrication' policy a checklist item in every interaction.
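The checklist items above can also be enforced mechanically. The sketch below is a hypothetical helper, not part of the skill itself, showing how a reviewer might flag response lines that carry none of the five certainty tags:

```python
# Certainty tags mandated by the standards.
CERTAINTY_TAGS = (
    "[Confirmed]",
    "[Inferred]",
    "[Assumption]",
    "[Unknown]",
    "[Need Confirmation]",
)

def untagged_claims(response: str) -> list[str]:
    """Return the non-empty lines of a response that carry no certainty tag.

    A production checker would parse structure (lists, code fences, citations);
    this sketch simply inspects each line in isolation.
    """
    missing = []
    for line in response.splitlines():
        line = line.strip()
        if line and not any(tag in line for tag in CERTAINTY_TAGS):
            missing.append(line)
    return missing

reply = (
    "[Confirmed] The build script targets bash. [Source: Code, build.sh:1]\n"
    "[Assumption] CI runs on Linux.\n"
    "The cache is probably stale."
)
print(untagged_claims(reply))  # -> ['The cache is probably stale.']
```

Running such a check before returning output turns the 'no fabrication' policy from a convention into a gate: any untagged claim is sent back for tagging or removal.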
Repository Stats
- Stars: 44
- Forks: 10
- Open Issues: 0
- Language: Shell
- Default Branch: main
- Sync Status: Idle
- Last Synced: May 3, 2026, 05:23 AM