
novelty-check

Verify the novelty of a research idea against recent literature. Use when the user says '查新' (Chinese for "novelty check"), says 'novelty check', or needs to confirm whether a method is original.

Introduction

The novelty-check skill is a rigorous research assistant designed to prevent the common pitfall of 'reinventing the wheel' in academic and engineering research. By leveraging multi-source literature search and cross-model adversarial verification, it provides an objective assessment of whether a proposed technical method, mechanism, or research idea has already been explored in recent publications. It is specifically built for researchers, scientists, and engineers who need to validate their hypotheses before investing significant time into implementation or experimentation.

  • Performs automated multi-source literature searches across arXiv, Google Scholar, and Semantic Scholar using specific technical terminology and timeframe filters (2024-2026).

  • Conducts deep analysis of core technical claims, including method, problem space, mechanism, and comparative baselines to detect potential overlap.

  • Employs an external REVIEWER_MODEL (such as gpt-5.4) via Codex MCP to provide a 'brutally honest' critique, preventing confirmation bias and ensuring high-quality reasoning.

  • Generates a structured markdown report including novelty scoring (X/10), a 'Proceed/Abandon' recommendation, identification of closest prior work, and strategic advice on how to position the research contribution.

  • Maintains persistent review tracing to document the search process and verification history, ensuring transparency in how novelty conclusions were reached.

  • Usage: Trigger this skill when a user asks 'Is this novel?', '查新', or 'Check if this method has been done'.

  • Input: A clear, detailed description of the proposed method, the problem it addresses, and the core mechanism.

  • Output: A comprehensive Novelty Check Report covering claims, prior work comparison, assessment of differentiation, and risk analysis.

  • Constraint: Requires an OpenAI-compatible reviewer model (e.g., o3, gpt-4o, gpt-5.4) accessible via Codex MCP to function effectively. It specifically warns against 'Applying X to Y' as a novelty strategy unless the application yields surprising insights.
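The multi-source search step above can be sketched in code. The example below is an illustrative sketch, not the skill's published implementation: it builds an arXiv API query URL that combines technical terms with the 2024-2026 submission-date window the skill uses. The function name and term list are hypothetical; the `search_query` syntax (`all:` fields, `submittedDate` ranges) is part of the real arXiv API.

```python
from urllib.parse import urlencode

def build_arxiv_query(terms, start_year=2024, end_year=2026, max_results=20):
    """Sketch of the literature-search step: build an arXiv API URL
    restricted to a submission-date window (hypothetical helper)."""
    # arXiv's search_query supports submittedDate ranges in YYYYMMDDHHMM form.
    date_range = f"submittedDate:[{start_year}01010000 TO {end_year}12312359]"
    # Require every technical term to appear somewhere in the record.
    term_clause = " AND ".join(f'all:"{t}"' for t in terms)
    params = {
        "search_query": f"({term_clause}) AND {date_range}",
        "sortBy": "submittedDate",
        "sortOrder": "descending",
        "max_results": max_results,
    }
    return "http://export.arxiv.org/api/query?" + urlencode(params)

# Example: check a proposed mechanism against recent submissions.
url = build_arxiv_query(["speculative decoding", "draft model cascades"])
print(url)
```

A real pipeline would fetch this URL, parse the Atom feed it returns, and repeat the query against Google Scholar and Semantic Scholar before handing candidates to the reviewer model.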

Repository Stats

Stars: 7,817
Forks: 729
Open Issues: 53
Language: Python
Default Branch: main