
scholar-evaluation

Systematically evaluate scholarly work using the ScholarEval framework, producing structured quantitative and qualitative assessments across research-quality dimensions, along with actionable feedback.

Introduction

The scholar-evaluation skill provides a rigorous, standardized methodology for assessing academic papers, research proposals, literature reviews, and scholarly writing. Designed for researchers, reviewers, and academics, it applies the ScholarEval framework to assess quality, methodological rigor, and academic integrity across research domains. By evaluating work along defined dimensions, including problem formulation, methodology, data analysis, and citation accuracy, the skill supports both objective quantitative scoring and qualitative critique. It helps users benchmark their work against established peer-review criteria, identify critical research gaps, and refine their manuscripts for target publication venues.

Key Features

  • Performs multidimensional assessments covering research questions, theoretical significance, methodology, and scientific reproducibility.

  • Generates structured quantitative scores using a 5-point rubric to provide clear performance benchmarking (see the scoring sketch after this list).

  • Offers detailed qualitative feedback on specific strengths, weaknesses, and actionable improvement steps for academic growth.

  • Integrates with the scientific-schematics skill to automatically generate publication-quality diagrams, flowcharts, and decision trees for complex scholarly concepts.

  • Supports various scholarly formats including empirical studies, theoretical frameworks, thesis chapters, and conference abstracts.
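
The rubric itself is defined in the skill's evaluation_framework.md; purely as an illustration, the sketch below shows one way per-dimension 5-point scores could be combined into a weighted overall score. The dimension names and weights here are hypothetical examples, not the framework's actual values.

    # Hypothetical aggregation of 5-point rubric scores. The real dimensions
    # and weights are defined in evaluation_framework.md, not here.
    DIMENSION_WEIGHTS = {  # assumed example weights; they must sum to 1.0
        "problem_formulation": 0.25,
        "methodology": 0.30,
        "data_analysis": 0.25,
        "citation_accuracy": 0.20,
    }

    def aggregate_score(scores: dict[str, int]) -> float:
        """Combine per-dimension rubric scores (1-5) into a weighted overall."""
        for dim, value in scores.items():
            if dim not in DIMENSION_WEIGHTS:
                raise KeyError(f"unknown dimension: {dim}")
            if not 1 <= value <= 5:
                raise ValueError(f"{dim} score {value} is outside the 1-5 rubric")
        return sum(DIMENSION_WEIGHTS[d] * s for d, s in scores.items())

    overall = aggregate_score({
        "problem_formulation": 4,
        "methodology": 3,
        "data_analysis": 4,
        "citation_accuracy": 5,
    })
    print(f"overall: {overall:.2f} / 5")  # -> overall: 3.90 / 5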

Usage Notes

  • Always identify the specific type of research work and the desired evaluation scope (comprehensive vs. targeted) before starting the assessment process.

  • Use the provided programmatic tools like calculate_scores.py for consistent aggregate scoring results (see the usage sketch after this list).

  • For optimal results, ensure your inputs include clear research questions and detailed methodological descriptions for the agent to parse.

  • The skill is intended to complement, not replace, human peer review; use it as a pre-submission auditing tool or for structured self-reflection.

  • Remember to reference the internal evaluation_framework.md for specific rubrics and criteria when performing a deep-dive analysis of complex academic writing.

  • Always leverage the visualization capabilities to illustrate workflows or conceptual architectures within the final evaluation report.
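
The interface of calculate_scores.py is defined by the skill bundle itself and is not reproduced here; the snippet below is a hypothetical driver that assumes the script accepts a single JSON file of per-dimension scores as its only argument. Check the actual script before relying on this shape.

    # Hypothetical invocation of calculate_scores.py. The real flags and input
    # format are documented in the skill bundle; this assumes one positional
    # argument naming a JSON file of per-dimension scores.
    import json
    import subprocess

    scores = {
        "problem_formulation": 4,
        "methodology": 3,
        "data_analysis": 4,
        "citation_accuracy": 5,
    }
    with open("scores.json", "w") as fh:
        json.dump(scores, fh)

    result = subprocess.run(
        ["python", "calculate_scores.py", "scores.json"],
        capture_output=True, text=True, check=True,
    )
    print(result.stdout)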

Repository Stats

Stars: 19,706
Forks: 2,198
Open Issues: 42
Language: Python
Default Branch: main
Sync Status: Idle
Last Synced: Apr 29, 2026, 09:14 AM