Engineering

multi-llm-advisor

Fetches expert perspectives from OpenAI Codex and Google Gemini for architecture, code reviews, and debugging, with transparent LLM synthesis.

Introduction

The Multi-LLM Advisor is a skill for software agents, tailored to B2B SaaS development and complex architectural decision-making. It orchestrates a multi-model workflow, querying Codex 5.1 Pro and Gemini 3 Pro to gather diverse, high-quality technical feedback so that developers are not reliant on a single LLM's perspective. This approach mitigates individual model biases and hallucinations, particularly when navigating critical refactoring, security audits, or performance bottlenecks.

  • Orchestrates concurrent calls to specialized models for comprehensive coverage of architecture, code quality, and runtime error analysis.

  • Provides a fully transparent visualization format that displays exact model inputs, token usage, individual responses, and a final synthesized recommendation from Claude.

  • Includes pre-configured templates for specific expert roles, including Senior Software Architect, Code Reviewer, and Debugging Expert.

  • Integrates seamlessly with Claude Code hooks and slash commands, allowing for on-demand second opinions or automated triggers based on architectural keyword detection.

  • Facilitates GDPR-compliant decision-making by comparing enterprise-ready AI services, helping teams choose the right infrastructure for specific regulatory environments.

  • Requires active API keys for OpenAI (Codex 5.1 Pro) and Google (Gemini 3 Pro) stored in your system environment variables.

  • Best utilized for high-stakes scenarios such as OWASP security vulnerability assessments, TypeScript type-safety enforcement, and large-scale architectural refactors.

  • Triggered manually via direct user requests for second opinions or automatically via keyword hooks that detect intent related to refactoring, migration, or complex debugging.

  • Designed for TypeScript-heavy projects, with instructions provided to integrate the tool into existing ~/.claude/skills environments for rapid, professional-grade code assistance.

  • Operates with a concise output constraint, forcing models to provide actionable, summarized feedback (typically within 200-300 words) to maintain agent flow efficiency.
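The concurrent multi-model orchestration described above could be sketched roughly as follows. The function names, stubbed clients, and return shape are hypothetical stand-ins (real OpenAI and Google SDK calls would replace the stubs); the skill's actual implementation is not shown on this page.

```python
import asyncio

# Hypothetical stand-ins for the real API clients; each returns a
# model's answer to the prompt. In practice these would make network
# calls using the OpenAI and Google SDKs.
async def query_codex(prompt: str) -> str:
    await asyncio.sleep(0)  # placeholder for a network round-trip
    return f"[Codex 5.1 Pro] perspective on: {prompt}"

async def query_gemini(prompt: str) -> str:
    await asyncio.sleep(0)  # placeholder for a network round-trip
    return f"[Gemini 3 Pro] perspective on: {prompt}"

async def gather_opinions(prompt: str) -> dict[str, str]:
    """Query both models concurrently and return responses keyed by model."""
    codex, gemini = await asyncio.gather(
        query_codex(prompt),
        query_gemini(prompt),
    )
    return {"codex": codex, "gemini": gemini}

opinions = asyncio.run(gather_opinions("Should we split this service?"))
```

Running the queries with `asyncio.gather` keeps total latency close to the slower of the two calls rather than their sum, which matters when the skill is invoked mid-conversation.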
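The keyword-based automatic trigger mentioned above might look something like this minimal sketch; the keyword list, pattern, and function name are illustrative assumptions, not the skill's actual hook code.

```python
import re

# Hypothetical keyword set covering refactoring, migration, debugging,
# and architecture intent; the skill's real hook configuration may differ.
TRIGGER_PATTERN = re.compile(
    r"\b(refactor\w*|migrat\w*|debug\w*|architect\w*)\b", re.IGNORECASE
)

def should_request_second_opinion(user_message: str) -> bool:
    """Return True when the message suggests refactoring, migration,
    architecture, or complex debugging intent."""
    return bool(TRIGGER_PATTERN.search(user_message))
```

A hook like this would gate the (comparatively expensive) multi-model call so it only fires on messages where a second opinion is likely to pay off.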
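The concise output constraint could also be enforced on the agent side with a simple post-check like the following; the 300-word budget and the truncation strategy here are illustrative assumptions, since the source only states that responses are typically kept within 200-300 words.

```python
MAX_WORDS = 300  # upper bound of the skill's concise-output constraint

def enforce_word_limit(response: str, max_words: int = MAX_WORDS) -> str:
    """Truncate a model response to the word budget, marking the cut."""
    words = response.split()
    if len(words) <= max_words:
        return response
    return " ".join(words[:max_words]) + " [truncated]"
```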

Repository Stats

  • Stars: 35

  • Forks: 4

  • Open Issues: 1

  • Language: Python

  • Default Branch: main

  • Last Synced: May 3, 2026, 06:06 AM