Query 15 AI models simultaneously. Get diverse perspectives, weighted synthesis, and multi-agent debate for your coding decisions.
AI Consultants is a multi-model deliberation system. Instead of relying on a single AI for coding advice, it queries up to 15 different AI models in parallel — each with a unique persona and focus area.
Every consultant provides a confidence-scored response. The system then synthesizes all perspectives into a weighted recommendation, highlights points of agreement and disagreement, and optionally runs multi-round debates where consultants critique each other's positions.
The result: better decisions through cognitive diversity. When 12 out of 15 models agree, you can move fast. When they disagree, you know exactly where the risks are.
Built for developers who want more than a single opinion.
Gemini, Codex, Mistral, Kilo, Cursor, Aider, Amp, Kimi, Claude, Qwen3, GLM, Grok, DeepSeek, MiniMax, and Ollama. Each with a unique persona.
All responses are combined into a single weighted recommendation with confidence scoring and consensus analysis.
Consultants critique each other across multiple rounds. Positions evolve. Anonymous peer review identifies the strongest arguments.
Questions are classified by category and routed to the most relevant consultants. Security questions go to security experts first.
Premium, standard, and economy model tiers. Choose max_quality for critical decisions or fast for quick checks.
Budget enforcement, semantic caching, cost-aware routing, and response limits. Stay within budget without sacrificing quality.
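As a sketch of the budget-enforcement idea, a pre-call check might look like this (the function name and thresholds are illustrative, not the project's actual API; they mirror the `MAX_SESSION_COST` and `WARN_AT_COST` settings shown later):

```python
def enforce_budget(session_cost: float, next_call_cost: float,
                   max_cost: float = 1.00, warn_at: float = 0.50) -> bool:
    """Return True if the next consultant call still fits within budget."""
    projected = session_cost + next_call_cost
    if projected > max_cost:
        return False  # skip the call: budget exhausted
    if projected > warn_at:
        print(f"warning: session cost ${projected:.2f} approaching ${max_cost:.2f} limit")
    return True

print(enforce_budget(0.30, 0.10))  # True  (0.40 is under both thresholds)
print(enforce_budget(0.95, 0.10))  # False (1.05 exceeds the $1.00 cap)
```

A check like this runs before each consultant is queried, so a session degrades gracefully (fewer consultants) rather than failing mid-synthesis.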
Your question is analyzed and categorized — architecture, security, performance, code review, quick syntax — to determine which consultants are most relevant and what depth of analysis is needed.
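The classification step above could look something like this keyword-based sketch; the real categories and matching logic are internal to the tool and may differ:

```python
# Hypothetical category keywords -- illustrative only.
CATEGORIES = {
    "security": ["auth", "token", "encrypt", "vulnerability", "xss"],
    "performance": ["latency", "cache", "throughput", "slow"],
    "architecture": ["structure", "design", "microservice", "layer"],
}

def classify(question: str) -> str:
    """Return the first category whose keywords appear in the question."""
    q = question.lower()
    for category, keywords in CATEGORIES.items():
        if any(k in q for k in keywords):
            return category
    return "general"

print(classify("How should I structure my auth system?"))  # security
```

Note that security is checked first, matching the routing priority described earlier: security questions go to security experts first.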
Selected consultants are queried in parallel. Each responds with structured JSON containing a summary, a detailed analysis, pros and cons, and a confidence score from 1 to 10.
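A consultant response of that shape might look like the following (field names are illustrative; the project's actual JSON schema is not reproduced here):

```python
import json

# Hypothetical response shape for one consultant.
response = {
    "consultant": "gemini",
    "summary": "Use short-lived JWTs with refresh-token rotation.",
    "analysis": "Stateless access tokens keep the API horizontally scalable...",
    "pros": ["stateless verification", "easy horizontal scaling"],
    "cons": ["revocation requires extra infrastructure"],
    "confidence": 8,  # 1-10 scale, as described above
}

print(json.dumps(response, indent=2))
```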
All responses are aggregated. Consensus is calculated, confidence intervals are computed, and a weighted recommendation is generated using your chosen synthesis strategy.
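A minimal sketch of confidence-weighted synthesis, assuming each response carries an answer and a 1-10 confidence (the tool's real aggregation is more involved):

```python
from collections import defaultdict

def weighted_recommendation(responses):
    """Confidence-weighted vote: each answer's score is the sum of the
    confidences of the consultants backing it."""
    scores = defaultdict(float)
    for r in responses:
        scores[r["answer"]] += r["confidence"]
    best = max(scores, key=scores.get)
    total = sum(scores.values())
    return best, scores[best] / total  # answer plus its share of total weight

responses = [
    {"answer": "Redis", "confidence": 9},
    {"answer": "Redis", "confidence": 7},
    {"answer": "Memcached", "confidence": 4},
]
answer, share = weighted_recommendation(responses)
print(answer, round(share, 2))  # Redis 0.8
```

A high share signals strong consensus (move fast); a share near 0.5 signals disagreement worth investigating.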
Optionally, consultants enter a multi-round debate. They critique each other's positions, update their confidence scores, and converge (or diverge) on a recommendation. Panic mode triggers extra rigor when uncertainty is high.
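The convergence idea behind the debate rounds can be sketched with a toy update rule, where each consultant nudges its confidence toward the panel mean after seeing the critiques; this is purely illustrative, not the actual debate protocol:

```python
def debate_round(confidences, weight=0.5):
    """Move each confidence halfway toward the panel mean (toy model)."""
    mean = sum(confidences) / len(confidences)
    return [c + weight * (mean - c) for c in confidences]

confs = [9.0, 7.0, 2.0]
for _ in range(3):
    confs = debate_round(confs)
print(confs)  # spread shrinks each round as positions converge
```

Divergence (positions hardening instead of converging) is equally informative: it flags a genuinely contested question, which is when panic mode's extra rigor matters most.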
Each consultant has a distinct persona that shapes their analysis. The invoking agent is automatically excluded to prevent self-consultation.
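The self-consultation guard can be sketched as a simple filter (consultant list abbreviated; the real exclusion logic lives inside the tool):

```python
CONSULTANTS = ["gemini", "codex", "mistral", "kilo", "claude"]

def panel_for(invoking_agent: str):
    # Drop the invoker so a model never reviews its own reasoning.
    return [c for c in CONSULTANTS if c != invoking_agent]

print(panel_for("claude"))  # ['gemini', 'codex', 'mistral', 'kilo']
```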
```shell
# Run directly — no install needed
npx ai-consultants "How should I structure my auth system?"

# With a preset
npx ai-consultants --preset balanced "Redis or Memcached?"

# Run diagnostics
npx ai-consultants doctor --fix

# Install slash commands for Claude Code
npx ai-consultants install
```
```shell
# Install the skill
curl -fsSL https://raw.githubusercontent.com/matteoscurati/ai-consultants/main/scripts/install.sh | bash

# Ask your first question
/ai-consultants:consult "How should I structure my auth system?"
```
```shell
# Clone and set up
git clone https://github.com/matteoscurati/ai-consultants.git
cd ai-consultants
./scripts/doctor.sh --fix

# Run a consultation
./scripts/consult_all.sh "How should I structure my auth system?"
```
| Preset | Consultants | Use Case |
|---|---|---|
| max_quality | All + debate + reflection | Critical decisions |
| medium | 4 + light debate | General questions |
| fast | 2 | Quick checks |
| balanced | 4 (Gemini, Codex, Mistral, Kilo) | Standard consultations |
| high-stakes | All + debate | Critical decisions |
| local | Ollama only | Full privacy |
| security | Security-focused + debate | Security reviews |
| Strategy | Description |
|---|---|
| majority | Most common answer wins (default) |
| risk_averse | Weight conservative responses higher |
| security_first | Prioritize security considerations |
| cost_capped | Prefer simpler, cheaper solutions |
| compare_only | No recommendation, just comparison |
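As an illustration of how a strategy might bias the scoring, a weighting function could boost certain responses before the vote (multipliers and field names here are hypothetical):

```python
def weight(response, strategy="majority"):
    """Return the vote weight for one response under a given strategy."""
    base = response["confidence"]
    if strategy == "risk_averse" and response.get("conservative"):
        return base * 1.5  # favor cautious answers
    if strategy == "security_first" and response.get("category") == "security":
        return base * 2.0  # security findings dominate
    return base  # majority: confidence alone decides

r = {"confidence": 6, "conservative": True}
print(weight(r, "majority"), weight(r, "risk_averse"))  # 6 9.0
```

Under `compare_only`, a step like this would simply be skipped: responses are presented side by side with no winner chosen.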
```shell
# Core features
ENABLE_DEBATE=true          # Multi-agent debate
ENABLE_SYNTHESIS=true       # Automatic synthesis
ENABLE_SMART_ROUTING=true   # Intelligent consultant selection
ENABLE_PANIC_MODE=auto      # Automatic rigor for uncertainty

# Defaults
DEFAULT_PRESET=balanced
DEFAULT_STRATEGY=majority

# Ollama (local models)
ENABLE_OLLAMA=true
OLLAMA_MODEL=qwen2.5-coder:32b

# Cost management
MAX_SESSION_COST=1.00       # Budget limit in USD
WARN_AT_COST=0.50
```