Model Configuration

Configure the two-tier model system for code reviews.

Two-Tier Model System

Reviewate uses two tiers of models to balance quality and cost:

Tier    | Agents                                                                                            | Default | Purpose
Review  | AnalyzeAgent (x2), FactCheckAgent                                                                 | sonnet  | Deep analysis, code exploration, bug detection
Utility | SynthesizerAgent, DedupAgent, StyleAgent, IssueExplorerAgent, SummarizerAgent, SummaryParserAgent | haiku   | Formatting, parsing, lightweight tasks

The review tier needs a capable reasoning model. The utility tier can use a fast, cheap model.

Configuration Methods

Environment Variables

export REVIEWATE_REVIEW_MODEL=sonnet
export REVIEWATE_UTILITY_MODEL=haiku

Config File

The simplest way to configure models is via ~/.reviewate/config.toml (created automatically on first run, or when you run reviewate config):

[models]
review = "sonnet"
utility = "haiku"

The config file stores model choices only. API keys must be set via environment variables.

Override Priority

Environment variables override the config file, which overrides the built-in defaults:

env vars > config file > defaults
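The resolution order above can be sketched as a small shell function. This is an illustration only, not Reviewate's actual implementation, and the sed-based TOML read is deliberately naive:

```shell
# Sketch of model resolution: env var > config file > default.
resolve_review_model() {
  # 1. Environment variable wins if set
  if [ -n "${REVIEWATE_REVIEW_MODEL:-}" ]; then
    echo "$REVIEWATE_REVIEW_MODEL"
    return
  fi
  # 2. Otherwise fall back to the config file (naive TOML read)
  value=$(sed -n 's/^review *= *"\(.*\)"/\1/p' "$HOME/.reviewate/config.toml" 2>/dev/null)
  if [ -n "$value" ]; then
    echo "$value"
    return
  fi
  # 3. Built-in default
  echo "sonnet"
}
```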

Using Other Models (OpenAI, Gemini, etc.)

Reviewate is built on the Claude Agent SDK, which speaks the Anthropic API. To use models from other providers (OpenAI, Gemini, Mistral, Llama, etc.), you need an API proxy that translates between the Anthropic API format and your provider's API.

LiteLLM is the recommended proxy — it supports 100+ models behind a unified API.

Example: Gemini via LiteLLM

  1. Install LiteLLM and create a config:
    pip install 'litellm[proxy]'
    

    Create litellm_config.yaml:
    model_list:
      # Review tier: the name Reviewate requests, routed to Gemini
      - model_name: gemini-3-flash-preview
        litellm_params:
          model: gemini/gemini-3-flash-preview
          api_key: os.environ/GEMINI_API_KEY
    
      # Utility tier
      - model_name: gemini-3.1-flash-lite-preview
        litellm_params:
          model: gemini/gemini-3.1-flash-lite-preview
          api_key: os.environ/GEMINI_API_KEY
    

    The model_name fields must match the model names Reviewate sends, i.e. the values you set under [models] in step 3. LiteLLM routes each request to the Gemini model you specify.
  2. Start the proxy:
    export GEMINI_API_KEY=your-key
    litellm --config litellm_config.yaml
    
  3. Configure Reviewate to use the proxy:
    Run reviewate config and pick Custom endpoint (option 3), which writes:
    [auth]
    mode = "custom"
    
    [models]
    review = "gemini-3-flash-preview"
    utility = "gemini-3.1-flash-lite-preview"
    
    [urls]
    custom = "http://localhost:4000"
    

    Then set an API key — by default LiteLLM doesn't require auth, so any value works:
    export ANTHROPIC_API_KEY=sk-1234
    reviewate https://github.com/org/repo/pull/123
    

    If you set a master_key in your LiteLLM config, use that key instead.
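For reference, requiring a key looks like this in the LiteLLM config (the key value here is an example):

```yaml
# Require authentication on the proxy; clients must send this key
general_settings:
  master_key: sk-my-master-key   # then: export ANTHROPIC_API_KEY=sk-my-master-key
```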

This same pattern works for any provider LiteLLM supports (OpenAI, Mistral, Llama, etc.) — just swap the litellm_params.model values. See LiteLLM supported models for the full list.
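For example, pointing the same two tiers at OpenAI instead would look like this (the model names are illustrative; check LiteLLM's model list for current IDs):

```yaml
model_list:
  - model_name: gpt-4o            # review tier
    litellm_params:
      model: openai/gpt-4o
      api_key: os.environ/OPENAI_API_KEY
  - model_name: gpt-4o-mini       # utility tier
    litellm_params:
      model: openai/gpt-4o-mini
      api_key: os.environ/OPENAI_API_KEY
```

Then set the [models] review and utility values in config.toml to the matching model_name entries.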

Non-Anthropic models may produce lower-quality reviews. The review pipeline is optimized for Claude's tool-use and structured-output capabilities.

Custom Endpoints

REVIEWATE_BASE_URL works with any Anthropic-compatible endpoint:

# LiteLLM proxy
export REVIEWATE_BASE_URL=http://localhost:4000

# Any Anthropic-compatible endpoint
export REVIEWATE_BASE_URL=https://your-proxy.example.com
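As a sketch, an Anthropic-compatible client typically resolves its endpoint with an env-var override. The fallback shown is Anthropic's public API host; that Reviewate defaults to it is an assumption:

```shell
# Illustration: endpoint resolution with an env-var override.
# Falling back to https://api.anthropic.com is an assumption about
# Reviewate's behavior, not confirmed from its source.
resolve_base_url() {
  echo "${REVIEWATE_BASE_URL:-https://api.anthropic.com}"
}
```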

First-Run Wizard

When you run Reviewate for the first time, the setup wizard guides you through:

  1. Auth mode — API key, Claude subscription (OAuth), or custom endpoint
  2. Model selection — Choose review and utility models

Config is saved to ~/.reviewate/config.toml. To re-run:

reviewate config

When using subscription mode (option 2), ANTHROPIC_API_KEY and ANTHROPIC_BASE_URL from your environment are ignored — the SDK uses your Claude CLI session directly.