Reviewate uses two tiers of models to balance quality and cost:
| Tier | Agents | Default | Purpose |
|---|---|---|---|
| Review | AnalyzeAgent (x2), FactCheckAgent | sonnet | Deep analysis, code exploration, bug detection |
| Utility | SynthesizerAgent, DedupAgent, StyleAgent, IssueExplorerAgent, SummarizerAgent, SummaryParserAgent | haiku | Formatting, parsing, lightweight tasks |
The review tier needs a capable reasoning model. The utility tier can use a fast, cheap model.
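To make the two-tier split concrete, here is a minimal sketch of how agents could resolve to models. The agent names mirror the table above; the resolver function itself is hypothetical, not Reviewate's actual internals:

```python
# Hypothetical sketch of the two-tier agent-to-model mapping.
# Agent names come from the tier table; model_for() is illustrative only.

REVIEW_AGENTS = {"AnalyzeAgent", "FactCheckAgent"}
UTILITY_AGENTS = {
    "SynthesizerAgent", "DedupAgent", "StyleAgent",
    "IssueExplorerAgent", "SummarizerAgent", "SummaryParserAgent",
}

def model_for(agent: str, review_model: str = "sonnet",
              utility_model: str = "haiku") -> str:
    """Return the model an agent should use based on its tier."""
    if agent in REVIEW_AGENTS:
        return review_model
    if agent in UTILITY_AGENTS:
        return utility_model
    raise ValueError(f"unknown agent: {agent}")
```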
Set the model for each tier with environment variables:

```bash
export REVIEWATE_REVIEW_MODEL=sonnet
export REVIEWATE_UTILITY_MODEL=haiku
```
The simplest way to configure models is via `~/.reviewate/config.toml` (created automatically on first run, or when you run `reviewate config`):
```toml
[models]
review = "sonnet"
utility = "haiku"
```
The config file stores model choices only. API keys must be set via environment variables.
Precedence: environment variables > config file > defaults.
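The precedence rule can be sketched as follows. The environment variable names are the ones documented above; the resolver function is hypothetical:

```python
import os

# Hypothetical resolver illustrating: env vars > config file > defaults.
DEFAULTS = {"review": "sonnet", "utility": "haiku"}
ENV_VARS = {"review": "REVIEWATE_REVIEW_MODEL", "utility": "REVIEWATE_UTILITY_MODEL"}

def resolve_model(tier: str, config: dict) -> str:
    """Pick a tier's model: environment first, then config.toml, then default."""
    env_value = os.environ.get(ENV_VARS[tier])
    if env_value:
        return env_value
    return config.get("models", {}).get(tier, DEFAULTS[tier])
```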
Reviewate is built on the Claude Agent SDK, which speaks the Anthropic API. To use models from other providers (OpenAI, Gemini, Mistral, Llama, etc.), you need an API proxy that translates between the Anthropic API format and your provider's API.
LiteLLM is the recommended proxy — it supports 100+ models behind a unified API.
```bash
pip install 'litellm[proxy]'
```
Create `litellm_config.yaml`:

```yaml
model_list:
  # Review tier — maps Anthropic model names to Gemini
  - model_name: gemini-3-flash-preview
    litellm_params:
      model: gemini/gemini-3-flash-preview
      api_key: os.environ/GEMINI_API_KEY
  # Utility tier
  - model_name: gemini-3.1-flash-lite-preview
    litellm_params:
      model: gemini/gemini-3.1-flash-lite-preview
      api_key: os.environ/GEMINI_API_KEY
```
The `model_name` fields must match the Anthropic model IDs that Reviewate sends; LiteLLM routes them to the Gemini models you specify.

Start the proxy:

```bash
export GEMINI_API_KEY=your-key
litellm --config litellm_config.yaml
```
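Once the proxy is running, you can smoke-test it with an Anthropic-format request. A sketch, assuming LiteLLM's Anthropic-compatible `/v1/messages` route and the key and port used elsewhere in this guide:

```python
import json

# Anthropic Messages-format payload; the model must match a model_name
# from litellm_config.yaml so LiteLLM can route it.
payload = {
    "model": "gemini-3-flash-preview",
    "max_tokens": 32,
    "messages": [{"role": "user", "content": "ping"}],
}
headers = {
    "x-api-key": "sk-1234",             # your LiteLLM key
    "anthropic-version": "2023-06-01",  # required by the Anthropic API format
    "content-type": "application/json",
}
body = json.dumps(payload)
# POST `body` to http://localhost:4000/v1/messages with these headers
# (e.g. via curl) and expect a routed Gemini response.
```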
Run `reviewate config` and pick Custom endpoint (option 3). The resulting config looks like:

```toml
[auth]
mode = "custom"

[models]
review = "gemini-3-flash-preview"
utility = "gemini-3.1-flash-lite-preview"

[urls]
custom = "http://localhost:4000"
```
Then run a review through the proxy:

```bash
export ANTHROPIC_API_KEY=sk-1234
reviewate https://github.com/org/repo/pull/123
```
If you set a `master_key` in your LiteLLM config, use that key instead.

This same pattern works for any provider LiteLLM supports (OpenAI, Mistral, Llama, etc.); just swap the `litellm_params.model` values. See LiteLLM supported models for the full list.
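For instance, routing the review tier to an OpenAI model would only change the provider-side fields (model ID illustrative):

```yaml
model_list:
  - model_name: gemini-3-flash-preview   # keep: this is the name Reviewate sends
    litellm_params:
      model: openai/gpt-4o               # swap only the provider-side model
      api_key: os.environ/OPENAI_API_KEY
```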
`REVIEWATE_BASE_URL` works with any Anthropic-compatible endpoint:

```bash
# LiteLLM proxy
export REVIEWATE_BASE_URL=http://localhost:4000

# Any Anthropic-compatible endpoint
export REVIEWATE_BASE_URL=https://your-proxy.example.com
```
When you run Reviewate for the first time, the setup wizard guides you through setup.
Config is saved to `~/.reviewate/config.toml`. To re-run the wizard:

```bash
reviewate config
```
When you authenticate through the Claude CLI, `ANTHROPIC_API_KEY` and `ANTHROPIC_BASE_URL` from your environment are ignored; the SDK uses your Claude CLI session directly.