Feedback Loop

Reviewate will learn from your team's reactions to improve future reviews.
The feedback loop is coming soon. The feature is implemented in the codebase but currently disabled. This page documents how it will work once released.

How It Will Work

Reviewate will learn from how your team interacts with its reviews. When someone reacts to or dismisses an AI comment, that signal is captured and periodically summarized into team guidelines — natural language rules like "Don't flag missing type hints in test files."

These guidelines are then injected into future reviews so the same mistakes aren't repeated.

User reacts to AI comment (thumbs down, reply, dismiss)
  ↓
Feedback stored per organization/repository
  ↓
Periodic summarization via LLM
  ↓
Team guidelines injected into review agents
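
The pipeline above can be sketched in a few lines. This is an illustrative model only, not Reviewate's actual implementation: the class and field names are assumptions, and the LLM summarization step is replaced by a simple signal count so the sketch stays self-contained.

```python
from dataclasses import dataclass


@dataclass
class FeedbackEvent:
    """One captured reaction to an AI review comment (illustrative shape)."""
    repo: str        # "org/repo" the comment belongs to
    comment_id: str
    signal: str      # "thumbs_down", "reply", or "dismissed"


class FeedbackStore:
    """Stores feedback events per organization/repository."""

    def __init__(self) -> None:
        self.events: list[FeedbackEvent] = []

    def record(self, event: FeedbackEvent) -> None:
        self.events.append(event)


def summarize(events: list[FeedbackEvent]) -> dict:
    """Stand-in for the periodic LLM summarization step.

    The real step would send recent feedback to a model and get back
    natural-language guidelines; here we just tally signals per repo.
    """
    counts: dict[str, dict[str, int]] = {}
    for e in events:
        per_repo = counts.setdefault(e.repo, {})
        per_repo[e.signal] = per_repo.get(e.signal, 0) + 1
    return counts


store = FeedbackStore()
store.record(FeedbackEvent("acme/api", "c1", "thumbs_down"))
store.record(FeedbackEvent("acme/api", "c2", "thumbs_down"))
summary = summarize(store.events)
```

In the real loop, the output of the summarization step would be the guideline text injected into the review agents' prompts.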

Feedback Signals

Signal         When it's captured
-----------    -------------------------------
Thumbs down    Emoji reaction on an AI comment
Reply          User replies to an AI comment
Dismissed      Review marked as resolved

Scoping

Guidelines will be generated at two levels:

  • Organization-wide — Applied to all repositories
  • Repository-specific — Overrides org-wide guidelines for a single repo
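
A minimal resolution function for the two levels might look like this. The page only says repository-specific guidelines "override" org-wide ones, so the merge policy here (repo rules replace the org rules entirely when present) is an assumption:

```python
def resolve_guidelines(org_rules: list[str], repo_rules: list[str]) -> list[str]:
    """Pick the guidelines to inject for a single repository.

    Assumption: a non-empty repo-specific list fully overrides the
    org-wide list; otherwise the org-wide guidelines apply.
    """
    return repo_rules if repo_rules else org_rules
```

An alternative policy would be to concatenate both lists with repo rules taking precedence on conflicts; which behavior Reviewate ships is not specified here.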

Configuration (when available)

The feedback loop will be controlled via environment variables on the platform backend:

Variable                 Default            Description
----------------------   ----------------   ------------------------------------------------------
FEEDBACK_LOOP_ENABLED    false              Enable feedback learning
FEEDBACK_LOOP_MODEL      gemini-2.0-flash   Model for summarizing feedback
FEEDBACK_LOOP_PROVIDER   gemini             Provider (gemini, anthropic, openai, openrouter)
FEEDBACK_LOOP_API_KEY    -                  API key (optional if using standard provider env vars)
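
Reading these variables on the backend could look like the sketch below. The variable names and defaults come from the table above; the function itself is illustrative, not Reviewate's actual config loader.

```python
import os


def load_feedback_config(env: dict = os.environ) -> dict:
    """Read the feedback-loop settings, falling back to the documented
    defaults when a variable is unset."""
    return {
        "enabled": env.get("FEEDBACK_LOOP_ENABLED", "false").lower() == "true",
        "model": env.get("FEEDBACK_LOOP_MODEL", "gemini-2.0-flash"),
        "provider": env.get("FEEDBACK_LOOP_PROVIDER", "gemini"),
        # Optional: may be omitted if the standard provider env vars
        # (e.g. the provider's own API-key variable) are set instead.
        "api_key": env.get("FEEDBACK_LOOP_API_KEY"),
    }
```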

The summarization LLM call is lightweight (~800 input / ~250 output tokens), costing roughly $0.01/month with Gemini Flash at typical usage.