Reviewate will learn from how your team interacts with its reviews. When someone reacts to or dismisses an AI comment, that signal is captured and periodically summarized into team guidelines — natural language rules like "Don't flag missing type hints in test files."
These guidelines are then injected into future reviews so the same mistakes aren't repeated.
```
User reacts to AI comment (thumbs down, reply, dismiss)
        ↓
Feedback stored per organization/repository
        ↓
Periodic summarization via LLM
        ↓
Team guidelines injected into review agents
```
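The steps above can be sketched in a few functions. Everything here is an illustrative assumption, not the actual Reviewate API: the names (`store_feedback`, `summarize_feedback`, `build_system_prompt`), the in-memory store, and the stubbed summarizer (the real system would call the configured LLM rather than filter locally).

```python
from collections import defaultdict

# Hypothetical in-memory store, keyed by (organization, repository).
# The real backend presumably persists feedback; this is illustrative only.
_feedback: dict[tuple[str, str], list[dict]] = defaultdict(list)

def store_feedback(org: str, repo: str, signal: str, comment: str) -> None:
    """Record one interaction signal (thumbs_down, reply, dismissed)."""
    _feedback[(org, repo)].append({"signal": signal, "comment": comment})

def summarize_feedback(org: str, repo: str) -> list[str]:
    """Periodic step: turn raw signals into natural-language guidelines.

    Stubbed here. The real system would send accumulated feedback to the
    configured LLM (FEEDBACK_LOOP_MODEL) and receive back rules such as
    "Don't flag missing type hints in test files."
    """
    entries = _feedback[(org, repo)]
    return [f"Avoid comments like: {e['comment']}"
            for e in entries if e["signal"] == "thumbs_down"]

def build_system_prompt(base: str, guidelines: list[str]) -> str:
    """Inject team guidelines into the review agent's prompt."""
    if not guidelines:
        return base
    rules = "\n".join(f"- {g}" for g in guidelines)
    return f"{base}\n\nTeam guidelines:\n{rules}"

store_feedback("acme", "api", "thumbs_down", "Missing type hints in tests")
prompt = build_system_prompt("Review this PR.", summarize_feedback("acme", "api"))
print(prompt)
```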
| Signal | When it's captured |
|---|---|
| Thumbs down | Emoji reaction on an AI comment |
| Reply | User replies to an AI comment |
| Dismissed | Review marked as resolved |
Guidelines will be generated at two levels, matching how feedback is stored:

- **Organization level** — rules that apply across every repository in the organization
- **Repository level** — rules scoped to a single repository
The feedback loop will be controlled via environment variables on the platform backend:
| Variable | Default | Description |
|---|---|---|
| `FEEDBACK_LOOP_ENABLED` | `false` | Enable feedback learning |
| `FEEDBACK_LOOP_MODEL` | `gemini-2.0-flash` | Model for summarizing feedback |
| `FEEDBACK_LOOP_PROVIDER` | `gemini` | Provider (`gemini`, `anthropic`, `openai`, `openrouter`) |
| `FEEDBACK_LOOP_API_KEY` | - | API key (optional if using standard provider env vars) |
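A minimal configuration enabling the loop might look like this; the model and provider lines simply restate the documented defaults, and the key is a placeholder:

```shell
# Enable feedback learning on the platform backend
export FEEDBACK_LOOP_ENABLED=true

# Optional: override the summarization model/provider (defaults shown)
export FEEDBACK_LOOP_MODEL=gemini-2.0-flash
export FEEDBACK_LOOP_PROVIDER=gemini

# Optional if the provider's standard API key variable is already set
export FEEDBACK_LOOP_API_KEY=your-key-here
```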
The summarization LLM call is lightweight (~800 input / ~250 output tokens), costing roughly $0.01/month with Gemini Flash at typical usage.
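The cost figure holds up as rough arithmetic. The per-million-token prices and the run count below are assumptions for illustration (check current Gemini Flash pricing); only the token counts come from the text above.

```python
INPUT_TOKENS = 800             # per summarization call (from the docs)
OUTPUT_TOKENS = 250
PRICE_IN = 0.10 / 1_000_000    # $/token, assumed Gemini Flash input price
PRICE_OUT = 0.40 / 1_000_000   # $/token, assumed output price
RUNS_PER_MONTH = 50            # assumed "typical usage"

per_call = INPUT_TOKENS * PRICE_IN + OUTPUT_TOKENS * PRICE_OUT
monthly = per_call * RUNS_PER_MONTH
print(f"${per_call:.5f} per call, ${monthly:.3f}/month")
```

Under these assumptions each call costs well under a cent, and a month of periodic runs lands near the quoted $0.01.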