# Supercharge AI coding agents with automated code reviews
AI Code Reviews in CLI integrates seamlessly with AI coding agents like Cursor, Claude Code, Windsurf, and others, enabling natural language code reviews and automated fixes.
**Prerequisite:** a Git repository with code changes (committed or uncommitted)
## Why use the code review CLI with AI agents?
AI coding agents are great at writing code, but they need expert guidance to catch security vulnerabilities, performance issues, and best-practice violations. The AI Code Reviews in CLI provides that expertise.
The power combination:
- AI Code Reviews in CLI provides specialized code analysis
- Your AI agent (Cursor, Claude Code, Windsurf, etc.) implements the fixes and iterates based on feedback
- You stay in natural conversation, never leaving your workflow
## How it works
1. **Configure your agent.** Add the `bitoreview` command to your agent's rules file so it knows when and how to run code reviews.
2. **Ask for code reviews in natural language.** Simply tell your agent "review my changes" or "check for security issues".
3. **Agent runs the code review CLI automatically.** Your agent executes the `bitoreview` command and reads the results.
4. **Get fixes implemented instantly.** Tell your agent which issues to fix, and it implements the changes automatically.
## Setup guide
Add the markdown content below to your agent's rules file. The exact file location depends on the AI coding agent you're using:
- **Cursor:** `.cursor/rules/bito-code-review.mdc`
- **Claude Code:** `~/.claude/CLAUDE.md` in your home folder (applies to all your Claude sessions), or `CLAUDE.md` in the root of your repo
- **Windsurf:** `.windsurf/rules/bito-code-review.md`
- **Cline:** `.clinerules/bito-code-review.md`
- **Other agents:** check your agent's documentation for the custom rules/instructions location
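For example, setting up the rules file for Cursor (the other agents follow the same pattern with their paths listed above) might look like:

```shell
# Create the rules directory and an empty rules file for Cursor;
# other agents use the analogous paths from the list above.
mkdir -p .cursor/rules
touch .cursor/rules/bito-code-review.mdc   # then paste in the markdown content below
echo "created .cursor/rules/bito-code-review.mdc"
```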
# Code Review
When the user asks for a code review, run `bitoreview` immediately, granting any permissions it needs to write files to its config.
## Do
- Always use `--prompt-only` flag (JSON output for AI)
- Generate a unique temp filename (e.g., using timestamp)
- Use `tee` to save output to the temp file while running
- Wait for command to complete before responding
- Present clear summary grouped by severity
- Validate each issue against the actual code before fixing
## Don't
- Don't use hardcoded filenames (conflicts with parallel runs)
- Don't respond before command completes
- Don't run the review command twice
- Don't show raw JSON output to user
- Don't fix issues without validating first
## Two-Step Pattern
1. Run `bitoreview review --prompt-only 2>&1 | tee <unique_temp_file>`
2. Parse the JSON output and present summary to user
Generate unique filename using timestamp or random value in the platform's temp directory.
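The two-step pattern can be sketched as a short shell snippet; timestamp-plus-PID naming is just one way to get a unique file per run, and the command is echoed rather than executed here since `bitoreview` may not be installed in every environment:

```shell
# Step 1: build a unique temp filename (timestamp + PID) so parallel
# review runs never clobber each other's output.
OUT="${TMPDIR:-/tmp}/bitoreview_$(date +%s)_$$.json"

# Step 2: the full command the agent would run, shown with echo
# for illustration.
echo "bitoreview review --prompt-only 2>&1 | tee $OUT"
```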
## After Reading Output
1. Quick sanity check (file exists, line numbers valid)
2. Group issues by severity (high → medium → low)
3. Present summary: file:line, issue title, suggested fix
4. Show metrics (total issues, by severity)
5. Offer to help fix issues
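As a rough illustration of the metrics step, a severity tally can be pulled from the saved file with standard tools; the `"severity"` field and the single-line layout below are assumptions for the demo, since the real review JSON schema may differ:

```shell
# Fake review output, for illustration only; the real bitoreview
# JSON schema may differ (the "severity" field is an assumption).
cat > /tmp/review_demo.json <<'EOF'
{"issues":[{"severity":"high"},{"severity":"low"},{"severity":"high"}]}
EOF

# Tally issues per severity level.
for sev in high medium low; do
  n=$(grep -o "\"severity\":\"$sev\"" /tmp/review_demo.json | wc -l | tr -d ' ')
  echo "$sev: $n"
done
```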
## Before Fixing Any Issue
1. Read the actual code at the reported file:line
2. Validate the issue exists in current code
3. Verify suggested fix is appropriate
4. Apply fix only if validated
5. If invalid, explain why to user
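The validation steps above can be sketched in shell; `FILE` and `LINE` stand in for values parsed from a reported issue and are placeholders, not real review results:

```shell
# Hypothetical issue location parsed from the review output; the path
# and line number are placeholders for illustration.
FILE="src/example_app.py"
LINE=42

# Validate before fixing: the file must still exist and the reported
# line number must be within the file's current length.
if [ -f "$FILE" ] && [ "$LINE" -le "$(wc -l < "$FILE")" ]; then
  RESULT="valid"
  sed -n "${LINE}p" "$FILE"   # read the actual code at file:line
else
  RESULT="stale"
  echo "issue at $FILE:$LINE no longer matches current code"
fi
```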
## Modify Command By Intent
- "quick" / "critical" → add `--mode essential`
- "security" → add `--focus security`
- "performance" → add `--focus performance`
- "before PR" → add `--base main`
- "specific file" → add file path
- "uncommitted" → add `--type working`
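The intent mapping can be expressed as a small helper; the intent labels passed to the function are illustrative, and the composed command is echoed rather than executed:

```shell
# Sketch: translate a user's intent into extra flags, per the mapping above.
intent_flags() {
  case "$1" in
    quick|critical)  echo "--mode essential" ;;
    security)        echo "--focus security" ;;
    performance)     echo "--focus performance" ;;
    before_pr)       echo "--base main" ;;
    uncommitted)     echo "--type working" ;;
    *)               echo "" ;;
  esac
}

# Compose the full command (echoed for illustration, not executed).
CMD="bitoreview review --prompt-only $(intent_flags security)"
echo "$CMD"
```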
## Flags Reference
- `--prompt-only`: always use (JSON output for AI)
- `--mode essential`: ~20% faster, critical issues only
- `--focus`: security | performance | bugs | best-practices
## Timing
~2-10 min depending on changeset size. `--mode essential` is ~20% faster.
## Platform Note
`tee` works on Linux, macOS, PowerShell, and Git Bash/WSL. Use platform-appropriate syntax if needed.