A production-quality, modular code review service that combines deterministic static analysis with optional LLM suggestions.
- Backend: FastAPI (versioned endpoints under `/v2` and mirrored under `/api/v2` for frontend compatibility)
- Frontend: React + Vite + Tailwind
- Checks: flake8 + bandit (with a small built-in fallback)
- Prompt optimization: context compression + optional ScaleDown compression
- Frontend (Vercel): https://coderagent.vercel.app/
- Backend (Render): https://code-review-agent-api.onrender.com/
Tip: If you call the API from a browser, the backend must allow your frontend origin via `CODE_REVIEW_CORS_ORIGINS` (example: `https://coderagent.vercel.app`).
- Overview
- Features
- Supported Languages
- How It Works
- High-Level Architecture
- Live Deployment Links
- API Quick Start
- Request Format
- Response Format
- Severity Levels
- Error Handling
- Configuration & Environment Variables
- Offline Mode (No LLM)
- Limits & Constraints
- Security & Privacy
- Versioning & Changelog
- Contribution Guide
CRA takes a source file (or raw code), compresses it for efficient prompting, runs static analysis when applicable, and returns a structured, ranked set of issues.
You can run CRA in:
- Offline mode (no secrets, no network): compression + static analysis only
- LLM mode: adds LLM-backed findings via an OpenAI-compatible Chat Completions API
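The split between the two modes can be sketched as a tiny stage planner. This is a hypothetical helper for illustration only; the function and stage names are not from the repo:

```python
def plan_stages(llm_provider: str) -> list[str]:
    """Decide which pipeline stages run for a given LLM_PROVIDER value.

    Illustrative sketch: offline mode ("none") keeps only the deterministic
    stages; any other provider adds the LLM-backed ones.
    """
    stages = ["compress", "static_analysis"]
    if llm_provider.strip().lower() != "none":
        stages += ["build_prompt", "llm_review"]
    stages.append("rank")
    return stages
```

With `plan_stages("none")` only the deterministic stages remain, which is why offline runs need no secrets and no network.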
- FastAPI backend with JSON + file upload endpoints
- Ranked, structured issues with category + severity
- Optional strict mode output (fixed-format human-readable findings)
- Optional Firebase token verification (with safe fallback)
- Modern UI (React + Tailwind) and optional Streamlit UI (`ui.py`)
The UI supports multiple languages for editor highlighting. Static analysis and deeper tooling depend on server runtime availability.
- Python
- JavaScript
- TypeScript
- Java
- C#
- Go
- Rust
Pipeline (see `app/ai_agent.py`):
- Compress code (`app/compressor.py`)
- Run static analysis (`app/static_checks.py`)
- Build a review prompt (compressed context + static results)
- Optionally compress the prompt via ScaleDown (`app/scaledown_compression.py`)
- Call the LLM (`app/llm_client.py`) and parse/validate structured JSON
- Rank issues (`app/ranker.py`) and return the response
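A minimal end-to-end sketch of this pipeline, with stub logic invented for illustration (the real implementations live in the `app/` modules named above):

```python
def compress(code: str) -> str:
    # Stand-in for app/compressor.py: drop blank lines to shrink the prompt.
    return "\n".join(line for line in code.splitlines() if line.strip())

def static_checks(code: str) -> list[dict]:
    # Stand-in for app/static_checks.py (the real service runs flake8/bandit).
    return [
        {"line": i + 1, "category": "style", "severity": "low",
         "description": "tab character in source"}
        for i, line in enumerate(code.splitlines()) if "\t" in line
    ]

def build_prompt(compressed: str, findings: list[dict]) -> str:
    return f"Review this code:\n{compressed}\nKnown findings: {len(findings)}"

def rank(issues: list[dict]) -> list[dict]:
    # Stand-in for app/ranker.py: order by severity, most serious first.
    order = {"critical": 0, "high": 1, "medium": 2, "low": 3, "info": 4}
    return sorted(issues, key=lambda i: order.get(i.get("severity"), 5))

def review(code: str, llm=None) -> list[dict]:
    compressed = compress(code)
    issues = static_checks(code)
    if llm is not None:  # LLM mode only: offline runs skip this call entirely
        issues += llm(build_prompt(compressed, issues))
    return rank(issues)
```

Passing `llm=None` reproduces offline mode: compression and static analysis still run, and the result is deterministic.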
```bash
curl https://code-review-agent-api.onrender.com/healthz
```

```bash
curl -X POST https://code-review-agent-api.onrender.com/v2/review/file \
  -H "Content-Type: application/json" \
  -d '{"filename":"input.py","code":"def add(a,b):\n return a+b\n","language":"python","enabled_checks":{"security":true,"style":true,"performance":true}}'
```

Note: Versioned routes are under `/v2/...` and mirrored under `/api/v2/...` for frontend compatibility.
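The same request can be issued from Python using only the standard library. A minimal client sketch (the helper names `build_payload` and `review_file` are illustrative, not part of the project):

```python
import json
from urllib import request

API_BASE = "https://code-review-agent-api.onrender.com"

def build_payload(filename: str, code: str, language: str = "python") -> dict:
    # Mirrors the JSON body shown in the curl example above.
    return {
        "filename": filename,
        "code": code,
        "language": language,
        "enabled_checks": {"security": True, "style": True, "performance": True},
    }

def review_file(filename: str, code: str, language: str = "python") -> dict:
    body = json.dumps(build_payload(filename, code, language)).encode("utf-8")
    req = request.Request(
        f"{API_BASE}/v2/review/file",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )
    with request.urlopen(req, timeout=30) as resp:  # raises HTTPError on 4xx/5xx
        return json.loads(resp.read().decode("utf-8"))
```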
Recommended endpoint used by the frontend:
`POST /v2/review/file`
Example JSON payload:

```json
{
  "filename": "input.py",
  "code": "def add(a, b):\n return a + b\n",
  "language": "python",
  "enabled_checks": {
    "security": true,
    "style": true,
    "performance": true
  }
}
```

High-level response includes:
- `issues[]` — ranked findings
- `score` — normalized score + counts
- `static_analysis` — tool output summaries
Example (trimmed):

```json
{
  "issues": [
    {
      "file": "input.py",
      "line": 12,
      "category": "security",
      "severity": "high",
      "description": "User-controlled input is used in SQL query",
      "suggestion": "Use parameterized queries / prepared statements",
      "source": "bandit"
    }
  ],
  "score": {
    "score": 92,
    "counts_by_severity": { "critical": 0, "high": 1, "medium": 0, "low": 0, "info": 0 }
  },
  "static_analysis": {
    "flake8": { "issues": [] },
    "bandit": { "result": { "results": [] } }
  }
}
```

Severities are normalized to: `critical`, `high`, `medium`, `low`, `info`.
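Tallying findings per normalized level, as in the `counts_by_severity` field of the example response, can be sketched like this (the aggregation helper itself is illustrative, not the repo's ranker):

```python
SEVERITIES = ("critical", "high", "medium", "low", "info")

def counts_by_severity(issues: list[dict]) -> dict[str, int]:
    """Tally issues per normalized severity; unknown values fall back to "info"."""
    counts = {level: 0 for level in SEVERITIES}
    for issue in issues:
        level = str(issue.get("severity", "info")).lower()
        counts[level if level in SEVERITIES else "info"] += 1
    return counts
```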
- `400` — invalid input (missing code, invalid URL, etc.)
- `422` — validation error from request parsing
- `502` — upstream issues (LLM provider/network/runtime errors)
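Client code can branch on these statuses. A hedged sketch (this mapping helper is illustrative and not part of the API):

```python
def classify_error(status: int) -> str:
    """Map CRA response statuses to a coarse client-side action."""
    if 200 <= status < 300:
        return "ok"
    if status in (400, 422):
        return "fix_request"   # bad input or failed validation: retrying won't help
    if status == 502:
        return "retry_later"   # upstream LLM/provider/network trouble
    return "unexpected"
```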
Frontend:
- `VITE_API_BASE_URL` — point the UI at your backend
  - Example: `https://code-review-agent-api.onrender.com`

CORS:
- `CODE_REVIEW_CORS_ORIGINS` — comma-separated browser origins
  - Example: `https://coderagent.vercel.app`
- `CODE_REVIEW_CORS_ORIGIN_REGEX` — optional regex (useful for Vercel previews)

LLM:
- `LLM_PROVIDER` — defaults to `openai`; set `none` for offline mode
- `LLM_API_KEY`
- `LLM_BASE_URL` (default: `https://openrouter.ai/api/v1`)
- `LLM_MODEL` (default: `qwen/qwen3-coder:free`)
- `LLM_TIMEOUT_SECONDS` — request timeout (default: 30)

ScaleDown (optional):
- `SCALEDOWN_API_KEY`
Set `LLM_PROVIDER=none`.
Result:
- No network calls
- Deterministic runs
- Static analysis still executes
- Large files may be compressed/truncated to fit prompt limits.
- Some language-specific checks depend on runtime availability.
- In offline mode, LLM-backed findings are disabled.
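One common way to fit a large file under a prompt budget is to keep the head and tail and elide the middle. A hypothetical sketch of that idea (CRA's actual compressor in `app/compressor.py` may work differently):

```python
def truncate_for_prompt(code: str, max_chars: int = 8000) -> str:
    """Keep the start and end of the file within the budget, marking the cut."""
    if len(code) <= max_chars:
        return code
    marker = "\n# ... truncated ...\n"
    half = (max_chars - len(marker)) // 2
    return code[:half] + marker + code[-half:]
```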
- Do not commit real API keys.
- Do not commit `.env` or Firebase Admin SDK service account JSON.
- `/configz` never returns secret values.
- API routes are versioned under `/v2`.
- The app version is exposed via `GET /healthz` (`version` field).
- Fork the repo and create a feature branch.
- Run backend tests (`pytest`) and frontend build checks.
- Open a PR describing the change and validation steps.
```bat
set LLM_PROVIDER=none
python -m venv .venv
.venv\Scripts\activate
pip install -r requirements.txt
uvicorn app.main:app --reload
```

Start everything:

```bat
start_app.bat
```

Or run Streamlit:

```bat
streamlit run ui.py
```

Backend tests:

```bat
pytest
```

Interactive test runner:

```bat
run_tests.bat
```

Lint and type checks:

```bat
ruff check .
ruff format .
pyright
```
