An LLM-first Cybersecurity Analyzer that inspects Python code for security vulnerabilities. Rather than being just a Semgrep UI, it showcases how to build a reliable, tool-using LLM agent for security-critical workloads. It combines:
- OpenAI-compatible LLMs (via OpenRouter/LiteLLM and `openai-agents`) for structured reasoning and tool-calling
- Semgrep for static security scanning, invoked as a tool by the LLM
- A modern Next.js frontend
- A FastAPI backend
- Containerized deployment to Google Cloud Run (and optionally Azure Container Apps) using Terraform
- Live Demo: https://projects.kaushikpaul.co.in/cyber-security-agent
- Backend: FastAPI running in a single container on Cloud Run/Azure
- Frontend: Next.js 15 UI served by FastAPI as static files
- Security Engine:
  - LLM agent orchestrated with `openai-agents`
  - Semgrep MCP server (`semgrep-mcp`) for deep static analysis
- Upload Python files (`.py`) and run a full security analysis
- LLM-first analysis pipeline:
  - The LLM agent always calls Semgrep once, then performs its own reasoning on top of the findings.
  - Produces a single, consolidated security report you can trust.
- Hybrid scanning:
  - Semgrep static analysis via MCP
  - LLM-powered reasoning that adds missing findings, context, and prioritized fixes
- Structured, typed security reports:
  - Executive summary
  - Per-issue title, description, vulnerable snippet, and recommended fix
  - CVSS score and severity (critical / high / medium / low), validated via Pydantic models
- Production-ready deployment:
  - Single Docker image
  - Terraform modules for:
    - Azure Container Apps
    - Google Cloud Run (used in production at https://projects.kaushikpaul.co.in/cyber-security-agent)
- Cloud-friendly defaults:
  - Scales to zero on Cloud Run and Azure
  - 1 vCPU / 2 GiB RAM tuned for Semgrep and the LLM tooling
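The severity labels above line up with the standard CVSS v3.x qualitative rating bands. A minimal sketch of such a score-to-severity mapping (the project's actual mapping may differ):

```python
def cvss_to_severity(score: float) -> str:
    """Map a CVSS v3.x base score to a severity label.

    Uses the standard CVSS v3.x rating bands; this is an illustrative
    helper, not the project's own implementation.
    """
    if not 0.0 <= score <= 10.0:
        raise ValueError(f"CVSS score must be in [0.0, 10.0], got {score}")
    if score >= 9.0:
        return "critical"
    if score >= 7.0:
        return "high"
    if score >= 4.0:
        return "medium"
    if score > 0.0:
        return "low"
    return "none"  # a score of 0.0 carries no rating in CVSS v3.x
```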
- Frontend (`frontend/`)
  - Next.js 15 (App Router), React 19, TypeScript
  - Main page: `src/app/page.tsx`
    - Handles file upload, calls the backend `/api/analyze` endpoint, renders results
  - Components:
    - `CodeInput` – file picker + code viewer
    - `AnalysisResults` – summary & issues table with severity badges
- Backend (`backend/`)
  - FastAPI app in `server.py`
  - Key endpoints:
    - `POST /api/analyze` — analyze Python code and return a `SecurityReport`
    - `GET /health` — basic health check
    - `GET /network-test` — checks Semgrep API reachability
    - `GET /semgrep-test` — verifies the Semgrep CLI can be installed and run
  - Loads `.env` with `python-dotenv`
  - Mounts the built frontend under `/` from the `static/` folder in production
- Agents & Semgrep MCP (`backend/context.py`, `backend/mcp_servers.py`)
  - Security agent configured with `SECURITY_RESEARCHER_INSTRUCTIONS`
  - Uses `openai-agents` + LiteLLM to drive a tool-using LLM that talks to `semgrep-mcp` over MCP
  - Enforces:
    - A single Semgrep scan per request
    - `config: "auto"` for all scans
  - LLM outputs are parsed into the `SecurityReport` Pydantic model, ensuring stable, strongly typed responses
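A minimal sketch of what such a `SecurityReport` model might look like — field names follow the simplified response schema in the API section, but the actual model in `backend/` may differ:

```python
from typing import List, Literal

from pydantic import BaseModel, Field


class SecurityIssue(BaseModel):
    """One finding in the consolidated report (field names are illustrative)."""
    title: str
    description: str
    code: str   # vulnerable snippet
    fix: str    # recommended remediation
    cvss_score: float = Field(ge=0.0, le=10.0)
    severity: Literal["critical", "high", "medium", "low"]


class SecurityReport(BaseModel):
    """Top-level report returned by POST /api/analyze."""
    summary: str
    issues: List[SecurityIssue]
```

Validating the raw LLM output against a model like this is what makes the response shape stable: malformed or out-of-range fields raise a validation error instead of leaking into the API response.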
- Infrastructure & Deployment (`terraform/`)
  - Dockerfile at the repo root:
    - Builds the frontend (`npm ci && npm run build`)
    - Installs the backend with `uv` from `pyproject.toml` / `uv.lock`
    - Serves the app via `uv run uvicorn server:app --host 0.0.0.0 --port 8000`
  - Azure (`terraform/azure`)
    - Azure Container Registry (ACR)
    - Azure Container Apps environment
    - Container App with public ingress on port 8000
  - GCP (`terraform/gcp`)
    - Artifact Registry
    - Cloud Run service with a public URL
    - 1 vCPU / 2 GiB RAM, minScale 0, maxScale 1
- `POST /api/analyze`
  - Request body: `{ "code": "Python source as a string" }`
  - Response (simplified):
    - `summary: string`
    - `issues: Array<{ title, description, code, fix, cvss_score, severity }>`
- `GET /health`
  - Returns `{ "message": "Cybersecurity Analyzer API" }`
- `GET /network-test`
  - Connectivity check to `https://semgrep.dev/api/v1/`
- `GET /semgrep-test`
  - Verifies the Semgrep installation and version inside the running container
For full, step-by-step setup (including Docker, Azure, and GCP), see INSTALLATION.md.
Below is a minimal local workflow.
- Clone the repo

  ```bash
  git clone <this-repo-url>
  cd cyber-security-agent
  ```

- Create `.env` in the project root

  ```bash
  # .env (do NOT commit this file)
  OPENROUTER_API_KEY=your-openrouter-api-key
  OPENAI_API_KEY=your-openai-api-key-optional
  SEMGREP_APP_TOKEN=your-semgrep-app-token
  ```

- Start the backend (FastAPI)

  ```bash
  cd backend
  uv run server.py
  # Backend will listen on http://localhost:8000
  ```

- Start the frontend (Next.js)

  ```bash
  cd frontend
  npm install
  npm run dev
  # Frontend at http://localhost:3000
  ```

- Use the app
  - Open http://localhost:3000
  - Click “Choose File” and select a Python file (e.g. `airline.py` in the repo root)
  - Click “Analyze Code” to see security findings
All secrets are loaded from a .env file in the project root:
- `OPENROUTER_API_KEY` — required; used by the LLM agent
- `OPENAI_API_KEY` — optional; for OpenAI-compatible backends
- `SEMGREP_APP_TOKEN` — required; Semgrep token for the MCP server
- `ENVIRONMENT` — set to `production` in cloud deployments
- `PYTHONUNBUFFERED` — set to `1` in containers for unbuffered logs

Security note: never commit `.env` to version control. Keep your keys private and rotate them if they are ever exposed.
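A fail-fast startup check for the required variables might look like this (an illustrative sketch — variable names come from the list above, and the helper is hypothetical, not part of the backend):

```python
import os

# Required variable names from the list above
REQUIRED_ENV_VARS = ("OPENROUTER_API_KEY", "SEMGREP_APP_TOKEN")


def check_required_env() -> None:
    """Raise at startup if any required secret is missing or empty."""
    missing = [name for name in REQUIRED_ENV_VARS if not os.environ.get(name)]
    if missing:
        raise RuntimeError(
            f"Missing required environment variables: {', '.join(missing)}"
        )
```

Failing at startup is usually preferable to discovering a missing key on the first `/api/analyze` request.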
The production instance of this project is deployed to Google Cloud Run and available at https://projects.kaushikpaul.co.in/cyber-security-agent.
This repo includes Terraform configurations for both Azure and GCP:
- Azure Container Apps — `terraform/azure`
- Google Cloud Run — `terraform/gcp` (used for projects.kaushikpaul.co.in/cyber-security-agent)
High-level workflow:
- Build and push the Docker image using Terraform (Azure or GCP module)
- Provision the container runtime (Container Apps / Cloud Run)
- Inject environment variables (`OPENROUTER_API_KEY`, `OPENAI_API_KEY`, `SEMGREP_APP_TOKEN`)
- Point your domain (optional) at the generated app URL
See INSTALLATION.md for full setup and deployment commands.
Running this project in your own cloud or on-prem environment means you are solely responsible for:
- Provisioning and paying for all underlying infrastructure and related services
- Implementing appropriate network security controls, including DDoS protection and rate limiting
The project owner and contributors are not responsible or liable for any infrastructure charges, overages, or other financial losses incurred as a result of abusive traffic, including but not limited to DDoS attacks, misconfiguration, or misuse of this project.
| Category | Technologies |
|---|---|
| Frontend | Next.js 15, React 19, TypeScript, Tailwind-style utility classes |
| Backend | FastAPI, Uvicorn, openai-agents, LiteLLM, MCP |
| Security | Semgrep MCP server (semgrep-mcp), CVSS scoring |
| Infrastructure | Docker, Terraform, Google Cloud Run, Azure Container Apps |
| Tooling | uv for Python env management, Node.js/npm for frontend |
This project is used in the AI in Production course. All required setup and deployment steps are documented in this README and in INSTALLATION.md.
This project is licensed under the MIT License — see the LICENSE file for details.