A lightweight platform that analyzes code changes and posts risk assessments to pull requests. It consists of three main components:

- Backend (`backend/DeploymentRisk.Api`): .NET 10 Web API that handles GitHub App authentication, webhooks, model management, and risk scoring
- Frontend (`frontend/deployment-risk-ui`): Angular UI for browsing repositories, models, and the dashboard
- ML service (`ml-service`): FastAPI Python service used to host, train, and serve ML models for risk scoring
- Create a neat personal PR assistant
- Learn about GitHub webhooks and run risk assessments on PRs
- Allow local and containerized development of the backend, frontend, and ML service through docker-compose
- Train an ML model for risk score analysis/prediction
- Learn about Angular + ASP.NET Core in the process
- More specific READMEs can be found in the frontend, backend, and ml-service folders of the app
- Dependency conflicts may occur for newer packages
- LLM configuration through an API key has not been thoroughly tested
- `backend/DeploymentRisk.Api` — .NET 10 API project (Program.cs, Controllers, Services, Data)
- `frontend/deployment-risk-ui` — Angular front-end app
- `ml-service` — FastAPI app with training scripts and model management
- `docker-compose.yml` — compose file to run the components together
- `secrets/` — local secrets directory (not checked in by default)
- .NET 10 SDK (dotnet) for backend development
- Python 3.10+ with pip for the ML service
- Node.js 18+ and npm for the frontend
- Docker & Docker Compose (optional, recommended for running full stack)
- Angular CLI installed globally (`npm install -g @angular/cli`)
- ngrok for local webhook testing (tunnels your localhost to a public URL, e.g. `ngrok http 5000`)
- Copy or create a `.env` file at the repository root. See `.env.example` for the keys used by the app. At a minimum you should set up a GitHub App for the repositories you want to give the app access to, and a GitHub OAuth App for a secure login. More information on hooking them up is further down.
- Populate `./secrets` in the root directory with your GitHub App private `.pem` key file.
- Backend: run from its folder:
```bash
cd backend/DeploymentRisk.Api
dotnet build
dotnet run
```

The backend uses DotNetEnv to load `.env` values (if present). Configuration keys follow the double-underscore mapping (for example `GitHub__PrivateKeyPath`).
- ML service: create a virtual environment, install requirements, and run:

```powershell
cd ml-service
python -m venv .venv
.\.venv\Scripts\Activate.ps1
pip install -r requirements.txt
uvicorn main:app --reload --port 8000
```

- Frontend: start the Angular app:

```bash
cd frontend/deployment-risk-ui
npm install
npm start
```

Open the frontend (usually at http://localhost:4200) and the backend Swagger UI for API endpoints at http://localhost:5000/swagger.
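The double-underscore mapping mentioned above is how flat environment variables map onto nested configuration sections in ASP.NET Core (`GitHub__PrivateKeyPath` becomes the `GitHub:PrivateKeyPath` key). A small Python sketch of the same idea, illustrative only and not project code:

```python
def env_to_nested(env: dict[str, str]) -> dict:
    """Expand KEY__SUBKEY=value pairs into a nested dict, mirroring
    ASP.NET Core's double-underscore configuration mapping."""
    result: dict = {}
    for key, value in env.items():
        node = result
        parts = key.split("__")
        for part in parts[:-1]:
            # descend into (or create) the intermediate section
            node = node.setdefault(part, {})
        node[parts[-1]] = value
    return result

config = env_to_nested({
    "GitHub__PrivateKeyPath": "./secrets/github-app.pem",
    "GitHub__AppId": "123456",
    "ML__ServiceUrl": "http://localhost:8000",
})
print(config["GitHub"]["PrivateKeyPath"])  # ./secrets/github-app.pem
```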
- Populate `./secrets` with your `github-app.pem` and/or create a `.env` file in the repo root with the required variables.
- Start the services:

```bash
docker-compose up --build
```

- Services will be available at the ports defined in `docker-compose.yml` (by default backend: 5000, ml-service: 8000, frontend: 4200 when built into a container). If you have no spare domain name, ngrok is a good option for exposing the webhook endpoint.
- `GitHub__AppId` — your GitHub App ID
- `GitHub__InstallationId` — installation ID for the app
- `GitHub__PrivateKeyPath` — path to the PEM private key. When running locally, `./secrets/github-app.pem` is common.
- `ML__ServiceUrl` — URL for the ML service (default `http://localhost:8000`)
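Putting those keys together, a minimal `.env` at the repository root might look like this (all values below are placeholders, not real credentials):

```shell
GitHub__AppId=123456
GitHub__InstallationId=7890123
GitHub__PrivateKeyPath=./secrets/github-app.pem
ML__ServiceUrl=http://localhost:8000
```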
When you run the stack with docker compose, each service runs inside its own container and they communicate over an internal Docker network. That means:
- From your host machine, you may connect to the database at `localhost:1433`.
- From a container (for example the `backend` container) the database is not reachable at `localhost` — `localhost` inside the container refers to the container itself.

Therefore, when using Docker Compose you should point your connection string at the Compose service name `sqlserver` (the service defined in `docker-compose.yml`). Example:

```
Database__ConnectionString=Server=sqlserver,1433;Database=DeploymentRiskDb;User Id=sa;Password=YOUR_SA_PASSWORD;TrustServerCertificate=True;
```
If you previously had `Server=localhost,1433` in your `.env`, replace it before starting the Compose stack. The PowerShell commands below back up your `.env` and perform the replacement:

```powershell
Copy-Item .env .env.bak
(gc .env) -replace 'Server=localhost,1433','Server=sqlserver,1433' | Set-Content .env
```

If instead you want containers to reach a database running on your host (not managed by Compose), use `host.docker.internal` on Windows:

```
Database__ConnectionString=Server=host.docker.internal,1433;Database=DeploymentRiskDb;User Id=sa;Password=YOUR_SA_PASSWORD;TrustServerCertificate=True;
```
```bash
git rm --cached backend/DeploymentRisk.Api/appsettings.json
git commit -m "Remove secrets from tracked appsettings"
```

If secrets were pushed to a remote, rotate them immediately and consider a history rewrite (BFG or git-filter-repo).
- Create a GitHub OAuth App in your organization or user account.
- URLs:
  - Homepage URL: `http://localhost:4200` (or whatever port your frontend is listening on)
  - Authorization callback URL: `http://localhost:4200/auth/callback` (default)
- Make note of the Client ID and Client Secret to paste into your own `appsettings.json` and `.env`.
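For context on how those values are used: the OAuth login flow starts by redirecting the browser to GitHub's authorize endpoint with your Client ID and callback URL. A minimal illustrative sketch of building that URL (the client ID below is a placeholder, not from this project):

```python
from urllib.parse import urlencode

def github_authorize_url(client_id: str, redirect_uri: str, scope: str = "read:user") -> str:
    """Build the GitHub OAuth authorization URL the frontend redirects to."""
    params = {
        "client_id": client_id,
        "redirect_uri": redirect_uri,
        "scope": scope,
    }
    return "https://github.com/login/oauth/authorize?" + urlencode(params)

url = github_authorize_url("Iv1.example", "http://localhost:4200/auth/callback")
print(url)
```

After the user approves, GitHub redirects back to the callback URL with a `code` that the backend exchanges for an access token using the Client Secret.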
- Create a GitHub App in your organization or user account. Set callback/webhook URLs to your backend's public address (or use `ngrok` for local testing if you cannot use localhost).
- URLs:
  - Homepage URL: `http://localhost:4200`
  - Callback URL: leave empty
  - Setup URL (optional): can point to `http://localhost:4200/settings`
  - Webhook URL: `your-ngrok-domain/api/github/webhook` (this must be publicly reachable, so use ngrok rather than localhost)
- Generate and download the private key (`*.pem`) and place it in `./secrets/github-app.pem` (or point `GitHub__PrivateKeyPath` to its path).
- Note the App ID and Installation ID and put them in your `.env` or environment.
- Paste the webhook secret from `appsettings.json` into the GitHub App.
- Give the App the following permissions for full feature support (adjust to minimal permissions for production):
  - Pull requests: Read & write (to post comments)
  - Contents: Read
  - Code scanning alerts: Read (if using CodeQL/code-scanning features)
- Install the App on target repositories or the organization.

CAUTION: If you get `Octokit.ForbiddenException: Resource not accessible by integration` when fetching code-scanning alerts, it's usually a permissions issue — re-check the App permissions and reinstall the app.
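On the webhook secret: GitHub signs each webhook delivery with it and sends the signature in the `X-Hub-Signature-256` header. The backend's verification (done in C# in this project) boils down to the following check, sketched here in Python with a made-up secret and payload:

```python
import hashlib
import hmac

def verify_signature(secret: str, payload: bytes, signature_header: str) -> bool:
    """Recompute the HMAC-SHA256 of the payload and compare it, in constant
    time, with the value GitHub sent in X-Hub-Signature-256."""
    expected = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_header)

# Demo with placeholder values (not real credentials):
secret = "your-webhook-secret-here"
payload = b'{"action":"opened"}'
header = "sha256=" + hmac.new(secret.encode(), payload, hashlib.sha256).hexdigest()
print(verify_signature(secret, payload, header))              # True
print(verify_signature(secret, payload, "sha256=deadbeef"))   # False
```

If the secret in your `.env`/`appsettings.json` and the one configured in the GitHub App differ, every delivery fails this check and the webhook is rejected.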
The backend uses EF Core with a SQL Server provider by default. For local development you can:
- Provide `Database__ConnectionString` via `.env` to point at a SQL Server instance (Docker, Azure, or local)
- If migrations fail at startup, the app falls back to `EnsureCreated()` to create the database schema automatically for development
- Private key not found / parse errors:
  - Ensure `GitHub__PrivateKeyPath` points to a valid PEM file and the process can read it
  - Verify the file is in PEM format (starts with `-----BEGIN PRIVATE KEY-----`)
  - Parse errors can be caused by missing CRLF (carriage return line feed) separators between the header, body, and footer
- ML service errors:
  - Start `uvicorn` without `--reload` for stable testing.
  - Check that `ml-service/requirements.txt` matches the installed packages.
- Code scanning 403 (CodeQL):
  - Ensure the GitHub App has Code scanning read permissions, then reinstall the app.
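The PEM checks in the troubleshooting list can be scripted. A small illustrative sketch (not project code) that sanity-checks a key file before pointing `GitHub__PrivateKeyPath` at it:

```python
import tempfile
from pathlib import Path

def check_pem(path: str) -> list[str]:
    """Return a list of problems with a PEM private key file (empty if it looks OK)."""
    p = Path(path)
    if not p.is_file():
        return [f"file not found: {path}"]
    text = p.read_text()
    problems = []
    if not text.lstrip().startswith("-----BEGIN"):
        problems.append("missing '-----BEGIN ...-----' header")
    if "-----END" not in text:
        problems.append("missing '-----END ...-----' footer")
    if "\n" not in text.strip():
        # header, body, and footer must sit on separate lines
        problems.append("header, body, and footer are not separated by line breaks")
    return problems

# Demo with a structurally valid (but fake) key written to a temp file:
demo = tempfile.NamedTemporaryFile("w", suffix=".pem", delete=False)
demo.write("-----BEGIN PRIVATE KEY-----\nMIIB...not-a-real-key...\n-----END PRIVATE KEY-----\n")
demo.close()
print(check_pem(demo.name))           # []
print(check_pem("no-such-file.pem"))  # ['file not found: no-such-file.pem']
```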
- Overview: use Kubernetes for production deployments instead of Docker Compose. This project includes a sample manifest for a cluster under `linode/full-deploy.yaml`, which is intentionally sanitized of secrets in the repo and built for the Linode platform. Put production secrets into Kubernetes Secrets or a secret manager and never commit them.
- Secrets (recommended): create secrets with `kubectl` or your cloud provider's secret store. Example (kubectl, creates `risk-secrets` in namespace `risk`):
```bash
# create namespace
kubectl create namespace risk

# create secret from literals (recommended for CI):
kubectl create secret generic risk-secrets \
  --from-literal=SQL_SA_PASSWORD='YourStrong!Passw0rd' \
  --from-literal=GITHUB_APP_ID='2540120' \
  --from-literal=GITHUB_WEBHOOK_SECRET='your-webhook-secret-here' \
  --from-literal=GITHUB_TOKEN='ghp_your_token_here' \
  --namespace risk
```

- Apply sanitized manifests: after creating secrets and confirming your `kubeconfig` points at the target cluster, apply the manifest:

```bash
# from repo root
kubectl apply -f linode/full-deploy.yaml -n risk
```

- Image pull secrets for private registries: if images are stored in GHCR/AWS ECR/ACR, create an image pull secret and reference it in the Deployment `imagePullSecrets` (the sample already references `ghcr-secret`). Example for GHCR with a personal access token:
```bash
# create docker-registry secret for GHCR
kubectl create secret docker-registry ghcr-secret \
  --docker-server=ghcr.io \
  --docker-username=YOUR_GH_USERNAME \
  --docker-password='YOUR_GH_PERSONAL_ACCESS_TOKEN' \
  --namespace risk
```

- Linode (LKE) quick steps:
  - Create a Linode Kubernetes Engine (LKE) cluster from the Linode console.
  - Download the kubeconfig from the Linode UI and merge it into `~/.kube/config` or set `KUBECONFIG` accordingly.
  - Create the `risk` namespace and secrets as shown above, then apply `linode/full-deploy.yaml`.
  - Use `kubectl get svc -n risk` to find public LoadBalancer IPs (the frontend service in the sample is `LoadBalancer`).
If you've applied `linode/full-deploy.yaml` to an LKE cluster, use these Linode-specific steps to finish setup and expose services securely.
- Get the frontend LoadBalancer IP (it may take a few minutes to provision, or it may already be set up on the NodeBalancers tab):

```bash
kubectl get svc frontend -n risk
# look for the EXTERNAL-IP column
```

- Create a DNS A record in Linode DNS pointing your domain (e.g. `risk.example.com`) to the external IP. You can do this in Linode Cloud Manager → Networking → Domains or via `linode-cli`.

- Configure HTTPS with cert-manager (recommended). Example install and ClusterIssuer (Let's Encrypt staging for testing):
```bash
# install cert-manager
kubectl apply --validate=false -f https://github.com/cert-manager/cert-manager/releases/download/v1.12.0/cert-manager.yaml

# create a ClusterIssuer for Let's Encrypt staging (replace email)
cat <<EOF | kubectl apply -f -
apiVersion: cert-manager.io/v1
kind: ClusterIssuer
metadata:
  name: letsencrypt-staging
spec:
  acme:
    server: https://acme-staging-v02.api.letsencrypt.org/directory
    email: you@example.com
    privateKeySecretRef:
      name: letsencrypt-staging
    solvers:
    - http01:
        ingress:
          class: nginx
EOF
```

- Example Ingress (using the nginx ingress controller) to front the frontend and backend and request a TLS certificate (replace host and issuer):
```yaml
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: risk-ingress
  namespace: risk
  annotations:
    cert-manager.io/cluster-issuer: letsencrypt-staging
spec:
  rules:
  - host: risk.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: frontend
            port:
              number: 80
  tls:
  - hosts:
    - risk.example.com
    secretName: risk-tls
```
- GitHub webhook: set the webhook Payload URL to `https://risk.example.com/api/github/webhook` and ensure `GitHub__WebhookSecret` in your `risk-secrets` matches the webhook secret you configure in GitHub. Errors logging in with GitHub OAuth are often caused by a mismatch here.
- Useful commands:

```bash
# view logs
kubectl logs -f deploy/backend -n risk

# check deployments
kubectl get deploy -n risk

# roll out a new image
kubectl set image deployment/backend backend=ghcr.io/your/repo-backend:tag -n risk
```

- Notes:
- Setting this app up on Linode is much, much easier than setting it up on AWS EKS.
- LKE LoadBalancers are billed, and IP allocation can take a minute. I used a NodeBalancer from the Linode console instead.
- If you prefer a stable hostname without managing certs yourself, use a cloud Load Balancer + DNS + managed certs or a platform ingress add-on that provisions TLS automatically.
- Added a Jenkinsfile for CI integration
- The docker-compose stage needs more work, so it is marked optional (by default it will not run when building with parameters)