This guide provides a complete walkthrough for deploying the Mentingo application. The infrastructure will be hosted on Hetzner Cloud, while AWS will be used for DNS, container image storage (ECR), and user management (IAM).
- In the Hetzner Cloud Console, create a new Project.
- Navigate to Security -> Firewalls within your new project.
- Create a new firewall. Add an inbound rule to allow traffic on port 22 (SSH). For enhanced security, restrict the Source IPs to your public IP address.
- For outbound rules, allow all IPv4 traffic to any destination.
- In the Hetzner Console, navigate to Storage -> Object Storage and create a new bucket. This will be used for file uploads.
- After the bucket is created, generate credentials (access key and secret key) for it.
- 🔐 Important: Securely save these credentials. You will need them for the application's environment variables later.
- Navigate to Servers and click Add server.
- Select your desired server location.
- Choose the Ubuntu image (e.g., 22.04). A server with 4 vCPU and 8 GB of RAM is recommended.
- In the Networking section, ensure Public IPv4 is enabled.
- In the SSH keys section, generate a new key on your local machine if you don't have one already. Replace `<client-name>` with an appropriate identifier:

```sh
ssh-keygen -t ed25519 -o -a 100 -C "<client-name>"
```

- Copy the content of your public key (e.g., `~/.ssh/id_ed25519.pub`) and add it to the configuration.
- 🔐 Important: Store your private key safely!
- Before creating the server, under Firewall, select the firewall you created in step 1.1.
- Enable Backups.
- Once the server is running, navigate to Networking -> Floating IPs and assign a new Floating IP to your server. This provides a static IP that can be re-assigned if you ever need to replace the server.
- To make the Floating IP persistent on the server, first connect to the server using its public IP (not the Floating IP yet) and your SSH key.
- Once logged in, follow the official Hetzner documentation to create a persistent configuration file for the Floating IP.
- Apply the new network configuration with the command:

```sh
sudo netplan apply
```

- You can now disconnect and reconnect to the server using the new Floating IP.
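The persistent configuration file described in the Hetzner documentation is a small netplan file. A minimal sketch, assuming the network interface is named `eth0` (check with `ip addr`) and using `203.0.113.10` as a stand-in for your actual Floating IP:

```yaml
# /etc/netplan/60-floating-ip.yaml -- example only; adjust interface name and IP
network:
  version: 2
  ethernets:
    eth0:
      addresses:
        - 203.0.113.10/32
```

The `/32` mask is intentional: the Floating IP is routed to the server by Hetzner, so it only needs to be bound as a single host address.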
- In the AWS Console, go to Route 53 and create a new Hosted zone for your client's domain.
- Route 53 will provide a set of Name Server (NS) records. These must be added to the client's domain registrar's DNS settings.
  - Alternative: If you cannot delegate NS records to AWS, create an A record in your own DNS provider that points to the Hetzner server's Floating IP.
- Once the NS records have propagated, create an A record in your new hosted zone. Point it to the Floating IP of your Hetzner server.
- In the AWS Console, go to Elastic Container Registry (ECR).
- Create two new private repositories, `tenant/<client>/api` and `tenant/<client>/ui`. Replace `<client>` with the actual client's name.
- Navigate to IAM in the AWS Console.
- Create two IAM policies.

  🚨 Note: In the JSON below, replace `<client>` with the client's name and `123456789012` with your AWS Account ID.

Policy 1: `tenant-client-ci` (allows pushing images to ECR)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEcrAuth",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Sid": "AllowReadAccessToRepos",
      "Effect": "Allow",
      "Action": ["ecr:ListImages", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage"],
      "Resource": [
        "arn:aws:ecr:eu-central-1:123456789012:repository/tenant/<client>/api",
        "arn:aws:ecr:eu-central-1:123456789012:repository/tenant/<client>/ui"
      ]
    },
    {
      "Sid": "AllowWriteAccessToApiRepo",
      "Effect": "Allow",
      "Action": [
        "ecr:UploadLayerPart",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:CompleteLayerUpload",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "arn:aws:ecr:eu-central-1:123456789012:repository/tenant/<client>/api"
    },
    {
      "Sid": "AllowWriteAccessToUiRepo",
      "Effect": "Allow",
      "Action": [
        "ecr:UploadLayerPart",
        "ecr:PutImage",
        "ecr:InitiateLayerUpload",
        "ecr:CompleteLayerUpload",
        "ecr:BatchCheckLayerAvailability"
      ],
      "Resource": "arn:aws:ecr:eu-central-1:123456789012:repository/tenant/<client>/ui"
    }
  ]
}
```

Policy 2: `tenant-client-docker` (allows pulling images from ECR)

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "AllowEcrAuth",
      "Effect": "Allow",
      "Action": "ecr:GetAuthorizationToken",
      "Resource": "*"
    },
    {
      "Sid": "AllowReadAccessToRepos",
      "Effect": "Allow",
      "Action": ["ecr:ListImages", "ecr:GetDownloadUrlForLayer", "ecr:BatchGetImage"],
      "Resource": [
        "arn:aws:ecr:eu-central-1:123456789012:repository/tenant/<client>/api",
        "arn:aws:ecr:eu-central-1:123456789012:repository/tenant/<client>/ui"
      ]
    }
  ]
}
```
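Since both policies share the same placeholders, it can be convenient to keep them as template files and substitute the values with `sed`. A small runnable sketch; the client name `acme` and the file path are made-up example values:

```shell
# Substitute the <client> placeholder in a policy template (example values only)
CLIENT=acme
cat > /tmp/policy-snippet.json <<'EOF'
{ "Resource": "arn:aws:ecr:eu-central-1:123456789012:repository/tenant/<client>/api" }
EOF
sed -i "s/<client>/$CLIENT/g" /tmp/policy-snippet.json
cat /tmp/policy-snippet.json   # the ARN now reads .../tenant/acme/api
```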
- Now, create two IAM users.

User 1: `tenant-<client>-ci`

- Create a user with this name.
- Attach the `tenant-client-ci` policy you just created.
- Go to the user's Security credentials tab and create an Access key.
- Select Command Line Interface (CLI) as the use case.
- 🔐 Important: Save the generated Access Key ID and Secret Access Key. You will use these as GitHub Actions secrets.

User 2: `tenant-<client>-docker`

- Repeat the process to create this second user.
- Attach the `tenant-client-docker` policy.
- Generate and save the access keys for this user as well. You will use these on the Hetzner server.
- Initialize a new GitHub repository as a single commit containing a copy of https://github.com/Selleo/mentingo.
- A single commit is preferred: it clearly separates the IP boundary between the open-source code and custom changes, and it simplifies applying patches in the future.
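The import itself can be scripted. A sketch of the idea, using a fabricated stand-in tree so the commands are runnable anywhere; for the real import you would copy the Selleo/mentingo sources (with their `.git` directory removed) into the new repository instead:

```shell
# Sketch: collapse a working copy into a repository with exactly one commit.
set -e
SRC=$(mktemp -d)
echo "stand-in for the mentingo sources" > "$SRC/README.md"
cd "$SRC"
git init -q -b main                       # fresh history, no upstream commits
git add -A
git -c user.email=dev@example.com -c user.name=dev commit -q -m "Import mentingo"
git rev-list --count HEAD                 # the whole codebase sits in one commit
```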
- In your GitHub repository, go to Settings -> Secrets and variables -> Actions.
- Add the following secrets using the credentials from your `tenant-<client>-ci` IAM user and the ECR repository URIs:
  - `AWS_ACCESS_KEY_ID`: Access key ID for the CI user.
  - `AWS_SECRET_ACCESS_KEY`: Secret access key for the CI user.
  - `AWS_REGION`: e.g., `eu-central-1`.
  - `AWS_ECR_REGISTRY`: The full URI for the `tenant/<client>/api` ECR repository.
  - `AWS_ECR_REGISTRY_WEB`: The full URI for the `tenant/<client>/ui` ECR repository.
  - (Add other secrets like `VITE_STRIPE_PUBLISHABLE_KEY`, `POSTHOG_KEY`, `POSTHOG_HOST`, and Sentry keys as needed.)
- If you want your E2E tests to work, create an environment called `e2e` and then add these secrets:
  - `MASTER_KEY`: 32-byte base64 value (can be generated using `openssl rand -base64 32`)
  - `STRIPE_PUBLISHABLE_KEY`
  - `STRIPE_SECRET_KEY`
  - `STRIPE_WEBHOOK_SECRET`

  Note: You can use Stripe API keys from your Stripe test mode (sandbox) - these are safe to use for development and CI environments.
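For example, a valid `MASTER_KEY` can be generated like this (base64-encoding 32 bytes always yields 44 characters):

```shell
# 32 random bytes, base64-encoded -- suitable as MASTER_KEY
MASTER_KEY=$(openssl rand -base64 32)
echo "$MASTER_KEY"
echo "${#MASTER_KEY}"   # 44
```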
- In your repository, create a `.github/workflows/` directory.
- Create the two deployment files below inside that directory.

File 1: `deploy-api.yml`

```yaml
name: Deploy API

env:
  HUSKY: 0

on:
  push:
    branches:
      - "main"

jobs:
  build-api:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          fetch-tags: true

      - name: Save version in apps/api
        run: |
          # Try to get latest local tag
          TAG_VERSION=$(git describe --tags --abbrev=0 --match "v*" 2>/dev/null || true)

          # If no tag found, fetch from external repo
          if [ -z "$TAG_VERSION" ]; then
            echo "⚠️ No local tags found. Fetching from Selleo/mentingo..."
            TAG_VERSION=$(git ls-remote --tags https://github.com/Selleo/mentingo \
              | awk -F/ '{print $3}' \
              | sed 's/\^{}//' \
              | grep '^v' \
              | sort -V \
              | tail -n1)
          fi

          echo "{ \"version\": \"$TAG_VERSION\" }" > apps/api/version.json
          echo "✅ Wrote version $TAG_VERSION to apps/api/version.json"

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ secrets.AWS_ECR_REGISTRY }}
          IMAGE_TAG: ${{ github.sha }}
        run: |
          docker build -f ./api.Dockerfile --build-arg VERSION=$IMAGE_TAG -t $ECR_REGISTRY:$IMAGE_TAG .
          docker tag $ECR_REGISTRY:$IMAGE_TAG $ECR_REGISTRY:latest
          docker push $ECR_REGISTRY:$IMAGE_TAG
          docker push $ECR_REGISTRY:latest
```

File 2: `deploy-ui.yml`

```yaml
name: Deploy WEBAPP

env:
  HUSKY: 0

on:
  push:
    branches:
      - "main"

jobs:
  build-web:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0
          fetch-tags: true

      - name: Save version in apps/web
        run: |
          # Try to get latest local tag
          TAG_VERSION=$(git describe --tags --abbrev=0 --match "v*" 2>/dev/null || true)

          # If no tag found, fetch from external repo
          if [ -z "$TAG_VERSION" ]; then
            echo "⚠️ No local tags found. Fetching from Selleo/mentingo..."
            TAG_VERSION=$(git ls-remote --tags https://github.com/Selleo/mentingo \
              | awk -F/ '{print $3}' \
              | sed 's/\^{}//' \
              | grep '^v' \
              | sort -V \
              | tail -n1)
          fi

          echo "{ \"version\": \"$TAG_VERSION\" }" > apps/web/version.json
          echo "✅ Wrote version $TAG_VERSION to apps/web/version.json"

      - name: Configure AWS credentials
        uses: aws-actions/configure-aws-credentials@v1-node16
        with:
          aws-access-key-id: ${{ secrets.AWS_ACCESS_KEY_ID }}
          aws-secret-access-key: ${{ secrets.AWS_SECRET_ACCESS_KEY }}
          aws-region: ${{ secrets.AWS_REGION }}

      - name: Login to Amazon ECR
        id: login-ecr
        uses: aws-actions/amazon-ecr-login@v1

      - name: Build, tag, and push image to Amazon ECR
        env:
          ECR_REGISTRY: ${{ secrets.AWS_ECR_REGISTRY_WEB }}
          IMAGE_TAG: ${{ github.sha }}
          VITE_STRIPE_PUBLISHABLE_KEY: ${{ secrets.VITE_STRIPE_PUBLISHABLE_KEY }}
          VITE_SENTRY_DSN: ${{ secrets.SENTRY_DSN }}
          VITE_POSTHOG_KEY: ${{ secrets.POSTHOG_KEY }}
          VITE_POSTHOG_HOST: ${{ secrets.POSTHOG_HOST }}
        run: |
          docker build -f ./web.Dockerfile \
            --build-arg VERSION=$IMAGE_TAG \
            --build-arg VITE_STRIPE_PUBLISHABLE_KEY=$VITE_STRIPE_PUBLISHABLE_KEY \
            --build-arg VITE_SENTRY_DSN=$VITE_SENTRY_DSN \
            --build-arg VITE_POSTHOG_KEY=$VITE_POSTHOG_KEY \
            --build-arg VITE_POSTHOG_HOST=$VITE_POSTHOG_HOST \
            -t $ECR_REGISTRY:$IMAGE_TAG .
          docker tag $ECR_REGISTRY:$IMAGE_TAG $ECR_REGISTRY:latest
          docker push $ECR_REGISTRY:$IMAGE_TAG
          docker push $ECR_REGISTRY:latest
```
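Both workflows share the same version-detection step. To make its behaviour concrete, here is that tag-selection pipeline run against canned `git ls-remote --tags`-style output (the hashes and tags are made up), showing that `sort -V` compares versions numerically rather than lexicographically:

```shell
# Pick the highest v* tag from ls-remote-style output (sample data, no network needed)
TAG_VERSION=$(printf '%s\n' \
  'abc123 refs/tags/v1.2.0' \
  'def456 refs/tags/v1.10.0' \
  'def456 refs/tags/v1.10.0^{}' \
  '0a1b2c refs/tags/v1.9.3' \
  | awk -F/ '{print $3}' \
  | sed 's/\^{}//' \
  | grep '^v' \
  | sort -V \
  | tail -n1)
echo "$TAG_VERSION"   # v1.10.0 (not v1.9.3: version sort treats 10 > 9)
```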
- Push your code changes. The GitHub Actions should run automatically and push your first images to ECR.
- SSH into your Hetzner server.
- Save the following installation script as `install_packages.sh`:

```sh
#!/bin/sh
set -e

# Update and install prerequisite packages
apt update && \
  apt -y install wget gpg coreutils jq ca-certificates curl gnupg debian-keyring debian-archive-keyring apt-transport-https

# Add Docker repository
install -m 0755 -d /etc/apt/keyrings
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | gpg --dearmor -o /etc/apt/keyrings/docker.gpg
chmod a+r /etc/apt/keyrings/docker.gpg
echo \
  "deb [arch="$(dpkg --print-architecture)" signed-by=/etc/apt/keyrings/docker.gpg] https://download.docker.com/linux/ubuntu \
  "$(. /etc/os-release && echo "$VERSION_CODENAME")" stable" | \
  tee /etc/apt/sources.list.d/docker.list > /dev/null

# Add Caddy repository
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/gpg.key' | gpg --dearmor -o /usr/share/keyrings/caddy-stable-archive-keyring.gpg
curl -1sLf 'https://dl.cloudsmith.io/public/caddy/stable/debian.deb.txt' | tee /etc/apt/sources.list.d/caddy-stable.list

# Install main packages
apt update && apt -y install docker-ce docker-ce-cli containerd.io docker-buildx-plugin docker-compose-plugin caddy

# Add helpful bash aliases
cat <<EOF >> /root/.bash_aliases
alias sc=systemctl
alias jc=journalctl
EOF

echo "Installation complete."
```

- Run the script:

```sh
chmod +x install_packages.sh
./install_packages.sh
```

- You may need to start a new shell session (`exec bash`) for the aliases to take effect.
- Install the ECR credential helper (see the amazon-ecr-credential-helper project documentation for full instructions):

```sh
sudo apt install amazon-ecr-credential-helper
```

- Create a `mentingo` user, add it to the `docker` group, and switch to it:

```sh
sudo adduser mentingo
sudo usermod -aG docker mentingo
su mentingo
```
- Create the AWS config directory:

```sh
mkdir -p ~/.aws
```

- Create a credentials file at `~/.aws/credentials`:

```ini
[default]
aws_access_key_id = <ACCESS_KEY_FOR_TENANT_CLIENT_DOCKER_USER>
aws_secret_access_key = <SECRET_KEY_FOR_TENANT_CLIENT_DOCKER_USER>
```

- Configure Docker to use the ECR helper by editing `~/.docker/config.json`:

```json
{
  "credsStore": "ecr-login"
}
```
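If `~/.docker/config.json` already exists, overwriting it by hand can drop other settings. A sketch that merges the key in instead, assuming `jq` is available (the `install_packages.sh` script above installs it); run it as the `mentingo` user:

```shell
# Merge {"credsStore": "ecr-login"} into the Docker client config, creating it if needed
mkdir -p ~/.docker
[ -s ~/.docker/config.json ] || echo '{}' > ~/.docker/config.json
tmp=$(mktemp)
jq '. + {credsStore: "ecr-login"}' ~/.docker/config.json > "$tmp" && mv "$tmp" ~/.docker/config.json
cat ~/.docker/config.json
```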
- You can now pull images directly from ECR:

```sh
docker pull <ecr_uri_for_api>:latest
docker pull <ecr_uri_for_ui>:latest
```
- List your images to see them locally:

```sh
docker images
```

- Tag the images with simpler names for easier use in `docker-compose.yml`:

```sh
docker tag <ecr_uri_for_api>:latest app:latest
docker tag <ecr_uri_for_ui>:latest ui:latest
```

- Create a new directory for your application (e.g., `/opt/mentingo`) and `cd` into it.
- Create a `docker-compose.yml` file with the following content:

```yaml
services:
  app:
    image: app:latest
    container_name: app
    restart: unless-stopped
    env_file: .env.prd.api
    command: server
    ports:
      - "3000:3000"
    volumes:
      - /home/app/uploads:/app/apps/api/uploads
    depends_on:
      - db

  db:
    image: pgvector/pgvector:pg16
    container_name: db
    restart: unless-stopped
    environment:
      POSTGRES_PASSWORD: guidebook # Change this password
      POSTGRES_DB: guidebook
    volumes:
      - lms-db-data:/var/lib/postgresql/data
    ports:
      - "5434:5432"

  frontend:
    image: ui:latest
    container_name: frontend
    restart: unless-stopped
    env_file: .env.prd.ui
    ports:
      - "3080:8080"

  redis:
    image: "redis:latest"
    container_name: redis
    restart: unless-stopped
    ports:
      - "6379:6379"
    volumes:
      - "lms-redis-data:/data"

volumes:
  lms-db-data:
    driver: local
  lms-redis-data:
    driver: local
```
- Create two environment files.

File 1: `.env.prd.api`

```sh
# GENERAL
CORS_ORIGIN="https://<client-domain>"
EMAIL_ADAPTER="smtp"
DEBUG=false
PASSWORD=<default_password_for_seeded_accounts>

# DATABASE
DATABASE_URL="postgres://postgres:guidebook@db:5432/guidebook" # Use the password from docker-compose

# REDIS
REDIS_URL="redis://redis:6379"

# JWT
JWT_SECRET="<generate_a_strong_random_secret>"
JWT_REFRESH_SECRET="<generate_another_strong_random_secret>"
JWT_EXPIRATION_TIME="15m"

# MAILS (Example for AWS SES)
SMTP_HOST="email-smtp.eu-central-1.amazonaws.com"
SMTP_PORT="2465"
SMTP_USER="<smtp_user_access_key>"
SMTP_PASSWORD="<smtp_user_secret_key>"
SES_EMAIL="noreply@<client-domain>"

# S3 (Hetzner Object Storage)
S3_ENDPOINT="https://<region>.your-objectstorage.com" # e.g., fsn1
S3_REGION="<region>" # e.g., fsn1
S3_ACCESS_KEY_ID="<hetzner_storage_access_key>"
S3_SECRET_ACCESS_KEY="<hetzner_storage_secret_key>"
S3_BUCKET_NAME="<bucket-name>"

# STRIPE & SENTRY
STRIPE_SECRET_KEY=
STRIPE_PUBLISHABLE_KEY=
STRIPE_WEBHOOK_SECRET=
SENTRY_ENVIRONMENT=production
SENTRY_DSN=

# 32 byte base64 key (can be generated using openssl rand -base64 32)
MASTER_KEY=
```

File 2: `.env.prd.ui`

```sh
VITE_API_URL='https://<client-domain>/api'
VITE_APP_URL='https://<client-domain>'

# STRIPE & SENTRY
VITE_STRIPE_PUBLISHABLE_KEY=
SENTRY_AUTH_TOKEN=
SENTRY_ORG=
SENTRY_PROJECT=
VITE_SENTRY_DSN=
```
- Start the containers in detached mode:

```sh
docker compose up -d
```

- Check that all containers are running:

```sh
docker ps
```

- Connect to the running API container:

```sh
docker exec -it app sh
```

- Inside the container, run the database migrations and seed the data:

```sh
# Inside the 'app' container
npm run db:migrate
npm run db:seed-prod
```

- Type `exit` to leave the container shell.
- Edit the Caddy configuration file:

```sh
vim /etc/caddy/Caddyfile
```

- Replace the entire file content with the following, updating `<client-domain.com>` with your actual domain. Caddy will automatically handle HTTPS.

```
<client-domain.com> {
    # API traffic
    @api path /api/*
    handle @api {
        reverse_proxy http://localhost:3000
    }

    # All other traffic goes to the frontend
    handle {
        reverse_proxy http://localhost:3080
    }
}
```

- If you run into permission issues when saving the file, edit it as the root user (e.g., `sudo vim /etc/caddy/Caddyfile`).
- You can check the configuration syntax with `caddy validate --config /etc/caddy/Caddyfile` before reloading.
- Go back to the Hetzner Cloud firewall settings.
- Add two new inbound rules to allow public web traffic: TCP port 80 (HTTP) and TCP port 443 (HTTPS).
- Back in the server's console, reload Caddy to apply the new configuration:

```sh
systemctl reload caddy
```

Your Mentingo instance should now be live and accessible at your domain.