A web application for managing LXC container creation, configuration, and lifecycle on Proxmox VE infrastructure. Provides a user-friendly interface and REST API for container management with automated database tracking and nginx reverse proxy configuration generation.
```mermaid
erDiagram
    Node ||--o{ Container : "hosts"
    Container ||--o{ Service : "exposes"

    Node {
        int id PK
        string name UK "Proxmox node name"
        string apiUrl "Proxmox API URL"
        boolean tlsVerify "Verify TLS certificates"
        datetime createdAt
        datetime updatedAt
    }

    Container {
        int id PK
        string hostname UK "FQDN hostname"
        string username "Owner username"
        string status "pending,creating,running,failed"
        string template "Template name"
        int creationJobId FK "References Job"
        int nodeId FK "References Node"
        int containerId UK "Proxmox VMID"
        string macAddress UK "MAC address (nullable)"
        string ipv4Address UK "IPv4 address (nullable)"
        string aiContainer "Node type flag"
        datetime createdAt
        datetime updatedAt
    }

    Service {
        int id PK
        int containerId FK "References Container"
        enum type "tcp, udp, or http"
        int internalPort "Port inside container"
        int externalPort "External port (tcp/udp only)"
        boolean tls "TLS enabled (tcp only)"
        string externalHostname UK "Public hostname (http only)"
        datetime createdAt
        datetime updatedAt
    }
```
Key Constraints:
- `Node.name` - Unique
- `Container.hostname` - Unique
- `(Container.nodeId, Container.containerId)` - Unique (the same VMID can exist on different nodes)
- `Service.externalHostname` - Unique when `type='http'`
- `(Service.type, Service.externalPort)` - Unique when `type='tcp'` or `type='udp'`
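In PostgreSQL, the conditional uniqueness rules can be expressed as partial unique indexes. The sketch below is illustrative only — the index names are invented and the actual migrations may implement this differently:

```sql
-- Unique public hostname, but only for HTTP services
CREATE UNIQUE INDEX services_external_hostname_http
    ON "Services" ("externalHostname")
    WHERE type = 'http';

-- One owner per (protocol, external port) pair for L4 services
CREATE UNIQUE INDEX services_type_external_port_l4
    ON "Services" (type, "externalPort")
    WHERE type IN ('tcp', 'udp');
```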
- User Authentication - Proxmox VE authentication integration
- Container Management - Create, list, and track LXC containers
- Docker/OCI Support - Pull and deploy containers from Docker Hub, GHCR, or any OCI registry
- Service Registry - Track HTTP/TCP/UDP services running on containers
- Dynamic Nginx Config - Generate nginx reverse proxy configurations on-demand
- Real-time Progress - SSE (Server-Sent Events) for container creation progress
- User Registration - Self-service account request system with email notifications
- Rate Limiting - Protection against abuse (100 requests per 15 minutes)
- Node.js 18.x or higher
- PostgreSQL 16 or higher
- Proxmox VE cluster with API access
- SMTP server for email notifications (optional)
```bash
# Install Node.js (Debian/Ubuntu)
curl -fsSL https://deb.nodesource.com/setup_20.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install PostgreSQL
sudo apt-get install -y postgresql
```

Clone the repository and install dependencies:

```bash
cd /opt
sudo git clone https://github.com/mieweb/opensource-server.git
cd opensource-server/create-a-container
npm install
```

Create the PostgreSQL database and user (names match the `.env` values below):

```sql
CREATE USER cluster_manager WITH PASSWORD 'secure_password_here';
CREATE DATABASE cluster_manager OWNER cluster_manager;
```

Run the migrations:

```bash
npm run db:migrate
```

This creates the following tables:

- `Containers` - Container records (hostname, IP, MAC, OS, etc.)
- `Services` - Service mappings (ports, protocols, hostnames)
Create a .env file in the create-a-container directory:
```
# Database Configuration
POSTGRES_HOST=localhost
POSTGRES_USER=cluster_manager
POSTGRES_PASSWORD=secure_password_here
POSTGRES_DATABASE=cluster_manager
DATABASE_DIALECT=postgres

# Session Configuration
SESSION_SECRET=generate_random_secret_here

# Application
NODE_ENV=production
```

Generate a random session secret with:

```bash
node -e "console.log(require('crypto').randomBytes(32).toString('hex'))"
```

Run in development:

```bash
npm run dev
```

Run in production:

```bash
node server.js
```

Create /etc/systemd/system/create-a-container.service:
```
[Unit]
Description=Create-a-Container Service
After=network.target postgresql.service

[Service]
Type=simple
User=www-data
WorkingDirectory=/opt/opensource-server/create-a-container
Environment=NODE_ENV=production
ExecStart=/usr/bin/node server.js
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target
```

Enable and start:
```bash
sudo systemctl daemon-reload
sudo systemctl enable create-a-container
sudo systemctl start create-a-container
sudo systemctl status create-a-container
```

Display login page
Authenticate user with Proxmox VE credentials
- Body: `{ username, password }`
- Returns: `{ success: true, redirect: "/" }`
End user session
Redirect to /containers
List all containers for authenticated user
- Returns: HTML page with container list
Display container creation form
Create a container asynchronously via a background job
- Body: `{ hostname, template, customTemplate, services }`, where:
  - `hostname`: Container hostname
  - `template`: Template selection in the format `"nodeName,vmid"`, or `"custom"` for Docker images
  - `customTemplate`: Docker image reference when `template="custom"` (e.g., `nginx`, `nginx:alpine`, `myorg/myapp:v1`, `ghcr.io/org/image:tag`)
  - `services`: Object of service definitions
- Returns: Redirect to the containers list with a flash message
- Process: Creates the pending container, services, and job in a single transaction. Docker image references are normalized to the full `host/org/image:tag` format. The job runner executes the actual Proxmox operations.
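As a rough illustration of the normalization step, a helper like the one below expands short Docker references into the full `host/org/image:tag` form. This is a hypothetical sketch — the real logic lives in `server.js` and may differ:

```javascript
// Hypothetical sketch of Docker image reference normalization.
function normalizeImageRef(ref) {
  let rest = ref;
  let tag = 'latest';
  // Split off an explicit tag: a ':' after the last '/', so a registry
  // port (host:5000/img) is not mistaken for a tag.
  const colon = rest.lastIndexOf(':');
  if (colon > rest.lastIndexOf('/')) {
    tag = rest.slice(colon + 1);
    rest = rest.slice(0, colon);
  }
  const parts = rest.split('/');
  let host = 'docker.io';
  // A first segment containing '.' or ':' is a registry host (e.g. ghcr.io).
  if (parts.length > 1 && /[.:]/.test(parts[0])) {
    host = parts.shift();
  }
  // Official Docker Hub images live under the implicit 'library' namespace.
  if (parts.length === 1) parts.unshift('library');
  return `${host}/${parts.join('/')}:${tag}`;
}

console.log(normalizeImageRef('nginx'));                 // docker.io/library/nginx:latest
console.log(normalizeImageRef('myorg/myapp:v1'));        // docker.io/myorg/myapp:v1
console.log(normalizeImageRef('ghcr.io/org/image:tag')); // ghcr.io/org/image:tag
```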
Delete a container from both Proxmox and the database
- Path Parameter: `id` - Container database ID
- Authorization: Users can only delete their own containers
- Process:
- Verifies container ownership
- Deletes container from Proxmox via API
- On success, removes container record from database (cascades to services)
- Returns: `{ success: true, message: "Container deleted successfully" }`
- Errors:
  - `404` - Container not found
  - `403` - User doesn't own the container
  - `500` - Proxmox API deletion failed or node not configured
View container creation progress page
SSE stream for real-time container creation progress
- Returns: Server-Sent Events stream
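Each update on this stream is a standard Server-Sent Events frame: an `event:` line, one or more `data:` lines, and a blank line. A tiny formatter shows the wire format; the `progress` event name and payload shape here are assumptions, not documented behavior:

```javascript
// Build one Server-Sent Events frame (wire format per the WHATWG SSE spec).
// Event name and payload shape are illustrative assumptions.
function sseFrame(event, data) {
  return `event: ${event}\ndata: ${JSON.stringify(data)}\n\n`;
}

// In the browser, the progress page would consume the stream roughly like
// this (the stream URL is a placeholder, not a documented route):
//   const es = new EventSource('/status-stream');
//   es.addEventListener('progress', (e) => render(JSON.parse(e.data)));
```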
Enqueue a job for background execution
- Body: `{ "command": "<shell command>" }`
- Response: `201 { id, status }`
- Authorization: Admin only (prevents arbitrary command execution)
- Behavior: The admin's username is recorded in the `createdBy` column for the audit trail
Fetch job metadata (command, status, timestamps)
- Response: `{ id, command, status, createdAt, updatedAt, createdBy }`
- Authorization: Only the job owner or admins may view
- Returns: `404` if unauthorized (prevents information leakage)
Fetch job output rows with offset/limit pagination
- Query Params:
  - `offset` (optional, default 0) - Skip the first N rows
  - `limit` (optional, max 1000) - Return up to N rows
- Response: Array of JobStatus objects: `[{ id, jobId, output, createdAt, updatedAt }, ...]`
- Authorization: Only the job owner or admins may view
- Returns: `404` if unauthorized
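The offset/limit clamping described above can be sketched as a small helper. This is a hypothetical illustration of the parsing rules, not code from the repo:

```javascript
// Parse and clamp pagination query params: offset >= 0, 1 <= limit <= 1000.
function pageParams(query) {
  const offset = Math.max(0, parseInt(query.offset, 10) || 0);
  // Cap limit at 1000 rows, defaulting to the maximum when absent.
  const limit = Math.min(1000, Math.max(1, parseInt(query.limit, 10) || 1000));
  return { offset, limit };
}

console.log(pageParams({ offset: '50', limit: '200' })); // { offset: 50, limit: 200 }
```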
The job runner (job-runner.js) is a background Node.js process that:
- Polls the `Jobs` table for `pending` status records
- Claims a job transactionally (sets status to `running` and acquires a row lock)
- Spawns the job command in a shell subprocess
- Streams stdout/stderr into the `JobStatuses` table in real time
- Updates job status to `success` or `failure` on process exit
- Gracefully cancels running jobs on shutdown (SIGTERM/SIGINT) and marks them `cancelled`
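The transactional claim step can be pictured in plain SQL. This is a sketch assuming the PostgreSQL dialect; the runner's actual Sequelize code may differ:

```sql
-- Claim one pending job atomically. SKIP LOCKED lets several runner
-- processes poll the same table without blocking each other.
BEGIN;
UPDATE "Jobs"
SET status = 'running', "updatedAt" = NOW()
WHERE id = (
    SELECT id FROM "Jobs"
    WHERE status = 'pending'
    ORDER BY id
    LIMIT 1
    FOR UPDATE SKIP LOCKED
)
RETURNING id, command;
COMMIT;
```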
Job Model (models/job.js)

```
id INT PRIMARY KEY AUTO_INCREMENT
command VARCHAR(2000) NOT NULL - shell command to execute
createdBy VARCHAR(255) - username of the admin who enqueued it (nullable for legacy jobs)
status ENUM('pending', 'running', 'success', 'failure', 'cancelled')
createdAt DATETIME
updatedAt DATETIME
```

JobStatus Model (models/jobstatus.js)

```
id INT PRIMARY KEY AUTO_INCREMENT
jobId INT NOT NULL (FK → Jobs.id, CASCADE delete)
output TEXT - chunk of stdout/stderr from the job
createdAt DATETIME
updatedAt DATETIME
```
Migrations
- `migrations/20251117120000-create-jobs.js`
- `migrations/20251117120001-create-jobstatuses.js` (includes `updatedAt`)
- `migrations/20251117120002-add-job-createdby.js` (adds nullable `createdBy` column + index)
Development (foreground, logs to stdout)
```bash
cd create-a-container
npm run job-runner
```

Production (systemd service)
Copy systemd/job-runner.service to /etc/systemd/system/job-runner.service:
```bash
sudo cp systemd/job-runner.service /etc/systemd/system/
sudo systemctl daemon-reload
sudo systemctl enable --now job-runner.service
sudo systemctl status job-runner.service
```

Database (via .env)
`POSTGRES_HOST`, `POSTGRES_USER`, `POSTGRES_PASSWORD`, `POSTGRES_DATABASE`, `DATABASE_DIALECT`
Runner Behavior (environment variables)
- `JOB_RUNNER_POLL_MS` (default 2000) - Polling interval in milliseconds
- `JOB_RUNNER_CWD` (default cwd) - Working directory for spawned commands
- `NODE_ENV=production` - Recommended for production
Systemd Setup (recommended for production)
Create /etc/default/container-creator with DB credentials:
```
POSTGRES_HOST=localhost
POSTGRES_USER=cluster_manager
POSTGRES_PASSWORD=secure_password_here
POSTGRES_DATABASE=cluster_manager
DATABASE_DIALECT=postgres
```

Update job-runner.service to include:

```
EnvironmentFile=/etc/default/container-creator
```

- Command Injection Risk: The runner spawns commands via a shell. Only admins can enqueue jobs via the API. Do not expose `POST /jobs` to untrusted users.
- Job Ownership: Jobs are scoped by `createdBy`. Only the admin who created the job (or other admins) can view its metadata and output. Non-owners receive `404` (not `403`) to prevent information leakage.
- Legacy Jobs: Jobs created before the `createdBy` migration have `createdBy = NULL` and are visible only to admins.
- Graceful Shutdown: On SIGTERM/SIGINT, the runner kills all running child processes and marks their jobs as `cancelled`.
Insert a test job (SQL)
```sql
INSERT INTO "Jobs" (command, status, "createdAt", "updatedAt")
VALUES ('echo "Hello" && sleep 5 && echo "World"', 'pending', NOW(), NOW());
```

Inspect job status:

```sql
SELECT id, status, "updatedAt" FROM "Jobs" ORDER BY id DESC LIMIT 10;
```

View job output:

```sql
SELECT id, output, "createdAt" FROM "JobStatuses" WHERE "jobId" = 1 ORDER BY id ASC;
```

Long-running test (5 minutes):

```bash
# Stop the runner to keep the job pending
sudo systemctl stop job-runner.service

# Insert the job
psql -c "INSERT INTO \"Jobs\" (command, status, \"createdAt\", \"updatedAt\") VALUES ('for i in \$(seq 1 300); do echo \"line \$i\"; sleep 1; done', 'pending', NOW(), NOW()) RETURNING id;"

# Start the runner and monitor
node job-runner.js

# In another terminal:
while sleep 15; do
  psql -c "SELECT id, output FROM \"JobStatuses\" WHERE \"jobId\"=<ID> ORDER BY id ASC;"
done
```

Check the final status:

```sql
SELECT id, status FROM "Jobs" WHERE id = <ID>;
```

Deployment checklist:

- Run migrations: `npm run db:migrate`
- Deploy `job-runner.js` to the target host (e.g., `/opt/container-creator/`)
- Copy `systemd/job-runner.service` to `/etc/systemd/system/`
- Create `/etc/default/container-creator` with the DB env vars
- Reload systemd: `sudo systemctl daemon-reload`
- Enable and start: `sudo systemctl enable --now job-runner.service`
- Verify the runner is running: `sudo systemctl status job-runner.service`
- Test the API by creating a job via `POST /jobs` (as an admin user)
- Replace the raw `command` API with safe task names and parameter mapping
- Add an SSE or WebSocket streaming endpoint (`/jobs/:id/stream`) to push log lines to the frontend in real time
- Add batching or file-based logs for high-volume output to reduce DB pressure
- Implement job timeout/deadline and automatic cancellation
Generate nginx configuration for all registered services
- Returns: `text/plain` - Complete nginx configuration with all server blocks
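For a sense of the output shape only — the real template is `nginx-conf.ejs`, and the hostname and upstream address below are invented — an HTTP service might render as a server block like:

```nginx
server {
    listen 80;
    server_name app.example.com;               # Service.externalHostname
    location / {
        proxy_pass http://10.15.0.50:8080;     # container IP : internalPort
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }
}
```

TCP/UDP services would instead expose their `externalPort` via `stream {}` context rather than `server_name`-based HTTP blocks.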
Display account request form
Submit account request (sends email to admins)
- Body: `{ name, email, username, reason }`
- Returns: Success message
Test email configuration (development/testing)
```
id INT PRIMARY KEY AUTO_INCREMENT
hostname VARCHAR(255) UNIQUE NOT NULL
username VARCHAR(255) NOT NULL
status VARCHAR(20) NOT NULL DEFAULT 'pending'
template VARCHAR(255)
creationJobId INT FOREIGN KEY REFERENCES Jobs(id)
nodeId INT FOREIGN KEY REFERENCES Nodes(id)
containerId INT UNSIGNED NOT NULL
macAddress VARCHAR(17) UNIQUE
ipv4Address VARCHAR(45) UNIQUE
aiContainer VARCHAR(50) DEFAULT 'N'
createdAt DATETIME
updatedAt DATETIME
```

```
id INT PRIMARY KEY AUTO_INCREMENT
containerId INT FOREIGN KEY REFERENCES Containers(id)
type ENUM('tcp', 'udp', 'http') NOT NULL
internalPort INT NOT NULL
externalPort INT
tls BOOLEAN DEFAULT FALSE
externalHostname VARCHAR(255)
createdAt DATETIME
updatedAt DATETIME
```

Sequelize database configuration (reads from .env)

- `container.js` - Container model definition
- `service.js` - Service model definition
- `index.js` - Sequelize initialization
Service type definitions and port mappings
- `login.html` - Login form
- `form.html` - Container creation form
- `request-account.html` - Account request form
- `status.html` - Container creation progress viewer
- `containers.ejs` - Container list (EJS template)
- `nginx-conf.ejs` - Nginx config generator (EJS template)
style.css- Application styles
Database migration files for schema management
- `POSTGRES_HOST` - Database host (default: localhost)
- `POSTGRES_USER` - Database username
- `POSTGRES_PASSWORD` - Database password
- `POSTGRES_DATABASE` - Database name
- `SESSION_SECRET` - Express session secret (cryptographically random string)
- `NODE_ENV` - Environment (development/production, default: development)
- Proxmox VE integration via API
- Session-based authentication with secure cookies
- Per-route authentication middleware
- 100 requests per 15-minute window per IP
- Protects against brute force and abuse
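The limiter behaves like a fixed-window counter per client. The sketch below is a minimal self-contained illustration of that policy — the app itself more likely uses off-the-shelf Express middleware such as express-rate-limit:

```javascript
// Minimal fixed-window rate limiter sketch (illustrative only).
function createLimiter({ windowMs, max }) {
  const hits = new Map(); // key (e.g. client IP) -> { count, windowStart }
  return function allow(key, now = Date.now()) {
    const entry = hits.get(key);
    if (!entry || now - entry.windowStart >= windowMs) {
      // First request, or the previous window expired: start a new window.
      hits.set(key, { count: 1, windowStart: now });
      return true;
    }
    entry.count += 1;
    return entry.count <= max;
  };
}

// The app's policy: 100 requests per 15-minute window per IP.
const allowRequest = createLimiter({ windowMs: 15 * 60 * 1000, max: 100 });
```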
- Session secret required for cookie signing
- Secure cookie flag enabled
- Session data server-side only
- URL encoding for all parameters
- Sequelize ORM prevents SQL injection
- Form data validation
```bash
# Test database connection
psql -h localhost -U cluster_manager -d cluster_manager

# Check if migrations ran
npm run db:migrate

# Verify tables exist
psql -h localhost -U cluster_manager -d cluster_manager -c "\dt"
```

```bash
# Check Node.js version
node --version  # Should be 18.x or higher

# Verify the .env file exists and is readable
cat .env

# Check for syntax errors
node -c server.js

# Run with verbose logging
NODE_ENV=development node server.js
```

```bash
# Verify the Proxmox API is accessible
curl -k https://10.15.0.4:8006/api2/json/version

# Check whether certificate validation is working
# Edit server.js if using self-signed certs
```

```bash
# Test SMTP connection
telnet mail.example.com 25

# Test route (development only)
curl http://localhost:3000/send-test-email
```

```bash
# Find the process using port 3000
sudo lsof -i :3000

# Change the port in .env or kill the conflicting process
kill -9 <PID>
```

```bash
# Create a new migration
npx sequelize-cli migration:generate --name description-here

# Run migrations
npm run db:migrate

# Undo the last migration
npx sequelize-cli db:migrate:undo
```

```
create-a-container/
├── server.js          # Main Express application
├── package.json       # Dependencies and scripts
├── .env               # Environment configuration (gitignored)
├── config/            # Sequelize configuration
├── models/            # Database models
├── migrations/        # Database migrations
├── views/             # HTML templates
├── public/            # Static assets
├── data/              # JSON data files
└── bin/               # Utility scripts
```
This application generates nginx configurations consumed by the nginx-reverse-proxy component:
- Containers register their services in the database
- The `/sites/:siteId/nginx` endpoint generates complete nginx configs
- The reverse proxy polls this endpoint via cron
- Nginx automatically reloads with updated configurations
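The cron polling step might look like the following crontab entry. This is a sketch only — the manager URL, site ID, output path, and interval are placeholders, not values from this repo:

```
*/5 * * * * curl -fsS http://container-manager.example/sites/1/nginx -o /etc/nginx/conf.d/containers.conf && nginx -s reload
```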
See ../nginx-reverse-proxy/README.md for reverse proxy setup.
See the main repository LICENSE file.
For issues, questions, or contributions, see the main opensource-server repository.