
Update staging compose for demo/preview deployments#1181

Merged
mihow merged 19 commits into main from feat/update-staging-compose
Apr 1, 2026

Conversation

Collaborator

@mihow mihow commented Mar 18, 2026

Summary

Update docker-compose.staging.yml to serve as the standard config for single-box deployments (demos, previews, testing). Local Redis, RabbitMQ, and NATS containers; external database only.

Changes

Staging compose (docker-compose.staging.yml)

  • Remove local Postgres service — DB is always external via DATABASE_IP in .envs/.production/.compose
  • Add RabbitMQ 3.13 container as Celery broker
  • Enable NATS container in depends_on (was commented out)
  • Add restart: always to all services
  • Remove hardcoded container_name on NATS (allows multiple instances on same host)
  • Remove awscli service (backups handled externally)
  • Internal services (Redis, RabbitMQ, NATS) do not publish host ports — no conflicts between instances
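The service layout described in the bullets above can be sketched roughly as follows. This is an abridged illustration, not the actual file contents — the image tags, the NATS command flag, and the env-file wiring are assumptions based on this description and the commit messages below:

```yaml
# Abridged sketch of the docker-compose.staging.yml pattern.
services:
  redis:
    image: redis:7            # assumed tag
    restart: always
    # no "ports:" mapping — reachable only on the compose network,
    # so multiple instances on one host never conflict

  rabbitmq:
    image: rabbitmq:3.13-management
    restart: always
    env_file:
      - .envs/.production/.django   # RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS

  nats:
    image: nats               # no container_name, so instances don't collide
    restart: always
    command: ["-js"]          # JetStream (assumed flag)
```

Because none of these services publish host ports, several branch-preview stacks can run side by side; only the Django and Flower ports (parameterized via env vars) need to differ per instance.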

New files

  • compose/staging/docker-compose.db.yml — optional local PostgreSQL container
  • compose/staging/redis.conf — Redis config (disables bgsave, sets maxmemory/eviction)
  • compose/staging/deploy.sh — deploy script with branch/host safety echo, uses git pull --ff-only
  • compose/staging/README.md — setup guide, env reference, multi-instance instructions, reverse proxy example
  • .envs/.production/.compose-example — documents required DATABASE_IP variable

Settings (config/settings/base.py)

  • Separate Redis DB 0 (cache) and DB 1 (Celery results) — _celery_result_backend_url derives DB 1 URL from REDIS_URL using urllib.parse. Environments with CELERY_RESULT_BACKEND explicitly set (e.g. production) are unaffected.
  • Increase DATA_UPLOAD_MAX_MEMORY_SIZE to 100 MB for ML worker result payloads
  • Upgrade gunicorn 20.1.0 → 23.0.0 (fixes pkg_resources removal in Python 3.12+)

Production impact

  • The Redis DB split only applies if CELERY_RESULT_BACKEND is not explicitly set in the env. Production sets it explicitly, so no change until you opt in.
  • DATA_UPLOAD_MAX_MEMORY_SIZE increase allows larger ML result payloads (already needed).
  • Gunicorn upgrade is required for Python 3.12+ Docker images.
  • docker-compose.production.yml is not modified.

Required env setup for staging/demo

.envs/.production/.django must include:

  • NATS_URL=nats://nats:4222 — without it, the app defaults to 127.0.0.1:4222, which points at the app container itself rather than the NATS service
  • CELERY_BROKER_URL=amqp://user:pass@rabbitmq:5672/ — RabbitMQ credentials
  • See compose/staging/README.md for the full variable reference

Reverse proxy

The staging README includes an example nginx config. Key requirement: client_max_body_size 100M — ML workers POST large result payloads that exceed the default 1M/10M limits.
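A fragment in the spirit of the README's example is shown below. The server name, certificate paths, and upstream port are placeholders (5001 is the `DJANGO_PORT` default from this PR); `proxy_read_timeout` is included per the commit message about long-running API operations:

```nginx
# Illustrative fragment — see compose/staging/README.md for the real example.
server {
    listen 443 ssl;
    server_name demo.example.org;                 # placeholder

    # ML workers POST large result payloads; nginx's 1M default returns 413.
    client_max_body_size 100M;

    location / {
        proxy_pass http://127.0.0.1:5001;         # DJANGO_PORT default
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-Proto $scheme;
        proxy_read_timeout 300s;                  # long-running API operations
    }
}
```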

Usage

# Deploy (pull, build, migrate):
./compose/staging/deploy.sh

# Or manually:
docker compose -f docker-compose.staging.yml --env-file .envs/.production/.compose up -d --build

# Optional: local database
docker compose -f compose/staging/docker-compose.db.yml up -d

Environments tested

Environment      | Database           | Redis/RabbitMQ/NATS
Demo (Arbutus)   | External DB server | Local containers
Staging (Beluga) | External DB server | Local containers

Relates to RolnickLab/ami-devops#1, RolnickLab/ami-admin#66
Closes #1180

Test plan

  • docker compose config validates without errors
  • Full stack starts: Django, Celery worker/beat, Flower, Redis, RabbitMQ, NATS
  • Migrations applied successfully
  • API responds on configured port
  • Deployed to staging (Beluga) — sync and async jobs both working
  • Deployed to demo (Arbutus) — working
  • CI linting passes (isort fix applied)
  • CI tests pass

🤖 Generated with Claude Code

mihow and others added 2 commits March 18, 2026 14:04
Update docker-compose.staging.yml to serve as the standard config for
staging, demo, and branch preview environments:

- Remove local Postgres (DB is always external via DATABASE_IP)
- Add RabbitMQ container for Celery task broker
- Add NATS container (was present but commented out in depends_on)
- Add restart:always to all services
- Switch from .envs/.local/.postgres to .envs/.production/.postgres
- Remove hardcoded container_name on NATS (allows multiple instances)
- Remove awscli service (backups handled by TeamCity)
- RabbitMQ credentials configured via .envs/.production/.django, not
  hardcoded in compose

Add compose/staging/docker-compose.db.yml as an optional convenience
for running a local PostgreSQL container when no external DB is
available (e.g., ood environment, local testing).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
gunicorn 20.x requires pkg_resources from setuptools, which was
removed in setuptools 82+. Fresh Docker image builds fail with
ModuleNotFoundError on startup.

gunicorn 23 drops the pkg_resources dependency entirely.

Closes #1180

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Copilot AI review requested due to automatic review settings March 18, 2026 21:52
@netlify

netlify bot commented Mar 18, 2026

Deploy Preview for antenna-ssec canceled.

🔨 Latest commit: d0a8f1d
🔍 Latest deploy log: https://app.netlify.com/projects/antenna-ssec/deploys/69cd9d77b6c36c0009615542

@netlify

netlify bot commented Mar 18, 2026

Deploy Preview for antenna-preview canceled.

🔨 Latest commit: d0a8f1d
🔍 Latest deploy log: https://app.netlify.com/projects/antenna-preview/deploys/69cd9d77bbe9ca0008203c40

@coderabbitai
Contributor

coderabbitai bot commented Mar 18, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review
📝 Walkthrough

Walkthrough

Adds a staging compose DB file and Redis config, changes staging compose to run Redis, RabbitMQ, and NATS locally while Django connects to Postgres via DATABASE_IP; derives Celery result backend from Redis, adds CELERY_RESULT_EXPIRES, raises upload limit to 100MB, bumps gunicorn to 23.0.0, and adds staging docs and deploy script.

Changes

  • Staging Compose core — docker-compose.staging.yml: Removes local postgres; runs local redis, rabbitmq, and nats; updates django depends_on; switches env_file to production; adds extra_hosts: db:${DATABASE_IP}; parameterizes Django/Flower ports; adds restart: always to celery/flower and a Flower data volume.
  • Optional local DB compose — compose/staging/docker-compose.db.yml: New optional Compose file providing a local Postgres service with named volume staging_postgres_data; builds from the repo Dockerfile, loads ../../.envs/.production/.postgres, mounts backups/data, exposes 5432, restart: always.
  • Staging Redis config — compose/staging/redis.conf: New Redis config: maxmemory 8gb, maxmemory-policy allkeys-lru, disables RDB snapshots (save ""), sets tcp-keepalive 60 and timeout 120.
  • Staging docs & scripts — compose/staging/README.md, compose/staging/deploy.sh, .envs/.production/.compose-example: README with env requirements and run/cleanup guidance; deploy script for docker compose up + migrate; example compose env with a DATABASE_IP placeholder.
  • Django settings — config/settings/base.py: Adds helper _celery_result_backend_url(redis_url); introduces CELERY_RESULT_BACKEND_URL and CELERY_RESULT_EXPIRES; sets CELERY_RESULT_BACKEND to the derived URL or rpc://; sets DATA_UPLOAD_MAX_MEMORY_SIZE = 100 * 1024 * 1024.
  • Dependencies — requirements/base.txt: Bumps gunicorn from 20.1.0 to 23.0.0.

Sequence Diagram(s)

sequenceDiagram
    participant Dev as Developer / CI
    participant Compose as docker-compose (staging)
    participant Django as Django container
    participant Postgres as Postgres (external or compose.db)
    participant Redis as Redis
    participant RabbitMQ as RabbitMQ
    participant NATS as NATS
    participant Celery as Celery workers

    Dev->>Compose: docker compose --env-file .envs/.production/.compose up -d --build
    Compose->>Redis: start (uses compose/staging/redis.conf)
    Compose->>RabbitMQ: start
    Compose->>NATS: start (JetStream)
    Compose->>Django: start (depends_on: redis, rabbitmq, nats)
    Django->>Postgres: connect via DATABASE_IP (or optional compose.db)
    Django->>Redis: cache/sessions and derive CELERY_RESULT_BACKEND_URL
    Django->>RabbitMQ: publish tasks
    RabbitMQ->>Celery: deliver tasks to workers
    Django->>NATS: telemetry / pub-sub

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~25 minutes

Suggested reviewers

  • annavik

Poem

🐇 I hopped into compose with a tiny drum,
Redis, RabbitMQ, NATS — here they come,
Postgres optional, envs all aligned,
Gunicorn leapt, uploads grew in kind,
Staging hums along — a carrot-sized find. 🥕

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning — Docstring coverage is 0.00%, below the required 80.00% threshold. Resolution: write docstrings for the functions missing them.

✅ Passed checks (4 passed)

  • Title check ✅ — The title accurately describes the main change: updating the staging Docker Compose configuration for demo/preview deployments with local containers and external database support.
  • Linked Issues check ✅ — The PR fulfills issue #1180 by upgrading gunicorn from 20.1.0 to 23.0.0, removing the pkg_resources dependency and gaining ASGI improvements. The staging compose changes support the infrastructure needed for the new setup.
  • Out of Scope Changes check ✅ — All changes directly support the stated objectives: updating staging compose for demo/preview deployments, adding an optional local DB container, and upgrading gunicorn to fix pkg_resources issues. No unrelated changes detected.
  • Description check ✅ — The pull request description is comprehensive and well-structured, covering summary, changes, test plan, deployment notes, and related issues.


Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 3

🧹 Nitpick comments (1)
docker-compose.staging.yml (1)

68-70: Bind RabbitMQ management UI to localhost by default.

Line 69 exposes 15672 broadly; in staging/demo this is safer as localhost-bound unless remote admin access is explicitly needed.

Proposed hardening
     ports:
-      - "15672:15672"
+      - "127.0.0.1:15672:15672"
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@compose/staging/docker-compose.db.yml`:
- Around line 5-17: The comments are ambiguous about DB connectivity modes
(Docker network vs host bridge). Update the header comments in
compose/staging/docker-compose.db.yml to clearly state which mode the stack
expects: if the app uses POSTGRES_HOST=db (container-to-container via Docker
network) remove or reword the DATABASE_IP/host-bridge instructions;
alternatively, if the intended workflow requires the host bridge and
DATABASE_IP, change the POSTGRES_HOST guidance accordingly and explain when to
start the DB with docker compose -f compose/staging/docker-compose.db.yml up -d
vs when to set DATABASE_IP for the app compose. Ensure you reference
POSTGRES_HOST and DATABASE_IP and mention the two compose files
(compose/staging/docker-compose.db.yml and docker-compose.staging.yml) so
readers know which mode each file supports.
- Around line 35-36: The compose file currently publishes PostgreSQL on all
interfaces via the ports mapping `ports: - "5432:5432"`; change this to bind to
localhost by replacing it with `ports: - "127.0.0.1:5432:5432"` (or remove the
ports mapping entirely and rely on an internal network) so Postgres only listens
on loopback for staging/demo unless external access is explicitly required;
update any documentation or scripts that expect an externally accessible port
accordingly.

In `@docker-compose.staging.yml`:
- Around line 14-15: The rabbitmq service is missing an env_file so it doesn't
pick up RABBITMQ_DEFAULT_USER/RABBITMQ_DEFAULT_PASS from
.envs/.production/.django; update the rabbitmq service definition to add an
env_file pointing to .envs/.production/.django (so the RABBITMQ_DEFAULT_USER and
RABBITMQ_DEFAULT_PASS values are loaded) and remove or override any conflicting
environment: entries if present; ensure the service name "rabbitmq" and the
variables RABBITMQ_DEFAULT_USER / RABBITMQ_DEFAULT_PASS are used consistently
with Django's CELERY_BROKER_URL.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 3cbcc483-c4ef-463f-8d20-ba97a05053c7

📥 Commits

Reviewing files that changed from the base of the PR and between 59d2c21 and b3d9771.

📒 Files selected for processing (3)
  • compose/staging/docker-compose.db.yml
  • docker-compose.staging.yml
  • requirements/base.txt

Contributor

Copilot AI left a comment


Pull request overview

Updates the staging Docker Compose setup to be a shared baseline for staging/demo/branch-preview deployments by running Redis/RabbitMQ/NATS locally while connecting to an external Postgres via DATABASE_IP, and upgrades Gunicorn to avoid fresh-build failures on slim Python images.

Changes:

  • Upgrade gunicorn to 23.0.0.
  • Revise docker-compose.staging.yml to remove the local Postgres service and add local RabbitMQ + NATS (with restarts and updated env-file usage).
  • Add an optional compose/staging/docker-compose.db.yml for running a local Postgres container.

Reviewed changes

Copilot reviewed 3 out of 3 changed files in this pull request and generated 5 comments.

  • requirements/base.txt — Bumps Gunicorn to 23.0.0.
  • docker-compose.staging.yml — Reworks staging compose to use an external DB plus local Redis/RabbitMQ/NATS.
  • compose/staging/docker-compose.db.yml — Adds an optional local Postgres compose file for staging-like setups.


mihow and others added 5 commits March 18, 2026 15:09
- Add env_file to rabbitmq service so it picks up
  RABBITMQ_DEFAULT_USER/RABBITMQ_DEFAULT_PASS from .django env
- Use ${DATABASE_IP:?} required-variable syntax for fail-fast on
  missing config
- Bind local Postgres to 127.0.0.1 instead of 0.0.0.0
- Clarify DB compose comments: document host-bridge connectivity
  via DATABASE_IP, remove ambiguous "Docker network" wording

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Internal services (Redis, RabbitMQ, NATS) don't need host port
exposure — only the app containers talk to them via the Docker
network. Removing host ports means multiple instances (branch
previews, worktrees) never conflict on these ports.

Django and Flower ports are now configurable via DJANGO_PORT and
FLOWER_PORT env vars (default 5001 and 5550).

Also use host-gateway (works on all platforms) instead of
platform-specific Docker bridge IPs in DB compose docs.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Setup instructions for single and multi-instance staging deployments,
covering environment configuration, database options, migrations,
sample data, and port management for running multiple instances on
the same host.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
ML workers post classification results with up to 29K categories per
image, easily exceeding Django's default 2.5MB request body limit.
This caused 413 errors on the demo environment.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The default CELERY_RESULT_BACKEND was "rpc://" which uses RabbitMQ for
results. This caused channel exhaustion (65,535 limit), connection
resets, and worker crashes on the demo environment.

Changes:
- Derive CELERY_RESULT_BACKEND from REDIS_URL using DB 1 instead of
  the cache DB 0. This keeps cache and task results isolated so they
  can be flushed and monitored independently.
- Add maxmemory config to staging Redis (8gb, allkeys-lru)
- Falls back to rpc:// only if no REDIS_URL is configured
- Env var CELERY_RESULT_BACKEND still overrides if explicitly set

Redis DB layout:
  DB 0: Django cache (disposable, allkeys-lru eviction)
  DB 1: Celery task result metadata (TTL-based via CELERY_RESULT_EXPIRES)

Relates to #1189

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Document what CELERY_RESULT_EXTENDED does, why it's expensive (~19KB per
task vs ~200B), and note that bulk tasks like process_nats_pipeline_result
could use ignore_result=True to avoid storing large ML result JSON in the
result backend.

Relates to #1189

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@mihow mihow force-pushed the feat/update-staging-compose branch 3 times, most recently from aaab7c9 to 1162e6a, on March 26, 2026 18:49
- Move Redis config to compose/staging/redis.conf for clarity
- Disable RDB persistence (save "") — bgsave of large datasets saturates
  disk I/O on small volumes, hanging NATS and other services
- Add CELERY_RESULT_EXPIRES=3600 default in base.py to auto-expire task
  results after 1 hour, preventing unbounded Redis memory growth
- Keep maxmemory 8gb and allkeys-lru eviction policy

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@mihow mihow force-pushed the feat/update-staging-compose branch from 1162e6a to 14c26f5 on March 26, 2026 18:50
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
docker-compose.staging.yml (1)

70-71: Flower volume mount for persistent data.

Docker will create ./data/flower/ if it doesn't exist. If Flower writes with a non-root UID, ensure the directory has appropriate permissions or use named volumes.

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@config/settings/base.py`:
- Around line 273-277: The _celery_result_backend_url function fails for Redis
URLs with query strings or trailing slashes because the current regex only
matches a numeric DB segment at the end; fix it by using urllib.parse to parse
the URL, inspect and modify the path component: use
urllib.parse.urlparse(redis_url), if path is empty or "/" set path = "/1",
otherwise split the path by "/" and if the last non-empty segment is a digit
replace it with "1" else append "1" as a new segment; then rebuild the URL with
urllib.parse.urlunparse to preserve scheme, netloc, params, query, and fragment
and return that string. Ensure this logic is implemented inside
_celery_result_backend_url and only runs when redis_url is truthy.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 2c640d04-e86c-4687-826a-0a633a5f3539

📥 Commits

Reviewing files that changed from the base of the PR and between 6b699b7 and bd649a9.

📒 Files selected for processing (3)
  • compose/staging/redis.conf
  • config/settings/base.py
  • docker-compose.staging.yml
✅ Files skipped from review due to trivial changes (1)
  • compose/staging/redis.conf

mihow and others added 2 commits April 1, 2026 21:20
Add .envs/.production/.compose-example documenting required DATABASE_IP variable.
Add compose/staging/deploy.sh as the canonical deploy script (fetch, build, migrate).

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
The regex-based approach failed for URLs with query strings or trailing
slashes. Use urlparse/urlunparse to properly handle the path component.
Also clarifies the Redis DB numbering convention in comments.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🧹 Nitpick comments (1)
compose/staging/deploy.sh (1)

5-6: Consider adding set -o pipefail for robust error handling.

With only errexit, failures in piped commands (except the last) are silently ignored. Adding pipefail ensures the script exits on any command failure within a pipeline.

♻️ Proposed fix
 set -o errexit
+set -o pipefail
 set -o xtrace
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@compose/staging/deploy.sh`:
- Line 10: The script currently runs git fetch origin but never updates the
working tree, so update deploy.sh to actually pull or reset to the intended
branch after the fetch (e.g., run git pull --ff-only origin <branch> or git
reset --hard origin/<branch>); locate the git fetch origin line in deploy.sh and
replace or follow it with the chosen update command and ensure any local changes
are handled (stash/abort) so the Docker build uses the freshly deployed code.


ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 9784f456-18c6-4b87-b89f-21ec651d0176

📥 Commits

Reviewing files that changed from the base of the PR and between bd649a9 and f12610f.

📒 Files selected for processing (2)
  • .envs/.production/.compose-example
  • compose/staging/deploy.sh
✅ Files skipped from review due to trivial changes (1)
  • .envs/.production/.compose-example

mihow and others added 5 commits April 1, 2026 22:08
Document client_max_body_size 100M requirement for ML worker payloads,
proxy_read_timeout for long API operations, and example nginx config
for SSL termination. Also fix deploy.sh symlink resolution.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
git fetch updates remote refs but does not update the working tree,
so the Docker build was using stale code. Use git pull --ff-only to
actually update the checked-out branch.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
…on convention

Staging means single-box deployment (demo, preview, testing), not a
pre-production environment. The .envs/.production/ directory is a
cookiecutter-django convention for non-local-dev config.

Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
mihow and others added 2 commits April 1, 2026 22:25
Co-Authored-By: Claude Opus 4.6 (1M context) <noreply@anthropic.com>
@mihow mihow merged commit 0b52b8d into main Apr 1, 2026
7 checks passed
@mihow mihow deleted the feat/update-staging-compose branch April 1, 2026 22:45


Development

Successfully merging this pull request may close these issues.

Upgrade gunicorn from 20.1.0 to 23.x

2 participants