Add full steps workflow for CI/CD process
Cdaprod committed Oct 29, 2024
1 parent 799725e commit 969ae8e
Showing 3 changed files with 570 additions and 0 deletions.
95 changes: 95 additions & 0 deletions .github/workflows/full_step_workflow.yml.dev
@@ -0,0 +1,95 @@
# .github/workflows/full_step_workflow.yml.dev
name: Full Steps Workflow

on:
  push:
    branches:
      - '**'
  pull_request:
    branches:
      - '**'

jobs:
  build-test-tag-push-deploy:
    runs-on: ubuntu-latest
    strategy:
      matrix:
        service:
          - name: 'app-frontend'
            context: './app-frontend'
            image: 'cdaprod/app-frontend'
            port: '3000'
          - name: 'middleware-registry'
            context: './middleware-registry'
            image: 'cdaprod/middleware-registry'
            port: '8080'
          - name: 'middleware-infrastructure'
            context: './middleware-infrastructure'
            image: 'cdaprod/middleware-infrastructure'
            port: '8081'
    steps:
      - name: Checkout Code
        uses: actions/checkout@v3

      - name: Set up Docker Buildx
        uses: docker/setup-buildx-action@v2

      - name: Log in to Docker Hub
        uses: docker/login-action@v2
        with:
          username: ${{ secrets.DOCKERHUB_USERNAME }}
          password: ${{ secrets.DOCKERHUB_TOKEN }}

      - name: Build Docker Image
        run: |
          docker build -t ${{ matrix.service.image }}:latest ${{ matrix.service.context }}

      - name: Run Tests
        run: |
          # Replace with your test command
          docker run --rm ${{ matrix.service.image }}:latest sh -c "npm test || go test ./... || echo 'No tests specified'"

      - name: Get Git Version
        id: get_version
        run: |
          git_version=$(git rev-parse --short HEAD)
          echo "git_version=${git_version}" >> $GITHUB_OUTPUT

      - name: Tag Docker Image
        run: |
          docker tag ${{ matrix.service.image }}:latest ${{ matrix.service.image }}:${{ steps.get_version.outputs.git_version }}

      - name: Push Docker Images
        run: |
          docker push ${{ matrix.service.image }}:latest
          docker push ${{ matrix.service.image }}:${{ steps.get_version.outputs.git_version }}

      - name: Deploy to Host
        run: |
          ssh -o StrictHostKeyChecking=no ${{ secrets.SSH_USERNAME }}@${{ secrets.SSH_HOST }} \
            "docker service update --image ${{ matrix.service.image }}:${{ steps.get_version.outputs.git_version }} ${{ matrix.service.name }}_service"

      - name: Notify on Failure
        if: ${{ failure() }}
        run: |
          echo "An error occurred while processing service: ${{ matrix.service.name }}"

      - name: Take Screenshot
        if: ${{ success() && matrix.service.name == 'app-frontend' }}
        uses: mxschmitt/action-take-screenshot@v1
        with:
          url: 'http://your-app-url.com'
          output: screenshot_${{ matrix.service.name }}.png

      - name: Upload Screenshot
        if: ${{ success() && matrix.service.name == 'app-frontend' }}
        uses: actions/upload-artifact@v3
        with:
          name: screenshot_${{ matrix.service.name }}
          path: screenshot_${{ matrix.service.name }}.png

      - name: Clean up Docker resources
        if: always()
        run: |
          docker system prune -af
          docker builder prune -af
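
For reference, the Deploy to Host step amounts to running something like the following by hand, shown here for one matrix entry with an illustrative SSH target (`deploy@swarm-host` is a placeholder, not a value from the workflow):

```sh
git_version=$(git rev-parse --short HEAD)
ssh -o StrictHostKeyChecking=no deploy@swarm-host \
  "docker service update --image cdaprod/app-frontend:${git_version} app-frontend_service"
```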
243 changes: 243 additions & 0 deletions docs/00-CLUSTERING-MINIO.md
@@ -0,0 +1,243 @@
To optimize the Docker Swarm setup for storage management, we combine your current `docker-compose.yaml` with the recommended distributed MinIO configuration, and ensure that storage-intensive services make effective use of the NVMe SSD or expanded storage.

### Combined and Optimized `docker-compose.yaml`

#### Key Changes:
1. **MinIO Configuration**: Run MinIO in distributed mode across multiple nodes.
2. **Storage Optimization**: Ensure data volumes are placed on nodes with the most storage capacity (e.g., NVMe SSD).
3. **Service Placement**: Use Docker placement constraints to control where services run, leveraging nodes with additional storage.

Here’s the optimized `docker-compose.yaml`:

```yaml
version: '3.8'

services:
  web:
    image: cdaprod/cda-minio-control:latest
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      MINIO_ENDPOINT: minio1:9000
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    ports:
      - "8000:8000"
    networks:
      - app_network
    depends_on:
      - weaviate
      - minio1
    deploy:
      replicas: 3
      update_config:
        parallelism: 2
        delay: 10s
      restart_policy:
        condition: on-failure

  minio1:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server http://minio{1...4}/data
    volumes:
      - minio_data1:/data
    ports:
      - "9000:9000"
      - "9001:9001"
    networks:
      - app_network
      - minio_net
    deploy:
      placement:
        constraints:
          - node.labels.storage == nvme

  minio2:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server http://minio{1...4}/data
    volumes:
      - minio_data2:/data
    networks:
      - app_network
      - minio_net
    deploy:
      placement:
        constraints:
          - node.labels.storage == expanded

  minio3:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server http://minio{1...4}/data
    volumes:
      - minio_data3:/data
    networks:
      - app_network
      - minio_net
    deploy:
      placement:
        constraints:
          - node.labels.storage == default

  minio4:
    image: minio/minio:latest
    environment:
      MINIO_ROOT_USER: ${MINIO_ROOT_USER}
      MINIO_ROOT_PASSWORD: ${MINIO_ROOT_PASSWORD}
    command: server http://minio{1...4}/data
    volumes:
      - minio_data4:/data
    networks:
      - app_network
      - minio_net
    deploy:
      placement:
        constraints:
          - node.labels.storage == default

  weaviate:
    image: cdaprod/cda-weaviate:latest
    environment:
      OPENAI_API_KEY: ${OPENAI_API_KEY}
      AUTHENTICATION_ANONYMOUS_ACCESS_ENABLED: 'true'
      PERSISTENCE_DATA_PATH: '/var/lib/weaviate'
      QUERY_DEFAULTS_LIMIT: 25
      DEFAULT_VECTORIZER_MODULE: 'text2vec-openai'
      ENABLE_MODULES: 'backup-s3, text2vec-openai'
      BACKUP_S3_BUCKET: 'weaviate-backups'
      BACKUP_S3_ENDPOINT: 'minio1:9000'
      BACKUP_S3_ACCESS_KEY_ID: ${MINIO_ROOT_USER}
      BACKUP_S3_SECRET_ACCESS_KEY: ${MINIO_ROOT_PASSWORD}
      BACKUP_S3_USE_SSL: 'false'
      CLUSTER_HOSTNAME: 'node1'
    volumes:
      - weaviate_data:/var/lib/weaviate
    networks:
      - app_network
    depends_on:
      - minio1

  nginx:
    image: cdaprod/cda-nginx:latest
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - ./nginx/nginx.conf:/etc/nginx/nginx.conf
    networks:
      - app_network

  jupyterlab:
    build:
      context: ./jupyter
    image: cdaprod/cda-jupyterlab:latest
    ports:
      - "8888:8888"
    volumes:
      - ../usb/001/002:/dev/bus/usb/001/002
      - jupyter_data:/home/jovyan/work
    networks:
      - app_network
    environment:
      - JUPYTER_ENABLE_LAB=yes
    privileged: true
    deploy:
      replicas: 1
      restart_policy:
        condition: on-failure
    command: ["jupyter", "lab", "--ip=0.0.0.0", "--allow-root", "--NotebookApp.token=''", "--NotebookApp.password=''"]

  tailscale:
    image: tailscale/tailscale
    container_name: tailscale
    cap_add:
      - NET_ADMIN
      - SYS_MODULE
    security_opt:
      - apparmor:unconfined
    volumes:
      - /dev/net/tun:/dev/net/tun
      - tailscale_data:/var/lib/tailscale
    environment:
      - TS_AUTH_KEY=${TS_AUTH_KEY}
    networks:
      - app_network
    command: tailscaled

networks:
  app_network:
    driver: overlay
  minio_net:
    driver: overlay

volumes:
  tailscale_data:
    driver: local
  minio_data1:
    driver: local
    driver_opts:
      type: none
      device: /mnt/nvme/data
      o: bind
  minio_data2:
    driver: local
    driver_opts:
      type: none
      device: /mnt/expanded/data
      o: bind
  minio_data3:
    driver: local
  minio_data4:
    driver: local
  weaviate_data:
    driver: local
  jupyter_data:
    driver: local
    driver_opts:
      type: none
      device: /opt/jupyter_data
      o: bind

secrets:
  minio_root_user:
    external: true
  minio_root_password:
    external: true
```
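
To bring the stack up, deploy the file from a Swarm manager node; a minimal sketch (the stack name `cda` is an assumption):

```sh
docker stack deploy -c docker-compose.yaml cda

# Confirm that every replica found a node satisfying its placement constraint:
docker stack ps cda
```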
### Explanation and Optimization:
1. **MinIO Distributed Setup**:
   - The MinIO service runs in distributed mode across four nodes (`minio1`, `minio2`, `minio3`, `minio4`).
   - Each MinIO instance carries placement constraints so it lands on the appropriate node; a health check for the pool is sketched after this list.

2. **Storage Placement**:
   - `minio1` is configured to use the NVMe storage on the leader node.
   - `minio2` is set to use the expanded storage on another Raspberry Pi 5 node.
   - `minio3` and `minio4` use default local storage on the Raspberry Pi 4 nodes.

3. **Service Placement Constraints**:
   - Using `deploy.placement.constraints`, we control where each MinIO instance runs based on node labels.
   - Ensure you label your nodes accordingly:
     ```sh
     docker node update --label-add storage=nvme <nvme-node-id>
     docker node update --label-add storage=expanded <expanded-node-id>
     docker node update --label-add storage=default <default-node-id>
     ```
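   To confirm the labels took effect before deploying, you can inspect the nodes (node IDs come from `docker node ls`):
   ```sh
   docker node inspect -f '{{ .Description.Hostname }}: {{ .Spec.Labels }}' $(docker node ls -q)
   ```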

4. **Volumes**:
   - Dedicated volumes bind-mount the NVMe storage (`/mnt/nvme/data`) and the expanded storage (`/mnt/expanded/data`); those directories must already exist on the target nodes (see the sketch after this list).

5. **Networks**:
   - `app_network` is used for general service communication.
   - `minio_net` is specifically for MinIO services to communicate in a distributed setup.
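
Two one-time preparations follow from points 2 and 4 above; a sketch, assuming the bind-mount paths from the volume definitions and that the `mc` client can reach MinIO through the published port (`<manager-ip>` is a placeholder):

```sh
# Create the directories the bind-mounted volumes expect, on the
# respective nodes, before deploying:
sudo mkdir -p /mnt/nvme/data      # on the nvme-labeled node
sudo mkdir -p /mnt/expanded/data  # on the expanded-labeled node

# After deploying, verify that all four MinIO servers joined the
# distributed pool (the alias name "swarm" is arbitrary):
mc alias set swarm http://<manager-ip>:9000 "$MINIO_ROOT_USER" "$MINIO_ROOT_PASSWORD"
mc admin info swarm
```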

### Summary
This optimized `docker-compose.yaml` setup ensures efficient storage management across your Docker Swarm, leveraging the NVMe SSD and any additional storage nodes you have. This configuration also maintains your current services and adds robustness to your swarm infrastructure.