[wip] Add new chart and docs #61
Closed
This can be tested with a local helm repo.
Launch the local helm repo
```bash
mkdir build/helm/helm-repo
cd build/helm/helm-repo
python -m http.server 8000
```

Create the charts in the repo
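A minimal sketch of one way to populate and register the repo, run from a second terminal at the repository root; the chart source paths under `deploy/` and the repo alias `firecrestv2` are assumptions (the alias is chosen to match the `firecrestv2/...` chart references used below):

```bash
# Package the charts into the directory served by http.server
# (chart source paths are assumptions; point these at the real chart directories)
helm package deploy/firecrestv2 deploy/secrets -d build/helm/helm-repo

# Generate the repo index so helm clients can discover the packaged charts
helm repo index build/helm/helm-repo --url http://localhost:8000

# Register the local repo with the helm client and refresh its cache
helm repo add firecrestv2 http://localhost:8000
helm repo update
```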
When this is ready, you can run:

```bash
helm search repo firecrestv2
```

Deploy the demo environment
To deploy the demo environment, begin by installing the subcharts that provide the required supporting services:
- Keycloak: for authentication and identity management.
- MinIO: as an S3-compatible storage backend.
- Slurm: the Slurm cluster that FirecREST will target.
The `firecrestv2/secrets` chart is used to deploy a Kubernetes Secret containing user SSH keys. These keys are mounted into the pod where FirecREST runs, allowing it to authenticate with the underlying HPC system.
Make sure all the deployments are done within the same Kubernetes namespace (in this example, `fcv2-demo`). If it doesn't exist yet, it will be created with the first command.
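As a sketch of how the namespace gets created, the secrets chart can be installed first with `--create-namespace`; the release name `fc-secrets` and the values file `values-secrets.yaml` (holding the user SSH keys) are assumptions, while the chart reference `firecrestv2/secrets` comes from the text above:

```bash
# The first install creates the fcv2-demo namespace if it does not exist yet
helm install fc-secrets firecrestv2/secrets \
  --namespace fcv2-demo \
  --create-namespace \
  -f values-secrets.yaml   # assumed values file containing the user SSH keys
```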
The values files are:
- `values-keycloak.yaml`
- `values-minio.yaml`

Once these components are deployed, FirecREST itself can be installed.
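A hedged sketch of what those installs might look like; the release names and chart references are assumptions (check `helm search repo firecrestv2` for the actual chart names), and only the two values files listed above come from the text:

```bash
# Supporting services in the fcv2-demo namespace
helm install keycloak firecrestv2/keycloak -n fcv2-demo -f values-keycloak.yaml
helm install minio    firecrestv2/minio    -n fcv2-demo -f values-minio.yaml
helm install slurm    firecrestv2/slurm    -n fcv2-demo

# FirecREST itself (chart name and values file are assumptions)
helm install firecrest firecrestv2/firecrestv2 -n fcv2-demo -f values-firecrest.yaml
```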
At this point, all the core services should be up and running.
You can verify the deployment by checking the status of the pods in the namespace:
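For example, using the `fcv2-demo` namespace from above (pod names will depend on the release names chosen during the installs):

```bash
kubectl get pods -n fcv2-demo
```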