Description
Hi guys, I hope you're doing great.

I'm deploying the MinIO Operator with a Tenant using Terraform in my GKE cluster, like this:
```hcl
resource "helm_release" "minio_operator" {
  provider         = helm.minio
  name             = "minio-operator"
  repository       = "https://operator.min.io"
  chart            = "operator"
  version          = "7.1.1"
  create_namespace = true
  namespace        = "minio-operator"

  values = [
    file("${path.module}/values/minio-operator.yaml")
  ]
}

resource "helm_release" "minio_tenant" {
  provider   = helm.minio
  name       = "minio-tenant"
  repository = "https://operator.min.io"
  chart      = "tenant"
  version    = "7.1.1"
  namespace  = kubernetes_namespace.minio.metadata[0].name

  values = [
    templatefile("${path.module}/values/minio-tenant.tftpl", {
      # We pass the dynamic values from our Vault resources into the template.
      role_id   = vault_approle_auth_backend_role.kes_approle.role_id
      secret_id = vault_approle_auth_backend_role_secret_id.kes_secret_id.secret_id
    })
  ]
}
```
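For context, the two Vault resources referenced above were created roughly like this (a sketch only: the backend path, role name, and policy name are assumptions; only the resource addresses `kes_approle` and `kes_secret_id` come from my code):

```hcl
resource "vault_auth_backend" "approle" {
  type = "approle"
}

resource "vault_approle_auth_backend_role" "kes_approle" {
  backend        = vault_auth_backend.approle.path
  role_name      = "kes"
  token_policies = ["kes-policy"] # policy granting KES access to the KV engine
}

# A secret_id generated for that role; its value is interpolated into the tenant values.
resource "vault_approle_auth_backend_role_secret_id" "kes_secret_id" {
  backend   = vault_auth_backend.approle.path
  role_name = vault_approle_auth_backend_role.kes_approle.role_name
}
```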
This is the Tenant values file, with KES in it:
```yaml
tenant:
  name: my-tenant-minio
  image:
    repository: quay.io/minio/minio
    tag: RELEASE.2025-06-13T11-33-47Z
    pullPolicy: IfNotPresent
  configSecret:
    name: minio-secret
    existingSecret: true
  # Expand https://min.io/docs/minio/kubernetes/gke/operations/install-deploy-manage/expand-minio-tenant.html#minio-k8s-expand-minio-tenant
  pools:
    - servers: 4 # minimum 4 servers required for erasure coding
      name: pool-0
      volumesPerServer: 4
      size: 10Gi
      resources:
        requests:
          cpu: 500m
          memory: 1Gi
        limits:
          cpu: 1
          memory: 2Gi
  kes:
    image:
      repository: quay.io/minio/kes
      tag: 2025-03-12T09-35-18Z
      pullPolicy: IfNotPresent
    configuration: |-
      address: 0.0.0.0:7373
      admin:
        identity: disabled
      tls:
        # The Operator automatically generates certs and mounts them here.
        key: /tmp/kes/server.key
        cert: /tmp/kes/server.crt
      log:
        error: on
        audit: on
      policy:
        minio:
          allow:
            - /v1/key/create/*
            - /v1/key/generate/*
            - /v1/key/decrypt/*
            - /v1/key/bulk/decrypt
            - /v1/key/list/*
            - /v1/status
            - /v1/metrics
      keystore:
        vault:
          endpoint: "https://vault.vault.svc.cluster.local:8200"
          prefix: "minio"
          engine: kv-v2
          path: secret
          approle:
            # Template placeholders
            id: "${role_id}"
            secret: "${secret_id}"
            retry: 15s
    replicas: 2
    env:
      - name: VAULT_SKIP_VERIFY
        value: "true"
    keyName: "minio-master-key"
    resources:
      requests:
        cpu: 100m
        memory: 128Mi
      limits:
        cpu: 200m
        memory: 256Mi
  mountPath: /export
  subPath: /data
  metrics:
    enabled: true
    port: 9000
    protocol: http
  env:
    - name: MINIO_ILM_SCANNER_INTERVAL
      value: "1h"

ingress:
  api:
    enabled: false
    ingressClassName: ""
    labels: {}
    annotations: {}
    tls: []
    host: minio.local
    path: /
    pathType: Prefix
  console:
    enabled: false
    ingressClassName: ""
    labels: {}
    annotations: {}
    tls: []
    host: minio-console.local
    path: /
    pathType: Prefix
```
With this setup, everything deploys cleanly: KES is up and the pool is up. However, the console login does not work, regardless of the values I use for the config secret or whether I let the Tenant create the secret for me. I've checked /tmp/minio/config.env inside the containers, and the values are there, including the ones needed for KES.
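For reference, a config.env wired for KES generally contains entries along these lines. The variable names are the ones documented for the MinIO server; the endpoint, file paths, and credential values here are illustrative, not copied from my cluster:

```shell
export MINIO_ROOT_USER="minio"
export MINIO_ROOT_PASSWORD="CHANGE-ME"
# KES connection details; the cert/key paths are wherever the Operator mounts
# the client certificate (illustrative paths below).
export MINIO_KMS_KES_ENDPOINT="https://my-tenant-minio-kes-hl-svc.minio.svc.cluster.local:7373"
export MINIO_KMS_KES_KEY_FILE="/tmp/kes/client.key"
export MINIO_KMS_KES_CERT_FILE="/tmp/kes/client.crt"
export MINIO_KMS_KES_CAPATH="/tmp/kes/CAs/kes.crt"
export MINIO_KMS_KES_KEY_NAME="minio-master-key"
```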
Now, if I comment out the KES section in the Tenant values without making any other changes, the login works without issues. I've compared the StatefulSets and the Pods; the only difference is the secret holding the minio-tls cert for KES, while the rest is exactly the same.

Any advice would be much appreciated. Thank you.
Best regards,