There are some properties that have been removed and are now considered incompatible. Remove them from your resource:
- `.status.ownedResourceRefs`
- `.status.observedGeneration`
The following new properties have been added; you may want to add them to your CMCC spec/status:
- `.spec.version` -> Denotes a "version" of your whole CMCC instance and allows releasing with an "Upgrade-Path"
- `.spec.with.jsonLogging` -> Activate/configure structured JSON logs
- `.spec.with.solrBasicAuthEnabled` -> Activates Solr credential usage (cannot yet be automatically generated by the operator)
- `.spec.with.defaultAffinityRules` -> Sets some simple host-scoped affinity rules (you may prefer to set up more complex affinities yourself)
- `.spec.with.delivery.minHeadless`/`.spec.with.delivery.maxHeadless` -> Analogous to minCae/maxCae (see Scaling)
- `.spec.with.responseTimeout` -> Timeouts for ingress resources
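Taken together, the new properties might look like this in a CMCC spec. This is a minimal sketch; the values and value types shown are assumptions for illustration, not documented defaults:

```yaml
spec:
  version: "1"                    # your own version tag for the whole CMCC instance
  with:
    jsonLogging: true             # structured JSON logs
    solrBasicAuthEnabled: true    # Solr credential usage
    defaultAffinityRules: true    # simple host-scoped affinity rules
    responseTimeout: 120          # ingress timeout; unit and shape are assumptions
    delivery:
      minHeadless: 1              # analogous to minCae/maxCae
      maxHeadless: 3
```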
Note: Please note the new CRD handling (Helm 3 approach); see "Breaking changes" in the Introduction.
Note: If you want to make use of affinity rules and you are using "caches as PVC", you need to make sure that PVCs
are created with the correct StorageClass: its volumeBindingMode MUST be WaitForFirstConsumer.
Otherwise pods will be scheduled next to their (randomly located) PVCs, disregarding any affinity rules.
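For reference, a StorageClass with the required binding mode could look like this. This is a sketch; the name and provisioner are placeholders and depend on your cluster:

```yaml
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: cache-local                         # hypothetical name
provisioner: kubernetes.io/no-provisioner   # example; use your cluster's provisioner
volumeBindingMode: WaitForFirstConsumer     # delays PVC binding until the pod is scheduled
```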
This is the simplest approach:

- Uninstall all your CMCC objects on your cluster:

  helm uninstall --namespace <namespace> <cmcc-release-name>

- Uninstall the operator:

  helm uninstall --namespace cmcc-operator cmcc-operator

- Remove the CRD (v1) from your cluster:

  kubectl get crd | grep coremediacontentclouds | cut -d ' ' -f 1 | xargs kubectl delete crd

- Install the latest operator:

  helm upgrade --install --namespace cmcc-operator cmcc-operator cmcc-operator/cmcc-operator

- Install your CMCCs again:

  helm install --namespace <namespace> <cmcc-release-name>
This approach allows you to keep all your CMCC deployments and convert them to v2.
Please note that this approach carries some risk: although the deployments are kept, the migration will probably update all existing pods with the new version reference. In the worst case your CMCC deployment may be removed from the cluster completely, or you may get stuck mid-migration with a CMCC object that contains properties incompatible with v2, which you then have to find and fix yourself before you can proceed.
- Shut down (scale to 0) your operator
- Replace CRD with a "Migration" CRD that contains both version v1 and v2
- Change the "stored versions" status property of all your CMCC objects to "v2"
- Trigger conversion to v2: Make changes to your CMCC spec
- Deploy and start latest operator version
- Replace CRD with a latest v2 CRD
- Fix owner references. See below.
Here are some exemplary CLI commands (make sure you are in the directory of this file or that you can reference the necessary CRD YAML files):
kubectl --namespace cmcc-operator scale --replicas=0 deployment/cmcc-operator
kubectl apply -f ./cmcc-crd-v2+v1.yaml
kubectl patch crds coremediacontentclouds.cmcc.tsystemsmms.com --subresource=status --type=merge -p '{"status":{"storedVersions":["v2"]}}'
kubectl --namespace <namespace> edit cmcc <release-name>
# make some simple, irrelevant change, e.g. remove/change 'creationTimestamp', and save
helm upgrade --install --namespace cmcc-operator cmcc-operator cmcc-operator/cmcc-operator
kubectl apply -f ../k8s/cmcc-crd.yaml
# apply v2 CRD yaml
These steps ensure that all owner references and other related data refer to v2 and not v1 anymore. For example, Helm keeps a reference to the previous release with labels that include the CRD version, and owner references from the CMCC object to dependent objects also contain the version. After migrating the CMCC object you need to clean up these references as well.
In order to list and fix all owner references, install kubectl-check-ownerreferences on your local machine. It allows you to find "invalid" owner references.
In order to fix the owner references, go into the directory of this README (or make sure the file ./fix-owners.sh is in your current PATH) and run:
kubectl-check-ownerreferences | grep -v GROUP | tr -s ' ' | cut -d ' ' -f 2,3,4 | xargs -n3 sh -c './fix-owners.sh $2 $1 $3' sh
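To preview what this pipeline will pass to ./fix-owners.sh, you can dry-run it against mock output. The column layout GROUP KIND NAME NAMESPACE is an assumption about the kubectl-check-ownerreferences output format, and echo stands in for the actual script call:

```shell
# Dry-run: mock one output line and print the resulting fix-owners.sh invocation
# instead of executing it. tr squeezes repeated spaces so cut's fields line up.
printf 'GROUP KIND NAME NAMESPACE\napps Deployment cae-live default\n' \
  | grep -v GROUP \
  | tr -s ' ' \
  | cut -d ' ' -f 2,3,4 \
  | xargs -n3 sh -c 'echo ./fix-owners.sh $2 $1 $3' sh
# prints: ./fix-owners.sh cae-live Deployment default
```

Note that the script receives the arguments reordered (`$2 $1 $3`), i.e. NAME before KIND, followed by NAMESPACE.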
In order to get rid of v1 references in the Helm metadata, you need to do another Helm deployment of your CMCC as v2:

- kubectl apply -f cmcc-crd-v2+v1.yaml
- Redeploy your CMCC (helm upgrade --install ...)
- kubectl apply -f cmcc-crd-v2.yaml
- Redeploy again (helm upgrade --install ...)
This way even the previous "rollback" revision is a v2.