diff --git a/antora.yml b/antora.yml
index c94abfc..f1b3a13 100644
--- a/antora.yml
+++ b/antora.yml
@@ -1,6 +1,6 @@
 name: operator
 title: Kubernetes Operator
-version: '2.8'
+version: '2.9'
 prerelease: false
 start_page: ROOT:overview.adoc
 nav:
diff --git a/drawio-charts/cb-server-compat.drawio b/drawio-charts/cb-server-compat.drawio
index fe05111..a5eb331 100644
--- a/drawio-charts/cb-server-compat.drawio
+++ b/drawio-charts/cb-server-compat.drawio
@@ -1,38 +1,44 @@
(Couchbase Server compatibility chart updated for this release; draw.io XML markup not recoverable and omitted.)
diff --git a/drawio-charts/k8s-compat.drawio b/drawio-charts/k8s-compat.drawio
index 1de99b7..2fb14f6 100644
--- a/drawio-charts/k8s-compat.drawio
+++ b/drawio-charts/k8s-compat.drawio
@@ -4,128 +4,95 @@
(Kubernetes compatibility chart updated for this release; draw.io XML markup not recoverable and omitted.)
diff --git a/modules/ROOT/assets/images/compatibility-kubernetes.png b/modules/ROOT/assets/images/compatibility-kubernetes.png
index 0ebd149..1b16df2 100644
Binary files a/modules/ROOT/assets/images/compatibility-kubernetes.png and b/modules/ROOT/assets/images/compatibility-kubernetes.png differ
diff --git a/modules/ROOT/assets/images/compatibility-server.png b/modules/ROOT/assets/images/compatibility-server.png
index 497f59f..0429a65 100644
Binary files a/modules/ROOT/assets/images/compatibility-server.png and b/modules/ROOT/assets/images/compatibility-server.png differ
diff --git a/modules/ROOT/nav.adoc b/modules/ROOT/nav.adoc
index 10fa530..d5f0122 100644
--- a/modules/ROOT/nav.adoc
+++ b/modules/ROOT/nav.adoc
@@ -37,6 +37,8 @@
 *** xref:concept-data-save-restore.adoc[Data Topology Save, Restore and Synchronization]
 *** xref:howto-guide-save-restore.adoc[How-to Guide: Data Topology Save and Restore]
 *** xref:howto-guide-data-topology-sync.adoc[How-to Guide: Data Topology Synchronization]
+** Data Encryption
+*** xref:concept-encryption-at-rest.adoc[Encryption at Rest]
 ** Hibernation
 *** xref:concept-hibernation.adoc[Couchbase Cluster Hibernation]
 ** Logging
@@ -89,6 +91,7 @@
 *** xref:howto-manage-couchbase-logging.adoc[Manage Couchbase Logging]
 *** xref:howto-couchbase-log-forwarding.adoc[Configure Log Forwarding]
 *** xref:howto-non-root-install.adoc[Configure Non-Root Installs]
+*** xref:howto-encryption-at-rest.adoc[Configure Encryption at Rest]
 ** Connect
 *** xref:howto-ui.adoc[Access the Couchbase User Interface]
 *** xref:howto-client-sdks.adoc[Configure Client SDKs]
@@ -132,9 +135,12 @@ include::partial$autogen-reference.adoc[]
 ** xref:tutorial-autoscale-query.adoc[Autoscaling Couchbase Query Service]
 * Backup
 ** xref:tutorial-velero-backup.adoc[Backup with VMware Velero]
+* Encryption at Rest
+** xref:tutorial-encryption-at-rest.adoc[Encryption at Rest]
 * Logging
 ** xref:tutorial-couchbase-log-forwarding.adoc[]
 * Monitoring
+** xref:tutorial-mirwatchdog.adoc[Monitor for Manual Intervention Scenarios]
 ** xref:tutorial-prometheus.adoc[Quick Start with Prometheus Monitoring]
 * Networking
 ** xref:tutorial-remote-dns.adoc[Inter-Kubernetes Networking with Forwarded DNS]
@@ -142,6 +148,8 @@ include::partial$autogen-reference.adoc[]
 ** xref:tutorial-kubernetes-network-policy.adoc[Kubernetes Network Policies Using Deny-All Default]
 * Persistent Volumes
 ** xref:tutorial-volume-expansion.adoc[Persistent Volume Expansion]
+* Scheduling
+** xref:tutorial-avx2-scheduling.adoc[AVX2-Aware Scheduling]
 * Sync Gateway
 ** 
xref:tutorial-sync-gateway.adoc[Connecting Sync-Gateway to a Couchbase Cluster] ** xref:tutorial-sync-gateway-clients.adoc[Exposing Sync-Gateway to Couchbase Lite Clients] diff --git a/modules/ROOT/pages/concept-encryption-at-rest.adoc b/modules/ROOT/pages/concept-encryption-at-rest.adoc new file mode 100644 index 0000000..730d504 --- /dev/null +++ b/modules/ROOT/pages/concept-encryption-at-rest.adoc @@ -0,0 +1,169 @@ += Encryption At Rest +:description: Understand encryption at rest in Couchbase Server and how to configure it using the Autonomous Operator. + +[abstract] +{description} + +== Overview + +Encryption at rest is a security feature introduced in Couchbase Server 8.0.0 that protects your data by encrypting it on disk. When enabled, sensitive data stored on the Couchbase nodes is encrypted, ensuring that even if the underlying storage is compromised, the data remains secure. + +== What Data Can Be Encrypted? + +Encryption at rest supports encrypting multiple types of data within your Couchbase deployment: + +* *Data in buckets* - The actual documents and data stored in your buckets +* *Cluster configuration* - Sensitive cluster settings and configurations +* *Logs* - Server log files (note: encrypting logs will break fluent-bit log streaming) +* *Audit logs* - Security audit trail data + +== Key Types + +Couchbase offers flexibility in how encryption keys are managed through three different key types: + +=== Couchbase Server Managed Keys + +Also called AutoGenerated keys, these are the simplest option. Couchbase Server automatically generates and manages these keys without requiring external services. This is ideal for: + +* Environments without external key management infrastructure +* Use cases where key management can be handled within Couchbase + +=== AWS KMS Keys + +AWS Key Management Service (KMS) integration allows you to use AWS-managed encryption keys. This is recommended when: + +* Running Couchbase in AWS (EKS or EC2) +* Your organization uses AWS KMS for centralized key management +* You need compliance with AWS security standards + +=== KMIP Keys + +Key Management Interoperability Protocol (KMIP) is an industry standard that works with enterprise key management systems from vendors like Thales, IBM, or HashiCorp Vault. Choose KMIP when: + +* You have an existing enterprise key management system +* You need vendor-neutral key management +* Compliance requires external key management + +== Key Concepts + +=== Key Encryption Keys (KEK) and Data Encryption Keys (DEK) + +Couchbase uses a two-tier key hierarchy: + +* *Key Encryption Keys (KEK)* - The master keys you define through `CouchbaseEncryptionKey` resources. These encrypt other keys or data. +* *Data Encryption Keys (DEK)* - Temporary keys generated by Couchbase to encrypt actual data. These are encrypted by KEKs. + +=== Key Rotation + +Key rotation is an important security practice. With encryption at rest: + +* KEK rotation can be scheduled through the `CouchbaseEncryptionKey` resource +* DEK rotation happens automatically based on the `rotationInterval` setting +* When a key rotates, new data is encrypted with the new key while old data remains accessible + +=== Key Usage Restrictions + +You can restrict what each key encrypts by setting usage parameters: + +* `configuration` - Cluster configuration data +* `key` - Other encryption keys +* `log` - Log files +* `audit` - Audit logs +* `allBuckets` - All bucket data + +By default, keys can encrypt anything. 
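For illustration only, a minimal sketch of a key restricted to log and audit data might look like the following; the `usages` field name is an assumption made for this sketch, so check xref:tutorial-encryption-at-rest.adoc[] for the authoritative schema.

[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseEncryptionKey
metadata:
  name: log-audit-key
spec:
  keyType: AutoGenerated
  # Hypothetical usage restriction limiting this key to log and audit data;
  # the field name is illustrative and not confirmed by this page.
  usages:
    - log
    - audit
----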
Restricting usage improves security through separation of concerns. + +== How to Enable Encryption At Rest + +Enabling encryption at rest with the Autonomous Operator involves three main steps: + +=== Step 1: Enable Encryption Management + +First, enable encryption at rest management on your `CouchbaseCluster` resource: + +[source,yaml] +---- +apiVersion: couchbase.com/v2 +kind: CouchbaseCluster +metadata: + name: my-cluster +spec: + security: + encryptionAtRest: + managed: true +---- + +=== Step 2: Create Encryption Keys + +Create one or more `CouchbaseEncryptionKey` resources. Here's a simple example with an auto-generated key: + +[source,yaml] +---- +apiVersion: couchbase.com/v2 +kind: CouchbaseEncryptionKey +metadata: + name: my-key +spec: + keyType: AutoGenerated +---- + +For AWS KMS or KMIP keys, additional configuration is required (see xref:tutorial-encryption-at-rest.adoc[]). + +=== Step 3: Apply Encryption to Data + +Configure which data should be encrypted on your cluster or buckets: + +[source,yaml] +---- +apiVersion: couchbase.com/v2 +kind: CouchbaseCluster +metadata: + name: my-cluster +spec: + security: + encryptionAtRest: + managed: true + configuration: + enabled: true + keyName: "my-key" + audit: + enabled: true + keyName: "my-key" +---- + +For bucket-level encryption: + +[source,yaml] +---- +apiVersion: couchbase.com/v2 +kind: CouchbaseBucket +metadata: + name: secure-bucket +spec: + name: secure-bucket + memoryQuota: 512Mi + encryptionAtRest: + keyName: "my-key" +---- + +== Security Considerations + +When implementing encryption at rest: + +* *Key Protection* - Consider encrypting your data keys with a dedicated Key Encryption Key (KEK) rather than using the cluster master password +* *Key Rotation* - Implement regular key rotation schedules appropriate for your security requirements +* *External Key Management* - For sensitive environments, consider using AWS KMS or KMIP instead of auto-generated keys +* *Log Encryption Trade-offs* - Be aware that encrypting logs prevents log streaming to monitoring systems + +== Next Steps + +For detailed configuration instructions and advanced features, see: + +* xref:tutorial-encryption-at-rest.adoc[How to Configure Encryption At Rest] - Complete configuration guide with all options + +== Related Information + +* xref:concept-security.adoc[Security Concepts] +* xref:howto-manage-buckets.adoc[Managing Buckets] +* xref:howto-manage-cluster.adoc[Managing Clusters] + diff --git a/modules/ROOT/pages/concept-platform-certification.adoc b/modules/ROOT/pages/concept-platform-certification.adoc index 6be074f..718bc75 100644 --- a/modules/ROOT/pages/concept-platform-certification.adoc +++ b/modules/ROOT/pages/concept-platform-certification.adoc @@ -127,9 +127,9 @@ To submit the self-certification results to Couchbase, follow these steps: . Capture the Kubernetes platform's version information and other platform-specific components such as storage and networking. -. To upload the results to Couchbase, you will need a JIRA account for Couchbase; you can request a JIRA account here: https://issues.couchbase.com/secure/ContactAdministrators!default.jspa. +. If you are an existing customer of Couchbase, create a support ticket for instructions on how to submit your certification archive. -. Create a new JIRA ticket, project - Couchbase Kubernetes (K8S), and Summary - [Operator Self-Certification Lifecycle]. +. 
If you are a new customer of Couchbase, contact your Couchbase Account Team or use our general https://www.couchbase.com/contact/[contact page].

== Platform Requirements
diff --git a/modules/ROOT/pages/concept-upgrade.adoc b/modules/ROOT/pages/concept-upgrade.adoc
index 2358c4d..5f880e9 100644
--- a/modules/ROOT/pages/concept-upgrade.adoc
+++ b/modules/ROOT/pages/concept-upgrade.adoc
@@ -7,340 +7,148 @@ This includes upgrading the Couchbase Server version and also related Kubernetes

== Upgrading Couchbase Server

The Couchbase Server version can be xref:howto-couchbase-upgrade.adoc[upgraded] by the Operator.
+To upgrade a Couchbase Server cluster, change the `couchbasecluster.spec.image` field in the CouchbaseCluster manifest to the version that you want to upgrade to.
+The Operator then upgrades the pods that run Couchbase Server in the Couchbase cluster.
+If necessary, you can roll back the cluster to the previous version by reverting the change made to the image field.
+The Operator performs the rollback by reversing the upgrade process and applying the configured upgrade controls to the pods running the earlier cluster version.

-Upgrades may be performed using one of a number of different strategies as specified by xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-upgradestrategy[`couchbaseclusters.spec.upgradeStrategy`] in the `CouchbaseCluster` resource:
+For more granular control, use `spec.upgrade` in the CouchbaseCluster manifest to manage the upgrade process.

-* InPlaceUpgrade performs an in-place upgrade on one, or more, pods at a time
-* Rolling upgrades upgrade one, or more, pods at a time
-* Immediate upgrades upgrade all pods at the same time

+[#spec-upgrade-example]
+=== spec.upgrade Example

-For rolling and immediate upgrades, the process is as follows:
-
-* One or more candidate pods are selected
-* New pods are created for each of the candidates with the same Couchbase configuration as the existing ones
-* Data is rebalanced from the old pods to the new ones
-* The candidate pods are deleted
-
-For in place upgrades, the process is as follows:
-
-* One or more candidate pods are selected
-* Each candidate is failed over and updated with the new version
-* Each PVC associated with the candidate is updated with the new version
-* Operator performs an in place upgrade on the candidate
-
-When using rolling upgrades, performing the operation one pod at a time, risk is limited in the event of an issue, and can be rolled back.
-A rolling upgrade requires the least network and compute overhead, so will affect client operation less.
-
-When using immediate upgrades, there is a greater risk, and requires greater resource during the upgrade, however the operation itself is significantly faster.
-Immediate upgrades may have an undesired effect on client performance as all pods in the cluster are undergoing upgrade at the same time.
-
-When using in-place upgrade, the pods and PVCs are updated to use the new version. This is quicker than a rolling upgrade and will retain the same PVCs (data retained).
-In-place upgrade cannot be used in conjunction with immediate upgrades. The validator will fail the request if immediate upgrade strategy and in-place upgrade process are both specified.
-The same pod names and networking settings will be retained when the pod is restarted by the operator.
-
-You can balance time against risk by tailoring rolling updates with the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-rollingupgrade[`couchbaseclusters.spec.rollingUpgrade`] configuration parameter.
-This allows rolling upgrades to upgrade 2 pods at a time, or 20% of the cluster at a time, for example.
-
-[NOTE]
-====
-In-place upgrades are a volatile process that can lead to interruptions in service. It should not be used when there are less than 2 data nodes defined or data loss
-may occur.
-====
-
-=== How to Upgrade a cluster
-
-To upgrade a cluster, you need to modify the CouchbaseCluster resource to specify the new version of Couchbase Server that you want to upgrade to.
-This can be done by updating the `spec.image` field in the CouchbaseCluster manifest to the new version.
-
-For example, if you were upgrading from version 7.2.4 to 7.6.0 then the CouchbaseCluster resource needs to be updated as follows:
+The following is a `spec.upgrade` example in the CouchbaseCluster manifest for an upgrade process:

[source,yaml]
----
-apiVersion: couchbase.com/v2
-kind: CouchbaseCluster
-metadata:
-  name: my-couchbase-cluster
-spec:
-  image: couchbase/server:7.6.0
+upgrade:
+  upgradeProcess: SwapRebalance # <.>
+  upgradeStrategy: RollingUpgrade # <.>
+  rollingUpgrade:
+    maxUpgradable: 1
+    maxUpgradablePercent: 100%
+  stabilizationPeriod: 10s # <.>
+  previousVersionPodCount: 0 # <.>
+  upgradeOrderType: Nodes # <.>
+  upgradeOrder:
+    - node-1
+    - node-2
----

-=== Couchbase Server Upgrade Constraints
-
-Not all upgrade paths are supported by Couchbase Server.
-The Operator enforces the following constraints on Couchbase Server version upgrades.
+The callouts in the example are explained as follows:

-* Upgrades must be an upgrade, downgrades are not supported due to potential protocol incompatibility, for example:
-** 5.5.0 to 5.5.3 is allowed
-** 5.5.3 to 5.5.0 is not allowed
-* Upgrades must not cross multiple major version boundaries, for example:
-** 5.x.x to 6.x.x is allowed
-** 5.x.x to 7.x.x is not allowed
-* Couchbase Server versions cannot be changed during an upgrade

+<.> `upgradeProcess` supports `SwapRebalance` or `InPlaceUpgrade`, and defaults to `SwapRebalance` when you don't specify a value.
+The selected `upgradeStrategy` determines how `SwapRebalance` creates pods running the upgraded version.
+To use `InPlaceUpgrade`, set the strategy to `RollingUpgrade`.

-Refer to the Couchbase Server xref:server:install:upgrade.adoc[upgrade documentation] for more information about direct upgrade paths.

+** During a `SwapRebalance`, the Operator selects one or more candidate pods and creates new pods for each candidate.
+The Operator rebalances data from the existing pods to the new pods and then deletes the candidate pods.

-[NOTE]
-====
-Modifying the Couchbase Server version during an upgrade is permitted only if the end result is a roll-back to the previous version.
-====

+** During an `InPlaceUpgrade`, the Operator selects one or more candidate pods and fails them over.
+The Operator detaches the volumes from those pods, replaces them with pods running the new cluster version, and rebinds the existing volumes to the new pods.
+This process completes faster than a `SwapRebalance`, and the Operator retains the pod names and network settings for the updated pods.

-=== Rollback

+<.> `upgradeStrategy` determines how `SwapRebalance` creates new pods by using either `RollingUpgrade` or `ImmediateUpgrade`.
+To use `InPlaceUpgrade`, set the strategy to `RollingUpgrade`.
-The Couchbase Operator provides the capability to roll back while it is in progress. A rollback can only be performed if the upgrade is in progress and has not yet completed. Once a cluster has fully upgraded to the new version the cluster can no longer be downgraded to a previous version. A rollback will simply replace the nodes on the new version and replace them with nodes with the version they had before an upgrade was started.

+** `RollingUpgrade` upgrades a predetermined number of pods at a time, which limits risk and simplifies rollback.
+Because this strategy uses the least network and compute resources, it minimizes impact on client operations.

-==== How to Roll Back an Upgrade

-To initiate a rollback, you need to modify the CouchbaseCluster resource to specify the previous version of Couchbase Server that was running before the upgrade started. This can be done by updating the `spec.image` field in the CouchbaseCluster manifest to the previous version.

+** `ImmediateUpgrade` increases risk and resource utilization during the upgrade but completes the operation faster.
+Because the strategy upgrades all pods at the same time, it can adversely affect client performance.

-For example, if you were upgrading from version 7.2.4 to 7.6.0 and encountered issues, you can roll back to version 7.2.4 by updating the manifest as follows:

+<.> `stabilizationPeriod` lets you control how long the Operator waits between upgrade cycles and gives you time to check cluster health and service availability.

-[source,yaml]
-----
-apiVersion: couchbase.com/v2
-kind: CouchbaseCluster
-metadata:
-  name: my-couchbase-cluster
-spec:
-  image: couchbase/server:7.2.4
-----
-
-After applying this change, the operator will begin the rollback process.
-
-=== Controlled Upgrades
-
-The Operator provides the capability to control the upgrade process by upgrading specific server classes at a time, this gives the ability to control the upgrade process and ensure that the upgrade process is controlled. For example, it is possible to upgrade the data nodes first, then the query nodes, and finally the index nodes.
-
-Whilst performing a controlled upgrade the cluster will be in a mixed state with different versions of Couchbase Server running at the same time. This time should be kept to a minimum to avoid potential issues with the cluster.
-
-==== How to Perform a Controlled Upgrade
-To perform a controlled upgrade you need to first modify the CouchbaseCluster Image to specify the image that you want to upgrade to. This can be done by updating the `spec.image` field in the CouchbaseCluster manifest. You can then specify the server classes that you want to upgrade first by updating the `spec.servers.image` to the older image for the server classes you do not want to be upgraded first. You can explicitly update `spec.servers.image` for the server classes that you want to upgrade first, alternatively if the image is not specified for a server class then the image specified in `spec.image` will be used.
-
-.Example `CouchbaseCluster` Resource with three Server Classes
-[source,yaml]
-----
-apiVersion: couchbase.com/v2
-kind: CouchbaseCluster
-metadata:
-  name: cb-example
-spec:
-  image: couchbase/server:7.2.4
-  ...
-  servers:
-  - size: 3
-    name: data
-    services:
-    - data
-  - size: 3
-    name: index
-    services:
-    - index
-  - size: 3
-    name: query
-    services:
-    - query
-----

+<.> `previousVersionPodCount` instructs the Operator to keep a fixed number of pods running the previous cluster version.
+This setting supports rollbacks or an extended `StabilizationPeriod` for the final pods during a cluster upgrade. +Couchbase Server considers the upgrade process complete only when all pods are running the new cluster version. +During this time, the Operator marks the cluster as Mixed Mode, which may restrict some features. -For the example above if you want to upgrade the data nodes first, then the query nodes, and finally the index nodes you can update the `CouchbaseCluster` resource as follows: +<.> `upgradeOrderType` defines the sequence the Operator uses to upgrade pods to the new cluster version. +It determines how the Operator interprets `upgradeOrder` and must be set to Nodes, ServerGroups, ServerClasses, or Services. +The Operator follows the sequence specified in `upgradeOrder` and applies the default ordering to any items not listed. -.Example `CouchbaseCluster` Resource with Controlled Upgrade data nodes first -[source,yaml] ----- -apiVersion: couchbase.com/v2 -kind: CouchbaseCluster -metadata: - name: cb-example -spec: - image: couchbase/server:7.6.0 # <.> - ... - servers: - - size: 3 - name: data - services: - - data - - size: 3 - name: index - image: couchbase/server:7.2.4 # <.> - services: - - index - - size: 3 - image: couchbase/server:7.2.4 # <.> - name: query - services: - - query ----- +NOTE: `InPlaceUpgrade` can interrupt service and increase the risk of data loss. +Use this process only when the cluster has at least two nodes running the Data Service. -<.> The cluster image needs to be the latest version that you want to upgrade to. +=== How to Upgrade a Cluster -<.> The image for the index server class is set to the older version to ensure that the index nodes are not upgraded yet. +To upgrade a cluster, modify the CouchbaseCluster manifest by setting the `spec.image` field to the Couchbase Server version you want. -<.> The image for the query server class is set to the older version to ensure that the query nodes are not upgraded yet. +For example, if you are upgrading Couchbase Server version from 7.6.7 to 8.0.0, then update the manifest resource as follows: -.Example `CouchbaseCluster` Resource with Controlled Upgrade Upgrading query nodes second [source,yaml] ----- +--- apiVersion: couchbase.com/v2 kind: CouchbaseCluster metadata: - name: cb-example + name: my-couchbase-cluster spec: - image: couchbase/server:7.6.0 - ... - servers: - - size: 3 - name: data - services: - - data - - size: 3 - name: index - image: couchbase/server:7.2.4 # <.> - services: - - index - - size: 3 - name: query - services: - - query ----- + image: couchbase/server:8.0.0 +--- -<.> The image for the index server class is set to the older version to ensure that the index nodes are not upgraded yet. +During an upgrade or when the cluster runs in Mixed Mode through `PreviousVersionPodCount`, Couchbase Server disables bucket storage backend migrations. +Couchbase Server also disables changes to sidecar containers such as Cloud Native Gateway and Fluent Bit. -.Example `CouchbaseCluster` Resource with Controlled Upgrade Upgrading index nodes last -[source,yaml] ----- -apiVersion: couchbase.com/v2 -kind: CouchbaseCluster -metadata: - name: cb-example -spec: - image: couchbase/server:7.6.0 - ... 
- servers: - - size: 3 - name: data - services: - - data - - size: 3 - name: index - services: - - index - - size: 3 - name: query - services: - - query ----- +See xref:server:install/upgrade.adoc#supported-upgrade-paths[Upgrade Paths] for more information about the permitted upgrade paths in Couchbase Server. -==== How to Perform a Controlled Rollback +NOTE: You can modify the Couchbase Server version during an upgrade only to roll back to the previous cluster version. -The Operator also allows you to perform a controlled rollback. To perform a controlled rollback you would performed the controlled rollback steps in reverse. The following examples shows how to perform a controlled rollback on a example cluster where all the server classes but one have upgraded to the new version. Note that once a cluster is fully upgraded to the new version a rollback is no longer possible. +=== Rollback -.Example Starting `CouchbaseCluster` Resource for Controlled Rollback -[source,yaml] ----- -apiVersion: couchbase.com/v2 -kind: CouchbaseCluster -metadata: - name: cb-example -spec: - image: couchbase/server:7.6.0 # <.> - ... - servers: - - size: 3 - name: data - services: - - data - - size: 3 - name: index - image: couchbase/server:7.2.4 # <.> - services: - - index - - size: 3 - name: query - services: - - query ----- +The Operator supports rolling back an upgrade that's in progress. +You can perform a rollback only while some pods still run the previous cluster version, which occurs only during an upgrade or when the cluster runs in mixed mode. +After the cluster upgrade to new version is complete, it cannot be rolled back to its old version. +To roll back, the Operator replaces the newly created pods with pods running the previous version and follows the strategy, process, and ordering defined in the <<#spec-upgrade-example,spec.upgrade example>>. -<.> The cluster image is the latest version that is being upgraded to. +==== How to Rollback a Cluster -<.> The image for the index server class is set to the older version indicating that the index nodes are not upgraded yet. +To initiate a rollback, modify the CouchbaseCluster manifest by setting the `spec.image` field to the Couchbase Server version that was running before the upgrade started. +For example, you were upgrading Couchbase Server version from 7.6.7 to 8.0.0, and encountered issues. +If these issues require rollback, update the `spec.image` field to change the version back to 7.6.7 in the manifest as follows: -.Example `CouchbaseCluster` Resource Controlled Rollback Query Nodes [source,yaml] ----- +--- apiVersion: couchbase.com/v2 kind: CouchbaseCluster metadata: - name: cb-example + name: my-couchbase-cluster spec: - image: couchbase/server:7.6.0 # <.> - ... - servers: - - size: 3 - name: data - services: - - data - - size: 3 - name: index - image: couchbase/server:7.2.4 - services: - - index - - size: 3 - name: query - image: couchbase/server:7.2.4 # <.> - services: - - query ----- + image: couchbase/server:7.6.7 +--- -<.> During a controlled rollback the cluster image should stay as the version that was being upgraded to. If this is changed to the older version then the cluster will all be rolled back to the older version without control over the server class order. - -<.> The image for the query server class is set to the older version indicating that the query nodes should be rolled back next. 
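If you want to deliberately keep a rollback window open during an upgrade, you can combine the image change with the upgrade controls from the <<#spec-upgrade-example,spec.upgrade example>>. The following sketch reuses only fields shown in that example; the values and the nesting follow that example and are illustrative:

[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
metadata:
  name: my-couchbase-cluster
spec:
  image: couchbase/server:8.0.0
  upgrade:
    upgradeStrategy: RollingUpgrade
    rollingUpgrade:
      # Upgrade one pod per cycle to limit risk.
      maxUpgradable: 1
    # Keep 2 pods on the previous version so a rollback remains possible
    # until you remove this setting; 2 is an illustrative value.
    previousVersionPodCount: 2
----

While `previousVersionPodCount` is non-zero the cluster remains in Mixed Mode, so keep that window as short as practical.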
+=== Controlled Upgrades -.Example `CouchbaseCluster` Resource Controlled Rollback Remaining Data Nodes -[source,yaml] ----- -apiVersion: couchbase.com/v2 -kind: CouchbaseCluster -metadata: - name: cb-example -spec: - image: couchbase/server:7.2.4 # <.> - ... - servers: - - size: 3 - name: data - services: - - data - - size: 3 - name: index - services: - - index - - size: 3 - name: query - services: - - query ----- +The Operator lets you control the upgrade process by using the fields in `couchbasecluster.spec.upgrade`. +For example, as an Administrator, you can use these fields to upgrade pods based on their availability region or the Couchbase Server services they run. +Also, you can upgrade pods running the Data Service before those running the Query Service, or upgrade specific pods in a defined order. -<.> As there is only one server class left to rollback the cluster image can be set to the older version and the remaining server nodes in the data class will be rolled back. +NOTE: While the cluster runs pods on two versions, the Operator marks the cluster as Mixed Mode and restricts features such as bucket migration and sidecar changes. +Keep the time spent in this mode to a minimum. -== Upgrading Pods +=== Upgrading Pods -In Kubernetes pods are immutable -- they cannot be modified once they are created. -If a `CouchbaseCluster` configuration value is modified that would also modify the underlying pod, then the Operator must create a new pod to replace the old one that does not match the required specification. -Pod upgrades work in exactly the same way as an upgrade to the Couchbase Server version; in fact upgrading the Couchbase Server image is just a subset of modifying any other pod specification parameter. +Kubernetes pods are immutable. +When you modify the CouchbaseCluster manifest in a way that requires pod replacement, the Operator creates new pods by using the same process as upgrading or rolling back pods to replace pods running the previous version. +Upgrading Couchbase Server is only a subset of modifying any other pod specification parameter. -The Operator compares the required pod specification with the one used to create the original pod, a candidate is selected if the specifications differ. -The Operator therefore can perform the following tasks: +The Operator builds pod specifications from the manifest and compares them with the existing pods in the cluster. +The Operator selects candidate pods when their specifications differ and can then perform the following tasks: +* Modification of scheduling constraints * Modification of environment variables -* xref:concept-scheduling.adoc[Modification of scheduling constraints] -* xref:concept-memory-allocation.adoc[Modification of memory constraints] -* xref:concept-tls.adoc[Enabling and disabling of TLS] -* xref:concept-persistent-volumes.adoc[Enabling and disabling of persistent storage] +* Modification of memory constraints +* Modification of Couchbase services +* Enabling and disabling of TLS +* Enabling and disabling of persistent storage -This mechanism allows a cluster to be used from evaluation right up to production, with features enabled as they are required, without service disruption. +This mechanism lets you use a cluster from evaluation through production and enable features as needed without service disruption. -== Upgrading Persistent Volumes +=== Upgrading Persistent Volumes -Online persistent volume resizing support is not yet available in all supported versions of Kubernetes. 
-As a result the Operator supports xref:howto-persistent-volume-resize.adoc[persistent volume resizing] using a similar mechanism to <>. -The Operator will detect a modification to the specification of a persistent volume template and schedule a pod upgrade in order to satisfy the request. +The Operator supports volume modifications by using the same mechanism as pod upgrades. +When you change the manifest in a way that requires updates to persistent volumes, the Operator upgrades the affected pods to apply those changes. +If the storage class and Kubernetes cluster support persistent volume resizing, you can expand in-use persistent volumes in place by setting `couchbasecluster.spec.enableOnlineVolumeExpansion` without upgrading the pods. -During an in-place upgrade, the PVC will be updated to include the updated image and couchbase server versions. The size of the PVC and data within it will not be edited. +During an `InPlaceUpgrade`, the Operator updates the PVCs to reflect the new image and Couchbase Server version. +The Operator changes the PVC size or stored data only when the cluster manifest requires those updates. \ No newline at end of file diff --git a/modules/ROOT/pages/howto-couchbase-upgrade.adoc b/modules/ROOT/pages/howto-couchbase-upgrade.adoc index bfe39fb..085734f 100644 --- a/modules/ROOT/pages/howto-couchbase-upgrade.adoc +++ b/modules/ROOT/pages/howto-couchbase-upgrade.adoc @@ -3,52 +3,227 @@ include::partial$constants.adoc[] [abstract] -How-to upgrade Couchbase Server to a newer version. +How-to upgrade Couchbase Server to a later version. -Given the existing configuration: +You can upgrade Couchbase Server by changing the `spec.image` field in the cluster manifest to a new version. +While the upgrade is in progress and some pods still run the previous version, you can roll back the cluster to that previous version. +The `spec.upgrade` section of the cluster manifest provides controls for granular management of the upgrade process. +Before you upgrade, review the xref:concept-upgrade.adoc[upgrade concepts] and the available controls. -[source,yaml,subs="attributes,verbatim"] +NOTE: During an upgrade or rollback, when the cluster runs 2 Couchbase Server versions, the Operator marks the cluster as Mixed Mode. +In this state, the Operator disables sidecar pod modifications and bucket storage backend migrations. + +== Upgrading a Cluster + +. This is the existing configuration: ++ +-- +[source,yaml] +---- +apiVersion: couchbase.com/v2 +kind: CouchbaseCluster +spec: + image: couchbase/server:7.6.8 +---- + +** You can modify the image to any valid Couchbase Server image. + +[source,yaml] +---- +apiVersion: couchbase.com/v2 +kind: CouchbaseCluster +spec: + image: couchbase/server:8.0.0 +---- + +-- ++ + +. The modification triggers the Operator to compare existing pod specifications that use the old image with new pod specifications that use the new image. +Because the specifications differ, the Operator starts a `SwapRebalance` upgrade by using the `RollingUpgrade` strategy. +The Operator creates new pods, rebalances them into the cluster, and then ejects the old pods. + +=== In Place Upgrade + +Assuming `spec.image` is equal to a version earlier than `couchbase/server:8.0.0`, update the manifest to: + +[source,yaml] +---- +apiVersion: couchbase.com/v2 +kind: CouchbaseCluster +spec: + image: couchbase/server:8.0.0 # <.> + upgrade: + upgradeProcess: InPlaceUpgrade # <.> +---- + +<.> The version to which you want to upgrade. 
+<.> Configure the Operator to perform in-place upgrades on the existing pods. +This process re-creates each pod in place by using the same name and persistent volume. + +=== Rolling Upgrade Controls + +Assuming `spec.image` is equal to a version earlier than `couchbase/server:8.0.0`, update the manifest to: + +[source,yaml] ---- apiVersion: couchbase.com/v2 kind: CouchbaseCluster spec: - image: couchbase/server:{couchbase-version-upgrade-from} # <.> + image: couchbase/server:8.0.0 # <.> + upgrade: + upgradeProcess: SwapRebalance + upgradeStrategy: RollingUpgrade + rollingUpgrade: # <.> + maxUpgradable: 3 # <.> + maxUpgradablePercent: 10 # <.> ---- -<.> xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-image[`couchbaseclusters.spec.image`] can be modified to any valid Couchbase Server image, in this example we want to upgrade the version only. +<.> The version to which you want to upgrade. +<.> `couchbasecluster.spec.upgrade.rollingUpgrade` has 2 options. +You can configure either or both, and the Operator upgrades the lower number of pods per cycle. +<.> Configure the Operator to only upgrade 3 pods at a time. +<.> Configure the Operator to upgrade a maximum of 10% of pods at a time, relative to the total cluster size, rounded down. +When you apply the preceding `couchbasecluster.spec.upgrade.rollingUpgrade` settings to a cluster with 10 pods, the Operator upgrades one pod at a time. +In this case, the percentage value resolves to a lower number and takes precedence. +For a cluster with 60 pods, the Operator upgrades three pods at a time. +In this case, the fixed number resolves to a lower value than the percentage. + +=== Granular Controls + +Assuming `spec.image` is equal to a version earlier than `couchbase/server:8.0.0`, update the manifest to: -[source,yaml,subs="attributes,verbatim"] +[source,yaml] ---- apiVersion: couchbase.com/v2 kind: CouchbaseCluster spec: - image: couchbase/server:{couchbase-version} # <.> + image: couchbase/server:8.0.0 # <.> + upgrade: + upgradeProcess: SwapRebalance + upgradeStrategy: RollingUpgrade + rollingUpgrade: + maxUpgradable: 1 + maxUpgradablePercent: 20 + stabilizationPeriod: 5m # <.> + previousVersionPodCount: 4 # <.> + upgradeOrderType: Nodes # <.> + upgradeOrder: # <.> + - cb-instance-3 + - cb-instance-1 + - cb-instance-2 ---- -<.> The modification will trigger the Operator to detect that existing pod specifications do not match the new pod specifications. -This will perform a xref:concept-upgrade.adoc#upgrading-couchbase-server[rolling upgrade] of Couchbase Server. +<.> The version to which you want to upgrade. +<.> The Operator waits for the specified period between each upgrade cycle. +In this example, the Operator waits five minutes before starting the next cycle. +During this time, the Operator continues normal operations except for functions disabled during upgrades. +<.> The Operator keeps four pods running the previous version and does not upgrade them. +You can use this setting to extend the stabilization period for a specific number of nodes. +Couchbase Server and the Operator consider the upgrade complete only when all pods run the same new version. +The features available only in the upgraded version remain unavailable until the upgrade completes. +<.> Define the sequence the Operator uses to upgrade pods by pod name (Couchbase Server cluster), server group, server class, or the services. 
+As an administrator, you can upgrade pods in a specific Availability Zone first,
+or upgrade pods running the Data Service first and then the pods running the Index Service.
+<.> Based on the configured `upgradeOrderType`, this field defines a non-exhaustive sequence that the Operator uses to upgrade pods.
+In this example, the Operator upgrades pods by node name in the following order: `cb-instance-3`, `cb-instance-1`, and `cb-instance-2`.
+The Operator upgrades any remaining pods not listed by using alphabetical order.

-== In Place Upgrade

-Given the existing configuration:

+=== Order Upgrades by Server Group

-[source,yaml,subs="attributes,verbatim"]
+Assuming `spec.image` is equal to a version earlier than `couchbase/server:8.0.0`, update the manifest to:
+
+[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseCluster
spec:
-  image: couchbase/server:{couchbase-version-upgrade-from}
-  upgradeProcess: InPlaceUpgrade # <.>
+  serverGroups:
+  - zone-1
+  - zone-2
+  - zone-3
+  image: couchbase/server:8.0.0 # <.>
+  upgrade:
+    upgradeOrderType: ServerGroups # <.>
+    upgradeOrder: # <.>
+    - zone-3
----

-<.> This field will inform the operator that we want to perform an in-place upgrade of the existing pods.

+<.> The version to which you want to upgrade.
+<.> The order type you want the Operator to use.
+<.> Upgrade pods running in server group `zone-3` first.
+The Operator upgrades pods running in server groups not in this list, `zone-1` and `zone-2`, in the order defined in the `couchbasecluster.spec.serverGroups` list.
+
+=== Order Upgrades by Server Class
+
+Assuming `spec.image` is equal to a version earlier than `couchbase/server:8.0.0`, update the manifest to:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseCluster
+spec:
+  servers:
+  - name: data_only
+    services:
+    - data
+    size: 2
+  - name: idx_query
+    services:
+    - index
+    - query
+    size: 6
+  image: couchbase/server:8.0.0 # <.>
+  upgrade:
+    upgradeOrderType: ServerClasses # <.>
+    upgradeOrder: # <.>
+    - idx_query
+    - data_only
+----
+
+<.> The version to which you want to upgrade.
+<.> The order type you want the Operator to use. +<.> Define the sequence the Operator uses to upgrade services. +The Operator orders pods that run multiple services by the first service listed in the sequence. +With this manifest, the Operator upgrades pods running the Index Service first, followed by the Data Service, and then the Query Service. +If the number of pods running a service is less than `spec.upgrade.rollingUpgrade.maxUpgradable`, the Operator does not upgrade pods from the next service in the list. +When the sequence does not include all services, the Operator upgrades pods running unlisted services by using the default order: data, query, index, search, analytics, and eventing. diff --git a/modules/ROOT/pages/howto-xdcr.adoc b/modules/ROOT/pages/howto-xdcr.adoc index 8dd5d59..0c9ece5 100644 --- a/modules/ROOT/pages/howto-xdcr.adoc +++ b/modules/ROOT/pages/howto-xdcr.adoc @@ -14,8 +14,8 @@ This page documents how to setup XDCR to replicate data to a different Kubernete In this scenario the remote cluster is accessible with Kubernetes based DNS. This applies to both xref:concept-couchbase-networking.adoc#intra-kubernetes-networking[intra-Kubernetes networking] and xref:concept-couchbase-networking.adoc#inter-kubernetes-networking-with-forwarded-dns[inter-Kubernetes networking with forwarded DNS]. -When using inter-Kubernetes networking, the local XDCR client must forward DNS requests to the remote cluster in order to resolve DNS names of the target Couchbase instances. -Refer to the xref:tutorial-remote-dns.adoc[Inter-Kubernetes Networking with Forwarded DNS] tutorial to understand how to configure forwarding DNS servers. +When using inter-Kubernetes networking, the local XDCR client must forward DNS requests to the remote cluster to resolve DNS names of the target Couchbase instances. +For more information, see the xref:tutorial-remote-dns.adoc[Inter-Kubernetes Networking with Forwarded DNS] tutorial to understand how to configure forwarding DNS servers. TLS is optional with this configuration, but shown for completeness. To configure without TLS, omit any TLS related attributes. @@ -56,10 +56,10 @@ spec: remoteBucket: destination ---- -<.> The resource is labeled with `replication:from-my-cluster-to-remote-cluster` to avoid any ambiguity because by default the Operator will select all `CouchbaseReplication` resources in the namespace and apply them to all remote clusters. -Thus the label is specific to the source cluster and target cluster. +<.> The resource is labeled with `replication:from-my-cluster-to-remote-cluster` to avoid any ambiguity because by default the Couchbase Autonomous Operator selects all `CouchbaseReplication` resources in the namespace and apply them to all remote clusters. +The label is specific to the source cluster and target cluster. -We define a remote cluster on our local resource: +Define a remote cluster on the local resource: [source,yaml] ---- @@ -101,12 +101,12 @@ spec: <.> The correct hostname to use is the remote cluster's console service to provide stable naming and service discovery. The hostname is calculated as per the xref:howto-client-sdks.adoc#dns-based-addressing[SDK configuration how-to]. -<.> As we are not using client certificate authentication we specify a secret containing a username and password on the remote system. +<.> As we're not using client certificate authentication, specify a secret containing a username and password on the remote system. 
<.> **TLS only:** For TLS connections you need to specify the remote cluster CA certificate in order to verify the remote cluster is trusted.
xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-xdcr-remoteclusters-tls-secret[`couchbaseclusters.spec.xdcr.remoteClusters.tls.secret`] documents the secret format.

-<.> Replications are selected that match the labels we specify, in this instance the ones that go from this cluster to the remote one.
+<.> Replications are selected that match the labels specified, in this instance the ones that go from this cluster to the remote cluster.

<.> **Inter-Kubernetes networking with forwarded DNS only:** the xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-servers-pod[`couchbaseclusters.spec.servers.pod.spec.dnsPolicy`] field tells Kubernetes to provide no default DNS configuration.

@@ -184,7 +184,7 @@ spec:
----

<.> The resource is labeled with `replication:from-my-cluster-to-remote-cluster` to avoid any ambiguity because by default the Operator will select all `CouchbaseReplication` resources in the namespace and apply them to all remote clusters.
-Thus the label is specific to the source cluster and target cluster.
+The label is specific to the source cluster and target cluster.

We define a remote cluster on our local resource:

@@ -217,18 +217,32 @@ spec:
<.> The correct hostname to use is the remote cluster's console service to provide stable naming and service discovery.
The hostname is calculated as per the xref:howto-client-sdks.adoc#dns-based-addressing-with-external-dns[SDK configuration how-to].

-<.> As we are not using client certificate authentication we specify a secret containing a username and password on the remote system.
+<.> As we're not using client certificate authentication, specify a secret containing a username and password on the remote system.

<.> For TLS connections you need to specify the remote cluster CA certificate in order to verify the remote cluster is trusted.
xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-xdcr-remoteclusters-tls-secret[`couchbaseclusters.spec.xdcr.remoteClusters.tls.secret`] documents the secret format.

-<.> Replications are selected that match the labels we specify, in this instance the ones that go from this cluster to the remote one.
+<.> Replications are selected that match the labels specified, in this instance the ones that go from this cluster to the remote cluster.

== IP Based Addressing

-In this discouraged scenario, there is no shared DNS between two Kubernetes clusters - we must use IP based addressing.
+In this discouraged scenario, there is no shared DNS between 2 Kubernetes clusters, so you must use IP-based addressing.
Pods are exposed by using Kubernetes `NodePort` type services.
-As there is no DNS, TLS is not supported, so security must be maintained between the two clusters using a VPN.
+As there is no DNS, TLS is not supported, so security must be maintained between the 2 clusters using a VPN.
+
+When you use NodePorts on a remote Couchbase cluster for XDCR connections, you risk interruptions if Kubernetes deletes and recreates the Service.
+Kubernetes does not guarantee that a new Service reuses the same NodePort.
+If the port number changes, XDCR connections that rely on the old IP:NodePort become invalid.
+
+The exact outcome depends on the Kubernetes CNI (Container Networking Interface) implementation.
+In some cases, Kubernetes removes the old port before it creates the new Service, which causes a brief loss of connectivity.
+In other cases, Kubernetes creates the new NodePort first, which reduces or avoids downtime.
+
+- Single-node Couchbase cluster: When the node's NodePort changes, XDCR cannot reconnect automatically.
+In this case, you must manually update the replication configuration at `couchbaseclusters.spec.xdcr.remoteClusters.hostname` with the new IP:NodePort of the Couchbase node.
+
+- Multi-node Couchbase cluster: When a node's NodePort changes, XDCR reconnects to another node that still exposes a valid NodePort.
+However, if XDCR tries to reconnect through the node whose port changed, you may still need to update `couchbaseclusters.spec.xdcr.remoteClusters.hostname` with the new port.

[IMPORTANT]
====
@@ -287,7 +301,7 @@ spec:
----

<.> The resource is labeled with `replication:from-my-cluster-to-remote-cluster` to avoid any ambiguity because by default the Operator will select all `CouchbaseReplication` resources in the namespace and apply them to all remote clusters.
-Thus the label is specific to the source cluster and target cluster.
+The label is specific to the source cluster and target cluster.

We define a remote cluster on our local resource:

@@ -318,14 +332,14 @@ spec:
<.> The correct hostname to use.
The hostname is calculated as per the xref:howto-client-sdks.adoc#ip-based-addressing[SDK configuration how-to].

-<.> As we are not using client certificate authentication we specify a secret containing a username and password on the remote system.
+<.> As we're not using client certificate authentication, specify a secret containing a username and password on the remote system.

-<.> Finally we select replications that match the labels we specify, in this instance the ones that go from this cluster to the remote one.
+<.> Finally, select replications that match the labels specified, in this instance the ones that go from this cluster to the remote cluster.

== Scopes and collections support

With Couchbase Server version 7 and greater, scope and collections support is now present for XDCR.
-The Couchbase Kubernetes Operator fully supports the various options available to the Couchbase Server version it is running with, full details can be found in the xref:server:manage:manage-xdcr/replicate-using-scopes-and-collections.html[official documentation].
+The Couchbase Kubernetes Operator fully supports the various options available to the Couchbase Server version it's running with; full details can be found in the xref:server:manage:manage-xdcr/replicate-using-scopes-and-collections.html[official documentation].

[NOTE]
====
@@ -396,7 +410,7 @@ spec:
----
Eventual consistency rules apply so if the bucket is still being created then we
<.> This is an example of replicating only a specific collection `collection1` in scope `scope1`.

-<.> The target keyspace must be of identical size so as we are replicating from a collection we must also specify a target collection.
+<.> The target keyspace must be of identical size; because the source is a collection, you must also specify a target collection.

<.> Deny rules can be used to prevent replication of specific keyspaces.
This is useful if for example you have a scope with a large number of collections and you want to replicate all but a small number.
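As a sketch of the allow and deny rules described in the callouts above, and assuming the `explicitMapping` layout with `allowRules` and `denyRules` (verify the exact field names against the `CouchbaseReplication` resource reference), a replication that copies all of `scope1` except one collection might look like this; the excluded collection name is a placeholder:

[source,yaml]
----
apiVersion: couchbase.com/v2
kind: CouchbaseReplication
metadata:
  name: replicate-most-of-scope1
  labels:
    replication: from-my-cluster-to-remote-cluster
spec:
  bucket: source
  remoteBucket: destination
  # Field names below are our reading of the explicit-mapping schema;
  # confirm them against the CouchbaseReplication reference before use.
  explicitMapping:
    allowRules:
    - sourceKeyspace:
        scope: scope1
      targetKeyspace:
        scope: scope1
    denyRules:
    - sourceKeyspace:
        scope: scope1
        collection: collection2   # placeholder collection to exclude
----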
diff --git a/modules/ROOT/pages/prerequisite-and-setup.adoc b/modules/ROOT/pages/prerequisite-and-setup.adoc index 1009cb3..7b87e01 100644 --- a/modules/ROOT/pages/prerequisite-and-setup.adoc +++ b/modules/ROOT/pages/prerequisite-and-setup.adoc @@ -26,11 +26,11 @@ This release supports the deployment of the following Couchbase software: | Software | Version | Couchbase Server Enterprise Edition -| 7.0 - 7.6^*^ +| 7.2 - 8.0^*^ |=== -*: Couchbase Server 7.6 is supported with Operator 2.6; -however, new features in 7.6 are only supported on Operator 2.7.x and 2.8.x +*: Couchbase Server 8.0 is supported with Operator 2.8.1; +however, new features in 8.0 are only supported on Operator 2.9.x The following diagram depicts Couchbase Server compatibility with the Operator for this, and previous releases, and can be used to calculate upgrade paths: @@ -109,10 +109,10 @@ This release supports the following Kubernetes platforms: | Platform | Version | Open Source Kubernetes -| 1.29 - 1.34 +| 1.31 - 1.34 | Red Hat OpenShift Container Platform -| 4.16 - 4.19 +| 4.18 - 4.19 |=== The following diagrams depict Couchbase Operator compatibility with Kubernetes and OpenShift platforms, and can be used to calculate upgrade paths: @@ -146,6 +146,7 @@ This release supports the following managed Kubernetes services and utilities: == Persistent Volume Compatibility Persistent volumes are mandatory for production deployments. +The Couchbase Operator is designed to work with any CSI-compliant storage driver and compatibility with different CSI implementations can be validated using the xref:concept-platform-certification.adoc[Operator Self-Certification Lifecycle] tooling. Review the Kubernetes Operator xref:best-practices.adoc#persistent-volumes-best-practices[best practices] for more information about cluster supportability requirements. == Hardware Requirements @@ -176,6 +177,9 @@ The architecture of each node must be uniform across the cluster as the use of m NOTE: The official Couchbase docker repository contains multi-arch images which do not require explicit references to architecture tags when being pulled and deployed. However, when pulling from a private repository, or performing intermediate processing on a machine with a different architecture than the deployed cluster, the use of explicit tags may be required to ensure the correct images are deployed. +IMPORTANT: For optimal performance with Couchbase Server 8.0 and later versions, in particular for vector search (FTS and GSI) workloads, use nodes that support AVX2 CPU instructions (x86-64-v3 Microarchitecture). +For guidance on detecting AVX2 support and scheduling pods on AVX2-capable nodes, see xref:tutorial-avx2-scheduling.adoc[AVX2-Aware Scheduling for Couchbase Server]. + == RBAC and Networking Requirements Preparing the Kubernetes cluster to run the Operator may require setting up proper RBAC and network settings in your Kubernetes cluster. diff --git a/modules/ROOT/pages/reference-annotations.adoc b/modules/ROOT/pages/reference-annotations.adoc index 759318a..2eaff74 100644 --- a/modules/ROOT/pages/reference-annotations.adoc +++ b/modules/ROOT/pages/reference-annotations.adoc @@ -70,4 +70,30 @@ When set to true, the pods will be initialised with the node name as the hostnam == Cloud Native Gateway === OTLP Endpoint ==== `cao.couchbase.com/networking.cloudNativeGateway.otlp.endpoint` -Used to set a custom OTLP endpoint for on Cloud Native Gateway. This annotation is applied to the cluster, and takes a string value (e.g. "https://otel:1234"). 
The value is passed directly to the Cloud Native Gateway container. \ No newline at end of file + +Use this annotation to set a custom OTLP endpoint for the Cloud Native Gateway. +Apply the annotation to the cluster with a string value such as `https://otel:1234`. +The value is passed directly to the Cloud Native Gateway container. + +== Backup +=== Additional Args +==== `cao.couchbase.com/full.additionalArgs` + +Use this annotation to set additional arguments for `cbbackupmgr` in full backup jobs. +Provide a string that includes any values supported by xref:server:backup-restore/cbbackupmgr-backup.html[`cbbackupmgr`]. + +==== `cao.couchbase.com/incremental.additionalArgs` + +Use this annotation to set additional arguments for `cbbackupmgr` in incremental backup jobs. +Provide a string that includes any values supported by xref:server:backup-restore/cbbackupmgr-backup.html[`cbbackupmgr`]. + +==== `cao.couchbase.com/merge.additionalArgs` + +Use this annotation to set additional arguments for `cbbackupmgr` in merge backup jobs. +Provide a string that includes any values supported by xref:server:backup-restore/cbbackupmgr-backup.html[`cbbackupmgr`]. + +=== Additional Operator Backup Args +==== `cao.couchbase.com/additionalOperatorBackupArgs` + +Use this annotation to set additional arguments for the backup container that are not used by `cbbackupmgr`, for example, `--force-delete-lockfile`. +Provide a string value with the flags to set. \ No newline at end of file diff --git a/modules/ROOT/pages/release-notes.adoc b/modules/ROOT/pages/release-notes.adoc index c76b663..bff6aec 100644 --- a/modules/ROOT/pages/release-notes.adoc +++ b/modules/ROOT/pages/release-notes.adoc @@ -1,101 +1,338 @@ -= Release Notes for Couchbase Kubernetes Operator {operator-version-minor} -include::partial$constants.adoc[] += Release Notes for Couchbase Kubernetes Operator 2.9 +:page-toclevels: 2 -Autonomous Operator {operator-version-minor} introduces our new Cluster Migration functionality well as a number of other improvements and minor fixes. +This page summarizes the fixes and known issues in Couchbase Kubernetes Operator 2.9, and links to the associated issues. -Take a look at the xref:whats-new.adoc[What's New] page for a list of new features and improvements that are available in this release. +== New Features -== Installation +For information about new features and major improvements made in Couchbase Kubernetes Operator 2.9, see xref:whats-new.adoc[What's New]. -For installation instructions, refer to: +[#release-290] +== Release 2.9 (December 2025) -* xref:install-kubernetes.adoc[] -* xref:install-openshift.adoc[] +Couchbase Kubernetes Operator 2.9 was released in December 2025. +This release contains fixes to issues and known issues. -== Upgrading to Kubernetes Operator {operator-version-minor} +[#fixed-issues-v290] +=== Fixed Issues in 2.9 -The necessary steps needed to upgrade to this release depend on which version of the Kubernetes Operator you are upgrading from. +For Couchbase Kubernetes Operator 2.9 released in December 2025, these are the fixed issues. -=== Upgrading from 1.x, 2.0, or 2.1 +*https://jira.issues.couchbase.com/browse/K8S-1537/[K8S-1537]*:: -There is no direct upgrade path from versions prior to 2.2.0. -To upgrade from a 1.x, 2.0.x, or 2.1.x release, you must first upgrade to 2.4.x, paying particular attention to supported Kubernetes platforms and Couchbase Server versions. 
-Refer to the xref:2.4@operator::howto-operator-upgrade.adoc[Operator 2.4 upgrade steps] if upgrading from a pre-2.2 release. +The cluster UUID is no longer required when creating remote cluster connections. -=== Upgrading from 2.2, 2.3, 2.4, 2.5, 2.6, or 2.7 +*https://jira.issues.couchbase.com/browse/K8S-2829/[K8S-2829]*:: -There are no additional upgrade steps when upgrading from these versions, and you may follow the xref:howto-operator-upgrade.adoc[standard upgrade process]. +You can now specify the `cao.couchbase.com/additionalArgs` annotation on CouchbaseBackup and CouchbaseRestore resources to pass additional `cbbackupmgr` arguments to the container. -For further information read the xref:concept-upgrade.adoc[Couchbase Upgrade] concepts page. +*https://jira.issues.couchbase.com/browse/K8S-3016/[K8S-3016]*:: -include::partial$couchbase-operator-release-notes-2.8.1.adoc[] +You can now specify the Couchbase Server password policy using the CouchbaseCluster resource. -[#release-v280] -== Release 2.8.0 +*https://jira.issues.couchbase.com/browse/K8S-3121/[K8S-3121]*:: -Couchbase Kubernetes Operator 2.8.0 was released in March 2025. +You can now specify to preserve the CouchbaseBackupRestore resource after the restore completes. -[#changes-in-behavior-v280] -=== Changes in Behaviour +*https://jira.issues.couchbase.com/browse/K8S-3153/[K8S-3153]*:: -==== Admission Controller Changes +New TCP tunables (`tcpKeepAliveIdle`, `tcpKeepAliveInterval`, `tcpKeepAliveProbes`, `tcpUserTimeout`) are now available through the CouchbaseCluster resource when using Couchbase Server 8.0. -The Dynamic Admission Controller (DAC) will now warn if any cluster settings don't match our xref:best-practices.adoc#production-deployments[Best Practices for Production Deployments]. +*https://jira.issues.couchbase.com/browse/K8S-3258/[K8S-3258]*:: -The DAC will now prevent changes to the `CouchbaseCluster` spec while a hibernation is taking place. -If hibernation is enabled while a cluster is migrating, upgrading, scaling, or rebalancing, that process will conclude before the cluster enters hibernation. The DAC will warn when this is the case, and it will be visible in the operator logs. +Added the `logging.configNameReleasePrefix` boolean to the Helm chart. +The default value is `false`. +When set to `true`, the Operator prefixes the Fluent Bit configuration with the release name. ++ +Couchbase recommends enabling this setting only on new clusters because enabling it on existing clusters triggers recreation of all pods. -To prevent any invalid resources failing to reconcile (i.e. if the DAC is not deployed in the current environment), the DAC Validation is now run at the beginning of the reconciliation loop. -Any invalid resources will be skipped for reconciliation, marked as `NotValid`, and logged. +*https://jira.issues.couchbase.com/browse/K8S-3371/[K8S-3371]*:: -==== Bucket and Index Service Settings +You can now specify environment variables for the CouchbaseBackup and CouchbaseBackupRestore pods to allow `cbbackupmgr` tuning. -In a previous version of the Operator, `enablePageBloomFilter` was unfortunately missed from the Index Service settings. -This has been addressed in CAO 2.8.0, and it is now available as xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-cluster-indexer-enablepagebloomfilter[`couchbaseclusters.spec.cluster.indexer.enablePageBloomFilter`]. 
+*https://jira.issues.couchbase.com/browse/K8S-3434/[K8S-3434]*:: -Until CAO 2.8.0, Bucket Compaction settings were only available to be set in the xref:resource/couchbasecluster.adoc[`CouchbaseCluster`] resource, at xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-cluster-autocompaction[`couchbaseclusters.spec.cluster.autoCompaction`]. -These settings have now been added to the xref:resource/couchbasebucket.adoc[`CouchbaseBucket`] resource at xref:resource/couchbasebucket.adoc#couchbasebuckets-spec-autocompaction[`couchbasebuckets.spec.autoCompaction`]. +`spec.monitoring` is deprecated and no longer attaches an exporter sidecar to the Couchbase Server pod. -[IMPORTANT] -==== -Prior to Operator 2.8.0, the above settings could still be set directly on the cluster. +*https://jira.issues.couchbase.com/browse/K8S-3535/[K8S-3535]*:: -To avoid these being reset to default values during the CAO upgrade, any of the above settings that have been changed must be added to the appropriate resource during the upgrade. +If `couchbasecluster.spec.buckets.managed` is set to `false`, restoring from backup automatically creates buckets. -Specifically, this needs to be done _after_ updating the CRDs, and _before_ installing the new Operator +*https://jira.issues.couchbase.com/browse/K8S-3616/[K8S-3616]*:: -For further information see xref:howto-operator-upgrade.adoc#update-existing-resources[Update Existing Resources]. -==== +New REST API bucket settings for the Data Service are now available in Couchbase Server 8.0. -==== Metrics Changes +*https://jira.issues.couchbase.com/browse/K8S-3638/[K8S-3638]*:: -A number of new metrics have been added, see xref:reference-prometheus-metrics.adoc[Prometheus Metrics Reference] for details. +You can now specify a merge schedule on the CouchbaseBackup resource. -It is now possible to include the Couchbase Cluster UUID, or Cluster UUID and Cluster Name, as labels with any Operator metric that is related to a specific Couchbase Cluster. -This can be enabled by setting `optional-metric-labels` to either `uuid-only` or `uuid-and-name`, when using xref:tools/cao.adoc#cao-create-operator-flags[cao create operator] or xref:tools/cao.adoc#cao-generate-operator-flags[cao generate operator]. +*https://jira.issues.couchbase.com/browse/K8S-3646/[K8S-3646]*:: -While adding the Couchbase Cluster UUID and Cluster Name labels, it was discovered that there were inconsistencies regarding the Kubernetes Namespace and Cluster Resource Name labels in some of the existing metrics. -Some had separate labels for `namespace` and `name`, and some had a combined `namespace/name` label. -In order to provide consistency, all metrics by default now have separate `name` and `namespace` labels. -The previous behavior, where a small number of metrics had the combined form of the label, can be achieved by setting `separate-cluster-namespace-and-name` to `false`, when using xref:tools/cao.adoc#cao-create-operator-flags[cao create operator] or xref:tools/cao.adoc#cao-generate-operator-flags[cao generate operator]. +You can now set the Query Service `CompletedStreamSize` using the CouchbaseCluster resource. -==== Annotation Changes +*https://jira.issues.couchbase.com/browse/K8S-3650/[K8S-3650]*:: -===== Storage Backend Migration +When using Couchbase Server 8.0, you can no longer create Memcached buckets. 
-As an enhancement to the Couchstore/Magma migration functionality added in Operator 2.7, CAO 2.8.0 adds two new annotations: +*https://jira.issues.couchbase.com/browse/K8S-3715/[K8S-3715]*:: -* Bucket Migrations are now disabled by default, to prevent unexpected node rebalances. These can be enabled with xref:reference-annotations.adoc#cao-couchbase-combuckets-enablebucketmigrationroutines[`cao.couchbase.com/buckets.enableBucketMigrationRoutines`]. -* Similar to a maintenance upgrade, it is now possible to specify how many Pods can be migrated at a time with xref:reference-annotations.adoc#cao-couchbase-combuckets-maxconcurrentpodswaps[`cao.couchbase.com/buckets.maxConcurrentPodSwaps`]. +Added RBAC roles for users to match new roles added in Couchbase Server 8.0. -===== History Retention +*https://jira.issues.couchbase.com/browse/K8S-3786/[K8S-3786]*:: -The annotations related to History Retention, that were added in Operator 2.4.1, have now been added to the xref:resource/couchbasebucket.adoc[`CouchbaseBucket`], and xref:resource/couchbasecollection.adoc[`CouchbaseCollection`] resources, at xref:resource/couchbasebucket.adoc#couchbasebuckets-spec-historyretention[`couchbasebuckets.spec.historyRetention`], and xref:resource/couchbasecollection.adoc#couchbasecollections-spec-history[`couchbasecollections.spec.history`], respectively. +You can now specify `default` and `disk_io_optimized` for the Data Service reader threads. -The History Retention annotations should be considered deprecated, and it should be noted that if used, they will take precedence over the equivalent values in the resources. -Care should be taken to make sure that the annotations are removed as soon as the resources have been updated with the new attributes. +*https://jira.issues.couchbase.com/browse/K8S-3917/[K8S-3917]*:: -include::partial$couchbase-operator-release-notes-2.8.0.adoc[] +You can now set `overheadMemory` for `autoResourceAllocation` to specify a static overhead amount. + +*https://jira.issues.couchbase.com/browse/K8S-3951/[K8S-3951]*:: + +`cao.couchbase.com/autoCompaction.magmaFragmentationPercentage` has been replaced by a field in the CouchbaseCluster CRD. + +*https://jira.issues.couchbase.com/browse/K8S-4013/[K8S-4013]*:: + +You can now disable DNS resolution verification when creating pods before activating them in the cluster. + +*https://jira.issues.couchbase.com/browse/K8S-4016/[K8S-4016]*:: + +Fixed a bug that caused a panic when a member pod became unresponsive. + +*https://jira.issues.couchbase.com/browse/K8S-4028/[K8S-4028]*:: + +Added an upgrade stanza to the CouchbaseCluster resource to give users more control over upgrades. + +*https://jira.issues.couchbase.com/browse/K8S-4091/[K8S-4091]*:: + +Updated `spec.networking.addressFamily` to accept `IPv4Only`, `IPv4Priority`, `IPv6Only`, and `IPv6Priority`. +The existing `IPv4` and `IPv6` values retain the `IPv4Only` and `IPv6Only` behavior, so no change is required for existing configurations. ++ +These values are deprecated and will be removed in a future release. ++ +The priority or only option determines whether `addressFamilyOnly` is set to `true` or `false`. + +*https://jira.issues.couchbase.com/browse/K8S-4097/[K8S-4097]*:: + +The MirWatchdog is an out-of-band check that provides additional alerting. +It is used when the Operator cannot reconcile a cluster due to reasons outside its control and requires manual user intervention. 
+Scenarios include, but are not limited to, TLS expiration, Couchbase authentication errors, and loss of quorum.
+This feature is disabled by default but can be enabled and configured by using the `mirWatchdog` field in the CouchbaseCluster CRD.
+When a cluster enters this condition, the Operator will:
++
+. Set the `cluster_manual_intervention` gauge metric to `1`.
+. Add the `ManualInterventionRequired` condition to the cluster, where possible, with a message describing the reason for entering the MIR state.
+. Raise a `ManualInterventionRequired` Kubernetes event, with the message describing the reason for entering manual intervention.
+. Optionally, skip reconciliation until the manual intervention required state is resolved, that is, until the issue that caused the condition is fixed.
+
+*https://jira.issues.couchbase.com/browse/K8S-4101/[K8S-4101]*::
+
+Added support for the Encryption at Rest feature of Couchbase Server 8.0.
+
+*https://jira.issues.couchbase.com/browse/K8S-4108/[K8S-4108]*::
+
+The CouchbaseUser resource now includes an `enabled` flag to allow administrators to enable or disable user accounts.
+
+*https://jira.issues.couchbase.com/browse/K8S-4109/[K8S-4109]*::
+
+The CouchbaseUser resource now allows administrators to enforce a password change on a user's first login using the `couchbaseuser.spec.userPassword.requireInitialChange` field.
+
+*https://jira.issues.couchbase.com/browse/K8S-4111/[K8S-4111]*::
+
+CouchbaseBucket resources now support `durabilityImpossibleFallback` with values `disabled` and `fallbackToActiveAck`.
+
+*https://jira.issues.couchbase.com/browse/K8S-4112/[K8S-4112]*::
+
+Added multiple settings to CouchbaseBucket resources to configure XDCR Conflict Logging.
+
+*https://jira.issues.couchbase.com/browse/K8S-4114/[K8S-4114]*::
+
+Added a CouchbaseCluster resource setting that enables auto-failover of Ephemeral Buckets with no replicas in Couchbase Server 8.0 and later versions.
+
+*https://jira.issues.couchbase.com/browse/K8S-4117/[K8S-4117]*::
+
+Added `data.diskUsageLimit` to the CouchbaseCluster resource to enable Disk Usage Guardrails.
+
+*https://jira.issues.couchbase.com/browse/K8S-4118/[K8S-4118]*::
+
+Added support for SDK Telemetry settings in Couchbase Server 8.0 and later versions.
+
+*https://jira.issues.couchbase.com/browse/K8S-4120/[K8S-4120]*::
+
+For CouchbaseBuckets, the default storage engine is now `magma` and the default `vBucketCount` is `128`.
+
+*https://jira.issues.couchbase.com/browse/K8S-4144/[K8S-4144]*::
+
+In earlier versions of Couchbase Kubernetes Operator, the metrics port annotation `prometheus.io/port` was set to `8091`, even when TLS was enabled.
+It is now correctly set to `18091`.
+
+*https://jira.issues.couchbase.com/browse/K8S-4158/[K8S-4158]*::
+
+EvictionPolicy changes can now be applied to an online bucket during a swap rebalance.
+
+*https://jira.issues.couchbase.com/browse/K8S-4161/[K8S-4161]*::
+
+Operator 2.9.0 allows you to set `spec.cluster.analytics.numReplicas`.
+This feature is supported only on Couchbase Server 7.6 and later versions.
+
+*https://jira.issues.couchbase.com/browse/K8S-4203/[K8S-4203]*::
+
+Fixed an issue where metrics scrapes became too large when the Operator managed many clusters.
+
+*https://jira.issues.couchbase.com/browse/K8S-4209/[K8S-4209]*::
+
+Full backups can now be resumed.
+
+*https://jira.issues.couchbase.com/browse/K8S-4273/[K8S-4273]*::
+
+Fixed an issue where the Operator failed to remove pods from the cluster.
+
+*https://jira.issues.couchbase.com/browse/K8S-4279/[K8S-4279]*::
+
+Fixed an issue where log message tags were inconsistent.
+
+*https://jira.issues.couchbase.com/browse/K8S-4404/[K8S-4404]*::
+
+Fixed an issue that caused upgrades to fail when image definitions used SHA256 digests.
+
+[#known-issues-29]
+=== Known Issues in 2.9
+
+The following known issues are not yet resolved in Couchbase Kubernetes Operator 2.9, released in December 2025.
+
+*https://jira.issues.couchbase.com/browse/K8S-3839/[K8S-3839]*::
+
+Removing a server group from a cluster that uses InPlaceUpgrades can cause the Operator to fail to reconcile the cluster.
+
+*https://jira.issues.couchbase.com/browse/K8S-4349/[K8S-4349]*::
+
+The CouchbaseCluster CRD now exceeds the size limit for client-side apply.
+Use the `--server-side` option with `kubectl apply` to apply the resource.
+
+*https://jira.issues.couchbase.com/browse/K8S-4433/[K8S-4433]*::
+
+When running in mixed mode and before the upgrade to 8.0 completes, creating a Memcached bucket can cause the Operator to fail reconciliation.
+
+*https://jira.issues.couchbase.com/browse/K8S-4436/[K8S-4436]*::
+
+Upgrading the Couchbase Kubernetes Operator while the cluster is not fully upgraded causes the Operator to complete the cluster upgrade immediately.
+
+*https://jira.issues.couchbase.com/browse/K8S-4445/[K8S-4445]*::
+
+The CouchbaseRestore resource status may fail to update from Couchbase Backup containers.
+This does not prevent the restore from occurring.
+
+*https://jira.issues.couchbase.com/browse/K8S-4448/[K8S-4448]*::
+
+Changing the server groups applied to a server class while the Operator is recovering a pod can block reconciliation.
+
+*https://jira.issues.couchbase.com/browse/K8S-4456/[K8S-4456]*::
+
+The Operator must be paused when using `cao create pod`.
+Otherwise, the Operator identifies the pod as foreign and removes it.
+
+*https://jira.issues.couchbase.com/browse/K8S-4469/[K8S-4469]*::
+
+Editing a Couchbase bucket storage backend while enabling or disabling `BucketMigrationRoutines` on a CouchbaseCluster can trigger a race condition that leads to an unreconcilable state.
+
+*https://jira.issues.couchbase.com/browse/K8S-4471/[K8S-4471]*::
+
+When `spec.networking.addressFamily` is set to `IPv6Priority` or `IPv6Only`, the Operator creates the cluster Service as IPv4 SingleStack, which causes pod launch failures.
+Patching the Service to `PreferDualStack` (IPv4, IPv6) allows the cluster to be created.
+
+*https://jira.issues.couchbase.com/browse/K8S-4474/[K8S-4474]*::
+
+Attempting to change bucket settings during a storage backend migration causes the Operator to fail to reconcile the cluster.
+
+*https://jira.issues.couchbase.com/browse/K8S-4477/[K8S-4477]*::
+
+The admission controller allows you to set the `collectionHistoryDefault` value on Couchstore buckets.
+This setting has no effect on Couchstore buckets.
+
+*https://jira.issues.couchbase.com/browse/K8S-4482/[K8S-4482]*::
+
+Attempting to change the `evictionPolicy` setting during a storage backend migration while `OnlineEvictionPolicyChange` is `true` can lead to an unreconcilable state.
+
+*https://jira.issues.couchbase.com/browse/K8S-4485/[K8S-4485]*::
+
+Manually editing a bucket's storage backend with BucketMigrationRoutines disabled can prevent the Operator from reconciling the cluster.
+
+*https://jira.issues.couchbase.com/browse/K8S-4486/[K8S-4486]*::
+
+You can start a Couchbase Server upgrade while a Bucket Storage Backend Migration is in progress.
+This can result in an unreconcilable state.
+
+*https://jira.issues.couchbase.com/browse/K8S-4487/[K8S-4487]*::
+
+The Manual Intervention Watchdog does not clear the rebalancing condition when entering the Manual Intervention Required condition.
+
+*https://jira.issues.couchbase.com/browse/K8S-4488/[K8S-4488]*::
+
+Reconciliation can begin before the Manual Intervention Watchdog is fully disabled.
+
+*https://jira.issues.couchbase.com/browse/K8S-4490/[K8S-4490]*::
+
+During stabilization, the Operator reconciles some, but not all, CouchbaseCluster settings.
+
+*https://jira.issues.couchbase.com/browse/K8S-4496/[K8S-4496]*::
+
+It is possible to set replicas lower than the minimum required for a CouchbaseCluster when using an Ephemeral bucket.
+
+*https://jira.issues.couchbase.com/browse/K8S-4497/[K8S-4497]*::
+
+The dynamic admission controller treats the default bucket storage backend as Couchstore, even on 8.0 clusters.
+
+*https://jira.issues.couchbase.com/browse/K8S-4498/[K8S-4498]*::
+
+The order of password policy updates and user password changes can affect whether the Operator can make these changes successfully.
+
+*https://jira.issues.couchbase.com/browse/K8S-4499/[K8S-4499]*::
+
+It's possible to rotate admin credentials to a password that does not meet the cluster password policy, which can prevent the Operator from reconciling the cluster.
+
+*https://jira.issues.couchbase.com/browse/K8S-4504/[K8S-4504]*::
+
+Setting reader and writer threads to `balanced` during an upgrade from Couchbase Server 7.6 to 8.0 can cause memcached to crash.
+Removing the reader and writer thread settings from the CRD resolves the issue.
+
+*https://jira.issues.couchbase.com/browse/K8S-4507/[K8S-4507]*::
+
+Enabling shard affinity in mixed mode, before the cluster fully supports the setting, can cause the Operator to fail reconciliation.
+
+*https://jira.issues.couchbase.com/browse/K8S-4508/[K8S-4508]*::
+
+The Operator may not clear the unreconcilable condition after the resource is fixed.
+
+*https://jira.issues.couchbase.com/browse/K8S-4509/[K8S-4509]*::
+
+Holding a cluster in mixed mode between 7.2.x and 7.6.x can prevent the Operator from reconciling some cluster settings.
+
+*https://jira.issues.couchbase.com/browse/K8S-4510/[K8S-4510]*::
+
+Remaining in mixed mode on older Couchbase Server versions can cause the Operator to log errors due to incorrect version checks.
+
+*https://jira.issues.couchbase.com/browse/K8S-4511/[K8S-4511]*::
+
+You cannot currently specify Arbiter nodes in the services order during an upgrade.
+
+*https://jira.issues.couchbase.com/browse/K8S-4512/[K8S-4512]*::
+
+The Operator can accidentally update Index Storage settings twice.
+
+*https://jira.issues.couchbase.com/browse/K8S-4514/[K8S-4514]*::
+
+The Operator can accidentally update the Fluent Bit configuration.
+
+*https://jira.issues.couchbase.com/browse/K8S-4515/[K8S-4515]*::
+
+While processing a bucket migration, the Operator does not send the correct online change flag for modifications to Couchbase Server when changing the Eviction Policy.
+
+*https://jira.issues.couchbase.com/browse/K8S-4520/[K8S-4520]*::
+
+The admission controller allows encryption at rest to be enabled in mixed mode.
+
+*https://jira.issues.couchbase.com/browse/K8S-4521/[K8S-4521]*::
+
+The admission controller allows use of the Couchbase User `spec.user` field for users that reference clusters not running version 8.0.0.
== Feedback
@@ -109,4 +346,5 @@ Couchbase is thankful to all of the individuals that have created these third-pa
== More Information
-* xref:server:release-notes:relnotes.adoc[Couchbase Server Release Notes] \ No newline at end of file
+* xref:server:release-notes:relnotes.adoc[Couchbase Server Release Notes Version 8.0]
+* xref:server:introduction:whats-new.adoc[What's New in Couchbase Server Version 8.0] \ No newline at end of file
diff --git a/modules/ROOT/pages/tutorial-avx2-scheduling.adoc b/modules/ROOT/pages/tutorial-avx2-scheduling.adoc
new file mode 100644
index 0000000..2fb0cc9
--- /dev/null
+++ b/modules/ROOT/pages/tutorial-avx2-scheduling.adoc
@@ -0,0 +1,592 @@
+= AVX2-Aware Scheduling for Couchbase Server
+:page-toclevels: 2
+
+[abstract]
+This tutorial explains how to detect the AVX2 CPU extension and x86-64-v3 microarchitecture on Kubernetes nodes, label nodes accordingly, and configure CouchbaseCluster resources to schedule pods only on compatible nodes.
+
+include::partial$tutorial.adoc[]
+
+== Background
+
+Starting with Couchbase Server 8.0, Vector Search (FTS and GSI) performance benefits from AVX2-capable CPUs on x86-64 nodes.
+
+=== What Is Advanced Vector Extensions 2 (AVX2)?
+
+AVX2 is:
+
+* A SIMD instruction set available on modern Intel and AMD x86-64 CPUs.
+* Required for high-performance vectorized operations.
+* Part of the x86-64-v3 microarchitecture level, along with BMI1, BMI2, and FMA.
+* Not guaranteed on all cloud VM types.
+* Not enforced by default in Kubernetes scheduling.
+
+IMPORTANT: Kubernetes clusters must explicitly detect CPU capabilities and restrict scheduling to make sure Couchbase Server pods run on AVX2-capable nodes.
+
+== AVX2-Aware Scheduling Approach
+
+This tutorial approaches the problem through the following layers:
+
+* <<#node-labeling-methods,*Node labeling*>>: Detect nodes that support AVX2.
+* <<#pod-scheduling-with-nodeaffinity,*Scheduler constraints*>>: Schedule pods only on compatible nodes.
+* <<#cloud-specific-node-provisioning,*Cloud provisioning*>>: Make sure node pools use AVX2-capable CPUs.
+
+[#node-labeling-methods]
+== Node Labeling Methods
+
+Use one of the following methods to label Kubernetes nodes that support AVX2:
+
+* <<#node-labeling-via-nfd, *Node Feature Discovery (NFD)*>>: Recommended for production environments.
+* <<#node-labeling-via-daemonset, *A custom DaemonSet*>>: Provides a direct, lightweight option with minimal dependencies.
+
+[#node-labeling-via-nfd]
+=== Method 1: Node Feature Discovery (Recommended)
+
+Node Feature Discovery (NFD) is a Kubernetes SIG project that detects hardware features and labels nodes automatically.
+
+IMPORTANT: Couchbase recommends this method for production environments.
+
+Use the following steps to label Kubernetes nodes that support AVX2 using NFD:
+
+. <<#avx2-node-label-used-by-nfd, Review the AVX2 node label used by NFD>>
+. Install NFD by using your preferred method
+** <<#install-nfd-kubectl, Install NFD by Using kubectl>>
+** <<#install-nfd-helm, Install NFD by Using Helm>>
+. <<#verify-nfd-node-labels, Verify NFD Node Labels>>
+
+[#avx2-node-label-used-by-nfd]
+==== AVX2 Node Label Used by NFD
+
+NFD applies the following standardized node label to indicate AVX2 support.
+
+[source]
+----
+feature.node.kubernetes.io/cpu-cpuid.AVX2=true
+----
+
+This label follows a standard format and is safe to use across environments.
+
+[#install-nfd-kubectl]
+==== Install NFD by Using kubectl
+
+Install NFD on the cluster by using `kubectl`.
+Replace `v0.18.3` with the latest release tag from the https://github.com/kubernetes-sigs/node-feature-discovery/releases[NFD releases page]. + +[source,console] +---- +kubectl apply -k "https://github.com/kubernetes-sigs/node-feature-discovery/deployment/overlays/default?ref=v0.18.3" +---- + +[#install-nfd-helm] +==== Install NFD by Using Helm + +Install NFD on the cluster by using Helm. +Replace `v0.18.3` with the latest release tag from the https://github.com/kubernetes-sigs/node-feature-discovery/releases[NFD releases page]. + +[source,console] +---- +helm install nfd \ + oci://registry.k8s.io/nfd/charts/node-feature-discovery \ + --version 0.18.3 \ + --namespace node-feature-discovery \ + --create-namespace + +---- + +[#verify-nfd-node-labels] +==== Verify NFD Node Labels + +Verify that NFD applies the AVX2 label to supported nodes. + +[source,console] +---- +kubectl get nodes -L feature.node.kubernetes.io/cpu-cpuid.AVX2 +---- + +[#node-labeling-via-daemonset] +=== Method 2: AVX2 Node Labeling via DaemonSet + +This approach provides a lightweight option when NFD is unavailable or when you want to limit dependencies. + +==== AVX2 Node Labeling Process + +The DaemonSet uses the following process to detect AVX2 support and label nodes: + +* Runs as a DaemonSet on every node. +* Reads `/proc/cpuinfo` from the host. +* Checks for the `avx2` flag. +* Labels the node when AVX2 support is present. + +Use the following steps to label Kubernetes nodes that support AVX2 by using a custom DaemonSet: + +. <<#define-avx2-label, Define the AVX2 node label>> +. <<#create-daemonset-manifest, Create the DaemonSet manifest>> +. <<#deploy-daemonset, Deploy the DaemonSet>> +. <<#verify-node-labels, Verify node labels>> + +[#define-avx2-label] +==== Define the AVX2 Node Label + +Define the AVX2 node label to identify nodes that support the AVX2 CPU extension. + +[source] +---- +cpu.feature/AVX2=true +---- + +[#create-daemonset-manifest] +==== Create the DaemonSet Manifest + +Create a DaemonSet manifest named `avx2-node-labeler.yaml` with the following content that detects AVX2 support and applies the node label. 
+ +[source,yaml] +---- +apiVersion: v1 +kind: ServiceAccount +metadata: + name: avx2-labeler-sa + namespace: kube-system +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRole +metadata: + name: avx2-labeler-role +rules: +- apiGroups: [""] + resources: ["nodes"] + verbs: ["get", "patch", "update"] +--- +apiVersion: rbac.authorization.k8s.io/v1 +kind: ClusterRoleBinding +metadata: + name: avx2-labeler-binding +subjects: +- kind: ServiceAccount + name: avx2-labeler-sa + namespace: kube-system +roleRef: + kind: ClusterRole + name: avx2-labeler-role + apiGroup: rbac.authorization.k8s.io +--- +apiVersion: apps/v1 +kind: DaemonSet +metadata: + name: avx2-node-labeler + namespace: kube-system +spec: + selector: + matchLabels: + app: avx2-node-labeler + template: + metadata: + labels: + app: avx2-node-labeler + spec: + serviceAccountName: avx2-labeler-sa + containers: + - name: labeler + image: bitnami/kubectl:latest + command: + - /bin/bash + - -c + - | + if grep -qi "avx2" /host/proc/cpuinfo; then + kubectl label node "$NODE_NAME" cpu.feature/AVX2=true --overwrite + fi + sleep infinity + env: + - name: NODE_NAME + valueFrom: + fieldRef: + fieldPath: spec.nodeName + volumeMounts: + - name: host-proc + mountPath: /host/proc + readOnly: true + volumes: + - name: host-proc + hostPath: + path: /proc +---- + +[#deploy-daemonset] +==== Deploy the DaemonSet + +Deploy the DaemonSet to run the AVX2 detection process on all nodes. + +[source,console] +---- +kubectl apply -f avx2-node-labeler.yaml +---- + +[#verify-node-labels] +==== Verify Node Labels + +Verify that Kubernetes correctly applies the AVX2 label to supported nodes. + +[source,console] +---- +kubectl get nodes -L cpu.feature/AVX2 +---- + +[#pod-scheduling-with-nodeaffinity] +== Pod Scheduling by Using nodeAffinity + +After you label nodes, configure the CouchbaseCluster resource to restrict pod scheduling to AVX2-capable nodes in one of the following ways: + +* <<#enforce-avx2-scheduling, *Enforce AVX2 Scheduling*>>: Recommended. +* <<#prefer-avx2-scheduling, *Prefer AVX2 Scheduling*>>: Fallback allowed. + +[#enforce-avx2-scheduling] +=== Enforce AVX2 Scheduling (Recommended) + +Use `requiredDuringSchedulingIgnoredDuringExecution` to enforce AVX2 requirements during pod scheduling. + +[source,yaml] +---- +spec: + servers: + - name: data-nodes + size: 3 + services: + - data + - index + - query + pod: + spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: feature.node.kubernetes.io/cpu-cpuid.AVX2 + operator: In + values: + - "true" +---- + +[#prefer-avx2-scheduling] +=== Prefer AVX2 Scheduling (Fallback Allowed) + +Use `preferredDuringSchedulingIgnoredDuringExecution` to prefer AVX2-capable nodes while allowing scheduling on other nodes. + +[source,yaml] +---- +spec: + servers: + - name: data-nodes + size: 3 + services: + - data + pod: + spec: + affinity: + nodeAffinity: + preferredDuringSchedulingIgnoredDuringExecution: + - weight: 100 + preference: + matchExpressions: + - key: feature.node.kubernetes.io/cpu-cpuid.AVX2 + operator: In + values: + - "true" +---- + +[#cloud-specific-node-provisioning] +== Cloud-Specific Node Provisioning + +Cloud providers expose CPU capabilities and node selection options differently. +Use the following cloud platform-specific guidance to provision nodes with AVX2 support. 
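+
+Before following the provider-specific steps, you can spot-check whether a specific node's CPU exposes AVX2.
+The following one-off pod is a minimal sketch, not part of the Operator tooling; replace `<node-name>` with the node to inspect.
+Note that `/proc/cpuinfo` inside a container reflects the host CPU.
+
+[source,console]
+----
+kubectl run avx2-check --rm -it --restart=Never \
+  --image=busybox:1.36 \
+  --overrides='{"spec":{"nodeName":"<node-name>"}}' \
+  -- sh -c 'grep -q avx2 /proc/cpuinfo && echo "AVX2 supported" || echo "AVX2 not supported"'
+----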
+
+[#google-gke]
+=== Google Kubernetes Engine (GKE)
+
+GKE requires additional consideration because node pools can include mixed CPU generations and do not guarantee AVX2 support by default.
+
+[#gke-avx2-guarantees]
+==== AVX2 Support Guarantees in GKE
+
+The following table summarizes how GKE guarantees AVX2 support under different configurations.
+
+[cols="1,1"]
+|===
+|Guarantee |Status
+
+|AVX2 by machine type
+|Not guaranteed
+
+|AVX2 by region
+|Not guaranteed
+
+|AVX2 by default
+|Not guaranteed
+
+|AVX2 via min CPU platform
+|Guaranteed
+|===
+
+[#creating-gke-node-pool-with-avx2]
+==== Create a GKE Node Pool with AVX2 Support
+
+Use the following steps to create a GKE node pool that guarantees AVX2 support.
+
+. Select a compatible machine family, such as `n2`, `c2`, `c3`, `n4`, `m2`, `m3`, and so on.
+
+. Enforce a minimum CPU platform (`min-cpu-platform`) that supports AVX2, such as Intel Haswell, AMD Rome, or a newer generation.
+For example:
++
+--
+[source,console]
+----
+gcloud container node-pools create avx2-pool \
+  --cluster=my-cluster \
+  --region=us-central1 \
+  --machine-type=n2-standard-4 \
+  --min-cpu-platform="Intel Cascade Lake" \
+  --num-nodes=3 \
+  --node-labels=cpu=avx2
+----
+--
+
+. Verify that the selected machine series supports AVX2 by referring to the provider documentation.
+
+This configuration guarantees AVX2 support at the infrastructure level.
+
+[#gke-automatic-node-labels]
+==== GKE Automatic Node Labels
+
+GKE automatically applies node labels that identify the node pool associated with each node.
+
+[source]
+----
+cloud.google.com/gke-nodepool=<node-pool-name>
+----
+
+[#gke-node-affinity-pattern]
+==== GKE nodeAffinity Pattern
+
+Use node affinity to restrict pod scheduling to a specific GKE node pool.
+
+[source,yaml]
+----
+spec:
+  servers:
+  - name: data-nodes
+    size: 3
+    services:
+    - data
+    - index
+    - query
+    pod:
+      spec:
+        affinity:
+          nodeAffinity:
+            requiredDuringSchedulingIgnoredDuringExecution:
+              nodeSelectorTerms:
+              - matchExpressions:
+                - key: cloud.google.com/gke-nodepool
+                  operator: In
+                  values:
+                  - avx2-pool
+----
+
+[#amazon-eks]
+=== Amazon Elastic Kubernetes Service (EKS)
+
+Use the following sections to provision AVX2-capable nodes and configure pod scheduling in Amazon Elastic Kubernetes Service (EKS).
+
+[#eks-avx2-capable-instance-types]
+==== AVX2-Capable EC2 Instance Types
+
+The following EC2 instance families support AVX2 instructions:
+
+* *Intel*: M5, C5, R5, M6i, C6i, R6i, M7i, C7i and newer generations.
+* *AMD*: M5a, C5a, R5a, M6a, C6a, R6a and newer generations.
+
+Verify the selected instance type supports AVX2 by referring to the provider documentation.
+
+[#creating-eks-node-group-with-avx2]
+==== Create an EKS Node Group with AVX2 Support
+
+Create an EKS node group by using AVX2-capable instance types and apply a node label to identify supported nodes.
+
+[source,console]
+----
+eksctl create nodegroup \
+  --cluster my-cluster \
+  --name avx2-ng \
+  --node-type c6i.large \
+  --nodes 3 \
+  --node-labels cpu=avx2
+----
+
+[#eks-node-affinity-configuration]
+==== EKS nodeAffinity Configuration
+
+Use node affinity to restrict pod scheduling to AVX2-capable nodes.
+ +[source,yaml] +---- +spec: + servers: + - name: data-nodes + size: 3 + services: + - data + - index + - query + pod: + spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: cpu + operator: In + values: + - avx2 +---- + +You can also restrict scheduling by using the automatic instance type label: + +[source,yaml] +---- +- key: node.kubernetes.io/instance-type + operator: In + values: + - c6i.large + - c6i.xlarge +---- + +[#azure-aks] +=== Azure Kubernetes Service (AKS) + +Use the following sections to provision AVX2-capable nodes and configure pod scheduling in Azure AKS. + +[#aks-avx2-capable-vm-series] +==== AVX2-Capable Azure VM Series + +The following Azure VM series support AVX2 instructions: + +* Dv3 and Ev3 VM series, based on Intel Haswell and Broadwell processors. +* Dv4 and Ev4 VM series, based on Intel Cascade Lake processors. +* Dv5 and Ev5 VM series, based on Intel Ice Lake processors. + +Verify the selected VM series supports AVX2 by referring to the Azure documentation. + +[#creating-aks-node-pool-with-avx2] +==== Create an AKS Node Pool with AVX2 Support + +Create an AKS node pool by using an AVX2-capable VM series and apply a node label to identify supported nodes. + +[source,console] +---- +az aks nodepool add \ + --resource-group rg \ + --cluster-name my-aks \ + --name avx2pool \ + --node-vm-size Standard_D8s_v5 \ + --node-count 3 \ + --labels cpu=avx2 +---- + +[#aks-node-affinity-pattern] +==== AKS nodeAffinity Configuration + +Use node affinity to restrict pod scheduling to AVX2-capable nodes. + +[source,yaml] +---- +spec: + servers: + - name: data-nodes + size: 3 + services: + - data + - index + - query + pod: + spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: cpu + operator: In + values: + - avx2 +---- + +== A Complete CouchbaseCluster Example + +Here's a complete example combining all best practices. + +[source,yaml] +---- +apiVersion: v1 +kind: Secret +metadata: + name: cb-example-auth +type: Opaque +data: + username: QWRtaW5pc3RyYXRvcg== + password: cGFzc3dvcmQ= +--- +apiVersion: couchbase.com/v2 +kind: CouchbaseCluster +metadata: + name: cb-example +spec: + image: couchbase/server:8.0.0 + security: + adminSecret: cb-example-auth + buckets: + managed: true + servers: + - name: data-nodes + size: 3 + services: + - data + - index + - query + pod: + spec: + affinity: + nodeAffinity: + requiredDuringSchedulingIgnoredDuringExecution: + nodeSelectorTerms: + - matchExpressions: + - key: feature.node.kubernetes.io/cpu-cpuid.AVX2 + operator: In + values: + - "true" + # Alternative using custom DaemonSet label: + # - key: cpu.feature/AVX2 + # operator: In + # values: + # - "true" +---- + +== Troubleshooting + +Use the following checks to confirm that Kubernetes applies AVX2 node labels as expected. + +=== Verify AVX2 Node Labels + +Verify that nodes expose the expected AVX2 labels, based on the labeling method you use. 
+
+[source,console]
+----
+# For NFD labels
+kubectl get nodes -o custom-columns=\
+NAME:.metadata.name,\
+AVX2:.metadata.labels."feature\.node\.kubernetes\.io/cpu-cpuid\.AVX2"
+
+# For custom labels (using the DaemonSet)
+kubectl get nodes -L cpu.feature/AVX2
+----
diff --git a/modules/ROOT/pages/tutorial-encryption-at-rest.adoc b/modules/ROOT/pages/tutorial-encryption-at-rest.adoc
new file mode 100644
index 0000000..67d7117
--- /dev/null
+++ b/modules/ROOT/pages/tutorial-encryption-at-rest.adoc
@@ -0,0 +1,423 @@
+= Couchbase Encryption At Rest
+
+[abstract]
+How to configure Couchbase Server with encryption at rest.
+This guide covers operator-managed keys, AWS KMS-backed keys, and KMIP-backed keys.
+
+Couchbase Server supports encryption at rest, a feature that allows you to encrypt data stored on disk.
+
+== Prerequisites
+* Couchbase Server 8.0.0 or later
+
+== Overview
+
+Encryption at Rest, introduced in Couchbase Server 8.0.0, allows data on Couchbase nodes to be encrypted on disk.
+The data that can be encrypted includes:
+
+- Data in buckets
+- Cluster configuration
+- Logs
+- Audit
+
+Couchbase offers three types of keys that can be used to encrypt data:
+
+- Couchbase Server Managed Keys
+- AWS KMS Keys
+- KMIP Keys
+
+== Enabling Encryption at Rest Management
+
+To use any Encryption at Rest features through the Operator, you must first enable encryption at rest management on the Couchbase Cluster resource.
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseCluster
+metadata:
+  name: my-cluster
+spec:
+  security:
+    encryptionAtRest:
+      managed: true # <.>
+----
+<.> Enable operator-managed encryption at rest. By default, this is disabled.
+
+Once enabled, the operator will manage encryption keys and apply encryption settings to your cluster.
+
+=== Selecting Encryption Keys
+
+By default, the operator will use all `CouchbaseEncryptionKey` resources in the same namespace as the cluster.
+You can use a label selector to control which keys the operator manages:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseCluster
+metadata:
+  name: my-cluster
+spec:
+  security:
+    encryptionAtRest:
+      managed: true
+      selector: # <.>
+        matchLabels:
+          cluster: my-cluster
+----
+<.> Only encryption keys with the label `cluster: my-cluster` will be managed for this cluster.
+
+== Managing Couchbase Server Managed Keys
+
+Couchbase Server Managed Keys (also called AutoGenerated keys) are the simplest type of encryption key.
+These keys are generated and managed automatically by Couchbase Server without requiring external key management services.
+
+=== Basic Example
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: my-key
+spec:
+  keyType: AutoGenerated
+----
+
+=== Usage
+
+Keys can be used to encrypt different types of data, and this can be enforced by setting the key's usage.
+Setting the usage restricts what the key can be used to encrypt, with the options being:
+
+- Keys
+- Configuration
+- Logs
+- Audit
+- Buckets
+
+By default, keys can be used to encrypt anything.
+To restrict a key, set its allowed usage through the `spec.usage` object on the key:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: my-key
+spec:
+  keyType: AutoGenerated
+  usage:
+    configuration: true # <.>
+    key: true # <.>
+    log: true # <.>
+    audit: true # <.>
+    allBuckets: true # <.>
+----
+
+<.> The `spec.usage.configuration` field defines whether the key should be used for configurations. This is set to true by default.
+<.> The `spec.usage.key` field defines whether the key should be used for keys. This is set to true by default.
+<.> The `spec.usage.log` field defines whether the key should be used for logs. This is set to true by default.
+<.> The `spec.usage.audit` field defines whether the key should be used for audit. This is set to true by default.
+<.> The `spec.usage.allBuckets` field defines whether the key should be used for all buckets. This is set to true by default.
+
+=== Additional Options
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: my-key
+spec:
+  keyType: AutoGenerated
+  autoGenerated:
+    rotation:
+      intervalDays: 30 # <.>
+      startTime: "2025-01-01T00:00:00Z" # <.>
+    canBeCached: true
+----
+
+<.> The `spec.autoGenerated.rotation.intervalDays` field defines the interval in days at which the key should be rotated.
+<.> The `spec.autoGenerated.rotation.startTime` field defines the first time at which the key rotation will start.
+
+==== Key Encryption with Another Key
+
+For enhanced security, AutoGenerated keys can be encrypted with another encryption key instead of the cluster's master password:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: master-key
+spec:
+  keyType: AutoGenerated
+  usage: # <.>
+    configuration: false
+    key: true # <.>
+    log: false
+    audit: false
+    allBuckets: false
+---
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: bucket-key
+spec:
+  keyType: AutoGenerated
+  autoGenerated:
+    encryptWithKey: master-key # <.>
+  usage:
+    configuration: false
+    key: false
+    log: false
+    audit: false
+    allBuckets: true # <.>
+----
+<.> Restrict the master key's usage to only encrypt other keys (Key Encryption Key).
+<.> Allow this key to be used for encrypting other keys.
+<.> Encrypt this key using the `master-key` encryption key.
+<.> This key will only be used to encrypt bucket data.
+
+== Managing AWS KMS Keys
+
+AWS Key Management Service (KMS) is a fully managed service that makes it easy to create, manage, and control cryptographic keys used to protect your data.
+AWS KMS keys can be used to encrypt the data at rest in the Couchbase cluster.
+To use AWS KMS keys, you must provide a way to authenticate with AWS, either by using IMDS or by providing a secret with credentials.
+
+=== Prerequisites
+
+* An AWS account with KMS key creation permissions
+* A KMS key created in AWS
+* Either:
+  - AWS credentials with permission to use the KMS key, or
+  - IAM role attached to the Kubernetes nodes with KMS permissions (for IMDS)
+
+=== Basic Example with AWS Credentials
+
+To provide AWS credentials via a Kubernetes Secret, create a secret containing an AWS credentials file.
+The credentials file should follow the standard AWS credentials format:
+
+[source,ini]
+----
+[default]
+aws_access_key_id = YOUR_ACCESS_KEY_ID
+aws_secret_access_key = YOUR_SECRET_ACCESS_KEY
+----
+
+The secret can be created using the following command:
+
+.Step 1: Create an AWS credentials secret
+[source,bash]
+----
+kubectl create secret generic aws-credentials \
+  --from-file=credentials=/path/to/.aws/credentials
+----
+
+.Step 2: Create the encryption key resource
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: my-aws-key
+spec:
+  keyType: AWS # <.>
+  awsKey:
+    keyARN: "arn:aws:kms:us-east-1:123456789012:key/abcd1234-ab12-cd34-ef56-abcdef123456" # <.>
+    keyRegion: "us-east-1" # <.>
+    credentialsSecret: "aws-credentials" # <.>
+    profileName: # <.>
+----
+<.> Specifies that this is an AWS KMS key.
+<.> The ARN of your KMS key from AWS.
+<.> The AWS region where the KMS key is located.
+<.> The name of the Kubernetes secret containing AWS credentials.
+<.> The optional profile name to use for the AWS credentials if multiple profiles are present in the credentials file.
+
+=== Authenticating with IMDS
+
+When running in AWS (EKS or Kubernetes on EC2), you can use Instance Metadata Service (IMDS) to authenticate using the IAM role attached to the nodes:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: my-aws-key-imds
+spec:
+  keyType: AWS
+  awsKey:
+    keyARN: "arn:aws:kms:us-east-1:123456789012:key/abcd1234-ab12-cd34-ef56-abcdef123456"
+    keyRegion: "us-east-1"
+    useIMDS: true # <.>
+----
+<.> Enable authentication using IMDS. No credentials secret is required.
+
+== Managing KMIP Keys
+
+Key Management Interoperability Protocol (KMIP) is an OASIS standard for communication between key management systems and applications.
+KMIP allows you to use external key management solutions from vendors like Thales, IBM, or HashiCorp Vault.
+
+=== Prerequisites
+
+* A KMIP-compliant server
+* Client certificate and private key in PKCS#8 format
+* KMIP server host and port
+* A key ID for an existing key on the KMIP server
+
+=== Basic Example
+
+.Step 1: Create a Kubernetes secret with client credentials
+
+[source,yaml]
+----
+apiVersion: v1
+kind: Secret
+metadata:
+  name: kmip-client-secret
+type: Opaque
+data:
+  passphrase:
+  tls.key:
+  tls.crt:
+----
+
+The secret must contain three keys:
+
+* `tls.crt` - The client certificate
+* `tls.key` - The client private key in encrypted PKCS#8 format
+* `passphrase` - The passphrase for decrypting the private key
+
+.Step 2: Create the KMIP encryption key resource
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: my-kmip-key
+spec:
+  keyType: KMIP # <.>
+  kmipKey:
+    host: "kmip.example.com" # <.>
+    port: 5696 # <.>
+    timeoutInMs: 5000 # <.>
+    clientSecret: "kmip-client-secret" # <.>
+    verifyWithSystemCA: true # <.>
+    verifyWithCouchbaseCA: true # <.>
+    keyID: "existing-key-identifier" # <.>
+----
+<.> Specifies that this is a KMIP-managed key.
+<.> The hostname of your KMIP server.
+<.> The port number of your KMIP server (standard KMIP port is 5696).
+<.> Connection timeout in milliseconds (must be between 1000 and 300000).
+<.> The name of the Kubernetes secret containing client certificates, matching the secret created in Step 1.
+<.> Verify the KMIP server certificate against the system CA bundle.
+<.> Verify the KMIP server certificate against the Couchbase CA bundle.
+<.> The unique identifier of the existing key in the KMIP server.
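+
+As an alternative to writing the Step 1 Secret manifest by hand, you can create the same secret directly from the certificate files.
+This is a minimal sketch; the file paths and passphrase are illustrative.
+
+[source,console]
+----
+kubectl create secret generic kmip-client-secret \
+  --from-file=tls.crt=/path/to/client.crt \
+  --from-file=tls.key=/path/to/client.pk8 \
+  --from-literal=passphrase='my-passphrase'
+----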
+
+=== Encryption Approaches
+
+KMIP supports two encryption approaches:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseEncryptionKey
+metadata:
+  name: my-kmip-key-native
+spec:
+  keyType: KMIP
+  kmipKey:
+    host: "kmip.example.com"
+    port: 5696
+    timeoutInMs: 5000
+    clientSecret: "kmip-client-secret"
+    encryptionApproach: NativeEncryptDecrypt # <.>
+----
+<.> Use native encrypt/decrypt operations on the KMIP server.
+
+Available approaches:
+
+* `LocalEncrypt` (default) - Key material is retrieved and encryption/decryption happens locally on Couchbase nodes. Better performance.
+* `NativeEncryptDecrypt` - Encryption/decryption operations are performed by the KMIP server. Key material never leaves the KMIP server. More secure but higher latency.
+
+Choose `NativeEncryptDecrypt` when security requirements mandate that key material never leaves the key management system.
+Choose `LocalEncrypt` for better performance when the security model allows it.
+
+== Encrypting Cluster Data
+
+Once encryption keys are created, you can enable encryption for different types of cluster data.
+
+=== Encrypting Configuration, Logs, and Audit
+
+Configuration, logs, and audit logs can be encrypted at the cluster level:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseCluster
+metadata:
+  name: my-cluster
+spec:
+  security:
+    encryptionAtRest:
+      managed: true
+      configuration: # <.>
+        enabled: true
+        keyName: "my-autogen-key" # <.>
+        keyLifetime: "8760h" # <.>
+        rotationInterval: "720h" # <.>
+      audit: # <.>
+        enabled: true
+        keyName: "my-autogen-key"
+        keyLifetime: "8760h"
+        rotationInterval: "720h"
+      log: # <.>
+        enabled: true
+        keyName: "my-autogen-key"
+        keyLifetime: "8760h"
+        rotationInterval: "720h"
+----
+<.> Enable encryption for cluster configuration.
+<.> Use the `my-autogen-key` encryption key. If not specified, the cluster master password is used.
+<.> Data Encryption Key (DEK) lifetime in hours. Default is 8760 hours (1 year). Must be at least 720 hours (30 days).
+<.> DEK rotation interval in hours. Default is 720 hours (30 days). Must be at least 168 hours (7 days).
+<.> Enable encryption for audit logs.
+<.> Enable encryption for log files.
+
+WARNING: Enabling encryption for log files will break fluent-bit log streaming, as the logs will be encrypted and unreadable by the log collector.
+Only enable log encryption if you don't rely on log streaming.
+
+=== Using Default Encryption (Master Password)
+
+You can enable encryption without specifying a key name.
+In this case, the cluster's master password is used:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseCluster
+metadata:
+  name: my-cluster
+spec:
+  security:
+    encryptionAtRest:
+      managed: true
+      configuration:
+        enabled: true # <.>
+        # keyName not specified - uses master password
+----
+<.> Encrypt configuration using the cluster master password instead of an encryption key.
+
+=== Encrypting Buckets
+
+Individual buckets can be encrypted with specific keys.
+This is configured at the bucket level:
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseBucket
+metadata:
+  name: my-encrypted-bucket
+spec:
+  name: my-encrypted-bucket
+  memoryQuota: 512Mi
+  encryptionAtRest: # <.>
+    keyName: "my-autogen-key" # <.>
+    keyLifetime: "8760h" # <.>
+    rotationInterval: "720h" # <.>
+----
+<.> Enable bucket encryption.
+<.> The encryption key to use.
+<.> DEK lifetime. Default is 8760 hours (1 year).
+<.> DEK rotation interval. Default is 720 hours (30 days).
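+
+== Verifying the Configuration
+
+To confirm that the Operator has accepted your keys and applied them, you can inspect the created resources.
+The commands below are a minimal sketch; the resource names are illustrative, and the lowercase plural resource names follow standard Kubernetes CRD naming.
+
+[source,console]
+----
+kubectl get couchbaseencryptionkeys
+kubectl describe couchbasecluster my-cluster
+kubectl describe couchbasebucket my-encrypted-bucket
+----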
+
diff --git a/modules/ROOT/pages/tutorial-mirwatchdog.adoc b/modules/ROOT/pages/tutorial-mirwatchdog.adoc
new file mode 100644
index 0000000..bc6bf78
--- /dev/null
+++ b/modules/ROOT/pages/tutorial-mirwatchdog.adoc
@@ -0,0 +1,83 @@
+= Monitor for Manual Intervention Scenarios
+
+[abstract]
+Use the Manual Intervention Required Watchdog to monitor cluster scenarios and alert you when the Operator cannot automatically resolve them.
+
+include::partial$tutorial.adoc[]
+
+== Overview
+
+The Operator automatically resolves most cluster issues without user involvement.
+However, some scenarios fall outside the Operator's control and require manual intervention.
+The Manual Intervention Required (MIR) Watchdog monitors for these scenarios and places the cluster into a special MIR state when they occur,
+alerting you to take action.
+
+=== Enable the Manual Intervention Required Watchdog
+
+Enable the Manual Intervention Required Watchdog for each cluster in the `CouchbaseCluster` CRD (Custom Resource Definition).
+
+[source,yaml]
+----
+spec:
+  mirWatchdog:
+    enabled: true # <.>
+    interval: 20s # <.>
+    skipReconciliation: false # <.>
+----
+
+<.> Enable the Manual Intervention Required Watchdog.
+The default value is `false`.
+<.> Set the interval at which the Manual Intervention Required Watchdog checks for MIR conditions.
+The default value is 20 seconds.
+<.> Specify whether the Operator skips reconciliation when the cluster is in the MIR state.
+The default value is `false`.
+
+==== Alerting
+
+The Manual Intervention Required Watchdog is designed to work with additional alerting based on Kubernetes events, cluster conditions, or metrics.
+
+When a cluster enters the MIR state, the Operator performs the following actions:
+
+* Sets the `cluster_manual_intervention` gauge metric to 1.
+
+* Adds the `ManualInterventionRequired` condition to the cluster, when possible, and includes a message that explains the reason for the cluster entering the MIR state.
+
+* Raises a `ManualInterventionRequired` Kubernetes event with a message that describes the reason for manual intervention.
+
+* Optionally skips reconciliation based on the `spec.mirWatchdog.skipReconciliation` setting until you resolve the issue that caused the MIR state.
+
+==== Manual Intervention Required Scenarios
+
+For each check that the Manual Intervention Required Watchdog performs, the defined entry and exit conditions determine whether the cluster enters or exits the MIR state.
+
+The supported Manual Intervention Required Watchdog checks are as follows:
+
+* <<consecutive-rebalance-failures>>
+* <<couchbase-cluster-authentication-failure>>
+* <<down-nodes-when-quorum-is-lost>>
+* <<tls-certificate-expiration>>
+
+[#consecutive-rebalance-failures]
+===== Consecutive Rebalance Failures
+
+* Entry: After the Operator exhausts all rebalance retry attempts in 3 consecutive reconciliation loops.
+* Exit: After the cluster becomes balanced and the Operator activates all nodes.
+
+[#couchbase-cluster-authentication-failure]
+===== Couchbase Cluster Authentication Failure
+
+* Entry: The Operator fails to authenticate with the cluster by using the provided Couchbase cluster credentials.
+* Exit: The Operator successfully authenticates with the cluster.
+
+[#down-nodes-when-quorum-is-lost]
+===== Down Nodes when Quorum is Lost
+
+* Entry: The Operator detects down nodes that it cannot recover.
+* Exit: The Operator detects no unrecoverable down nodes.
+
+[#tls-certificate-expiration]
+===== TLS Certificate Expiration
+
+* Entry: The Operator detects an expired CA (Certificate Authority), client, or server certificate chain, and finds no valid alternative certificates for rotation.
+* Exit: The Operator detects no expired TLS certificates or identifies valid alternative certificates available for rotation.
diff --git a/modules/ROOT/pages/whats-new.adoc b/modules/ROOT/pages/whats-new.adoc
index a36f045..48a10f5 100644
--- a/modules/ROOT/pages/whats-new.adoc
+++ b/modules/ROOT/pages/whats-new.adoc
@@ -1,26 +1,75 @@
= What's New?
-include::partial$constants.adoc[]
+:page-toclevels: 2
-Autonomous Operator {operator-version-minor} introduces our new Cluster Migration functionality well as a number of other improvements and minor fixes.
+Couchbase Kubernetes Operator 2.9 was released in December 2025.
+New features and improvements are described below.
-== Cluster Migration
+For information about fixed and known issues, see the xref:release-notes.adoc[Release Notes].
-Cluster Migration allows you to transfer a currently-unmanaged Couchbase Server cluster over to being managed by the Operator, with zero downtime.
+[#whats-new-290]
+== New Features and Enhancements in 2.9
-See xref:concept-migration.adoc[Couchbase Cluster Migration] for more details.
+Couchbase Kubernetes Operator 2.9 adds support for Couchbase Server 8.0 features and
+introduces a circuit breaker called Manual Intervention Required (MIR) mode.
+This release also improves the upgrade process by introducing a new upgrade object that enables more resilient automated upgrades.
-== Admission Controller Improvements
+=== Support for Couchbase Server 8.0 Features
-The Dynamic Admission Controller (DAC) will now warn if any cluster settings don't match our xref:best-practices.adoc#production-deployments[Best Practices for Production Deployments].
+Couchbase Server 8.0 introduces several key features, listed in xref:server:introduction:whats-new.adoc[What's New in Version 8.0], which are now configurable through Couchbase Kubernetes Operator 2.9.
+These are the highlights:
-The DAC will now prevent changes to the `CouchbaseCluster` spec while a hibernation is taking place.
-If hibernation is enabled while a cluster is migrating, upgrading, scaling, or rebalancing, that process will conclude before the cluster enters hibernation. The DAC will warn when this is the case, and it will be visible in the operator logs.
+* *Encryption at Rest:* Added support for a new `CouchbaseEncryptionKey` custom resource to manage encryption keys
+and to specify which keys are used for each unit, such as buckets, configuration data, and logs.
-To prevent any invalid resources failing to reconcile (i.e. if the DAC is not deployed in the current environment), the DAC Validation is now run at the beginning of the reconciliation loop.
-Any invalid resources will be skipped for reconciliation, marked as `NotValid`, and logged.
+* *Magma as the Default Storage Engine:* In Couchbase Server 8.0, Magma with 128 vBuckets is the default storage engine for new buckets.
+Therefore, for CouchbaseBuckets, the default storage engine is now `magma` and the default `vBucketCount` is `128`.
+Couchbase recommends setting the vBucket count to `1024` for high-throughput workloads.
-== Miscellaneous Improvements
+* *XDCR Conflict Logging for Active-Active Setups:* Added multiple settings to the CouchbaseBucket resource to set up XDCR Conflict Logging.
-
-* Pod Disruption Budgets can now be set per-Server Class by enabling xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-perserviceclasspdb[`couchbaseclusters.spec.perServiceClassPDB`].
-* Sample Buckets can now be loaded via the xref:resource/couchbasebucket.adoc[`CouchbaseBucket`] resource, by using the xref:reference-annotations.adoc#cao-couchbase-comsamplebucket[`cao.couchbase.com/sampleBucket`] annotation.
-* Query-related RBAC roles (`query_use_sequential_scans`, `query_use_sequences`, and `query_manage_sequences`) have now been added to xref:resource/couchbasegroup.adoc#couchbasegroups-spec-roles-name[`couchbasegroups.spec.roles.name`]. \ No newline at end of file
+* Other features, such as enabling and disabling user accounts, and new bucket settings, such as those for warmup.
+
+=== Manual Intervention Required (MIR) Mode
+
+The MirWatchdog is an out-of-band check that provides additional alerting.
+It's used when the Operator cannot reconcile a cluster for reasons outside its control and manual user intervention is required.
+Scenarios include TLS expiration, Couchbase authentication errors, and rebalance failures.
+
+This feature is disabled by default but can be enabled and configured by using the `mirWatchdog` field in the CouchbaseCluster CRD.
+
+For more information, see xref:tutorial-mirwatchdog.adoc[Monitor for Manual Intervention Scenarios].
+
+=== Upgrade Process Improvements
+
+Autonomous upgrades now provide improved user input and control through a new `upgrade` object in the CouchbaseCluster specification,
+which contains all configurations for upgrading a cluster.
+The previous ServerClass image-based approach is now hard-deprecated.
+
+This consolidation clarifies how to control and manage future upgrades.
+The existing upgrade fields `upgradeProcess`, `upgradeStrategy`, and `rollingUpgrade` are deprecated and have been moved under the `upgrade` object.
+
+The `upgrade` object has the following new fields:
+
+* `UpgradeOrderType`: The unit of upgrade that can be a Node, ServerGroup, ServerClass, or Service.
+* `UpgradeOrder`: The upgrade order of specified upgrade units.
+* `stabilizationPeriod`: The wait time in seconds before upgrading the next unit.
+* `previousVersionPodCount`: The number of pods to keep running on the previous version at the end of upgrade.
+
+For more information, see xref:concept-upgrade.adoc[Upgrade Couchbase Server] and xref:howto-couchbase-upgrade.adoc[Upgrade a Couchbase Deployment].
+
+=== Other Improvements
+
+The following are additional important improvements:
+
+* Support for changing the bucket eviction policy online without requiring a bucket restart.
+* The ability to disable DNS resolution verification when creating pods before activating them in the cluster.
+* Support for specifying `overheadMemory` for `autoResourceAllocation` to define a static overhead amount.
+* Operator-backup now supports Periodic Merge in addition to Full Only and Full Incremental.
+This strategy is better suited for some larger clusters.
+You can specify a merge schedule on the CouchbaseBackup resource.
+* Support for specifying the `cao.couchbase.com/additionalArgs` annotation on CouchbaseBackup and CouchbaseRestore resources to pass additional `cbbackupmgr` arguments to the container.
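+
+For example, a minimal sketch of a CouchbaseBackup resource that passes extra `cbbackupmgr` arguments to full backup jobs might look like the following.
+The schedules, flag value, and resource name are illustrative; see xref:reference-annotations.adoc[the annotations reference] for the exact annotation names supported by your deployment.
+
+[source,yaml]
+----
+apiVersion: couchbase.com/v2
+kind: CouchbaseBackup
+metadata:
+  name: my-backup
+  annotations:
+    # Illustrative: extra cbbackupmgr flags for full backup jobs.
+    cao.couchbase.com/full.additionalArgs: "--threads 4"
+spec:
+  strategy: full_incremental
+  full:
+    schedule: "0 3 * * 0"
+  incremental:
+    schedule: "0 3 * * 1-6"
+----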
+ +== More Information + +* xref:server:introduction:whats-new.adoc[What's New in Couchbase Server Version 8.0] +* xref:server:release-notes:relnotes.adoc[Couchbase Server Release Notes Version 8.0] \ No newline at end of file diff --git a/modules/ROOT/partials/autogen-reference.adoc b/modules/ROOT/partials/.autogen-reference.adoc similarity index 100% rename from modules/ROOT/partials/autogen-reference.adoc rename to modules/ROOT/partials/.autogen-reference.adoc diff --git a/modules/ROOT/partials/constants.adoc b/modules/ROOT/partials/constants.adoc index f17e54f..fcae046 100644 --- a/modules/ROOT/partials/constants.adoc +++ b/modules/ROOT/partials/constants.adoc @@ -1,5 +1,5 @@ -:operator-version: 2.8.1 -:operator-version-minor: 2.8 +:operator-version: 2.9.0 +:operator-version-minor: 2.9 :admission-controller-version: 2.8.1 :couchbase-version: 7.6.6 :couchbase-version-upgrade-from: 7.0.0 diff --git a/modules/ROOT/partials/couchbase-operator-release-notes-2.8.0.adoc b/modules/ROOT/partials/couchbase-operator-release-notes-2.8.0.adoc deleted file mode 100644 index c152a67..0000000 --- a/modules/ROOT/partials/couchbase-operator-release-notes-2.8.0.adoc +++ /dev/null @@ -1,84 +0,0 @@ - -[#fixed-issues-v280] -=== Fixed Issues - - -*https://jira.issues.couchbase.com/browse/K8S-3558[K8S-3558^]*:: - -Couchbase Autonomous Operator commences an In-place Upgrade when the cluster is under-resourced. - -*https://jira.issues.couchbase.com/browse/K8S-3579[K8S-3579^]*:: - -Couchbase Autonomous Operator tries to change invalid bucket configurations in a loop. - -*https://jira.issues.couchbase.com/browse/K8S-3591[K8S-3591^]*:: - -Couchbase Autonomous Operator crashes if Incremental Backup is missing schedule. - -*https://jira.issues.couchbase.com/browse/K8S-3596[K8S-3596^]*:: - -Crash in Operator due to invalid memory access. - -*https://jira.issues.couchbase.com/browse/K8S-3605[K8S-3605^]*:: - -Upgrade Swap Rebalance is retried with different parameters on Operator Pod deletion. - -*https://jira.issues.couchbase.com/browse/K8S-3609[K8S-3609^]*:: - - Hibernation fails to bring back any Pod with error extracting image version. - -*https://jira.issues.couchbase.com/browse/K8S-3621[K8S-3621^]*:: - -Shadowed Secret did not get updated. - -*https://jira.issues.couchbase.com/browse/K8S-3632[K8S-3632^]*:: - -Unable to set -1 for Collection-level `maxTTL`. - -*https://jira.issues.couchbase.com/browse/K8S-3639[K8S-3639^]*:: - -Operator loses track of pending Pods when an Eviction of the Operator Pod occurs. - -*https://jira.issues.couchbase.com/browse/K8S-3641[K8S-3641^]*:: - - Crash in `handleVolumeExpansion` if `enableOnlineVolumeExpansion` is True but no Volume Mounts configured. - -*https://jira.issues.couchbase.com/browse/K8S-3655[K8S-3655^]*:: - - Clear Upgrade condition if the Operator is not performing an upgrade. - -*https://jira.issues.couchbase.com/browse/K8S-3659[K8S-3659^]*:: - - When scaling down, Cluster does not maintain balance across Server Groups. - -*https://jira.issues.couchbase.com/browse/K8S-3696[K8S-3696^]*:: - - DAC prevents configuration of multiple XDCR Replications of same Buckets to different remote Clusters. - -*https://jira.issues.couchbase.com/browse/K8S-3772[K8S-3772^]*:: - -Self-Certification: Artifacts PVC should use `--storage-class` parameter when creating the Certification Pod. - -*https://jira.issues.couchbase.com/browse/K8S-3788[K8S-3788^]*:: - -Operator container crashes when there is a managed Scope/Collection Group added for the Ephemeral Bucket. 
-
-
-[#known-issues-v280]
-=== Known Issues
-
-*https://jira.issues.couchbase.com/browse/K8S-3617[K8S-3617^]*::
-
-It's not possible to set xref:resource/couchbasecluster.adoc#couchbaseclusters-spec-cluster-indexer-redistributeindexes[`couchbaseclusters.spec.cluster.indexer.redistributeIndexes`] from True to False during a reconciliation.
-
-*https://jira.issues.couchbase.com/browse/K8S-3908[K8S-3908^]*::
-
-Metric `couchbase_operator_memory_under_management_bytes` is incorrectly showing 0.
-
-*https://jira.issues.couchbase.com/browse/K8S-3909[K8S-3909^]*::
-
-Metric `couchbase_operator_cpu_under_management` is incorrectly showing 0.
-
-*https://jira.issues.couchbase.com/browse/K8S-3910[K8S-3910^]*::
-
-Operator tries to migrate storage backend of buckets even before Couchbase cluster is in 7.6.0+.
diff --git a/modules/ROOT/partials/couchbase-operator-release-notes-2.8.1.adoc b/modules/ROOT/partials/couchbase-operator-release-notes-2.8.1.adoc
deleted file mode 100644
index c52caf7..0000000
--- a/modules/ROOT/partials/couchbase-operator-release-notes-2.8.1.adoc
+++ /dev/null
@@ -1,29 +0,0 @@
-[#release-281]
-== Release 2.8.1 (June 2025)
-
-Couchbase Operator 2.8.1 was released in June 2025.
-This maintenance release contains fixes to issues.
-
-[#fixed-issues-v281]
-== Fixed Issues
-
-
-[#table-fixed-issues-v281,cols="25,66"]
-
-
-*https://jira.issues.couchbase.com/browse/K8S-3793/[K8S-3793^]*::
-
-Fixed a bug in Local Persistent Volume comparison logic that previously triggered unnecessary pod rebalancing when comparing existing and desired states, despite no actual differences being detected.
-
-*https://jira.issues.couchbase.com/browse/K8S-3840/[K8S-3840^]*::
-
-Due to ephemeral volumes removing the staging directory, backups will fail if the defaultRecoveryMethod is set to resume. The admission controller will now invalidate backups using ephemeral volumes unless the defaultRecoveryMethod is set to either purge or none.
-
-*https://jira.issues.couchbase.com/browse/K8S-3889/[K8S-3889^]*::
-
-Inplace upgrades are not supported prior to Couchbase Server Versions 7.2.x due to a required change in the startup files required by Couchbase Server.
-
-
-
-
-
diff --git a/modules/ROOT/partials/couchbase-operator-release-notes-2.9.0.adoc b/modules/ROOT/partials/couchbase-operator-release-notes-2.9.0.adoc
new file mode 100644
index 0000000..ca7f5fb
--- /dev/null
+++ b/modules/ROOT/partials/couchbase-operator-release-notes-2.9.0.adoc
@@ -0,0 +1,47 @@
+[#release-290]
+== Release 2.9.0 (November 2025)
+
+Couchbase Operator 2.9.0 was released in November 2025.
+This release contains new features as well as fixes to issues.
+
+[#fixed-issues-v290]
+=== Fixed Issues
+
+*https://jira.issues.couchbase.com/browse/K8S-3258/[K8S-3258^]*::
+
+Added a new `logging.configNameReleasePrefix` boolean to the Helm chart. It defaults to `false`; setting it to `true` prefixes the fluent-bit config with the release name. Setting it to `true` for existing clusters will trigger recreation of all pods, so it should only be used for new clusters.
+
+*https://jira.issues.couchbase.com/browse/K8S-4091/[K8S-4091^]*::
+
+Updated the `spec.networking.addressFamily` field to accept `IPv4Only`, `IPv4Priority`, `IPv6Only`, and `IPv6Priority`. The existing `IPv4` and `IPv6` values behave as `IPv4Only` and `IPv6Only` respectively, so customers that have already set the field will not see any change.
++
+These older values should be considered deprecated and will be removed in a future release.
++
+Whether you choose a `Priority` or an `Only` variant determines whether `addressFamilyOnly` is set to true or false.
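++
+For illustration, a minimal CouchbaseCluster fragment using one of the new values (a sketch; only the field name and accepted values are taken from this note):
++
+[source,yaml]
+----
+spec:
+  networking:
+    addressFamily: IPv4Priority  # prefer IPv4; the Only variants pin a single address family
+----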
+
+*https://jira.issues.couchbase.com/browse/K8S-4097/[K8S-4097^]*::
+
+Manual Intervention Required (MIR) is a new state that the Couchbase cluster enters in the unlikely scenario that the Operator is unable to reconcile the cluster for reasons outside its control or capabilities, which therefore require manual intervention by a user to resolve.
++
+If the cluster enters this condition, it will:
++
+* Set the `cluster_manual_intervention` metric to 1.
+* Add (where possible) the `ManualInterventionRequired` condition to the cluster, with a message detailing the reason for entering the MIR state.
+* Raise a `ManualInterventionRequired` Kubernetes event, with the event message set to the reason for entering manual intervention.
+* Most importantly, skip reconciliation until the manual intervention required state has been resolved, that is, until the issue that put the cluster into that condition has been fixed.
+
+*https://jira.issues.couchbase.com/browse/K8S-4144/[K8S-4144^]*::
+
+In prior versions of Couchbase Operator, the metrics port annotation (`prometheus.io/port`) was set to 8091, even if TLS was enabled. It is now correctly set to 18091.
+
+*https://jira.issues.couchbase.com/browse/K8S-4161/[K8S-4161^]*::
+
+Added the Analytics `numReplicas` setting to the Couchbase Operator. This enhancement allows users to configure the number of replicas for the Analytics Service, offering improved flexibility and reliability.
+
+*https://jira.issues.couchbase.com/browse/K8S-4270/[K8S-4270^]*::
+
+When applying the CRDs with `kubectl apply`, an error is possible in 2.9 and later. The documentation now notes this and recommends adding `--server-side` to the `kubectl apply` command.
+
diff --git a/preview/HEAD.yml b/preview/HEAD.yml
new file mode 100644
index 0000000..645a3c1
--- /dev/null
+++ b/preview/HEAD.yml
@@ -0,0 +1,6 @@
+sources:
+  docs-server:
+    branches: [release/8.0]
+
+  docs-operator:
+    branches: [release/2.9]