diff --git a/contributing/guide/contributing.html b/contributing/guide/contributing.html index 78021c5fc..7dd859810 100644 --- a/contributing/guide/contributing.html +++ b/contributing/guide/contributing.html @@ -1516,7 +1516,7 @@

-

Revised on 2023-06-22 15:42:43 UTC

+

Revised on 2023-06-22 19:18:19 UTC

diff --git a/contributing/guide/full.html b/contributing/guide/full.html index 488deb207..e74083e74 100644 --- a/contributing/guide/full.html +++ b/contributing/guide/full.html @@ -1957,7 +1957,7 @@

-

Revised on 2023-06-22 15:42:41 UTC

+

Revised on 2023-06-22 19:18:16 UTC

@@ -1965,7 +1965,7 @@

diff --git a/docs/operators/in-development/configuring-book.html b/docs/operators/in-development/configuring-book.html index 187987547..621954dbb 100644 --- a/docs/operators/in-development/configuring-book.html +++ b/docs/operators/in-development/configuring-book.html @@ -1169,6 +1169,9 @@

-

Revised on 2023-06-22 15:42:43 UTC

+

Revised on 2023-06-22 19:18:19 UTC

diff --git a/docs/operators/in-development/deploying-book.html b/docs/operators/in-development/deploying-book.html index 2cf7f4077..b696c6bc5 100644 --- a/docs/operators/in-development/deploying-book.html +++ b/docs/operators/in-development/deploying-book.html @@ -42,9 +42,10 @@
  • 5.3. Deploying Kafka
  • 5.4. Deploying Kafka Connect @@ -88,75 +89,81 @@
  • 8.2.2. Default ZooKeeper configuration values
  • -
  • 8.3. Configuring the Entity Operator +
  • 8.3. (Preview) Configuring node pools
  • -
  • 8.4. Configuring Kafka Connect +
  • 8.4. Configuring the Entity Operator
  • -
  • 8.5. Configuring Kafka MirrorMaker 2 +
  • 8.5. Configuring Kafka Connect
  • -
  • 8.6. Configuring Kafka MirrorMaker (deprecated)
  • -
  • 8.7. Configuring the Kafka Bridge
  • -
  • 8.8. Configuring Kafka and ZooKeeper storage +
  • 8.6. Configuring Kafka MirrorMaker 2
  • -
  • 8.9. Configuring CPU and memory resource limits and requests
  • -
  • 8.10. Customizing Kubernetes resources +
  • 8.7. Configuring Kafka MirrorMaker (deprecated)
  • +
  • 8.8. Configuring the Kafka Bridge
  • +
  • 8.9. Configuring Kafka and ZooKeeper storage
  • -
  • 8.11. Configuring pod scheduling +
  • 8.10. Configuring CPU and memory resource limits and requests
  • +
  • 8.11. Customizing Kubernetes resources
  • -
  • 8.12. Configuring log levels +
  • 8.12. Configuring pod scheduling
  • -
  • 8.13. Using ConfigMaps to add configuration +
  • 8.13. Configuring log levels
  • -
  • 8.14. Loading configuration values from external sources +
  • 8.14. Using ConfigMaps to add configuration +
  • +
  • 8.15. Loading configuration values from external sources +
  • @@ -1874,7 +1881,7 @@

    5.3. Deploy
    +

    If you are trying the preview of the node pools feature, you can deploy a Kafka cluster with one or more node pools. +Node pools provide configuration for a set of Kafka nodes. +By using node pools, nodes can have different configuration within the same Kafka cluster.
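To make the idea concrete, the sketch below shows two KafkaNodePool resources that belong to the same Kafka cluster but define different storage sizes. This is an illustration only; the names (my-cluster, pool-a, pool-b) and sizes are placeholders rather than values taken from the example files provided with Strimzi.

# Two node pools attached to the same Kafka cluster through the
# strimzi.io/cluster label; each pool defines its own storage size.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster  # cluster the pool belongs to
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 100Gi
    deleteClaim: false
---
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-b
  labels:
    strimzi.io/cluster: my-cluster
spec:
  replicas: 3
  roles:
    - broker
  storage:
    type: persistent-claim
    size: 500Gi                     # different storage from pool-a
    deleteClaim: false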

    +
    +
    +

    If you haven’t deployed a Kafka cluster as a Kafka resource, you can’t use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of Kubernetes. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by Strimzi, by deploying them as standalone components. @@ -2066,7 +2081,7 @@

    READY shows the number of replicas that are ready/expected. -The deployment is successful when the STATUS shows as Running.

    +The deployment is successful when the STATUS displays as Running.

    @@ -2077,7 +2092,205 @@

    -

    5.3.2. Deploying the Topic Operator using the Cluster Operator

    +

    5.3.2. (Preview) Deploying Kafka node pools

    +
    +

This procedure shows how to deploy Kafka node pools to your Kubernetes cluster using the Cluster Operator. +Node pools represent a distinct group of Kafka nodes within a Kafka cluster that share the same configuration. +For each Kafka node in the node pool, any configuration not defined in the node pool is inherited from the cluster configuration in the Kafka resource.

    +
    +
    + + + + + +
    +
    Note
    +
    +The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them. +
    +
    +
    +

    The deployment uses a YAML file to provide the specification to create a KafkaNodePool resource. +You can use node pools with Kafka clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper for cluster management.

    +
    +
    + + + + + +
    +
    Important
    +
    +KRaft mode is not ready for production in Apache Kafka or in Strimzi. +
    +
    +
    +

    Strimzi provides the following example files that you can use to create a Kafka node pool:

    +
    +
    +
    +
    kafka-with-dual-role-kraft-nodes.yaml
    +
    +

    Deploys a Kafka cluster with one pool of KRaft nodes that share the broker and controller roles.

    +
    +
    kafka-with-kraft.yaml
    +
    +

    Deploys a Kafka cluster with one pool of controller nodes and one pool of broker nodes.

    +
    +
    kafka.yaml
    +
    +

    Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configuration.

    +
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +You don’t need to start using node pools right away. If you decide to use them, you can perform the steps outlined here to deploy a new Kafka cluster with KafkaNodePool resources or migrate your existing Kafka cluster. +
    +
    + +
    + + + + + +
    +
    Note
    +
    +If you want to migrate an existing Kafka cluster to use node pools, see the steps to migrate existing Kafka clusters. +
    +
    +
    +
    Procedure
    +
      +
    1. +

      Enable the KafkaNodePools feature gate.

      +
      +

      If using KRaft mode, enable the UseKRaft feature gate too.

      +
      +
      +
      +
      kubectl set env install/cluster-operator STRIMZI_FEATURE_GATES=""
      +env:
      +  - name: STRIMZI_FEATURE_GATES
      +    value: +KafkaNodePools, +UseKRaft
      +
      +
      +
      +

      This updates the Cluster Operator.

      +
      +
    2. +
    3. +

      Create a node pool.

      +
      +
        +
      • +

        To deploy a Kafka cluster in KRaft mode with a single node pool that uses dual-role nodes:

        +
        +
        +
        kubectl apply -f examples/kafka/nodepools/kafka-with-dual-role-kraft-nodes.yaml
        +
        +
        +
      • +
      • +

        To deploy a Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:

        +
        +
        +
        kubectl apply -f examples/kafka/nodepools/kafka-with-kraft.yaml
        +
        +
        +
      • +
      • +

        To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers:

        +
        +
        +
        kubectl apply -f examples/kafka/nodepools/kafka.yaml
        +
        +
        +
      • +
      +
      +
    4. +
    5. +

      Check the status of the deployment:

      +
      +
      +
      kubectl get pods -n <my_cluster_operator_namespace>
      +
      +
      +
      +
      Output shows the node pool names and readiness
      +
      +
      NAME                        READY   STATUS    RESTARTS
      +my-cluster-entity-operator  3/3     Running   0
      +my-cluster-pool-a-kafka-0   1/1     Running   0
      +my-cluster-pool-a-kafka-1   1/1     Running   0
      +my-cluster-pool-a-kafka-4   1/1     Running   0
      +
      +
      +
      +
        +
      • +

        my-cluster is the name of the Kafka cluster.

        +
      • +
      • +

        pool-a is the name of the node pool.

        +
        +

        A sequential index number starting with 0 identifies each Kafka pod created. +If you are using ZooKeeper, you’ll also see the ZooKeeper pods.

        +
        +
        +

        READY shows the number of replicas that are ready/expected. +The deployment is successful when the STATUS displays as Running.

        +
        +
        +

        Information on the deployment is also shown in the status of the KafkaNodePool resource, including a list of IDs for nodes in the pool.

        +
        +
        + + + + + +
        +
        Note
        +
        +Node IDs are assigned sequentially starting at 0 (zero) across all node pools within a cluster. This means that node IDs might not run sequentially within a specific node pool. If there are gaps in the sequence of node IDs across the cluster, the next node to be added is assigned an ID that fills the gap. When scaling down, the node with the highest node ID within a pool is removed. +
        +
        +
      • +
      +
      +
    6. +
    +
    +
    +
    Additional resources
    +

    Node pool configuration
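As a rough illustration of the deployment information reported in the KafkaNodePool status, the excerpt below shows the kind of output you might see from kubectl get kafkanodepool pool-a -o yaml. The exact field names, such as nodeIds, are assumptions for illustration; check the status on your own cluster.

# Hypothetical status excerpt for a node pool named pool-a
status:
  nodeIds:   # IDs of the Kafka nodes currently assigned to this pool
    - 0
    - 1
    - 4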

    +
    + + @@ -3873,7 +4086,7 @@

    READY shows the number of replicas that are ready/expected. -The deployment is successful when the STATUS shows as Running.

    +The deployment is successful when the STATUS displays as Running.

    @@ -3978,7 +4191,7 @@

    READY shows the number of replicas that are ready/expected. -The deployment is successful when the STATUS shows as Running.

    +The deployment is successful when the STATUS displays as Running.

    @@ -4773,6 +4986,9 @@

    8. Configuring a depl
    @@ -4891,10 +5108,13 @@

    8.1. Us

    Kafka custom resource configuration for a deployment of Kafka. Includes example configuration for an ephemeral or persistent single or multi-node deployment.

  • -

    Kafka custom resource with a deployment configuration for Cruise Control. Includes KafkaRebalance custom resources to generate optimizations proposals from Cruise Control, with example configurations to use the default or user optimization goals.

    +

    (Preview) KafkaNodePool configuration for Kafka nodes in a Kafka cluster. Includes example configuration for nodes in clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper.

    +
  • +
  • +

    Kafka custom resource with a deployment configuration for Cruise Control. Includes KafkaRebalance custom resources to generate optimization proposals from Cruise Control, with example configurations to use the default or user optimization goals.

  • -

    KafkaConnect and KafkaConnector custom resource configuration for a deployment of Kafka Connect. Includes example configuration for a single or multi-node deployment.

    +

    KafkaConnect and KafkaConnector custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment.

  • KafkaBridge custom resource configuration for a deployment of Kafka Bridge.

    @@ -5199,33 +5419,522 @@

    8.2.

    Specified User Operator loggers and log levels.

  • -

    Kafka Exporter configuration. Kafka Exporter is an optional component for extracting metrics data from Kafka brokers, in particular consumer lag data. For Kafka Exporter to be able to work properly, consumer groups need to be in use.

    +

    Kafka Exporter configuration. Kafka Exporter is an optional component for extracting metrics data from Kafka brokers, in particular consumer lag data. For Kafka Exporter to be able to work properly, consumer groups need to be in use.

    +
  • +
  • +

    Optional configuration for Cruise Control, which is used to rebalance the Kafka cluster.

    +
  • + +

    +
    +

    8.2.1. Setting limits on brokers using the Kafka Static Quota plugin

    +
    +

    Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. +You enable the plugin and set limits by configuring the Kafka resource. +You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers.

    +
    +
    +

    You can set byte-rate thresholds for producer and consumer bandwidth. +The total limit is distributed across all clients accessing the broker. +For example, you can set a byte-rate threshold of 40 MBps for producers. +If two producers are running, they are each limited to a throughput of 20 MBps.

    +
    +
    +

    Storage quotas throttle Kafka disk storage limits between a soft limit and hard limit. +The limits apply to all available disk space. +Producers are slowed gradually between the soft and hard limit. +The limits prevent disks filling up too quickly and exceeding their capacity. +Full disks can lead to issues that are hard to rectify. +The hard limit is the maximum storage limit.

    +
    +
    + + + + + +
    +
    Note
    +
    +For JBOD storage, the limit applies across all disks. +If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty. +
    +
    +
    +
    Prerequisites
    +
      +
    • +

      The Cluster Operator that manages the Kafka cluster is running.

      +
    • +
    +
    +
    +
    Procedure
    +
      +
    1. +

      Add the plugin properties to the config of the Kafka resource.

      +
      +

      The plugin properties are shown in this example configuration.

      +
      +
      +
      Example Kafka Static Quota plugin configuration
      +
      +
      apiVersion: kafka.strimzi.io/v1beta2
      +kind: Kafka
      +metadata:
      +  name: my-cluster
      +spec:
      +  kafka:
      +    # ...
      +    config:
      +      client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback (1)
      +      client.quota.callback.static.produce: 1000000 (2)
      +      client.quota.callback.static.fetch: 1000000 (3)
      +      client.quota.callback.static.storage.soft: 400000000000 (4)
      +      client.quota.callback.static.storage.hard: 500000000000 (5)
      +      client.quota.callback.static.storage.check-interval: 5 (6)
      +
      +
      +
      +
        +
      1. +

        Loads the Kafka Static Quota plugin.

        +
      2. +
      3. +

        Sets the producer byte-rate threshold. 1 MBps in this example.

        +
      4. +
      5. +

        Sets the consumer byte-rate threshold. 1 MBps in this example.

        +
      6. +
      7. +

        Sets the lower soft limit for storage. 400 GB in this example.

        +
      8. +
      9. +

        Sets the higher hard limit for storage. 500 GB in this example.

        +
      10. +
      11. +

        Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check.

        +
      12. +
      +
      +
    2. +
    3. +

      Update the resource.

      +
      +
      +
      kubectl apply -f <kafka_configuration_file>
      +
      +
      +
    4. +
    +
    +
    +
    Additional resources
    + +
    +
    +
    +

    8.2.2. Default ZooKeeper configuration values

    +
    +

    When deploying ZooKeeper with Strimzi, some of the default configuration set by Strimzi differs from the standard ZooKeeper defaults. +This is because Strimzi sets a number of ZooKeeper properties with values that are optimized for running ZooKeeper within a Kubernetes environment.

    +
    +
    +

    The default configuration for key ZooKeeper properties in Strimzi is as follows:

    +
    + + +++++ + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + + +
    Table 3. Default ZooKeeper Properties in Strimzi
    PropertyDefault valueDescription

    tickTime

    +

    2000

    +

    The length of a single tick in milliseconds, which determines the length of a session timeout.

    initLimit

    +

    5

    +

    The maximum number of ticks that a follower is allowed to fall behind the leader in a ZooKeeper cluster.

    syncLimit

    +

    2

    +

    The maximum number of ticks that a follower is allowed to be out of sync with the leader in a ZooKeeper cluster.

    autopurge.purgeInterval

    +

    1

    +

    Enables the autopurge feature and sets the time interval in hours for purging the server-side ZooKeeper transaction log.

    admin.enableServer

    +

    false

    +

    Flag to disable the ZooKeeper admin server. The admin server is not used by Strimzi.

    +
    + + + + + +
    +
    Important
    +
    +Modifying these default values as zookeeper.config in the Kafka custom resource may impact the behavior and performance of your ZooKeeper cluster. +
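If you do override one of these defaults, the change is made in the ZooKeeper config block of the Kafka custom resource. The following sketch shows the shape of such an override, using the autopurge.purgeInterval property from the table above; the cluster name and value are placeholders, and, as the note above warns, changing these defaults can affect ZooKeeper behavior and performance.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  zookeeper:
    replicas: 3
    # ...
    config:
      autopurge.purgeInterval: 2  # overrides the Strimzi default of 1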
    +
    +
    + +
    +

    8.3. (Preview) Configuring node pools

    +
    +

    Update the spec properties of the KafkaNodePool custom resource to configure a node pool deployment.

    +
    +
    + + + + + +
    +
    Note
    +
    +The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them. +
    +
    +
    +

    A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. +Each pool has its own unique configuration, which includes mandatory settings for the number of replicas, roles, and storage allocation.

    +
    +
    +

    Optionally, you can also specify values for the following properties:

    +
    +
    +
      +
    • +

      resources to specify memory and cpu requests and limits

      +
    • +
    • +

      template to specify custom configuration for pods and other Kubernetes resources

      +
    • +
    • +

      jvmOptions to specify custom JVM configuration for heap size, runtime and other options

      +
    • +
    +
    +
    +

The Kafka resource represents the configuration for all nodes in the Kafka cluster. +The KafkaNodePool resource represents the configuration for nodes only in the node pool. +If a configuration property is not specified in KafkaNodePool, it is inherited from the Kafka resource. +Configuration specified in the KafkaNodePool resource takes precedence if set in both resources. +For example, if both the node pool and Kafka configuration include jvmOptions, the values specified in the node pool configuration are used. +When -Xmx: 1024m is set in KafkaNodePool.spec.jvmOptions and -Xms: 512m is set in Kafka.spec.kafka.jvmOptions, the node uses the value from its node pool configuration.
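The sketch below illustrates that precedence using the -Xmx and -Xms values from the example above; the resource names are placeholders. Because the node pool defines its own jvmOptions, the jvmOptions block in the Kafka resource is not applied to nodes in the pool.

# KafkaNodePool: jvmOptions set here apply to the nodes in this pool.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  jvmOptions:
    -Xmx: 1024m   # used by the pool nodes
---
# Kafka: jvmOptions set here are ignored for pool nodes, because the
# node pool already defines jvmOptions and the two are not merged.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    jvmOptions:
      -Xms: 512m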

    +
    +
    +

    Properties from Kafka and KafkaNodePool schemas are not combined. +To clarify, if KafkaNodePool.spec.template includes only podSet.metadata.labels, and Kafka.spec.kafka.template includes podSet.metadata.annotations and pod.metadata.labels, the template values from the Kafka configuration are ignored since there is a template value in the node pool configuration.

    +
    +
    +

    Node pools can be used with Kafka clusters that operate in KRaft mode (using Kafka Raft metadata) or use ZooKeeper for cluster management. +If you are using KRaft mode, you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. +If you are using ZooKeeper, nodes must be set as brokers only.

    +
    +
    + + + + + +
    +
    Important
    +
    +KRaft mode is not ready for production in Apache Kafka or in Strimzi. +
    +
    +
    +

    For a deeper understanding of the node pool configuration options, refer to the Strimzi Custom Resource API Reference.

    +
    +
    + + + + + +
    +
    Note
    +
    +While the KafkaNodePools feature gate that enables node pools is in alpha phase, replica and storage configuration properties in the KafkaNodePool resource must also be present in the Kafka resource. The configuration in the Kafka resource is ignored when node pools are used. Similarly, ZooKeeper configuration properties must also be present in the Kafka resource when using KRaft mode. These properties are also ignored. +
    +
    +
    +
    Example configuration for a node pool in a cluster using ZooKeeper
    +
    +
    apiVersion: kafka.strimzi.io/v1beta2
    +kind: KafkaNodePool
    +metadata:
    +  name: pool-a
    +  labels:
    +    strimzi.io/cluster: my-cluster
    +spec:
    +  replicas: 3
    +  roles:
    +    - broker # (1)
    +  storage:
    +    type: jbod
    +    volumes:
    +      - id: 0
    +        type: persistent-claim
    +        size: 100Gi
    +        deleteClaim: false
    +  resources:
    +      requests:
    +        memory: 64Gi
    +        cpu: "8"
    +      limits:
    +        memory: 64Gi
    +        cpu: "12"
    +
    +
    +
    +
      +
    1. +

      Roles for the nodes in the node pool, which can only be broker when using Kafka with ZooKeeper.

      +
    2. +
    +
    +
    +
    Example configuration for a node pool in a cluster using KRaft mode
    +
    +
    apiVersion: kafka.strimzi.io/v1beta2
    +kind: KafkaNodePool
    +metadata:
    +  name: kraft-dual-role # (1)
    +  labels:
    +    strimzi.io/cluster: my-cluster # (2)
    +spec:
    +  replicas: 3 # (3)
    +  roles: # (4)
    +    - controller
    +    - broker
    +  storage: # (5)
    +    type: jbod
    +    volumes:
    +      - id: 0
    +        type: persistent-claim
    +        size: 20Gi
    +        deleteClaim: false
    +  resources: # (6)
    +      requests:
    +        memory: 64Gi
    +        cpu: "8"
    +      limits:
    +        memory: 64Gi
    +        cpu: "12"
    +
    +
    +
    +
      +
    1. +

      Unique name for the node pool.

      +
    2. +
    3. +

      The Kafka cluster the node pool belongs to. A node pool can only belong to a single cluster.

      +
    4. +
    5. +

      Number of replicas for the nodes.

      +
    6. +
    7. +

      Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers.

      +
    8. +
    9. +

      Storage specification for the nodes.

      +
    10. +
    11. +

      Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.

      +
    12. +
    +
    +
    + + + + + +
    +
    Note
    +
    +The configuration for the Kafka resource must be suitable for KRaft mode. Currently, KRaft mode has a number of limitations. +
    +
    +
    +

    8.3.1. (Preview) Moving Kafka nodes between node pools

    +
    +

    This procedure describes how to move nodes between source and target Kafka node pools without downtime. +You create a new node on the target node pool and reassign partitions to move data from the old node on the source node pool. +When the replicas on the new node are in-sync, you can delete the old node.

    +
    +
    +

    The steps assume that you are using ZooKeeper for cluster management. +The operation is not possible for clusters using KRaft mode.

    +
    +
    +

    In this procedure, we start with two node pools:

    +
    +
    +
      +
    • +

      pool-a with three replicas is the target node pool

      +
    • +
    • +

      pool-b with four replicas is the source node pool

      +
    • +
    +
    +
    +

We scale up pool-a, then reassign partitions and scale down pool-b, which results in the following:

    +
    +
    +
      +
    • +

      pool-a with four replicas

      +
    • +
    • +

      pool-b with three replicas

      +
    • +
    +
    +
    + + + + + +
    +
    Note
    +
    +During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. +
    +
    +
    +
    Prerequisites
    + +
    +
    +
    Procedure
    +
      +
    1. +

      Create a new node in the target node pool.

      +
      +

      For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas:

      +
      +
      +
      +
      kubectl scale kafkanodepool pool-a --replicas=4
      +
      +
      +
    2. +
    3. +

      Check the status of the deployment and wait for the pods in the node pool to be created and have a status of READY.

      +
      +
      +
      kubectl get pods -n <my_cluster_operator_namespace>
      +
      +
      +
      +
      Output shows four Kafka nodes in the target node pool
      +
      +
      NAME                        READY   STATUS    RESTARTS
      +my-cluster-pool-a-kafka-0   1/1     Running   0
      +my-cluster-pool-a-kafka-1   1/1     Running   0
      +my-cluster-pool-a-kafka-4   1/1     Running   0
      +my-cluster-pool-a-kafka-5   1/1     Running   0
      +
      +
      +
      + + + + + +
      +
      Note
      +
      +The node IDs appended to each name start at 0 (zero) and run sequentially across the Kafka cluster. This means that they might not run sequentially within a node pool. +
      +
      +
    4. +
    5. +

      Reassign the partitions from the old node to the new node using the kafka-reassign-partitions.sh tool.

      +
      +

      For more information, see Reassigning partitions before removing brokers.

      +
    6. -

      Optional configuration for Cruise Control, which is used to rebalance the Kafka cluster.

      +

      After the reassignment process is complete, reduce the number of Kafka nodes in the source node pool.

      +
      +

      For example, node pool pool-b has four replicas. We remove a node by decreasing the number of replicas:

      +
      +
      +
      +
      kubectl scale kafkanodepool pool-b --replicas=3
      +
      +
      +
      +

      The node with the highest ID within a pool is removed.

      +
      +
      +
      Output shows three Kafka nodes in the source node pool
      +
      +
      NAME                        READY   STATUS    RESTARTS
      +my-cluster-pool-b-kafka-2   1/1     Running   0
      +my-cluster-pool-b-kafka-3   1/1     Running   0
      +my-cluster-pool-b-kafka-6   1/1     Running   0
      +
      +
    +
    -

    8.2.1. Setting limits on brokers using the Kafka Static Quota plugin

    +

    8.3.2. (Preview) Migrating existing Kafka clusters to use Kafka node pools

    -

    Use the Kafka Static Quota plugin to set throughput and storage limits on brokers in your Kafka cluster. -You enable the plugin and set limits by configuring the Kafka resource. -You can set a byte-rate threshold and storage quotas to put limits on the clients interacting with your brokers.

    +

    This procedure describes how to migrate existing Kafka clusters to use Kafka node pools. +After you have updated the Kafka cluster, you can use the node pools to manage the configuration of nodes within each pool.

    -

    You can set byte-rate thresholds for producer and consumer bandwidth. -The total limit is distributed across all clients accessing the broker. -For example, you can set a byte-rate threshold of 40 MBps for producers. -If two producers are running, they are each limited to a throughput of 20 MBps.

    -
    -
    -

    Storage quotas throttle Kafka disk storage limits between a soft limit and hard limit. -The limits apply to all available disk space. -Producers are slowed gradually between the soft and hard limit. -The limits prevent disks filling up too quickly and exceeding their capacity. -Full disks can lead to issues that are hard to rectify. -The hard limit is the maximum storage limit.

    +

    The steps assume that you are using ZooKeeper for cluster management. +The operation is not possible for clusters using KRaft mode.

    @@ -5234,8 +5943,7 @@

    Note

    -For JBOD storage, the limit applies across all disks. -If a broker is using two 1 TB disks and the quota is 1.1 TB, one disk might fill and the other disk will be almost empty. +While the KafkaNodePools feature gate that enables node pools is in alpha phase, replica and storage configuration in the KafkaNodePool resource must also be present in the Kafka resource. The configuration is ignored when node pools are being used.
    @@ -5244,7 +5952,7 @@

    Prerequisites

    @@ -5252,148 +5960,114 @@

    Procedure

    1. -

      Add the plugin properties to the config of the Kafka resource.

      -
      -

      The plugin properties are shown in this example configuration.

      -
      -
      -
      Example Kafka Static Quota plugin configuration
      +

      Create a new KafkaNodePool resource.

      +
      -
      apiVersion: kafka.strimzi.io/v1beta2
      -kind: Kafka
      -metadata:
      -  name: my-cluster
      -spec:
      -  kafka:
      -    # ...
      -    config:
      -      client.quota.callback.class: io.strimzi.kafka.quotas.StaticQuotaCallback (1)
      -      client.quota.callback.static.produce: 1000000 (2)
      -      client.quota.callback.static.fetch: 1000000 (3)
      -      client.quota.callback.static.storage.soft: 400000000000 (4)
      -      client.quota.callback.static.storage.hard: 500000000000 (5)
      -      client.quota.callback.static.storage.check-interval: 5 (6)
      -
      -
      -
      -
        -
      1. -

        Loads the Kafka Static Quota plugin.

        -
      2. -
      3. -

        Sets the producer byte-rate threshold. 1 MBps in this example.

        -
      4. +
        +
        1. -

          Sets the consumer byte-rate threshold. 1 MBps in this example.

          +

          Name the resource kafka.

        2. -

          Sets the lower soft limit for storage. 400 GB in this example.

          +

          Point a strimzi.io/cluster label to your existing Kafka resource.

        3. -

          Sets the higher hard limit for storage. 500 GB in this example.

          +

          Set the replica count and storage configuration to match your current Kafka cluster.

        4. -

          Sets the interval in seconds between checks on storage. 5 seconds in this example. You can set this to 0 to disable the check.

          +

          Set the roles to broker.

        +
      +
      +
      +
      Example configuration for a node pool used in migrating a Kafka cluster
      +
      +
      apiVersion: kafka.strimzi.io/v1beta2
      +kind: KafkaNodePool
      +metadata:
      +  name: kafka
      +  labels:
      +    strimzi.io/cluster: my-cluster
      +spec:
      +  replicas: 3
      +  roles:
      +    - broker
      +  storage:
      +    type: jbod
      +    volumes:
      +      - id: 0
      +        type: persistent-claim
      +        size: 100Gi
      +        deleteClaim: false
      +
      +
    2. -

      Update the resource.

      +

      Apply the KafkaNodePool resource:

      -
      kubectl apply -f <kafka_configuration_file>
      +
      kubectl apply -f <node_pool_configuration_file>
      -
    3. -
    +
    +

    By applying this resource, you switch Kafka to using node pools.

    -
    -
    Additional resources
    -
    +
  • +

    Enable the KafkaNodePools feature gate in the Kafka resource using the strimzi.io/node-pools: enabled annotation.

    +
    +
Example Kafka configuration with the node pools annotation
    +
    +
    apiVersion: kafka.strimzi.io/v1beta2
    +kind: Kafka
    +metadata:
    +  name: my-cluster
    +  annotations:
    +    strimzi.io/node-pools: enabled
    +spec:
    +  kafka:
    +    version: 3.5.0
    +    replicas: 3
    +  # ...
    +  storage:
    +      type: jbod
    +      volumes:
    +      - id: 0
    +        type: persistent-claim
    +        size: 100Gi
    +        deleteClaim: false
    -
    -

    8.2.2. Default ZooKeeper configuration values

    -
    -

    When deploying ZooKeeper with Strimzi, some of the default configuration set by Strimzi differs from the standard ZooKeeper defaults. -This is because Strimzi sets a number of ZooKeeper properties with values that are optimized for running ZooKeeper within a Kubernetes environment.

    +
  • +
  • +

    Apply the Kafka resource:

    +
    +
    +
    kubectl apply -f <kafka_configuration_file>
    -
    -

    The default configuration for key ZooKeeper properties in Strimzi is as follows:

    - - ----- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
    Table 3. Default ZooKeeper Properties in Strimzi
    PropertyDefault valueDescription

    tickTime

    -

    2000

    -

    The length of a single tick in milliseconds, which determines the length of a session timeout.

    initLimit

    -

    5

    -

    The maximum number of ticks that a follower is allowed to fall behind the leader in a ZooKeeper cluster.

    syncLimit

    -

    2

    -

    The maximum number of ticks that a follower is allowed to be out of sync with the leader in a ZooKeeper cluster.

    autopurge.purgeInterval

    -

    1

    -

    Enables the autopurge feature and sets the time interval in hours for purging the server-side ZooKeeper transaction log.

    admin.enableServer

    -

    false

    -

    Flag to disable the ZooKeeper admin server. The admin server is not used by Strimzi.

    -
    - - - - - -
    -
    Important
    -
    -Modifying these default values as zookeeper.config in the Kafka custom resource may impact the behavior and performance of your ZooKeeper cluster. -
    +
  • +
    -

    8.3. Configuring the Entity Operator

    +

    8.4. Configuring the Entity Operator

    Use the entityOperator property in Kafka.spec to configure the Entity Operator. The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster. It comprises the following operators:

    @@ -5477,7 +6151,7 @@

    -

    8.3.1. Configuring the Topic Operator

    +

    8.4.1. Configuring the Topic Operator

    Use topicOperator properties in Kafka.spec.entityOperator to configure the Topic Operator. The following properties are supported:

    @@ -5545,7 +6219,7 @@

    8.3.1. Co

    -

    8.3.2. Configuring the User Operator

    +

    8.4.2. Configuring the User Operator

    Use userOperator properties in Kafka.spec.entityOperator to configure the User Operator. The following properties are supported:

    @@ -5606,7 +6280,7 @@

    8.3.2. Conf

    -

    8.4. Configuring Kafka Connect

    +

    8.5. Configuring Kafka Connect

    Update the spec properties of the KafkaConnect custom resource to configure your Kafka Connect deployment.

    @@ -5830,7 +6504,7 @@

    -

    8.4.1. Configuring Kafka Connect user authorization

    +

    8.5.1. Configuring Kafka Connect user authorization

    This procedure describes how to authorize user access to Kafka Connect.

    @@ -6014,7 +6688,7 @@

    -

    8.5. Configuring Kafka MirrorMaker 2

    +

    8.6. Configuring Kafka MirrorMaker 2

    Update the spec properties of the KafkaMirrorMaker2 custom resource to configure your MirrorMaker 2 deployment. MirrorMaker 2 uses source cluster configuration for data consumption and target cluster configuration for data output.

    @@ -6398,7 +7072,7 @@

    -

    8.5.1. Configuring active/active or active/passive modes

    +

    8.6.1. Configuring active/active or active/passive modes

    You can use MirrorMaker 2 in active/passive or active/active cluster configurations.

    @@ -6458,7 +7132,7 @@
    -

    8.5.2. Configuring MirrorMaker 2 connectors

    +

    8.6.2. Configuring MirrorMaker 2 connectors

Use MirrorMaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters.

    @@ -7012,7 +7686,7 @@
    -

    8.5.3. Configuring MirrorMaker 2 connector producers and consumers

    +

    8.6.3. Configuring MirrorMaker 2 connector producers and consumers

    MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings.

    @@ -7177,7 +7851,7 @@

    -

    8.5.4. Specifying a maximum number of data replication tasks

    +

    8.6.4. Specifying a maximum number of data replication tasks

    Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. @@ -7282,7 +7956,7 @@

    -

    8.5.5. Synchronizing ACL rules for remote topics

    +

    8.6.5. Synchronizing ACL rules for remote topics

    When using MirrorMaker 2 with Strimzi, it is possible to synchronize ACL rules for remote topics. However, this feature is only available if you are not using the User Operator.

    @@ -7305,7 +7979,7 @@

    -

    8.5.6. Securing a Kafka MirrorMaker 2 deployment

    +

    8.6.6. Securing a Kafka MirrorMaker 2 deployment

    This procedure describes in outline the configuration required to secure a MirrorMaker 2 deployment.

    @@ -7757,7 +8431,7 @@

    -

    8.6. Configuring Kafka MirrorMaker (deprecated)

    +

    8.7. Configuring Kafka MirrorMaker (deprecated)

    Update the spec properties of the KafkaMirrorMaker custom resource to configure your Kafka MirrorMaker deployment.

    @@ -7952,7 +8626,7 @@

    -

    8.7. Configuring the Kafka Bridge

    +

    8.8. Configuring the Kafka Bridge

    Update the spec properties of the KafkaBridge custom resource to configure your Kafka Bridge deployment.

    @@ -8107,7 +8781,7 @@

    -

    8.8. Configuring Kafka and ZooKeeper storage

    +

    8.9. Configuring Kafka and ZooKeeper storage

    As stateful applications, Kafka and ZooKeeper store data on disk. Strimzi supports three storage types for this data:

    @@ -8170,7 +8844,7 @@

    8.8.

    -

    8.8.1. Data storage considerations

    +

    8.9.1. Data storage considerations

    For Strimzi to work well, an efficient data storage infrastructure is essential. We strongly recommend using block storage. @@ -8240,7 +8914,7 @@

    Disk usage
    -

    8.8.2. Ephemeral storage

    +

    8.9.2. Ephemeral storage

    Ephemeral data storage is transient. All pods on a node share a local ephemeral storage space. @@ -8307,7 +8981,7 @@

    -

    8.8.3. Persistent storage

    +

    8.9.3. Persistent storage

    Persistent data storage retains data in the event of system disruption. For pods that use persistent data storage, data is persisted across pod failures and restarts.

    @@ -8575,7 +9249,7 @@
    -

    8.8.4. Resizing persistent volumes

    +

    8.9.4. Resizing persistent volumes

    Persistent volumes used by a cluster can be resized without any risk of data loss, as long as the storage infrastructure supports it. Following a configuration update to change the size of the storage, Strimzi instructs the storage infrastructure to make the change. @@ -8698,7 +9372,7 @@
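For example, to grow an existing persistent volume you only change the size value of the persistent-claim storage in the custom resource and re-apply it. The sketch below assumes a Kafka cluster whose brokers currently use 100Gi volumes being increased to 200Gi; persistent volumes can only be increased, not shrunk.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: persistent-claim
      size: 200Gi        # increased from 100Gi
      deleteClaim: false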

    -

    8.8.5. JBOD storage

    +

    8.9.5. JBOD storage

    You can configure Strimzi to use JBOD, a data storage configuration of multiple disks or volumes. JBOD is one approach to providing increased data storage for Kafka brokers. @@ -8776,7 +9450,7 @@

    -

    8.8.6. Adding volumes to JBOD storage

    +

    8.9.6. Adding volumes to JBOD storage

    This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type.
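Adding a volume amounts to appending a new entry with an unused id to the volumes list of the jbod storage and re-applying the resource. A minimal sketch, assuming an existing volume 0 and a new volume 1:

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    storage:
      type: jbod
      volumes:
        - id: 0
          type: persistent-claim
          size: 100Gi
          deleteClaim: false
        - id: 1                     # new volume added to the JBOD array
          type: persistent-claim
          size: 100Gi
          deleteClaim: false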

    @@ -8872,7 +9546,7 @@

    -

    8.8.7. Removing volumes from JBOD storage

    +

    8.9.7. Removing volumes from JBOD storage

This procedure describes how to remove volumes from a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. @@ -8962,7 +9636,7 @@

    -

    8.9. Configuring CPU and memory resource limits and requests

    +

    8.10. Configuring CPU and memory resource limits and requests

    By default, the Strimzi Cluster Operator does not specify CPU and memory resource requests and limits for its deployed operands. Ensuring an adequate allocation of resources is crucial for maintaining stability and achieving optimal performance in Kafka. @@ -8973,7 +9647,7 @@

    8.9.

    -

    8.10. Customizing Kubernetes resources

    +

    8.11. Customizing Kubernetes resources

    A Strimzi deployment creates Kubernetes resources, such as Deployment, Pod, and Service resources. These resources are managed by Strimzi operators. @@ -9017,6 +9691,9 @@

    -

    8.10.1. Customizing the image pull policy

    +

    8.11.1. Customizing the image pull policy

    Strimzi allows you to customize the image pull policy for containers in all pods deployed by the Cluster Operator. The image pull policy is configured using the environment variable STRIMZI_IMAGE_PULL_POLICY in the Cluster Operator deployment. @@ -9102,7 +9779,7 @@

    -

    8.10.2. Applying a termination grace period

    +

    8.11.2. Applying a termination grace period

    Apply a termination grace period to give a Kafka cluster enough time to shut down cleanly.
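The grace period is set with the terminationGracePeriodSeconds property in the pod template of the component, as in the sketch below; the 120-second value is only an illustration.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      pod:
        terminationGracePeriodSeconds: 120  # seconds allowed for a clean shutdown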

    @@ -9145,13 +9822,13 @@

    -

    8.11. Configuring pod scheduling

    +

    8.12. Configuring pod scheduling

    To avoid performance degradation caused by resource conflicts between applications scheduled on the same Kubernetes node, you can schedule Kafka pods separately from critical workloads. This can be achieved by either selecting specific nodes or dedicating a set of nodes exclusively for Kafka.

    -

    8.11.1. Specifying affinity, tolerations, and topology spread constraints

    +

    8.12.1. Specifying affinity, tolerations, and topology spread constraints

Use affinity, tolerations and topology spread constraints to schedule the pods of kafka resources onto nodes. Affinity, tolerations and topology spread constraints are configured using the affinity, tolerations, and topologySpreadConstraint properties in the following resources:

    @@ -9250,7 +9927,7 @@
    -

    8.11.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node

    +

    8.12.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node

    Many Kafka brokers or ZooKeeper nodes can run on the same Kubernetes worker node. If the worker node fails, they will all become unavailable at the same time. @@ -9373,7 +10050,7 @@
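A common way to reduce that risk is required pod anti-affinity in the pod template, keyed on the hostname topology so that no two brokers share a worker node. The sketch below follows standard Kubernetes affinity syntax; the strimzi.io/name: my-cluster-kafka label selector is an assumption, so match it to the labels on your broker pods.

apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      pod:
        affinity:
          podAntiAffinity:
            requiredDuringSchedulingIgnoredDuringExecution:
              - labelSelector:
                  matchLabels:
                    strimzi.io/name: my-cluster-kafka  # broker pods of this cluster
                topologyKey: kubernetes.io/hostname    # at most one broker per node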

    -

    8.11.3. Configuring pod anti-affinity in Kafka components

    +

    8.12.3. Configuring pod anti-affinity in Kafka components

    Pod anti-affinity configuration helps with the stability and performance of Kafka brokers. By using podAntiAffinity, Kubernetes will not schedule Kafka brokers on the same nodes as other workloads. Typically, you want to avoid Kafka running on the same worker node as other network or storage intensive applications such as databases, storage or other messaging platforms.

    @@ -9438,7 +10115,7 @@

    -

    8.11.4. Configuring node affinity in Kafka components

    +

    8.12.4. Configuring node affinity in Kafka components

    Prerequisites
    -
  • 8.3. Configuring the Entity Operator +
  • 8.3. (Preview) Configuring node pools
  • -
  • 8.4. Configuring Kafka Connect +
  • 8.4. Configuring the Entity Operator
  • -
  • 8.5. Configuring Kafka MirrorMaker 2 +
  • 8.5. Configuring Kafka Connect
  • -
  • 8.6. Configuring Kafka MirrorMaker (deprecated)
  • -
  • 8.7. Configuring the Kafka Bridge
  • -
  • 8.8. Configuring Kafka and ZooKeeper storage +
  • 8.6. Configuring Kafka MirrorMaker 2
  • -
  • 8.9. Configuring CPU and memory resource limits and requests
  • -
  • 8.10. Customizing Kubernetes resources +
  • 8.7. Configuring Kafka MirrorMaker (deprecated)
  • +
  • 8.8. Configuring the Kafka Bridge
  • +
  • 8.9. Configuring Kafka and ZooKeeper storage
  • -
  • 8.11. Configuring pod scheduling +
  • 8.10. Configuring CPU and memory resource limits and requests
  • +
  • 8.11. Customizing Kubernetes resources
  • -
  • 8.12. Configuring log levels +
  • 8.12. Configuring pod scheduling
  • -
  • 8.13. Using ConfigMaps to add configuration +
  • 8.13. Configuring log levels
  • -
  • 8.14. Loading configuration values from external sources +
  • 8.14. Using ConfigMaps to add configuration +
  • +
  • 8.15. Loading configuration values from external sources +
  • @@ -2315,7 +2322,7 @@

    5.3. Deploy
    +

    If you are trying the preview of the node pools feature, you can deploy a Kafka cluster with one or more node pools. +Node pools provide configuration for a set of Kafka nodes. +By using node pools, nodes can have different configuration within the same Kafka cluster.

    +
    +
    +

    If you haven’t deployed a Kafka cluster as a Kafka resource, you can’t use the Cluster Operator to manage it. This applies, for example, to a Kafka cluster running outside of Kubernetes. However, you can use the Topic Operator and User Operator with a Kafka cluster that is not managed by Strimzi, by deploying them as standalone components. @@ -2507,7 +2522,7 @@

    READY shows the number of replicas that are ready/expected. -The deployment is successful when the STATUS shows as Running.

    +The deployment is successful when the STATUS displays as Running.

    @@ -2518,7 +2533,205 @@

    -

    5.3.2. Deploying the Topic Operator using the Cluster Operator

    +

    5.3.2. (Preview) Deploying Kafka node pools

    +
    +

This procedure shows how to deploy Kafka node pools to your Kubernetes cluster using the Cluster Operator. +Node pools represent a distinct group of Kafka nodes within a Kafka cluster that share the same configuration. +For each Kafka node in the node pool, any configuration not defined in the node pool is inherited from the cluster configuration in the Kafka resource.

    +
    +
    + + + + + +
    +
    Note
    +
    +The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them. +
    +
    +
    +

    The deployment uses a YAML file to provide the specification to create a KafkaNodePool resource. +You can use node pools with Kafka clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper for cluster management.

    +
    +
    + + + + + +
    +
    Important
    +
    +KRaft mode is not ready for production in Apache Kafka or in Strimzi. +
    +
    +
    +

    Strimzi provides the following example files that you can use to create a Kafka node pool:

    +
    +
    +
    +
    kafka-with-dual-role-kraft-nodes.yaml
    +
    +

    Deploys a Kafka cluster with one pool of KRaft nodes that share the broker and controller roles.

    +
    +
    kafka-with-kraft.yaml
    +
    +

    Deploys a Kafka cluster with one pool of controller nodes and one pool of broker nodes.

    +
    +
    kafka.yaml
    +
    +

    Deploys ZooKeeper with 3 nodes, and 2 different pools of Kafka brokers. Each of the pools has 3 brokers. The pools in the example use different storage configuration.

    +
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +You don’t need to start using node pools right away. If you decide to use them, you can perform the steps outlined here to deploy a new Kafka cluster with KafkaNodePool resources or migrate your existing Kafka cluster. +
    +
    + +
    + + + + + +
    +
    Note
    +
    +If you want to migrate an existing Kafka cluster to use node pools, see the steps to migrate existing Kafka clusters. +
    +
    +
    +
    Procedure
    +
      +
    1. +

      Enable the KafkaNodePools feature gate.

      +
      +

      If using KRaft mode, enable the UseKRaft feature gate too.

      +
      +
      +
      +
      kubectl set env install/cluster-operator STRIMZI_FEATURE_GATES=""
      +env:
      +  - name: STRIMZI_FEATURE_GATES
      +    value: +KafkaNodePools, +UseKRaft
      +
      +
      +
      +

      This updates the Cluster Operator.

      +
      +
    2. +
    3. +

      Create a node pool.

      +
      +
        +
      • +

        To deploy a Kafka cluster in KRaft mode with a single node pool that uses dual-role nodes:

        +
        +
        +
        kubectl apply -f examples/kafka/nodepools/kafka-with-dual-role-kraft-nodes.yaml
        +
        +
        +
      • +
      • +

        To deploy a Kafka cluster in KRaft mode with separate node pools for broker and controller nodes:

        +
        +
        +
        kubectl apply -f examples/kafka/nodepools/kafka-with-kraft.yaml
        +
        +
        +
      • +
      • +

        To deploy a Kafka cluster and ZooKeeper cluster with two node pools of three brokers:

        +
        +
        +
        kubectl apply -f examples/kafka/nodepools/kafka.yaml
        +
        +
        +
      • +
      +
      +
    4. +
    5. +

      Check the status of the deployment:

      +
      +
      +
      kubectl get pods -n <my_cluster_operator_namespace>
      +
      +
      +
      +
      Output shows the node pool names and readiness
      +
      +
      NAME                        READY   STATUS    RESTARTS
      +my-cluster-entity-operator  3/3     Running   0
      +my-cluster-pool-a-kafka-0   1/1     Running   0
      +my-cluster-pool-a-kafka-1   1/1     Running   0
      +my-cluster-pool-a-kafka-4   1/1     Running   0
      +
      +
      +
      +
        +
      • +

        my-cluster is the name of the Kafka cluster.

        +
      • +
      • +

        pool-a is the name of the node pool.

        +
        +

        A sequential index number starting with 0 identifies each Kafka pod created. +If you are using ZooKeeper, you’ll also see the ZooKeeper pods.

        +
        +
        +

        READY shows the number of replicas that are ready/expected. +The deployment is successful when the STATUS displays as Running.

        +
        +
        +

        Information on the deployment is also shown in the status of the KafkaNodePool resource, including a list of IDs for nodes in the pool.

        +
        +
        + + + + + +
        +
        Note
        +
        +Node IDs are assigned sequentially starting at 0 (zero) across all node pools within a cluster. This means that node IDs might not run sequentially within a specific node pool. If there are gaps in the sequence of node IDs across the cluster, the next node to be added is assigned an ID that fills the gap. When scaling down, the node with the highest node ID within a pool is removed. +
        +
        +
      • +
      +
      +
    6. +
    +
    +
    +
    Additional resources
    +

    Node pool configuration

    +
    +

    +
    @@ -4314,7 +4527,7 @@

    READY shows the number of replicas that are ready/expected. -The deployment is successful when the STATUS shows as Running.

    +The deployment is successful when the STATUS displays as Running.

    @@ -4419,7 +4632,7 @@

    READY shows the number of replicas that are ready/expected. -The deployment is successful when the STATUS shows as Running.

    +The deployment is successful when the STATUS displays as Running.

    @@ -5214,6 +5427,9 @@

    8. Configuring a depl

    @@ -5332,10 +5549,13 @@

    8.1. Us

    Kafka custom resource configuration for a deployment of Kafka. Includes example configuration for an ephemeral or persistent single or multi-node deployment.

  • -

    Kafka custom resource with a deployment configuration for Cruise Control. Includes KafkaRebalance custom resources to generate optimizations proposals from Cruise Control, with example configurations to use the default or user optimization goals.

    +

    (Preview) KafkaNodePool configuration for Kafka nodes in a Kafka cluster. Includes example configuration for nodes in clusters that use KRaft (Kafka Raft metadata) mode or ZooKeeper.

    +
  • +
  • +

    Kafka custom resource with a deployment configuration for Cruise Control. Includes KafkaRebalance custom resources to generate optimization proposals from Cruise Control, with example configurations to use the default or user optimization goals.

  • -

    KafkaConnect and KafkaConnector custom resource configuration for a deployment of Kafka Connect. Includes example configuration for a single or multi-node deployment.

    +

    KafkaConnect and KafkaConnector custom resource configuration for a deployment of Kafka Connect. Includes example configurations for a single or multi-node deployment.

  • KafkaBridge custom resource configuration for a deployment of Kafka Bridge.

    @@ -5834,7 +6054,461 @@

    -

    8.3. Configuring the Entity Operator

    +

    8.3. (Preview) Configuring node pools

    +
    +

    Update the spec properties of the KafkaNodePool custom resource to configure a node pool deployment.

    +
    +
    + + + + + +
    +
    Note
    +
    +The node pools feature is available as a preview. Node pools are not enabled by default, so you must enable the KafkaNodePools feature gate before using them. +
    +
    +
    +

    A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. +Each pool has its own unique configuration, which includes mandatory settings for the number of replicas, roles, and storage allocation.

    +
    +
    +

    Optionally, you can also specify values for the following properties:

    +
    +
    +
      +
    • +

      resources to specify memory and cpu requests and limits

      +
    • +
    • +

      template to specify custom configuration for pods and other Kubernetes resources

      +
    • +
    • +

      jvmOptions to specify custom JVM configuration for heap size, runtime and other options

      +
    • +
    +
    +
    +

The Kafka resource represents the configuration for all nodes in the Kafka cluster. +The KafkaNodePool resource represents the configuration for nodes only in the node pool. +If a configuration property is not specified in KafkaNodePool, it is inherited from the Kafka resource. +Configuration specified in the KafkaNodePool resource takes precedence if set in both resources. +For example, if both the node pool and Kafka configuration include jvmOptions, the values specified in the node pool configuration are used. +When -Xmx: 1024m is set in KafkaNodePool.spec.jvmOptions and -Xms: 512m is set in Kafka.spec.kafka.jvmOptions, the node uses the value from its node pool configuration.

    +
    +
    +

    Properties from Kafka and KafkaNodePool schemas are not combined. +To clarify, if KafkaNodePool.spec.template includes only podSet.metadata.labels, and Kafka.spec.kafka.template includes podSet.metadata.annotations and pod.metadata.labels, the template values from the Kafka configuration are ignored since there is a template value in the node pool configuration.
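A minimal sketch of the template example just described, with placeholder label and annotation values; because the node pool defines template, the template values in the Kafka resource are ignored for nodes in the pool.

# KafkaNodePool: the only template values applied to nodes in this pool.
apiVersion: kafka.strimzi.io/v1beta2
kind: KafkaNodePool
metadata:
  name: pool-a
  labels:
    strimzi.io/cluster: my-cluster
spec:
  # ...
  template:
    podSet:
      metadata:
        labels:
          example.com/pool: pool-a          # placeholder label
---
# Kafka: these template values are not merged in for pool nodes,
# because the node pool sets its own template.
apiVersion: kafka.strimzi.io/v1beta2
kind: Kafka
metadata:
  name: my-cluster
spec:
  kafka:
    # ...
    template:
      podSet:
        metadata:
          annotations:
            example.com/owner: team-a       # placeholder annotation
      pod:
        metadata:
          labels:
            example.com/tier: messaging     # placeholder label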

    +
    +
    +

    Node pools can be used with Kafka clusters that operate in KRaft mode (using Kafka Raft metadata) or use ZooKeeper for cluster management. +If you are using KRaft mode, you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. +If you are using ZooKeeper, nodes must be set as brokers only.

    +
    +
    + + + + + +
    +
    Important
    +
    +KRaft mode is not ready for production in Apache Kafka or in Strimzi. +
    +
    +
    +

    For a deeper understanding of the node pool configuration options, refer to the Strimzi Custom Resource API Reference.

    +
    +
    + + + + + +
    +
    Note
    +
    +While the KafkaNodePools feature gate that enables node pools is in alpha phase, replica and storage configuration properties in the KafkaNodePool resource must also be present in the Kafka resource. The configuration in the Kafka resource is ignored when node pools are used. Similarly, ZooKeeper configuration properties must also be present in the Kafka resource when using KRaft mode. These properties are also ignored. +
    +
    +
    +
    Example configuration for a node pool in a cluster using ZooKeeper
    +
    +
    apiVersion: kafka.strimzi.io/v1beta2
    +kind: KafkaNodePool
    +metadata:
    +  name: pool-a
    +  labels:
    +    strimzi.io/cluster: my-cluster
    +spec:
    +  replicas: 3
    +  roles:
    +    - broker # (1)
    +  storage:
    +    type: jbod
    +    volumes:
    +      - id: 0
    +        type: persistent-claim
    +        size: 100Gi
    +        deleteClaim: false
    +  resources:
    +      requests:
    +        memory: 64Gi
    +        cpu: "8"
    +      limits:
    +        memory: 64Gi
    +        cpu: "12"
    +
    +
    +
    +
      +
    1. +

      Roles for the nodes in the node pool, which can only be broker when using Kafka with ZooKeeper.

      +
    2. +
    +
    +
    +
    Example configuration for a node pool in a cluster using KRaft mode
    +
    +
    apiVersion: kafka.strimzi.io/v1beta2
    +kind: KafkaNodePool
    +metadata:
    +  name: kraft-dual-role # (1)
    +  labels:
    +    strimzi.io/cluster: my-cluster # (2)
    +spec:
    +  replicas: 3 # (3)
    +  roles: # (4)
    +    - controller
    +    - broker
    +  storage: # (5)
    +    type: jbod
    +    volumes:
    +      - id: 0
    +        type: persistent-claim
    +        size: 20Gi
    +        deleteClaim: false
    +  resources: # (6)
    +      requests:
    +        memory: 64Gi
    +        cpu: "8"
    +      limits:
    +        memory: 64Gi
    +        cpu: "12"
    +
    +
    +
    +
      +
    1. +

      Unique name for the node pool.

      +
    2. +

      The Kafka cluster the node pool belongs to. A node pool can only belong to a single cluster.

      +
    3. +

      Number of replicas for the nodes.

      +
    4. +

      Roles for the nodes in the node pool. In this example, the nodes have dual roles as controllers and brokers.

      +
    5. +

      Storage specification for the nodes.

      +
    6. +

      Requests for reservation of supported resources, currently cpu and memory, and limits to specify the maximum resources that can be consumed.

      +
    +
    +
    +
    + + + + + +
    +
    Note
    +
    +The configuration for the Kafka resource must be suitable for KRaft mode. Currently, KRaft mode has a number of limitations. +
    +
    +
    +

    8.3.1. (Preview) Moving Kafka nodes between node pools

    +
    +

    This procedure describes how to move nodes between source and target Kafka node pools without downtime. +You create a new node on the target node pool and reassign partitions to move data from the old node on the source node pool. +When the replicas on the new node are in-sync, you can delete the old node.

    +
    +
    +

    The steps assume that you are using ZooKeeper for cluster management. +The operation is not possible for clusters using KRaft mode.

    +
    +
    +

    In this procedure, we start with two node pools:

    +
    +
    +
      +
    • +

      pool-a with three replicas is the target node pool

      +
    • +
    • +

      pool-b with four replicas is the source node pool

      +
    • +
    +
    +
    +

    We scale up pool-a, reassign the partitions, and then scale down pool-b, which results in the following:

    +
    +
    +
      +
    • +

      pool-a with four replicas

      +
    • +
    • +

      pool-b with three replicas

      +
    • +
    +
    +
    + + + + + +
    +
    Note
    +
    +During this process, the ID of the node that holds the partition replicas changes. Consider any dependencies that reference the node ID. +
    +
    +
    +
    Prerequisites
    + +
    +
    +
    Procedure
    +
      +
    1. +

      Create a new node in the target node pool.

      +
      +

      For example, node pool pool-a has three replicas. We add a node by increasing the number of replicas:

      +
      +
      +
      +
      kubectl scale kafkanodepool pool-a --replicas=4
      +
      +
      +
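      If you prefer to change the replica count declaratively rather than with kubectl scale, a patch such as the following sketch achieves the same result (pool name as in this example):

      kubectl patch kafkanodepool pool-a --type merge -p '{"spec":{"replicas":4}}'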
    2. +

      Check the status of the deployment and wait for the pods in the node pool to be created and reach the Running status, showing 1/1 under READY.

      +
      +
      +
      kubectl get pods -n <my_cluster_operator_namespace>
      +
      +
      +
      +
      Output shows four Kafka nodes in the target node pool
      +
      +
      NAME                        READY   STATUS    RESTARTS
      +my-cluster-pool-a-kafka-0   1/1     Running   0
      +my-cluster-pool-a-kafka-1   1/1     Running   0
      +my-cluster-pool-a-kafka-4   1/1     Running   0
      +my-cluster-pool-a-kafka-5   1/1     Running   0
      +
      +
      +
      + + + + + +
      +
      Note
      +
      +The node IDs appended to each name start at 0 (zero) and run sequentially across the Kafka cluster. This means that they might not run sequentially within a node pool. +
      +
      +
    3. +

      Reassign the partitions from the old node to the new node using the kafka-reassign-partitions.sh tool.

      +
      +

      For more information, see Reassigning partitions before removing brokers.

      +
      +
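      As a rough sketch of what the reassignment involves (the interactive client pod my-kafka-client, the bootstrap address, the topics file, and the broker IDs are assumptions for illustration; follow the linked procedure for the full steps), you generate a plan that places replicas only on the brokers that should keep the data, then execute it:

      # Generate a candidate reassignment that uses only the pool-a broker IDs from the output above
      kubectl exec -ti my-kafka-client -- bin/kafka-reassign-partitions.sh \
        --bootstrap-server my-cluster-kafka-bootstrap:9092 \
        --topics-to-move-json-file /tmp/topics.json \
        --broker-list "0,1,4,5" \
        --generate

      # After saving the proposed plan to /tmp/reassignment.json, execute it (and later verify with --verify)
      kubectl exec -ti my-kafka-client -- bin/kafka-reassign-partitions.sh \
        --bootstrap-server my-cluster-kafka-bootstrap:9092 \
        --reassignment-json-file /tmp/reassignment.json \
        --execute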
    4. +

      After the reassignment process is complete, reduce the number of Kafka nodes in the source node pool.

      +
      +

      For example, node pool pool-b has four replicas. We remove a node by decreasing the number of replicas:

      +
      +
      +
      +
      kubectl scale kafkanodepool pool-b --replicas=3
      +
      +
      +
      +

      The node with the highest ID within a pool is removed.

      +
      +
      +
      Output shows three Kafka nodes in the source node pool
      +
      +
      NAME                        READY   STATUS    RESTARTS
      +my-cluster-pool-b-kafka-2   1/1     Running   0
      +my-cluster-pool-b-kafka-3   1/1     Running   0
      +my-cluster-pool-b-kafka-6   1/1     Running   0
      +
      +
      +
    +
    +
    +
    +
    +

    8.3.2. (Preview) Migrating existing Kafka clusters to use Kafka node pools

    +
    +

    This procedure describes how to migrate existing Kafka clusters to use Kafka node pools. +After you have updated the Kafka cluster, you can use the node pools to manage the configuration of nodes within each pool.

    +
    +
    +

    The steps assume that you are using ZooKeeper for cluster management. +The operation is not possible for clusters using KRaft mode.

    +
    +
    + + + + + +
    +
    Note
    +
    +While the KafkaNodePools feature gate that enables node pools is in alpha phase, replica and storage configuration in the KafkaNodePool resource must also be present in the Kafka resource. The configuration is ignored when node pools are being used. +
    +
    +
    +
    Prerequisites
    + +
    +
    +
    Procedure
    +
      +
    1. +

      Create a new KafkaNodePool resource.

      +
      +
      +
      +
        +
      1. +

        Name the resource kafka.

        +
      2. +

        Point a strimzi.io/cluster label to your existing Kafka resource.

        +
      3. +

        Set the replica count and storage configuration to match your current Kafka cluster.

        +
      4. +

        Set the roles to broker.

        +
      +
      +
      +
      +
      +
      +
      Example configuration for a node pool used in migrating a Kafka cluster
      +
      +
      apiVersion: kafka.strimzi.io/v1beta2
      +kind: KafkaNodePool
      +metadata:
      +  name: kafka
      +  labels:
      +    strimzi.io/cluster: my-cluster
      +spec:
      +  replicas: 3
      +  roles:
      +    - broker
      +  storage:
      +    type: jbod
      +    volumes:
      +      - id: 0
      +        type: persistent-claim
      +        size: 100Gi
      +        deleteClaim: false
      +
      +
      +
    2. +

      Apply the KafkaNodePool resource:

      +
      +
      +
      kubectl apply -f <node_pool_configuration_file>
      +
      +
      +
      +

      By applying this resource, you switch Kafka to using node pools.

      +
      +
      +

      Applying the KafkaNodePool resource does not trigger any change or rolling update; the cluster resources remain identical to how they were before.

      +
      +
    3. +

      Update the STRIMZI_FEATURE_GATES environment variable in the Cluster Operator configuration to include +KafkaNodePools.

      +
      +
      +
      env:
      +  - name: STRIMZI_FEATURE_GATES
      +    value: +KafkaNodePools
      +
      +
      +
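      One way to make this change without editing the Deployment YAML by hand is kubectl set env, a sketch that assumes the default Deployment name strimzi-cluster-operator; if the variable already lists other feature gates, keep them in the comma-separated value:

      kubectl set env deployment/strimzi-cluster-operator STRIMZI_FEATURE_GATES="+KafkaNodePools"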
    4. +

      Enable the KafkaNodePools feature gate in the Kafka resource using the strimzi.io/node-pools: enabled annotation.

      +
      +
      Example Kafka cluster configuration with the node pools annotation
      +
      +
      apiVersion: kafka.strimzi.io/v1beta2
      +kind: Kafka
      +metadata:
      +  name: my-cluster
      +  annotations:
      +    strimzi.io/node-pools: enabled
      +spec:
      +  kafka:
      +    version: 3.5.0
      +    replicas: 3
      +    storage:
      +      type: jbod
      +      volumes:
      +        - id: 0
      +          type: persistent-claim
      +          size: 100Gi
      +          deleteClaim: false
      +  # ...
      +
      +
      +
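      If you prefer not to edit the Kafka YAML directly, the annotation can also be added in place, as in this sketch (assuming the cluster is named my-cluster); make sure the annotation is also present in the stored Kafka configuration before you re-apply it in the next step:

      kubectl annotate kafka my-cluster strimzi.io/node-pools=enabled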
    5. +

      Apply the Kafka resource:

      +
      +
      +
      kubectl apply -f <kafka_configuration_file>
      +
      +
      +
    +
    +
    +
    +
  • +
    +

    8.4. Configuring the Entity Operator

    Use the entityOperator property in Kafka.spec to configure the Entity Operator. The Entity Operator is responsible for managing Kafka-related entities in a running Kafka cluster. It comprises the following operators:

    @@ -5918,7 +6592,7 @@

    -

    8.3.1. Configuring the Topic Operator

    +

    8.4.1. Configuring the Topic Operator

    Use topicOperator properties in Kafka.spec.entityOperator to configure the Topic Operator. The following properties are supported:

    @@ -5986,7 +6660,7 @@

    8.3.1. Co

    -

    8.3.2. Configuring the User Operator

    +

    8.4.2. Configuring the User Operator

    Use userOperator properties in Kafka.spec.entityOperator to configure the User Operator. The following properties are supported:

    @@ -6047,7 +6721,7 @@

    8.3.2. Conf

    -

    8.4. Configuring Kafka Connect

    +

    8.5. Configuring Kafka Connect

    Update the spec properties of the KafkaConnect custom resource to configure your Kafka Connect deployment.

    @@ -6271,7 +6945,7 @@

    -

    8.4.1. Configuring Kafka Connect user authorization

    +

    8.5.1. Configuring Kafka Connect user authorization

    This procedure describes how to authorize user access to Kafka Connect.

    @@ -6455,7 +7129,7 @@

    -

    8.5. Configuring Kafka MirrorMaker 2

    +

    8.6. Configuring Kafka MirrorMaker 2

    Update the spec properties of the KafkaMirrorMaker2 custom resource to configure your MirrorMaker 2 deployment. MirrorMaker 2 uses source cluster configuration for data consumption and target cluster configuration for data output.

    @@ -6839,7 +7513,7 @@

    -

    8.5.1. Configuring active/active or active/passive modes

    +

    8.6.1. Configuring active/active or active/passive modes

    You can use MirrorMaker 2 in active/passive or active/active cluster configurations.

    @@ -6899,7 +7573,7 @@
    -

    8.5.2. Configuring MirrorMaker 2 connectors

    +

    8.6.2. Configuring MirrorMaker 2 connectors

    Use Mirrormaker 2 connector configuration for the internal connectors that orchestrate the synchronization of data between Kafka clusters.

    @@ -7453,7 +8127,7 @@
    -

    8.5.3. Configuring MirrorMaker 2 connector producers and consumers

    +

    8.6.3. Configuring MirrorMaker 2 connector producers and consumers

    MirrorMaker 2 connectors use internal producers and consumers. If needed, you can configure these producers and consumers to override the default settings.

    @@ -7618,7 +8292,7 @@

    -

    8.5.4. Specifying a maximum number of data replication tasks

    +

    8.6.4. Specifying a maximum number of data replication tasks

    Connectors create the tasks that are responsible for moving data in and out of Kafka. Each connector comprises one or more tasks that are distributed across a group of worker pods that run the tasks. @@ -7723,7 +8397,7 @@

    -

    8.5.5. Synchronizing ACL rules for remote topics

    +

    8.6.5. Synchronizing ACL rules for remote topics

    When using MirrorMaker 2 with Strimzi, it is possible to synchronize ACL rules for remote topics. However, this feature is only available if you are not using the User Operator.

    @@ -7746,7 +8420,7 @@

    -

    8.5.6. Securing a Kafka MirrorMaker 2 deployment

    +

    8.6.6. Securing a Kafka MirrorMaker 2 deployment

    This procedure describes in outline the configuration required to secure a MirrorMaker 2 deployment.

    @@ -8198,7 +8872,7 @@

    -

    8.6. Configuring Kafka MirrorMaker (deprecated)

    +

    8.7. Configuring Kafka MirrorMaker (deprecated)

    Update the spec properties of the KafkaMirrorMaker custom resource to configure your Kafka MirrorMaker deployment.

    @@ -8393,7 +9067,7 @@

    -

    8.7. Configuring the Kafka Bridge

    +

    8.8. Configuring the Kafka Bridge

    Update the spec properties of the KafkaBridge custom resource to configure your Kafka Bridge deployment.

    @@ -8548,7 +9222,7 @@

    -

    8.8. Configuring Kafka and ZooKeeper storage

    +

    8.9. Configuring Kafka and ZooKeeper storage

    As stateful applications, Kafka and ZooKeeper store data on disk. Strimzi supports three storage types for this data:

    @@ -8611,7 +9285,7 @@

    8.8.

    -

    8.8.1. Data storage considerations

    +

    8.9.1. Data storage considerations

    For Strimzi to work well, an efficient data storage infrastructure is essential. We strongly recommend using block storage. @@ -8681,7 +9355,7 @@

    Disk usage
    -

    8.8.2. Ephemeral storage

    +

    8.9.2. Ephemeral storage

    Ephemeral data storage is transient. All pods on a node share a local ephemeral storage space. @@ -8748,7 +9422,7 @@

    -

    8.8.3. Persistent storage

    +

    8.9.3. Persistent storage

    Persistent data storage retains data in the event of system disruption. For pods that use persistent data storage, data is persisted across pod failures and restarts.

    @@ -9016,7 +9690,7 @@
    -

    8.8.4. Resizing persistent volumes

    +

    8.9.4. Resizing persistent volumes

    Persistent volumes used by a cluster can be resized without any risk of data loss, as long as the storage infrastructure supports it. Following a configuration update to change the size of the storage, Strimzi instructs the storage infrastructure to make the change. @@ -9139,7 +9813,7 @@

    -

    8.8.5. JBOD storage

    +

    8.9.5. JBOD storage

    You can configure Strimzi to use JBOD, a data storage configuration of multiple disks or volumes. JBOD is one approach to providing increased data storage for Kafka brokers. @@ -9217,7 +9891,7 @@

    -

    8.8.6. Adding volumes to JBOD storage

    +

    8.9.6. Adding volumes to JBOD storage

    This procedure describes how to add volumes to a Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type.

    @@ -9313,7 +9987,7 @@

    -

    8.8.7. Removing volumes from JBOD storage

    +

    8.9.7. Removing volumes from JBOD storage

    This procedure describes how to remove volumes from Kafka cluster configured to use JBOD storage. It cannot be applied to Kafka clusters configured to use any other storage type. @@ -9403,7 +10077,7 @@

    -

    8.9. Configuring CPU and memory resource limits and requests

    +

    8.10. Configuring CPU and memory resource limits and requests

    By default, the Strimzi Cluster Operator does not specify CPU and memory resource requests and limits for its deployed operands. Ensuring an adequate allocation of resources is crucial for maintaining stability and achieving optimal performance in Kafka. @@ -9414,7 +10088,7 @@

    8.9.

    -

    8.10. Customizing Kubernetes resources

    +

    8.11. Customizing Kubernetes resources

    A Strimzi deployment creates Kubernetes resources, such as Deployment, Pod, and Service resources. These resources are managed by Strimzi operators. @@ -9458,6 +10132,9 @@

    -

    8.10.1. Customizing the image pull policy

    +

    8.11.1. Customizing the image pull policy

    Strimzi allows you to customize the image pull policy for containers in all pods deployed by the Cluster Operator. The image pull policy is configured using the environment variable STRIMZI_IMAGE_PULL_POLICY in the Cluster Operator deployment. @@ -9543,7 +10220,7 @@

    -

    8.10.2. Applying a termination grace period

    +

    8.11.2. Applying a termination grace period

    Apply a termination grace period to give a Kafka cluster enough time to shut down cleanly.

    @@ -9586,13 +10263,13 @@

    -

    8.11. Configuring pod scheduling

    +

    8.12. Configuring pod scheduling

    To avoid performance degradation caused by resource conflicts between applications scheduled on the same Kubernetes node, you can schedule Kafka pods separately from critical workloads. This can be achieved by either selecting specific nodes or dedicating a set of nodes exclusively for Kafka.

    -

    8.11.1. Specifying affinity, tolerations, and topology spread constraints

    +

    8.12.1. Specifying affinity, tolerations, and topology spread constraints

    Use affinity, tolerations and topology spread constraints to schedule the pods of kafka resources onto nodes. Affinity, tolerations and topology spread constraints are configured using the affinity, tolerations, and topologySpreadConstraint properties in following resources:

    @@ -9691,7 +10368,7 @@
    -

    8.11.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node

    +

    8.12.2. Configuring pod anti-affinity to schedule each Kafka broker on a different worker node

    Many Kafka brokers or ZooKeeper nodes can run on the same Kubernetes worker node. If the worker node fails, they will all become unavailable at the same time. @@ -9814,7 +10491,7 @@

    -

    8.11.3. Configuring pod anti-affinity in Kafka components

    +

    8.12.3. Configuring pod anti-affinity in Kafka components

    Pod anti-affinity configuration helps with the stability and performance of Kafka brokers. By using podAntiAffinity, Kubernetes will not schedule Kafka brokers on the same nodes as other workloads. Typically, you want to avoid Kafka running on the same worker node as other network or storage intensive applications such as databases, storage or other messaging platforms.

    @@ -9879,7 +10556,7 @@

    -

    8.11.4. Configuring node affinity in Kafka components

    +

    8.12.4. Configuring node affinity in Kafka components

    Prerequisites
  • 8. Securing Kafka @@ -1458,7 +1459,7 @@

    -

    For more information, see Feature gates.

    +

    For more information, see Feature gates.

  • @@ -1800,7 +1801,60 @@

    Example YAML

    -

    7.4. Kafka MirrorMaker 2 configuration

    +

    7.4. (Preview) Kafka node pools configuration

    +
    +

    A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. +By using node pools, nodes can have different configuration within the same Kafka cluster. +Configuration options not specified in the node pool are inherited from the Kafka configuration.

    +
    +
    +

    The node pools feature is available as a preview that can be enabled using the KafkaNodePools feature gate. +You can deploy a Kafka cluster with one or more node pools. +The node pool configuration includes mandatory and optional settings. +Configuration for replicas, roles, and storage is mandatory.

    +
    +
    +

    If you are using KRaft mode (which is also available as a preview), you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. +Controller and dual roles are specific to KRaft. +If you are using Kafka clusters that use ZooKeeper for cluster management, you can use node pools that are configured with broker roles only.

    +
    +

    Example YAML showing node pool configuration

    +
    +
    +
    apiVersion: kafka.strimzi.io/v1beta2
    +kind: KafkaNodePool
    +metadata:
    +  name: pool-a
    +  labels:
    +    strimzi.io/cluster: my-cluster
    +spec:
    +  replicas: 3
    +  roles:
    +    - broker
    +  storage:
    +    type: jbod
    +    volumes:
    +      - id: 0
    +        type: persistent-claim
    +        size: 100Gi
    +        deleteClaim: false
    +
    +
    +
    + + + + + +
    +
    Important
    +
    +KRaft mode is not ready for production in Apache Kafka or in Strimzi. +
    +
    +
    +
    +

    7.5. Kafka MirrorMaker 2 configuration

    Kafka MirrorMaker 2 replicates data between two or more active Kafka clusters, within or across data centers. To set up MirrorMaker 2, a source and target (destination) Kafka cluster must be running.

    @@ -1857,7 +1911,7 @@

    Examp

    -

    7.5. Kafka MirrorMaker configuration

    +

    7.6. Kafka MirrorMaker configuration

    Kafka MirrorMaker (also referred to as MirrorMaker 1) uses producers and consumers to replicate data across clusters as follows:

    @@ -1940,7 +1994,7 @@

    Example

    -

    7.6. Kafka Connect configuration

    +

    7.7. Kafka Connect configuration

    Use Strimzi’s KafkaConnect resource to quickly and easily create new Kafka Connect clusters.

    @@ -2339,7 +2393,7 @@

    Kafka Connect API

    -

    7.7. Kafka Bridge configuration

    +

    7.8. Kafka Bridge configuration

    A Kafka Bridge configuration requires a bootstrap server specification for the Kafka cluster it connects to, as well as any encryption and authentication options required.

    @@ -2798,7 +2852,7 @@

    diff --git a/docs/operators/in-development/overview-book.html b/docs/operators/in-development/overview-book.html index 73e48f3f8..a002edd16 100644 --- a/docs/operators/in-development/overview-book.html +++ b/docs/operators/in-development/overview-book.html @@ -51,10 +51,11 @@
  • 7.1. Custom resources
  • 7.2. Common configuration
  • 7.3. Kafka cluster configuration
  • -
  • 7.4. Kafka MirrorMaker 2 configuration
  • -
  • 7.5. Kafka MirrorMaker configuration
  • -
  • 7.6. Kafka Connect configuration
  • -
  • 7.7. Kafka Bridge configuration
  • +
  • 7.4. (Preview) Kafka node pools configuration
  • +
  • 7.5. Kafka MirrorMaker 2 configuration
  • +
  • 7.6. Kafka MirrorMaker configuration
  • +
  • 7.7. Kafka Connect configuration
  • +
  • 7.8. Kafka Bridge configuration
  • 8. Securing Kafka @@ -1017,7 +1018,7 @@

    -

    For more information, see Feature gates.

    +

    For more information, see Feature gates.

  • @@ -1359,7 +1360,60 @@

    Example YAML

    -

    7.4. Kafka MirrorMaker 2 configuration

    +

    7.4. (Preview) Kafka node pools configuration

    +
    +

    A node pool refers to a distinct group of Kafka nodes within a Kafka cluster. +By using node pools, nodes can have different configuration within the same Kafka cluster. +Configuration options not specified in the node pool are inherited from the Kafka configuration.

    +
    +
    +

    The node pools feature is available as a preview that can be enabled using the KafkaNodePools feature gate. +You can deploy a Kafka cluster with one or more node pools. +The node pool configuration includes mandatory and optional settings. +Configuration for replicas, roles, and storage is mandatory.

    +
    +
    +

    If you are using KRaft mode (which is also available as a preview), you can specify roles for all nodes in the node pool to operate as brokers, controllers, or both. +Controller and dual roles are specific to KRaft. +If you are using Kafka clusters that use ZooKeeper for cluster management, you can use node pools that are configured with broker roles only.

    +
    +

    Example YAML showing node pool configuration

    +
    +
    +
    apiVersion: kafka.strimzi.io/v1beta2
    +kind: KafkaNodePool
    +metadata:
    +  name: pool-a
    +  labels:
    +    strimzi.io/cluster: my-cluster
    +spec:
    +  replicas: 3
    +  roles:
    +    - broker
    +  storage:
    +    type: jbod
    +    volumes:
    +      - id: 0
    +        type: persistent-claim
    +        size: 100Gi
    +        deleteClaim: false
    +
    +
    +
    + + + + + +
    +
    Important
    +
    +KRaft mode is not ready for production in Apache Kafka or in Strimzi. +
    +
    +
    +
    +

    7.5. Kafka MirrorMaker 2 configuration

    Kafka MirrorMaker 2 replicates data between two or more active Kafka clusters, within or across data centers. To set up MirrorMaker 2, a source and target (destination) Kafka cluster must be running.

    @@ -1416,7 +1470,7 @@

    Examp

    -

    7.5. Kafka MirrorMaker configuration

    +

    7.6. Kafka MirrorMaker configuration

    Kafka MirrorMaker (also referred to as MirrorMaker 1) uses producers and consumers to replicate data across clusters as follows:

    @@ -1499,7 +1553,7 @@

    Example

    -

    7.6. Kafka Connect configuration

    +

    7.7. Kafka Connect configuration

    Use Strimzi’s KafkaConnect resource to quickly and easily create new Kafka Connect clusters.

    @@ -1898,7 +1952,7 @@

    Kafka Connect API

    -

    7.7. Kafka Bridge configuration

    +

    7.8. Kafka Bridge configuration

    A Kafka Bridge configuration requires a bootstrap server specification for the Kafka cluster it connects to, as well as any encryption and authentication options required.