Commit a13bf28 (1 parent: 6c8caf3)

Integrate/Kafka: Implement suggestions by CodeRabbit

2 files changed: +4 −4 lines

docs/integrate/kafka/attic.md (2 additions, 2 deletions)

@@ -10,12 +10,12 @@ data integration, and mission-critical applications.
 
 :::{dropdown} **Managed Kafka**
 Several companies provide managed Kafka services (see the [overview of managed Kafka offerings]
-for a more complete list).
+for examples; note that offerings and features change frequently).
 
 - [Aiven for Apache Kafka]
 - [Amazon Managed Streaming for Apache Kafka (MSK)]
 - [Apache Kafka on Azure]
-- [Azure Event Hubs for Apache Kafka]
+- [Azure Event Hubs for Apache Kafka] (Kafka protocol–compatible service, not Apache Kafka)
 - [Confluent Cloud]
 - [DoubleCloud Managed Service for Apache Kafka]
 :::

docs/integrate/kafka/index.md (2 additions, 2 deletions)

@@ -34,7 +34,7 @@ Apache Kafka is a widely used open-source distributed event-store and streaming
 * **Buffering & decoupling** – Kafka absorbs bursty writes and isolates producers from database load. This is particularly useful when it comes to heavy-load ingestion scenarios.
 * **Scalability end-to-end** – Partitioned topics and a sharded cluster let you scale producers, brokers, consumers, and CrateDB independently.
 * **Near-real-time analytics** – New events are available in CrateDB seconds (or even milliseconds) after production, exposed via SQL to standard BI tools.
-* **Operational resilience** – Use Kafka as a durable buffer between CrateDB and data producers. Idempotent upserts (exactly-once semantics) reduce data-loss and duplication risks.
+* **Operational resilience** – Use Kafka as a durable buffer between producers and CrateDB. Idempotent upserts reduce duplication risks and improve recovery from retries.
 
 ## Common Ingestion Options
 
@@ -80,7 +80,7 @@ The processed results are then written into CrateDB, where they’re immediately
 How you run Kafka and CrateDB depends a lot on your environment and preferences. The most common approaches are:
 
 * **Containerised on-premise** – Run both Kafka and CrateDB on Docker or Kubernetes in your own data centre or private cloud. This gives you the most control, but also means you manage scaling, upgrading, and monitoring.
-* **Managed Kafka services** – Use a provider such as Confluent Cloud or AWS MSK to offload the operational heavy lifting of Kafka. You can still connect these managed clusters directly to a CrateDB deployment that you operate. CrateDB is also available on the major cloud providers as well.
+* **Managed Kafka services** – Use a provider such as Confluent Cloud or AWS MSK to offload Kafka operations. Some services (e.g., Azure Event Hubs) provide Kafka‑compatible endpoints rather than Kafka itself. Any of these can connect to a CrateDB deployment you operate or to CrateDB Cloud.
 * **Managed CrateDB** – Crate\.io offers CrateDB Cloud, which can pair with either self-managed Kafka or managed Kafka services. This option reduces database operations to a minimum.
 * **Hybrid setups** – A common pattern is managed Kafka + self-managed CrateDB, or vice versa, depending on where you want to keep operational control.
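The "idempotent upserts" that the revised resilience bullet refers to can be sketched briefly. This is a minimal in-memory stand-in, not a real consumer: the dict plays the role of a CrateDB table, and with CrateDB itself the same effect would come from a statement along the lines of `INSERT ... ON CONFLICT (id) DO UPDATE SET ...`. The event fields and table layout are illustrative assumptions, not taken from the documentation.

```python
# Sketch: why idempotent (key-based) upserts make Kafka redelivery safe.
# The dict below is an in-memory stand-in for a CrateDB table with
# primary key `id`; all field names here are illustrative assumptions.

def upsert(table: dict, event: dict) -> None:
    """Write an event keyed by its primary key: replays overwrite, never duplicate."""
    table[event["id"]] = {"value": event["value"], "ts": event["ts"]}

events = [
    {"id": 1, "value": 20.5, "ts": 100},
    {"id": 2, "value": 21.0, "ts": 101},
]

table = {}
for e in events:   # first delivery
    upsert(table, e)
for e in events:   # Kafka redelivers the same batch after a consumer retry
    upsert(table, e)

assert len(table) == 2  # still two rows: the replay changed nothing
```

With append-only inserts the replayed batch would have doubled the row count; keying writes on the primary key is what makes retries harmless.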