Replace IOPS calculations with benchmarking tool reference
- Remove IOPS calculation section with hardcoded throughput numbers
- Remove scaling guidelines with specific node/throughput estimates
- Add reference to forthcoming BigQuery Ingestor Benchmarking Tool
- Update troubleshooting to recommend benchmarking instead of IOPS checks
- Encourage users to use proper benchmarking for their specific workloads
3. **Benchmark your workload** - use the benchmarking tool to determine optimal cluster sizing
4. **Reduce columns** - fetch only needed columns to reduce network transfer
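
As a purely illustrative sketch of the column-reduction tip (the table, column, and parameter names below are made up, and this is a standalone query rather than the ingestor's own fetch code), naming only the synced columns instead of `SELECT *` cuts both BigQuery scan cost and network transfer:

```
import { BigQuery } from '@google-cloud/bigquery';

// Illustrative only: project, dataset, table, and column names are placeholders.
const bigquery = new BigQuery();

async function fetchChangedRows(since: string) {
  const query = `
    SELECT id, updated_at, status          -- only the columns you actually sync
    FROM \`my_project.my_dataset.orders\`  -- placeholder table
    WHERE updated_at > @since
    ORDER BY updated_at
  `;
  const [rows] = await bigquery.query({ query, params: { since } });
  return rows;
}
```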
### Data Drift Detected
@@ -554,24 +554,20 @@ LIMIT 10;

## Performance Tuning

-### IOPS Calculation
+### Benchmarking Your Workload

-```
-Indexes: 1 primary + 1 timestamp = 2 indexes
-IOPS per record: ~4 IOPS
-Target throughput: 5000 records/sec per node
-Required IOPS: 20,000 per node
-```
+To determine the optimal cluster size and configuration for your specific use case, use the **BigQuery Ingestor Benchmarking Tool** (coming soon). The tool will:

-Learn more about [Harper's storage architecture](https://docs.harperdb.io/docs/reference/storage-algorithm)
+- Measure actual throughput with your data volume and record sizes
+- Test different batch size configurations
+- Recommend optimal cluster sizing based on your target latency
+- Identify storage and network bottlenecks specific to your workload

-### Scaling Guidelines
+Until the benchmarking tool is available, start with the batch size recommendations below and monitor your sync lag to determine if scaling is needed.

-- **3 nodes**: ~15K records/sec total
-- **6 nodes**: ~30K records/sec total
-- **12 nodes**: ~60K records/sec total
+**Note:** Harper doesn't autoscale. Add/remove nodes manually via Fabric UI or self-hosted configuration. Cluster size changes require workload rebalancing (see Limitations).

-**Note:** Harper doesn't autoscale. Add/remove nodes manually via Fabric UI or self-hosted configuration. Cluster size changes require consideration (see Limitations).
+Learn more about [Harper's storage architecture](https://docs.harperdb.io/docs/reference/storage-algorithm)
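
The interim advice in the new text (measure throughput at different batch sizes and watch sync lag while the official tool is pending) can be approximated with a small hand-rolled timing harness. The sketch below is an assumption-laden illustration, not part of the ingestor: `ingestBatch` is a placeholder to swap for your real load path, and the record shape and batch sizes are arbitrary. It simply times a few batch sizes and prints records/sec so configurations can be compared:

```
// Hand-rolled throughput check (illustrative only).
// `ingestBatch` is a placeholder: swap in your real ingest call
// (e.g. an HTTP request to your own ingest endpoint).
type SyncRecord = { id: number; updated_at: string; payload: string };

async function ingestBatch(batch: SyncRecord[]): Promise<void> {
  // Placeholder that just simulates some latency per batch.
  await new Promise((resolve) => setTimeout(resolve, 50));
}

function makeBatch(size: number): SyncRecord[] {
  return Array.from({ length: size }, (_, i) => ({
    id: i,
    updated_at: new Date().toISOString(),
    payload: 'x'.repeat(256), // roughly match your real record size
  }));
}

async function benchmark(batchSizes: number[], batchesPerRun = 20): Promise<void> {
  for (const size of batchSizes) {
    const start = Date.now();
    for (let i = 0; i < batchesPerRun; i++) {
      await ingestBatch(makeBatch(size));
    }
    const seconds = (Date.now() - start) / 1000;
    const recordsPerSec = Math.round((size * batchesPerRun) / seconds);
    console.log(`batch size ${size}: ~${recordsPerSec} records/sec`);
  }
}

benchmark([500, 1000, 5000]).catch(console.error);
```

Run against each candidate batch size (and cluster size) this gives a crude per-workload baseline until the benchmarking tool ships.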