Is your feature request related to a problem? Please describe.
First, thank you for the work on Harvest and ONTAP observability.
During our migration of Harvest from VM-based deployment to Kubernetes, we encountered limitations in the current Kubernetes deployment path.
The documented workflow relies on generating Docker Compose from harvest.yml and converting it via Kompose into Kubernetes manifests.
While functional, this approach introduces friction in production environments, especially when:
- Operating multiple ONTAP systems
- Using GitOps workflows
- Running Grafana/Prometheus Operator with declarative dashboards and alerts
- Enforcing Kubernetes production practices (resources, probes, hardened security)
When scaling across multiple ONTAP systems, these constraints increase operational complexity.
Related discussions:
The organization associated with this work is: https://github.com/grnet
Describe the solution you'd like
Introduce (or officially adopt) a Kubernetes-native Helm chart for Harvest.
We have implemented such a chart and are running it in production, monitoring multiple ONTAP clusters, and would like to contribute it upstream.
How the chart addresses the problem
Declarative Multi-Poller Deployment
- Each poller in `values.yaml` generates:
  - A dedicated Deployment
  - A corresponding PodMonitor
  - Optional collector extension ConfigMaps
- Follows the recommended one-poller-per-container pattern
- Adding/removing pollers reconciles automatically via Helm
This provides clear failure domains and removes manual wiring.
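As a rough sketch of what this looks like in practice (the field names below are illustrative assumptions, not the chart's actual schema), each poller becomes one entry in `values.yaml`:

```yaml
# Hypothetical values.yaml sketch: each entry under `pollers` would render
# a dedicated Deployment, a PodMonitor, and optional collector ConfigMaps.
pollers:
  cluster-01:
    addr: ontap-01.example.com        # illustrative ONTAP endpoint
    collectors: [Rest, RestPerf]
    credentialsSecret: ontap-01-creds # illustrative field name
  cluster-02:
    addr: ontap-02.example.com
    collectors: [Rest, RestPerf]
    credentialsSecret: ontap-02-creds
```

Adding or removing an entry and running `helm upgrade` would then reconcile the corresponding resources, with no manual manifest editing.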
Secure & Production-Ready Configuration
- Supports Kubernetes Secrets / env-based credential injection
- Resource requests and limits
- Startup / readiness / liveness probes
- Affinity / topology spread
- Hardened container security:
  - `readOnlyRootFilesystem`
  - `allowPrivilegeEscalation: false`
  - Capability dropping (only `NET_RAW` retained for the ICMP poller status metric)
  - `seccompProfile`
This enables deployment in hardened clusters without post-generation patching.
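A minimal sketch of the hardened settings described above, assuming conventional `values.yaml` keys (the exact schema is an assumption, not the chart's real API):

```yaml
# Illustrative container security and resource settings for a poller pod.
securityContext:
  readOnlyRootFilesystem: true
  allowPrivilegeEscalation: false
  capabilities:
    drop: ["ALL"]
    add: ["NET_RAW"]   # retained only for the ICMP poller status metric
  seccompProfile:
    type: RuntimeDefault
resources:
  requests:
    cpu: 100m
    memory: 128Mi
  limits:
    memory: 512Mi
```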
Monitoring Mixin Workflow
We vendor upstream dashboards and alerts using a Jsonnet mixin approach (similar to ceph-mixin), generate the artifacts via mixtool, and deploy them through Helm templates via GitOps.
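For example, a mixtool-generated dashboard could be shipped as a ConfigMap that the Grafana sidecar discovers by label. This is only a sketch: the `harvest.fullname` helper, the file path, and the `grafana_dashboard` label key are assumptions (the label key is configurable in typical Grafana sidecar setups):

```yaml
# Hypothetical Helm template: wraps generated dashboard JSON in a ConfigMap
# labeled for discovery by the Grafana dashboard sidecar.
apiVersion: v1
kind: ConfigMap
metadata:
  name: {{ include "harvest.fullname" . }}-dashboards
  labels:
    grafana_dashboard: "1"   # label key the sidecar is configured to watch
data:
  harvest-overview.json: |-
{{ .Files.Get "monitoring/dashboards/harvest-overview.json" | indent 4 }}
```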
Describe alternatives you've considered
We evaluated:
- Continuing with the Kompose-based workflow and maintaining local patches
This approach introduces manual steps and splits monitoring logic across repositories.
An official Helm chart upstream would provide a declarative, production-ready Kubernetes deployment path aligned with modern Kubernetes and Prometheus Operator practices.
Additional context
High-level chart structure:
```
├── Chart.yaml
├── values.yaml
├── templates/
│   ├── pollers/
│   ├── admin/
│   ├── monitoring/
│   └── configmap.yaml
├── monitoring/
└── mixin/
```

The chart is currently production-validated and monitors multiple ONTAP clusters using Prometheus Operator and Grafana sidecar dashboards.
We are happy to open a PR and align with the preferred repository structure and CI requirements.