Add timeout and failureThreshold to multicluster probe #13061
Conversation
- This adds the `probeSpec.failureThreshold` and `probeSpec.timeout` fields to the Link CRD spec.
- Likewise, the `gateway.probe.failureThreshold` and `gateway.probe.timeout` fields are added to the linkerd-multicluster chart; they populate the new `mirror.linkerd.io/probe-failure-threshold` and `mirror.linkerd.io/probe-timeout` annotations on the gateway service (consumed by `linkerd mc link` to populate the probe spec).
- In the probe worker, the hard-coded 50s timeout is replaced with the new timeout config (which now defaults to 30s), and the probe loop is refactored so that the gateway is not marked unhealthy until the consecutive-failure threshold is reached.
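For illustration only, here is a minimal sketch of what the probe spec and the annotation-to-spec mapping could look like. The type name `ProbeSpec`, the field names, the package name, and the duration/integer formats of the annotation values are assumptions for this sketch, not the exact linkerd2 types:

```go
package linksketch

import (
	"strconv"
	"time"
)

// ProbeSpec is a hypothetical sketch of the probe portion of the Link spec
// after this change; names and types are illustrative only.
type ProbeSpec struct {
	Path             string        // HTTP path probed on the remote gateway
	Port             uint32        // gateway probe port
	Period           time.Duration // how often a probe fires (default 3s)
	Timeout          time.Duration // new: per-probe timeout (default 30s)
	FailureThreshold uint32        // new: consecutive failures before the gateway is marked unhealthy
}

// probeSpecFromAnnotations sketches how `linkerd mc link` could read the new
// gateway-service annotations into the spec. The value formats (Go duration
// string, decimal integer) are assumptions; error handling is elided.
func probeSpecFromAnnotations(ann map[string]string, spec *ProbeSpec) {
	if v, ok := ann["mirror.linkerd.io/probe-timeout"]; ok {
		if d, err := time.ParseDuration(v); err == nil {
			spec.Timeout = d
		}
	}
	if v, ok := ann["mirror.linkerd.io/probe-failure-threshold"]; ok {
		if n, err := strconv.ParseUint(v, 10, 32); err == nil {
			spec.FailureThreshold = uint32(n)
		}
	}
}
```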
This looks good but I have questions about the defaults (which preceded this PR). A probe timeout of 30s but interval of 3s means that in the case of a timeout, many probe requests will pile up, right? It seems like the interval should be at least as long as the timeout.
Good insight 👍, yes I've observed a pile-up of one request when timeouts occur; the ticker goroutine blocks on run() consuming the ticks.
That would essentially amount to removing the ticker and simply waiting the jittered period after each probe completes (successfully or unsuccessfully). This could work, but it seems a bit unintuitive that the rate of probes would change depending on how quickly probes are returning. I'm not familiar with any other probe systems that do this. More straightforward would be to increase the probe period to be longer than the timeout. Do we really need a 3 second period? Our timeout suggests that probes could take up to 30s to respond which means that we can't really detect liveness changes faster than that anyway.
Under normal circumstances and with the default config, probes are triggered every 3s to 3.3s (accounting for jitter). I don't expect the probe to take more than that. This rate remains constant unless we're dealing with a pathological case where the probes take more than those 3s, in which case the following probe will be triggered immediately.
I think those long probes are not the common case. Increasing the probe period would introduce an unnecessary liveness delay for the base case IMO.
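For context on the pile-up behavior discussed above, here is a minimal, illustrative sketch of a probe loop with a ticker, a per-probe timeout, and a consecutive-failure threshold. This is not the actual probe worker code; the function names, the `setHealthy` callback, and the plain HTTP GET are assumptions. Because a `time.Ticker` channel has a one-element buffer, at most one tick queues up while a slow probe is in flight, and the gateway only flips to unhealthy once the threshold of consecutive failures is reached:

```go
package linksketch

import (
	"context"
	"net/http"
	"time"
)

// runProbeLoop is an illustrative sketch, not the linkerd2 probe worker.
// While a slow probe blocks this loop, the ticker's buffered channel holds
// at most one pending tick, so further ticks are dropped rather than
// piling up without bound.
func runProbeLoop(ctx context.Context, url string, period, timeout time.Duration,
	failureThreshold uint32, setHealthy func(bool)) {

	ticker := time.NewTicker(period)
	defer ticker.Stop()

	var consecutiveFailures uint32
	for {
		select {
		case <-ctx.Done():
			return
		case <-ticker.C:
			if probeOnce(ctx, url, timeout) {
				consecutiveFailures = 0
				setHealthy(true)
			} else {
				consecutiveFailures++
				// Only mark the gateway unhealthy once the threshold of
				// consecutive failures is reached.
				if consecutiveFailures >= failureThreshold {
					setHealthy(false)
				}
			}
		}
	}
}

// probeOnce issues a single HTTP GET bounded by the per-probe timeout.
func probeOnce(ctx context.Context, url string, timeout time.Duration) bool {
	ctx, cancel := context.WithTimeout(ctx, timeout)
	defer cancel()

	req, err := http.NewRequestWithContext(ctx, http.MethodGet, url, nil)
	if err != nil {
		return false
	}
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		return false
	}
	defer resp.Body.Close()
	return resp.StatusCode == http.StatusOK
}
```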
I don't want to block this PR on these values which predate this change. But I still think that having a period which is shorter than the timeout leads to some pretty unexpected properties.
Followup to #13061. When the CLI queried for Links in the cluster, we assumed the CLI version matched the linkerd version in the cluster, and so we required that all fields be present. This change relaxes that assumption by setting defaults for the new `failureThreshold` and `timeout` fields, so we don't get an error when, for example, running `linkerd multicluster uninstall` on a cluster with an older `Link` CRD.
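A minimal sketch of that defaulting, reusing the `ProbeSpec` sketch above. The helper name and the failure-threshold default value are assumptions (the source only states the 30s timeout default); the real logic lives in the Link decoding code:

```go
package linksketch

import "time"

// Assumed defaults; the timeout default matches the 30s described above,
// while the failure-threshold value here is a placeholder assumption.
const (
	defaultProbeTimeout          = 30 * time.Second
	defaultProbeFailureThreshold = 3
)

// applyProbeDefaults fills in the new fields when they are absent, e.g.
// when the Link was created by an older CLI/CRD that predates them, so
// commands like `linkerd multicluster uninstall` don't fail on older Links.
func applyProbeDefaults(spec *ProbeSpec) {
	if spec.Timeout == 0 {
		spec.Timeout = defaultProbeTimeout
	}
	if spec.FailureThreshold == 0 {
		spec.FailureThreshold = defaultProbeFailureThreshold
	}
}
```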