
Add range response KV length as a metric #16881

Closed
tjungblu wants to merge 2 commits

Conversation

@tjungblu (Contributor) commented on Nov 7, 2023:

This adds a metric to allow us to alert on large range responses described in #14809.


This adds a metric to allow us to alert on large range responses described in etcd-io#14809.

Signed-off-by: Thomas Jungblut <[email protected]>
@@ -136,6 +136,9 @@ func (s *EtcdServer) Range(ctx context.Context, r *pb.RangeRequest) (*pb.RangeResponse, error) {
 		err = serr
 		return nil, err
 	}

+	rangeResponseKvCount.WithLabelValues(string(r.Key)).Observe(float64(len(resp.Kvs)))
@tjungblu (Contributor, author) commented on this line:
Given that the key label could cause a very large /metrics response, we might want to only emit this when n > 1000 or so. I also thought about adding this to the expensive-request trace above.

Let me know what you guys think.
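
For reference, a minimal sketch of what the metric definition and a thresholded emission could look like, assuming the prometheus/client_golang API. Apart from rangeResponseKvCount, which appears in the diff above, the names, buckets, and the largeRangeThreshold constant are hypothetical and not taken from this PR:

package etcdserver

import "github.com/prometheus/client_golang/prometheus"

// Hypothetical definition of the histogram referenced in the diff; the help
// text and bucket layout are illustrative only.
var rangeResponseKvCount = prometheus.NewHistogramVec(
	prometheus.HistogramOpts{
		Namespace: "etcd",
		Subsystem: "server",
		Name:      "range_response_kv_count",
		Help:      "Number of key-value pairs returned per range request.",
		// Coarse exponential buckets (1, 4, 16, ..., 65536) keep the number
		// of series per label value small.
		Buckets: prometheus.ExponentialBuckets(1, 4, 9),
	},
	[]string{"key"},
)

func init() {
	prometheus.MustRegister(rangeResponseKvCount)
}

// largeRangeThreshold is a hypothetical cut-off so that only unusually large
// responses create a per-key series, limiting /metrics growth.
const largeRangeThreshold = 1000

// observeRangeResponse records the KV count of a range response, but only
// when it exceeds the threshold discussed above.
func observeRangeResponse(key []byte, kvCount int) {
	if kvCount > largeRangeThreshold {
		rangeResponseKvCount.WithLabelValues(string(key)).Observe(float64(kvCount))
	}
}

Dropping the key label entirely, or bucketing keys by prefix, would be the safer default if a per-key breakdown turns out not to be needed.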

A project Member commented:

Hey @tjungblu, thanks for raising this idea. With the etcd project taking tentative steps towards adopting the Kubernetes enhancements process for new features and meaningful decisions, this addition could be a potential item to first have a KEP for sig/etcd.

Just an idea if you wanted a larger pool of eyes on the proposal. If you would rather just proceed here, personally I am OK with that.

Is there any way you can provide some quantification of what impact this new metric would have on the response size? 🤔
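
As a rough, hypothetical back-of-envelope rather than a measurement from this PR: a Prometheus histogram exposes one line per bucket plus le="+Inf", _sum, and _count for every distinct label value, so with around a dozen buckets each unique key that is ever observed adds roughly 15 lines to /metrics. Ten thousand distinct keys would then add on the order of 150,000 exposition lines, which is why gating the observation on a size threshold, or dropping the key label altogether, matters here.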

@tjungblu marked this pull request as draft on November 9, 2023 at 10:55.
@tjungblu (Contributor, author) commented:

Closing for now, need to rethink this somewhat with our stuff downstream.

@tjungblu closed this on Nov 30, 2023.
Labels: None yet

2 participants