[tempo-distributed] More autoscaling configurations #3908
Conversation
Commits (force-pushed 8b5bcfc to 0168401):
- …mponents (Signed-off-by: Jordan Simonovski <[email protected]>)
- Signed-off-by: Jordan Simonovski <[email protected]>
- … enabled (Signed-off-by: Jordan Simonovski <[email protected]>)
I'm currently optimising our Tempo setup and have just added KEDA to the compactor. We are also looking for a way to automatically scale the metrics-generator, so this PR would be a great help. I do have one remark, though: would it make sense to also allow the `advanced` section of the ScaledObject to be configured? That would make it possible to set HPA scaling behaviours as well.

I'm definitely open to this if the maintainers are happy with it as an additional config option.
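To illustrate the suggestion above, here is a sketch of a rendered KEDA ScaledObject with the `advanced` section populated. The field names follow the KEDA ScaledObject spec; the target name, replica counts, and behaviour values are illustrative, not part of this PR:

```yaml
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: tempo-compactor   # hypothetical target, for illustration only
spec:
  scaleTargetRef:
    name: tempo-compactor
  minReplicaCount: 2
  maxReplicaCount: 10
  # The `advanced` block is what the comment proposes exposing via the chart;
  # it passes HPA behaviour settings through to the generated HPA.
  advanced:
    horizontalPodAutoscalerConfig:
      behavior:
        scaleDown:
          stabilizationWindowSeconds: 600
          policies:
            - type: Pods
              value: 1
              periodSeconds: 120
  triggers:
    - type: cpu
      metricType: Utilization
      metadata:
        value: "70"
```

Exposing `advanced` verbatim would let users tune scale-down stabilisation without the chart having to model every HPA behaviour field itself.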
```yaml
# -- Init containers for the metrics generator pod
initContainers: []
autoscaling:
  # -- Enable autoscaling for the metrics-generator. WARNING: Autoscaling metrics-generators can result in lost data. Only do this if you know what you're doing.
```
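For context, the full values block presumably mirrors the autoscaling sections the chart already ships for other components; a sketch under that assumption (field names follow the common convention and may differ from the final PR):

```yaml
metricsGenerator:
  autoscaling:
    # -- Enable autoscaling for the metrics-generator. WARNING: Autoscaling
    # metrics-generators can result in lost data. Only do this if you know
    # what you're doing.
    enabled: false
    minReplicas: 1
    maxReplicas: 3
    targetCPUUtilizationPercentage: 60
    targetMemoryUtilizationPercentage: null
```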
> WARNING: Autoscaling metrics-generators can result in lost data.

Where can I find more info about this? I haven't been able to find any documentation about autoscaling the metrics-generator.
Added some more autoscaling configurations to Tempo components.
Happy to discuss further. We've been running the metrics-generator HPA in production without any issues.
The context behind adding KEDA support to the ingester and distributor is to allow more flexible scaling on external metrics, such as pipeline queue backlogs and spikes in ingestion volume.
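As an example of that kind of external-metric scaling, a KEDA Prometheus trigger could scale the distributor on ingest rate rather than CPU. The server address, query, and threshold below are illustrative assumptions, not values from this PR:

```yaml
triggers:
  - type: prometheus
    metadata:
      serverAddress: http://prometheus.monitoring:9090
      # Illustrative: scale out when the total ingest rate exceeds ~30 MB/s.
      query: sum(rate(tempo_distributor_bytes_received_total[1m]))
      threshold: "30000000"
```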