[WIP] test: Increase etcd IOPS for AWS scale jobs #17741
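For context, kops exposes per-member etcd volume settings in the cluster spec, which is the knob this PR is turning. A minimal sketch of that kind of change (field names per the kops `EtcdMemberSpec`; the instance group name and all values are illustrative, not the PR's actual numbers):

```yaml
# Sketch: raise etcd volume IOPS/throughput in a kops cluster spec.
# Values are illustrative, not the ones used by this PR.
etcdClusters:
  - name: main
    etcdMembers:
      - name: a
        instanceGroup: master-us-east-1a   # instance group name assumed
        volumeType: gp3        # gp3 decouples IOPS/throughput from volume size
        volumeSize: 100        # GiB
        volumeIops: 8000
        volumeThroughput: 500  # MiB/s
```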
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by:
The full list of commands accepted by this bot can be found here.

Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing `/approve` in a comment.
/test presubmit-kops-aws-scale-amazonvpc-using-cl2

/test presubmit-kops-aws-scale-amazonvpc-using-cl2

1 similar comment

/test presubmit-kops-aws-scale-amazonvpc-using-cl2

/test presubmit-kops-aws-scale-amazonvpc-using-cl2

1 similar comment

/test presubmit-kops-aws-scale-amazonvpc-using-cl2

/test presubmit-kops-aws-small-scale-amazonvpc-using-cl2

2 similar comments

/test presubmit-kops-aws-small-scale-amazonvpc-using-cl2

/test presubmit-kops-aws-small-scale-amazonvpc-using-cl2

/test presubmit-kops-aws-scale-amazonvpc-using-cl2
@dims @hakman It seems that tests (both presubmits and periodics) are now not even able to create clusters; the error says it is unable to get bucket details. Do we know if these buckets still exist? I don't have access to the account to check this.
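For anyone with credentials on that account, a quick way to check is `head-bucket`; a sketch below, with a placeholder bucket name since the real state-store bucket isn't named in this thread:

```sh
# Returns 0 if the bucket exists and is accessible; a 404 means it is gone,
# a 403 means it exists but these credentials cannot read it.
# "kops-scale-tests-state-store" is a placeholder, not the real bucket name.
aws s3api head-bucket --bucket kops-scale-tests-state-store \
  && echo "bucket exists and is accessible"
```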
/test presubmit-kops-aws-small-scale-amazonvpc-using-cl2

/test presubmit-kops-aws-scale-amazonvpc-using-cl2

1 similar comment

/test presubmit-kops-aws-scale-amazonvpc-using-cl2
@hakuna-matatah Looks like cluster validation passed with 5k nodes. I think this is mostly ready to merge.
Unfortunately, not yet. It appears the Prometheus stack failed to set up; we need to understand why. And somehow the job has been in a running state for the last 20 hours and hasn't dumped the logs yet - https://gcsweb.k8s.io/gcs/kubernetes-ci-logs/pr-logs/pull/kops/17741/presubmit-kops-aws-scale-amazonvpc-using-cl2/2005523352457842688/ - I will re-run to see if that kills the old one and whether the Prometheus stack setup failure is consistent ^^^. It's hard to debug if there are no logs explaining why the Prometheus stack failed to set up.
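In the meantime, whatever artifacts the stuck run did upload can be listed straight from GCS; the path below is taken from the gcsweb link above, which just browses the `kubernetes-ci-logs` bucket:

```sh
# List artifacts uploaded so far by the stuck run.
gsutil ls -r gs://kubernetes-ci-logs/pr-logs/pull/kops/17741/presubmit-kops-aws-scale-amazonvpc-using-cl2/2005523352457842688/
```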
/test presubmit-kops-aws-scale-amazonvpc-using-cl2
@ameukam @BenTheElder It looks like the prow job has been stuck in a running state for the last 20 hours - https://prow.k8s.io/view/gs/kubernetes-ci-logs/pr-logs/pull/kops/17741/presubmit-kops-aws-scale-amazonvpc-using-cl2/2005523352457842688 - I vaguely remember this happening in the past, and there was an infra-side fix for it. Do you happen to know if it has regressed?
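For anyone with access to the prow service cluster, the ProwJob object itself should show whether the run is genuinely still executing or just never had its status synced. A sketch, assuming the k8s prow instance keeps ProwJob objects in the `default` namespace:

```sh
# ProwJobs are CRDs in the prow service cluster; filter by the job-name
# label to find this run and inspect its status fields.
kubectl -n default get prowjobs.prow.k8s.io \
  -l prow.k8s.io/job=presubmit-kops-aws-scale-amazonvpc-using-cl2
```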
@hakuna-matatah: The following tests failed, say `/retest` to rerun all failed tests or `/retest-required` to rerun all mandatory failed tests.

Full PR test history. Your PR dashboard. Please help us cut down on flakes by linking to an open issue when you hit one in your PR.

Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
I remember @upodroid investigating this and it being related to
Double-check that the PVC of the Prometheus pod is up and running; see the sketch below. Also, we should be running 100-node jobs every other hour, like we do for GCE.
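A quick way to check, assuming the clusterloader2 Prometheus stack sits in the usual `monitoring` namespace and uses the prometheus-operator default PVC naming (both are assumptions here):

```sh
# A PVC stuck in Pending usually means no volume could be provisioned
# (missing StorageClass, or no EBS capacity/permissions).
kubectl -n monitoring get pods,pvc
kubectl -n monitoring describe pvc prometheus-k8s-db-prometheus-k8s-0  # name assumed
```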
It appears that the Prometheus stack setup went fine in the last test, but the test itself failed due to API SLO breaches. Unfortunately, we don't have API server audit logs with the kops setup to debug where the latency is coming from. I think even for this we would need API server audit logs.
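For reference, kops can turn on API server audit logging through the cluster spec. A minimal sketch (the paths and the coarse Metadata-level policy are illustrative defaults, not what these jobs currently use):

```yaml
# Sketch: enable kube-apiserver audit logging in a kops cluster spec.
spec:
  kubeAPIServer:
    auditLogPath: /var/log/kube-apiserver-audit.log
    auditLogMaxAge: 2          # days of rotated logs to keep
    auditLogMaxBackups: 2
    auditLogMaxSize: 500       # MiB per file before rotation
    auditPolicyFile: /srv/kubernetes/kube-apiserver/audit-policy.yaml
  fileAssets:
    - name: apiserver-audit-policy
      path: /srv/kubernetes/kube-apiserver/audit-policy.yaml
      roles: [ControlPlane]    # "Master" on older kops releases
      content: |
        apiVersion: audit.k8s.io/v1
        kind: Policy
        rules:
          - level: Metadata    # request metadata only; enough to locate slow calls
```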
To test some theories