snc-library: Delete the failed pods before checking for available ones #921
base: master
Conversation
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: The full list of commands accepted by this bot can be found here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
/cherry-pick release-4.16
@praveenkumar: once the present PR merges, I will cherry-pick it on top of release-4.16 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
```bash
function all_pods_are_running_completed() {
    local ignoreNamespace=$1
    ${OC} delete pods --field-selector=status.phase=Failed -A
    ! ${OC} get pod --no-headers --all-namespaces --field-selector=metadata.namespace!="${ignoreNamespace}" | grep -v Running | grep -v Completed
}
```
My understanding of all_pods_are_running_completed is that it tells us if there are pods which are not in the Running state nor in the Completed state. If we start deleting these pods before doing the check, is it worth keeping the check at all?
Which pods get to this weird ContainerStatusUnknown state? Is there any explanation for this? Maybe we can only delete pods in this ContainerStatusUnknown state if we are sure it's always a false positive for all_pods_are_running_complete?
My understanding of all_pods_are_running_completed is that it tells us if there are pods which are not in the Running state nor in the Completed state. If we start deleting these pods before doing the check, is it worth keeping the check at all?
We are only deleting the pods which are in the Failed phase; there might be pods in other phases like Pending (https://kubernetes.io/docs/concepts/workloads/pods/pod-lifecycle/#pod-phase).
Which pods get to this weird ContainerStatusUnknown state? Is there any explanation for this?
We did try to describe those pods but didn't get a proper explanation, just the phase as Failed :(
Maybe we can only delete pods in this ContainerStatusUnknown state if we are sure it's always a false positive for all_pods_are_running_complete?
When a container goes to this state, the pod phase becomes Failed, and that is what the field-selector option uses. There is no way to filter on the ContainerStatusUnknown state on the field-selector side. It shouldn't give a false positive, because there we are making sure there is no failed container, but also no pending-state container.
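For reference, a minimal sketch of the phase-based filtering being discussed here (assuming `${OC}` is the `oc` binary used elsewhere in snc-library.sh):

```bash
# Pods stuck in ContainerStatusUnknown report phase=Failed, so this field selector
# catches them; there is no field selector for the container state itself.
${OC} get pods -A --field-selector=status.phase=Failed

# Pods in other non-ready phases (e.g. Pending) are still surfaced by the grep check.
${OC} get pods -A --no-headers | grep -v Running | grep -v Completed
```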
Do we have a strict equivalence between Failed and ContainerStatusUnknown? In other words, if a container is in the Failed phase, does it always have ContainerStatusUnknown, and if a container has ContainerStatusUnknown, is it always in the Failed phase? The latter is true, but is the former also true: does Failed imply ContainerStatusUnknown, and nothing else? We don't want to ignore Failed containers which are not ContainerStatusUnknown.
Do we have a strict equivalence between Failed and ContainerStatusUnknown? In other words, if a container is in the Failed phase, does it always have ContainerStatusUnknown, and if a container has ContainerStatusUnknown, is it always in the Failed phase?
The second assumption is correct: if a container has ContainerStatusUnknown, then it is always in the Failed phase.
The latter is true, but is the former also true: does Failed imply ContainerStatusUnknown, and nothing else? We don't want to ignore Failed containers which are not ContainerStatusUnknown.
There can be many reasons a container ends up in the Failed phase, like resource limitations or a dependent resource not being available, etc., and most of the time it retries and changes phase, e.g. to ContainerCreating or Pending. If we delete a Failed pod which is associated with a deployment (which is the case for all the operators), then a new pod comes up. Even when a pod in the ContainerStatusUnknown state is deleted, if it is associated with a deployment a new one will come up, so it is not that we are ignoring it; we are making sure that once we delete the failed one, the new pods initiated by the respective deployment come up green.
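As an illustration of that recreation behaviour (a sketch only; the namespace and pod name below are hypothetical):

```bash
# Deleting a Failed pod that is owned by a Deployment lets its ReplicaSet
# recreate it with a fresh status.
${OC} delete pod example-operator-abc123 -n openshift-example   # hypothetical names
# Watch the replacement pod come up and reach Running.
${OC} get pods -n openshift-example -w
```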
With this explanation, it makes sense to add a retry_failed_pods method which will do the oc delete. It's really unexpected that the method all_pods_are_running_or_completed would delete pods or ensure the failed pods are restarted.
Added retry_failed_pods, and this method is called from all_pods_are_running_or_completed.
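A minimal sketch of what such a helper could look like (assuming the existing `retry` helper and `${OC}` variable from snc-library.sh; the actual implementation in the PR may differ):

```bash
function retry_failed_pods() {
    # Delete pods stuck in the Failed phase (which includes ContainerStatusUnknown)
    # so their owning Deployment/ReplicaSet recreates them.
    retry ${OC} delete pods --field-selector=status.phase=Failed -A
}
```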
If you call retry_failed_pods from all_pods_are_running_or_completed, you still have the same issue I described in the previous comment:
It's really unexpected that the method all_pods_are_running_or_completed would delete pods or ensure the failed pods are restarted.
We already delete the failed pods here (line 271 in 26b44f5):

```bash
retry ${OC} delete pods --field-selector=status.phase=Failed -A
```
Are there new failed pods appearing while we are waiting for all the pods to be ready?
all_pods_are_running_or_completed has an ignore_namespace parameter, should we also ignore that namespace when deleting?
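For what it's worth, field selectors can be combined, so a hypothetical variant that skips the ignored namespace while deleting could look like this (not what the PR currently does):

```bash
# Hypothetical: delete Failed pods everywhere except the ignored namespace.
${OC} delete pods -A \
    --field-selector="status.phase=Failed,metadata.namespace!=${ignoreNamespace}"
```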
Are there new failed pods appearing while we are waiting for all the pods to be ready?
@cfergeau yes, that's the reason we are putting it in the all_pods_are_running_or_completed function.
all_pods_are_running_or_completed has an ignore_namespace parameter, should we also ignore that namespace when deleting?
No, we don't need to ignore namespace for deleting failed pods.
@cfergeau yes, that's the reason we are putting it in the all_pods_are_running_or_completed function.
So even something like this is not enough?
```diff
diff --git a/snc-library.sh b/snc-library.sh
index f3f2831..3997a06 100755
--- a/snc-library.sh
+++ b/snc-library.sh
@@ -248,13 +248,14 @@ function wait_till_cluster_stable() {
         fi
         if [ $count -eq $numConsecutive ]; then
             echo "Cluster has stabilized"
-            retry ${OC} delete pods --field-selector=status.phase=Failed -A
             break
         fi
         sleep 30s
         a=$((a + 1))
     done
+    retry ${OC} delete pods --field-selector=status.phase=Failed -A
+
     # Wait till all the pods are either running or complete state
     retry all_pods_are_running_completed "${ignoreNamespace}"
 }
```
The removal of this oc delete pods call could be part of this PR.
No, we don't need to ignore namespace for deleting failed pods.
Why? This makes all_pods_are_running_or_completed even more odd and confusing.
Sometimes pods go to the `ContainerStatusUnknown` state, where the pod is not able to send its status to the kubelet; it stays there until manually deleted, and because of that our snc script fails. In this PR we delete the pods which are in the Failed state (which is what ContainerStatusUnknown pods report) and then check pod availability.

```
+ sleep 256
+ all_pods_are_running_completed none
+ local ignoreNamespace=none
+ ./openshift-clients/linux/oc get pod --no-headers --all-namespaces '--field-selector=metadata.namespace!=none'
+ grep -v Running
+ grep -v Completed
openshift-kube-apiserver   installer-11-crc   0/1   ContainerStatusUnknown   1   19m
+ exit=1
+ wait=512
+ count=10
+ '[' 10 -lt 10 ']'
+ echo 'Retry 10/10 exited 1, no more retries left.'
Retry 10/10 exited 1, no more retries left.
```

fixes: crc-org#920
/cherry-pick release-4.17
@praveenkumar: once the present PR merges, I will cherry-pick it on top of release-4.17 in a new PR and assign it to you. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository.
@praveenkumar: The following tests failed, say
Full PR test history. Your PR dashboard. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.