Targets randomly go unhealthy and time out when there is more than 1 pod #3979
Labels: lifecycle/stale, triage/needs-investigation
Describe the bug
I'm trying to deploy a web application with a pretty standard stack: a Deployment, a Service, and an Ingress. We are using 3 worker nodes on EKS, and when I scale the replicas to at least 2, the Load Balancer UI shows targets randomly going unhealthy. The app also responds with "504 Gateway timed out" every few requests. It seems that traffic goes only to one pod/worker node, completely ignoring routing to the other pods on other nodes. It is also worth noting that the target statuses shown in the ALB UI appear somewhat random: at one moment all targets are healthy, after a refresh only one node is healthy, and after a second refresh two of them are healthy and one is unhealthy. At least one node is always healthy, as this is the worker that responds to the successful requests.
Steps to reproduce
EKS cluster (v1.30) with 2 or 3 worker nodes
NodePort service for the app
spec.rules.0.host set on the Ingress
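For clarity, below is a minimal sketch of the stack described above, assuming instance targets in front of the NodePort service; all names, the host, the image, and the annotation values are placeholders, not the actual manifests from this cluster:

```yaml
# Hypothetical reproduction manifests -- names, ports, host and annotation
# values are placeholders, not the reporter's actual configuration.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-app
spec:
  replicas: 2                      # the problem appears once replicas >= 2
  selector:
    matchLabels:
      app: web-app
  template:
    metadata:
      labels:
        app: web-app
    spec:
      containers:
        - name: web-app
          image: nginx:1.27        # placeholder application image
          ports:
            - containerPort: 80
---
apiVersion: v1
kind: Service
metadata:
  name: web-app
spec:
  type: NodePort                   # NodePort service, as in the repro steps
  selector:
    app: web-app
  ports:
    - port: 80
      targetPort: 80
---
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: web-app
  annotations:
    alb.ingress.kubernetes.io/scheme: internet-facing
    alb.ingress.kubernetes.io/target-type: instance   # instance targets front the NodePort
spec:
  ingressClassName: alb
  rules:
    - host: app.example.com        # placeholder for spec.rules.0.host
      http:
        paths:
          - path: /
            pathType: Prefix
            backend:
              service:
                name: web-app
                port:
                  number: 80
```

With alb.ingress.kubernetes.io/target-type: instance the controller registers the worker nodes themselves (via the NodePort) as targets, which would line up with the per-node healthy/unhealthy statuses seen in the ALB UI.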
Expected outcome
Environment
AWS Load Balancer controller version: v2.8.2
Kubernetes version: v1.30
Using EKS: eks.20
Additional Context:
Ingress annotations:
Potentially related issue: kubernetes/ingress-nginx#9990