locust master / slave sync #12
I have tried to deploy the workers in two different ways.

The first is through the worker yaml file from the git repo. It creates the pods with an error status, and none of them reaches the running state because of the following error message:

If the workers are deployed manually with the direct IP address of the master, for example LOCUST_MASTER=172.20.140.2 (the current IP of my master container), everything works fine.

If I deploy the worker with this command:

kubectl reports an error in the status of the worker pod:

When creating the worker with the direct IP address of the master:

How can I overcome this issue?
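For reference, a minimal sketch of what the "hard-coded master IP" worker deployment looks like. The names, labels, and image below are assumptions for illustration, as is the assumption that the image's entrypoint reads LOCUST_MASTER; only the LOCUST_MASTER value comes from the comment above:

```yaml
# Sketch of a worker Deployment with the master's pod IP hard-coded.
# This works until the master pod is rescheduled and gets a new IP.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: locust-worker        # assumed name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: locust-worker
  template:
    metadata:
      labels:
        app: locust-worker
    spec:
      containers:
        - name: locust-worker
          image: locustio/locust   # assumed image whose entrypoint reads LOCUST_MASTER
          env:
            - name: LOCUST_MASTER
              # Direct pod IP of the master, as quoted in the issue;
              # brittle because pod IPs change on re-creation.
              value: "172.20.140.2"
```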
@razghe How were you able to resolve this? I'm facing exactly the same problem.
I have the same issue. Currently I have to deploy the master controller + service first, get the external LB IP of the master, and set LOCUST_MASTER to this IP in the slave yaml. It seems like DNS resolution is not working when putting the master service hostname in LOCUST_MASTER.
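For what it's worth, a minimal sketch of the DNS-based alternative this comment is trying to get working, assuming the master Service is named locust-master in the default namespace and that the worker image reads LOCUST_MASTER. Only the env value changes relative to the hard-coded-IP spec above:

```yaml
env:
  - name: LOCUST_MASTER
    # Assumed Service name and namespace; requires working in-cluster
    # DNS (kube-dns/CoreDNS) to resolve.
    value: "locust-master.default.svc.cluster.local"
```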
Hi,
I am using the Locust deployment on a Kubernetes cluster with 300 slaves. Every time my cluster crashes, the machines are re-created by Kubernetes, which also changes the IP address of the Locust master.
I tried to assign the same IP address to the re-created Locust master, but no slaves sync back in the Locust UI.
What is the best way to overcome this issue?
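One way to decouple the slaves from the master pod's IP is to front the master with a Kubernetes Service, which provides a stable ClusterIP and DNS name that survive pod re-creation. A sketch, assuming the master pod carries the label app: locust-master and uses Locust's default ports (5557/5558 for master/slave communication, 8089 for the web UI):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: locust-master        # assumed name; workers point LOCUST_MASTER here
spec:
  selector:
    app: locust-master       # assumed label on the master pod
  ports:
    - name: master-comm
      port: 5557
      targetPort: 5557
    - name: master-comm-plus-1
      port: 5558
      targetPort: 5558
    - name: web-ui
      port: 8089
      targetPort: 8089
```

With this in place, re-created slaves can point LOCUST_MASTER at locust-master (or its fully qualified DNS name) instead of an IP that changes whenever the master pod is rescheduled.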