Multi-node setup w/ coordinator as worker - fatal error #370

Closed
Concurser opened this issue Mar 12, 2018 · 6 comments

@Concurser

Hello,

I have the following config.json setup:

{
  "username": "qg-presto01",
  "coordinator": "172.17.233.164",
  "workers": ["172.17.233.164", "172.17.233.165", "172.17.233.166"],
  "java8_home": "/usr/java/jdk1.8.0_162/"
}

But I get this error:

Fatal error: [172.17.233.164] discovery.uri should not be localhost in a multi-node cluster, but found http://localhost:8080. You may have encountered this error by choosing a coordinator that is localhost and a worker that is not. The default discovery-uri is http://<coordinator>:8080

After that I get some error messages from the workers:

Fatal error: [172.17.233.xxx] sudo() received nonzero return code 1 while executing!

Why is that? Can't the coordinator share the worker status in a multi-node cluster?

@Concurser
Author

I did exclude the coordinator from the worker list:

{
  "username": "qg-presto01",
  "coordinator": "172.17.233.164",
  "workers": ["172.17.233.165", "172.17.233.166"],
  "java8_home": "/usr/java/jdk1.8.0_162/"
}

But still getting the same error:

Fatal error: [172.17.233.164] discovery.uri should not be localhost in a multi-node cluster, but found http://localhost:8080. You may have encountered this error by choosing a coordinator that is localhost and a worker that is not. The default discovery-uri is http://<coordinator>:8080

Does this mean that presto-admin must be started from outside the cluster? I am running it from the coordinator.

@kokosing
Contributor

Does this mean that presto-admin must be started from outside the cluster? I am running it from the coordinator.

I do not think it should matter.

What version are you using?

@kokosing
Contributor

What do you have in config.properties for worker and coordinator?

Here are some docs describing where you can find these files: https://prestodb.io/presto-admin/docs/current/installation/presto-configuration.html#
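
For reference, a minimal sketch of what these two files usually contain, based on the standard Presto deployment defaults (the ports and memory values are documented example defaults, and the coordinator IP from the config.json above is assumed for discovery.uri):

Coordinator config.properties:

coordinator=true
# set to true only if the coordinator should also run work, as in the original config.json
node-scheduler.include-coordinator=true
http-server.http.port=8080
query.max-memory=5GB
query.max-memory-per-node=1GB
discovery-server.enabled=true
discovery.uri=http://172.17.233.164:8080

Worker config.properties:

coordinator=false
http-server.http.port=8080
query.max-memory=5GB
query.max-memory-per-node=1GB
discovery.uri=http://172.17.233.164:8080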

@Concurser
Author

Hi @kokosing,

Many thanks for the reply.

a) The packages being installed are:
prestoadmin-2.2-online.tar.gz
presto-server-rpm-0.167-t.0.2.x86_64.rpm

b) My config.json file follows:

{
  "username": "qg-presto01",
  "coordinator": "172.17.233.164",
  "workers": ["172.17.233.165", "172.17.233.166"],
  "java8_home": "/usr/java/jdk1.8.0_162/"
}

I originally had the coordinator listed as a worker too, but I took it out to see if that would help; it did not.

@kokosing
Contributor

kokosing commented Mar 16, 2018

I am sorry, but I did not ask for the presto-admin config.json; I asked for the Presto config.properties.

@Concurser
Author

Dear @kokosing, we found the issue. In the config.properties files under the /worker and /coordinator node directories, the discovery.uri field was set to http://localhost:8080. We replaced it with the real coordinator IP and it worked. We were also able to combine the coordinator and worker roles on the same node. Thanks!
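
For anyone else hitting this, a sketch of the corrected lines (assuming the coordinator IP from the config.json above) in both the coordinator and worker config.properties:

# point discovery at the coordinator's real address instead of localhost
discovery.uri=http://172.17.233.164:8080

# on the coordinator only, allow it to also schedule work on itself
node-scheduler.include-coordinator=true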
