An error occurs when creating a folder or file in the resource center, and HDFS not startup #8374
Unanswered · chengchengxiao asked this question in Q&A
Replies: 2 comments
- resolved
- How to solve it?
An error occurs when creating a folder or file in the resource center, and HDFS not startup
The version I am using is 2.0.3.
I followed the configuration from the official website; the steps are as follows:
1. Copy core-site.xml and hdfs-site.xml to /opt/dolphinscheduler/conf
2. Modify conf/common.properties
The detailed configuration of common.properties is as follows:
```properties
# user data local directory path, please make sure the directory exists and have read write permissions
data.basedir.path=/tmp/dolphinscheduler
# resource storage type: HDFS, S3, NONE
resource.storage.type=HDFS
# resource store on HDFS/S3 path, resource file will store to this hadoop hdfs path, self configuration, please make sure the directory exists on hdfs and have read write permissions. "/dolphinscheduler" is recommended
resource.upload.path=/dolphinscheduler
# whether to startup kerberos
hadoop.security.authentication.startup.state=false
# java.security.krb5.conf path
java.security.krb5.conf.path=/etc/krb5.conf
# login user from keytab username
login.user.keytab.username=[email protected]
# login user from keytab path
login.user.keytab.path=/opt/hdfs.headless.keytab
# kerberos expire time, the unit is hour
kerberos.expire.time=2
# resource view suffixs
#resource.view.suffixs=txt,log,sh,bat,conf,cfg,py,java,sql,xml,hql,properties,json,yml,yaml,ini,js
# if resource.storage.type=HDFS, the user must have the permission to create directories under the HDFS root path
hdfs.root.user=ekwing
# if resource.storage.type=S3, the value like: s3a://dolphinscheduler; if resource.storage.type=HDFS and namenode HA is enabled, you need to copy core-site.xml and hdfs-site.xml to conf dir
fs.defaultFS=hdfs://mycluster:8020
# if resource.storage.type=S3, s3 endpoint
fs.s3a.endpoint=http://192.168.xx.xx:9010
# if resource.storage.type=S3, s3 access key
fs.s3a.access.key=A3DXS30FO22544RE
# if resource.storage.type=S3, s3 secret key
fs.s3a.secret.key=OloCLq3n+8+sdPHUhJ21XrSxTC+JK
# resourcemanager port, the default value is 8088 if not specified
resource.manager.httpaddress.port=8088
# if resourcemanager HA is enabled, please set the HA IPs; if resourcemanager is single, keep this value empty
yarn.resourcemanager.ha.rm.ids=172.18.7.10,172.18.7.19
# if resourcemanager HA is enabled or not use resourcemanager, please keep the default value; if resourcemanager is single, you only need to replace ds1 with the actual resourcemanager hostname
yarn.application.status.address=http://ds1:%s/ws/v1/cluster/apps/%s
# job history status url when application number threshold is reached (default 10000, maybe it was set to 1000)
yarn.job.history.status.address=http://ds1:19888/ws/v1/history/mapreduce/jobs/%s
# datasource encryption enable
datasource.encryption.enable=false
# datasource encryption salt
datasource.encryption.salt=!@#$%^&*
# whether hive SQL is executed in the same session
support.hive.oneSession=false
# use sudo or not; if set true, executing user is tenant user and deploy user needs sudo permissions; if set false, executing user is the deploy user and doesn't need sudo permissions
sudo.enable=true
# network interface preferred like eth0, default: empty
#dolphin.scheduler.network.interface.preferred=
# network IP gets priority, default: inner outer
#dolphin.scheduler.network.priority.strategy=default
# system env path
#dolphinscheduler.env.path=env/dolphinscheduler_env.sh
# development state
development.state=false
#datasource.plugin.dir config
datasource.plugin.dir=lib/plugin/datasource
```
However, an error is reported: "HDFS not startup".
Do I need to modify any other configurations?
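As a sanity check, here is a minimal Java sketch (my own illustration, not part of DolphinScheduler; it assumes the Hadoop client libraries bundled with the 2.0.3 distribution are on the classpath and reuses the paths and values from the configuration above) of the kind of FileSystem access the resource center performs. If it fails when run on the API server host, the copied core-site.xml / hdfs-site.xml are not being resolved correctly for the mycluster nameservice:

```java
// Hypothetical standalone check, not DolphinScheduler code: connect to HDFS with the
// same client configuration the resource center would use.
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileStatus;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.fs.Path;

public class HdfsConnectivityCheck {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration();
        // Load the copies placed in the DolphinScheduler conf dir (step 1 above).
        conf.addResource(new Path("/opt/dolphinscheduler/conf/core-site.xml"));
        conf.addResource(new Path("/opt/dolphinscheduler/conf/hdfs-site.xml"));
        // Same value as fs.defaultFS in common.properties.
        conf.set("fs.defaultFS", "hdfs://mycluster:8020");

        // List the resource.upload.path; any connectivity or permission problem
        // will surface here as an exception.
        try (FileSystem fs = FileSystem.get(conf)) {
            FileStatus[] entries = fs.listStatus(new Path("/dolphinscheduler"));
            System.out.println("HDFS reachable, entries under /dolphinscheduler: " + entries.length);
        }
    }
}
```

Running it as the same user configured in hdfs.root.user (ekwing here) would also rule out a plain permission problem as opposed to a connection problem.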