Version 1.1.0 introduces healthchecks for the containers.

* 2.7.1 with OpenJDK 7
* 2.7.1 with OpenJDK 8
## Quick Start
To deploy an example HDFS cluster, run:
```
docker-compose up
```
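
Since version 1.1.0 the containers define healthchecks, so once the cluster is up, `docker ps` can confirm that the components report healthy. A minimal sketch (the exact container names depend on your docker-compose.yml):

```
# Containers whose healthcheck passes show "(healthy)" in the STATUS column
docker ps --format 'table {{.Names}}\t{{.Status}}'
```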
`docker-compose` creates a Docker network whose name can be found by running `docker network list` (e.g. `dockerhadoop_default`).
Run `docker network inspect` on the network (e.g. `dockerhadoop_default`) to find the IP addresses the Hadoop interfaces are published on. Access these interfaces with the following URLs:
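
For example, assuming the network is named `dockerhadoop_default` as above, this sketch prints each container on the network together with its published IPv4 address:

```
# Show container name and IPv4 address for every container on the network
docker network inspect dockerhadoop_default \
  --format '{{range .Containers}}{{.Name}} {{.IPv4Address}}{{println}}{{end}}'
```
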
The configuration parameters can be specified in the hadoop.env file or as environment variables for specific services (e.g. namenode, datanode, etc.):
```
CORE_CONF_fs_defaultFS=hdfs://namenode:8020
```

The available configurations are:
* /etc/hadoop/httpfs-site.xml HTTPFS_CONF
* /etc/hadoop/kms-site.xml KMS_CONF
If you need to extend some other configuration file, refer to the base/entrypoint.sh bash script.
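
As an illustration of the naming scheme (a sketch inferred from the `CORE_CONF_fs_defaultFS` example above; the authoritative substitution rules live in base/entrypoint.sh), the prefix picks the target file and the rest of the variable names the property:

```
# Assumption: HDFS_CONF_ targets /etc/hadoop/hdfs-site.xml, and underscores in
# the remainder map to dots in the property name, so this sets dfs.replication
HDFS_CONF_dfs_replication=1
```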