An in-memory, location-aware, HDFS-based file system.
Build with Gradle:

```bash
gradle build
```
Start the NameNode, then one or more DataNodes, each with its own configuration file:

```bash
./namenode.sh src/main/resources/config.properties
./datanode.sh src/main/resources/config1.properties
./datanode.sh src/main/resources/config2.properties
./datanode.sh src/main/resources/config3.properties
./datanode.sh src/main/resources/config4.properties
```
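The `.properties` files themselves are not reproduced here. The sketch below shows one plausible shape for a DataNode config; every key in it (`ipAddress`, `port`, `namenodeIpAddress`, `namenodePort`) and every value is a hypothetical placeholder, not the project's documented schema:

```bash
# Hypothetical DataNode configuration — all keys and values below are
# assumptions for illustration; the real schema is defined by the
# project's configuration loader.
cat > src/main/resources/config1.properties <<'EOF'
ipAddress = 127.0.0.1
port = 15605
namenodeIpAddress = 127.0.0.1
namenodePort = 15600
EOF
```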
Interact with the file system through the anamnesis CLI:

```bash
./anamnesis.sh mkdir foo/bar
./anamnesis.sh ls foo
./anamnesis.sh upload /tmp/test.txt foo/test.txt -b 2 -f 0
./anamnesis.sh download foo/test.txt /tmp/test2.txt
```
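A quick way to check the upload/download round trip is a byte-for-byte comparison of the two local files (`diff` is the stock Unix tool, nothing Anamnesis-specific):

```bash
# diff prints nothing and exits 0 when the downloaded copy
# matches the original.
diff /tmp/test.txt /tmp/test2.txt && echo "round trip OK"
```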
The file system also works with the native HDFS client, run from the Hadoop distribution directory:

```bash
./bin/hdfs dfs -mkdir /user/hamersaw
./bin/hdfs dfs -D dfs.block.size=1024 -copyFromLocal /tmp/test.txt /user/hamersaw/test.txt
./bin/hdfs dfs -copyToLocal /user/hamersaw/test.txt /tmp/test2.txt
```
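To confirm the copy landed, the standard `hdfs dfs` subcommands for listing and printing files can be used against the same paths:

```bash
./bin/hdfs dfs -ls /user/hamersaw              # list the directory
./bin/hdfs dfs -cat /user/hamersaw/test.txt    # print the file contents
```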
Because the HDFS interface is exposed, Spark can read files directly:

```bash
./bin/spark-shell
```

```scala
scala> val rdd = sc.textFile("hdfs://localhost/user/hamersaw/MOCK_DATA.csv")
```
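A couple of follow-up actions in the same shell confirm the data is readable. This assumes `MOCK_DATA.csv` was previously uploaded to `/user/hamersaw`; `count` and `first` are standard Spark RDD actions:

```scala
scala> rdd.count()    // number of lines read from the file
scala> rdd.first()    // first line, e.g. the CSV header
```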
TODO:

- change DatanodeService to add a Datanode rather than its individual elements
- set up rock-solid logging
- clean up RPC response error handling
- fix everything (works with the HDFS native client)