I have 3 data nodes running; while running a job I am getting the error given below:
java.io.IOException: File /user/ashsshar/olhcache/load
I had this problem and I solved it as below:
Find where your datanode and namenode metadata/data are saved; if you cannot find them, simply run this command on a Mac to locate them (they are located in a folder called "tmp"):
find /usr/local/Cellar/ -name "tmp"
The general form of the find command is: find <directory> -name <pattern to search for>
After finding that directory, cd into it: /usr/local/Cellar//hadoop/hdfs/tmp
Then cd into dfs.
Using the ls command, you should see that the data and name directories are located there.
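For example, starting from the tmp directory found above:
cd dfs
ls
The output should include the data and name directories.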
Using the remove command, remove them both:
rm -R data and rm -R name
Go to your Hadoop folder and stop everything if you have not already done so:
sbin/stop-dfs.sh
Exit from the server or localhost.
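If you are in an SSH session, this is simply:
exit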
Log into the server again: ssh <server name>
Start the dfs:
sbin/start-dfs.sh
Format the namenode to be sure:
bin/hdfs namenode -format
You can now use hdfs commands to upload your data into dfs and run MapReduce jobs.
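For example, a minimal sketch of uploading a file and running the bundled wordcount example (the file name localfile.txt and the user directory are placeholders; the examples jar path and version depend on your install):
bin/hdfs dfs -mkdir -p /user/<your username>
bin/hdfs dfs -put localfile.txt /user/<your username>/
bin/hadoop jar share/hadoop/mapreduce/hadoop-mapreduce-examples-*.jar wordcount /user/<your username>/localfile.txt /user/<your username>/wc-output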