I have 3 data nodes running, and while running a job I am getting the error shown below:
java.io.IOException: File /user/ashsshar/olhcache/load
In my case, this issue was resolved by opening firewall port 50010 on the datanodes.
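As a rough sketch of what that looks like (assuming a host using firewalld; adjust for whatever firewall tool your distro uses), opening the default DataNode data-transfer port would be something like:

sudo firewall-cmd --permanent --add-port=50010/tcp   # 50010 is the default DataNode data transfer port
sudo firewall-cmd --reload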
I had the same issue; I was running very low on disk space. Freeing up disk space solved it.
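If you suspect the same cause, you can check local disk usage on each datanode and the capacity HDFS itself reports; both commands are standard, but the exact output depends on your setup:

df -h                   # local disk usage on the datanode
hdfs dfsadmin -report   # remaining DFS capacity and live datanodes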
Check which Hadoop daemons are currently running:
jps
1. Stop all Hadoop daemons
for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x stop ; done
2. Remove all files from /var/lib/hadoop-hdfs/cache/hdfs/dfs/name
E.g.: devan@Devan-PC:~$ sudo rm -r /var/lib/hadoop-hdfs/cache/
3. Format the Namenode
sudo -u hdfs hdfs namenode -format
4. Start all Hadoop daemons
for x in `cd /etc/init.d ; ls hadoop*` ; do sudo service $x start ; done
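After restarting, it may be worth confirming that the DataNodes have registered with the NameNode again; this uses the same standard tooling as above, though the report format varies by version:

sudo -u hdfs hdfs dfsadmin -report   # each live datanode should show non-zero configured capacity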
Stop all Hadoop services.
What I usually do when this happens is go to the tmp/hadoop-username/dfs/ directory and manually delete the data and name folders (assuming you are running in a Linux environment).
Then format the DFS by calling bin/hadoop namenode -format (make sure you answer with a capital Y when asked whether you want to format; if you are not asked, re-run the command).
You can then start Hadoop again by calling bin/start-all.sh.
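As a sketch of that whole sequence (assuming the default layout under tmp/hadoop-<username>, so the exact paths are illustrative and should be checked against your hadoop.tmp.dir setting):

bin/stop-all.sh                                                # stop all daemons first
rm -rf /tmp/hadoop-<username>/dfs/data /tmp/hadoop-<username>/dfs/name   # delete the data and name folders
bin/hadoop namenode -format                                    # answer with a capital Y when prompted
bin/start-all.sh                                               # bring the daemons back up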
A very simple fix for the same issue on Windows 8.1.
I was using Windows 8.1 and Hadoop 2.7.2, and did the following to overcome the issue.