I'm working with Ubuntu 12.04 LTS.
I'm going through the Hadoop quickstart manual to set up a pseudo-distributed operation. It seems simple and straightforward (easy!), but I keep running into an error about JAVA_HOME.
Ran into the same issue on Ubuntu 16.04 LTS. Running bash -vx ./bin/hadoop
showed that the script tests whether JAVA_HOME is a directory. So I changed JAVA_HOME to point at a directory (not the java binary) and it worked:
++ [[ ! -d /usr/bin/java ]]
++ hadoop_error 'ERROR: JAVA_HOME /usr/bin/java does not exist.'
++ echo 'ERROR: JAVA_HOME /usr/bin/java does not exist.'
ERROR: JAVA_HOME /usr/bin/java does not exist.
So I changed JAVA_HOME in ./etc/hadoop/hadoop-env.sh
to
export JAVA_HOME=/usr/lib/jvm/java-8-oracle/jre/
and Hadoop starts fine. This is also mentioned in this article.
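If you are not sure which directory to point JAVA_HOME at, a quick way to find a candidate (a sketch, assuming the java on your PATH is a symlink installed by your package manager) is:

readlink -f "$(which java)"
# prints something like .../jre/bin/java; drop the trailing /bin/java
readlink -f "$(which java)" | sed 's:/bin/java$::'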
Check whether your alternatives entry is pointing at the right installation; you might actually be pointing to a different version and editing the hadoop-env.sh of another installed version.

alternatives --install /etc/hadoop/conf [generic_name] [your correct path] [priority]

(for details, see the alternatives man page). To set an alternative manually:

alternatives --set [generic_name] [your current path]
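On Debian/Ubuntu the tool is called update-alternatives; to see what java currently resolves to (a sketch, the exact output differs per machine):

update-alternatives --display java
# or just list the registered candidate paths
update-alternatives --list java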
The way to debug this is to put an "echo $JAVA_HOME" in start-all.sh. Are you running your hadoop environment under a different username, or as yourself? If the former, it's very likely that the JAVA_HOME environment variable is not set for that user.
The other potential problem is that you have specified JAVA_HOME incorrectly, and the value that you have provided doesn't point to a JDK/JRE. Note that "which java" and "java -version" will both work, even if JAVA_HOME is set incorrectly.
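For example, a minimal sketch of that debugging step (start-all.sh lives in bin/ or sbin/ depending on the Hadoop version):

# near the top of start-all.sh
echo "JAVA_HOME is: ${JAVA_HOME}"

# these succeed even with a wrong JAVA_HOME, because they only consult the PATH
which java
java -version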
The way to solve this problem is to export the JAVA_HOME variable inside the conf/hadoop-env.sh file.
Even if you have already exported that variable in ~/.bashrc, it will still show the error.
So edit conf/hadoop-env.sh and uncomment the line "export JAVA_HOME" and add a proper filesystem path to it, i.e. the path to your Java JDK.
# The Java implementation to use. Required.
export JAVA_HOME="/path/to/java/JDK/"
Regardless of Debian or any other Linux flavor, keep in mind that ~/.bash_profile belongs to a specific user and is only read by login shells. In a pseudo-distributed setup Hadoop works on localhost: the start scripts launch the daemons over ssh to localhost, and the non-interactive shells started that way read ~/.bashrc but not ~/.bash_profile, so a $JAVA_HOME set only in .bash_profile is of no use there. Export JAVA_HOME in ~/.bashrc instead, so those shells pick it up as well.
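A minimal sketch of that (the JDK path is an assumption; substitute the directory that actually exists on your machine):

# append the exports to ~/.bashrc, then reload it in the current shell
echo 'export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64' >> ~/.bashrc
echo 'export PATH=$JAVA_HOME/bin:$PATH' >> ~/.bashrc
source ~/.bashrc
echo "$JAVA_HOME"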