Question
I wanted to run a MapReduce job on my FreeBSD cluster with two nodes, but I get the following exception:
14/08/27 14:23:04 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
14/08/27 14:23:04 INFO Configuration.deprecation: session.id is deprecated. Instead, use dfs.metrics.session-id
14/08/27 14:23:04 INFO jvm.JvmMetrics: Initializing JVM Metrics with processName=JobTracker, sessionId=
14/08/27 14:23:04 WARN mapreduce.JobSubmitter: Hadoop command-line option parsing not performed. Implement the Tool interface and execute your application with ToolRunner to remedy this.
14/08/27 14:23:04 WARN mapreduce.JobSubmitter: No job jar file set. User classes may not be found. See Job or Job#setJar(String).
14/08/27 14:23:04 INFO mapreduce.JobSubmitter: Cleaning up the staging area file:/tmp/hadoop-otlam/mapred/staging/otlam968414084/.staging/job_local968414084_0001
Exception in thread "main" java.util.NoSuchElementException
at java.util.StringTokenizer.nextToken(StringTokenizer.java:349)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.loadPermissionInfo(RawLocalFileSystem.java:565)
at org.apache.hadoop.fs.RawLocalFileSystem$DeprecatedRawLocalFileStatus.getPermission(RawLocalFileSystem.java:534)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.checkPermissionOfOther(ClientDistributedCacheManager.java:276)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.isPublic(ClientDistributedCacheManager.java:240)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineCacheVisibilities(ClientDistributedCacheManager.java:162)
at org.apache.hadoop.mapreduce.filecache.ClientDistributedCacheManager.determineTimestampsAndCacheVisibilities(ClientDistributedCacheManager.java:58)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:265)
at org.apache.hadoop.mapreduce.JobSubmitter.copyAndConfigureFiles(JobSubmitter.java:301)
at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:389)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1285)
at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1282)
at java.security.AccessController.doPrivileged(Native Method)
at javax.security.auth.Subject.doAs(Subject.java:415)
at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1556)
at org.apache.hadoop.mapreduce.Job.submit(Job.java:1282)
at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1303)
...
This happens when I call job.waitForCompletion(true); on a new MapReduce job. The NoSuchElementException means that a StringTokenizer had no more elements and nextToken() was called on it.
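To make the failure mode concrete, here is a tiny standalone sketch (my own, not Hadoop code): calling nextToken() on a tokenizer that has no tokens left throws exactly this exception.

import java.util.StringTokenizer;

public class TokenizerFailureDemo {
    public static void main(String[] args) {
        // An empty (or unexpectedly short) input yields no tokens, so the
        // first nextToken() call throws java.util.NoSuchElementException.
        StringTokenizer t = new StringTokenizer("", " ");
        System.out.println(t.nextToken()); // throws NoSuchElementException
    }
}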
I took a look at the source and found the following code in RawLocalFileSystem.java:
/// loads permissions, owner, and group from `ls -ld`
private void loadPermissionInfo() {
  IOException e = null;
  try {
    String output = FileUtil.execCommand(new File(getPath().toUri()),
        Shell.getGetPermissionCommand());
    StringTokenizer t =
        new StringTokenizer(output, Shell.TOKEN_SEPARATOR_REGEX);
    //expected format
    //-rw------- 1 username groupname ...
    String permission = t.nextToken();
As far as I can see, Hadoop tries to determine the permissions of a specific file with ls -ld, which works perfectly when I run it in the console. Unfortunately I haven't found out yet which file's permissions it was looking for.
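To narrow it down, I wrote a small sketch of my own that mimics this parsing (it assumes a Unix-like ls -ld and splits on plain spaces instead of Hadoop's Shell.TOKEN_SEPARATOR_REGEX); it shows how an empty or unexpected output line produces exactly this exception.

import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.StringTokenizer;

public class LsPermissionParseDemo {
    // usage: java LsPermissionParseDemo <path>
    public static void main(String[] args) throws Exception {
        // Run "ls -ld <path>" the way Hadoop's loadPermissionInfo() does.
        Process p = new ProcessBuilder("ls", "-ld", args[0]).start();
        BufferedReader reader =
                new BufferedReader(new InputStreamReader(p.getInputStream()));
        String output = reader.readLine();
        System.out.println("raw output: " + output);

        // Expected format: -rw-r--r-- 1 username groupname ...
        // If the output is null or empty, nextToken() throws NoSuchElementException.
        StringTokenizer t = new StringTokenizer(output == null ? "" : output, " ");
        System.out.println("permission field: " + t.nextToken());
    }
}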
The Hadoop version is 2.4.1, the HBase version is 0.98.4, and I am using the Java API. Other operations, like creating a table, work fine. Has anyone experienced similar problems or does anyone know what to do?
EDIT: I just found out that this is purely a Hadoop-related issue. Running the simplest MapReduce operation, even without using HDFS, gives me the same exception.
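For reference, this is roughly the kind of minimal job I mean; it is only a sketch (identity mapper, local job runner, placeholder input/output paths /tmp/in and /tmp/out), and the exception is thrown from inside waitForCompletion():

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.io.LongWritable;
import org.apache.hadoop.io.Text;
import org.apache.hadoop.mapreduce.Job;
import org.apache.hadoop.mapreduce.Mapper;
import org.apache.hadoop.mapreduce.lib.input.FileInputFormat;
import org.apache.hadoop.mapreduce.lib.output.FileOutputFormat;

public class MinimalJob {
    public static void main(String[] args) throws Exception {
        Configuration conf = new Configuration(); // local job runner, no HDFS
        Job job = Job.getInstance(conf, "minimal");
        job.setJarByClass(MinimalJob.class);
        job.setMapperClass(Mapper.class); // identity mapper
        job.setNumReduceTasks(0);
        job.setOutputKeyClass(LongWritable.class);
        job.setOutputValueClass(Text.class);
        FileInputFormat.addInputPath(job, new Path("/tmp/in"));   // placeholder
        FileOutputFormat.setOutputPath(job, new Path("/tmp/out")); // placeholder
        // The NoSuchElementException is thrown from inside waitForCompletion().
        System.exit(job.waitForCompletion(true) ? 0 : 1);
    }
}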
Answer 1:
Can you please check whether this solves your problem?
If yours is a permission issue, then this works.
import java.security.PrivilegedExceptionAction;
import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.security.UserGroupInformation;

public static void main(String[] args) throws Exception {
    // set user group information
    UserGroupInformation ugi = UserGroupInformation.createRemoteUser("hdfs");
    // run the privileged action as that user
    ugi.doAs(new PrivilegedExceptionAction<Void>() {
        public Void run() throws Exception {
            // create configuration object
            Configuration config = new Configuration();
            config.set("fs.defaultFS", "hdfs://ip:port/");
            config.set("hadoop.job.ugi", "hdfs");
            FileSystem dfs = FileSystem.get(config);
            // ...
            return null;
        }
    });
}
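The point of the wrapper is that everything inside run() executes as the remote "hdfs" user, so the permission checks performed during job submission use that user's rights. "hdfs://ip:port/" is just a placeholder for your NameNode address, and the rest of the job setup goes where the ellipsis is.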
Source: https://stackoverflow.com/questions/25364802/hadoop-mapreduce-nosuchelementexception