Client cannot authenticate via:[TOKEN, KERBEROS]


Question


I'm using YarnClient to programmatically start a job. The cluster I'm running on has been Kerberized.

Normal MapReduce jobs submitted via "yarn jar examples.jar wordcount..." work.

The job I'm trying to submit programmatically does not. I get this error:

14/09/04 21:14:29 ERROR client.ClientService: Error happened during application submit: Application application_1409863263326_0002 failed 2 times due to AM Container for appattempt_1409863263326_0002_000002 exited with exitCode: -1000 due to: Failed on local exception: java.io.IOException: org.apache.hadoop.security.AccessControlException: Client cannot authenticate via:[TOKEN, KERBEROS]; Host Details : local host is: "yarn-c1-n1.clouddev.snaplogic.com/10.184.28.108"; destination host is: "yarn-c1-cdh.clouddev.snaplogic.com":8020; .Failing this attempt.. Failing the application.
14/09/04 21:14:29 ERROR client.YClient: Application submission failed

The code looks something like this:

ClientContext context = createContextFrom(args);
YarnConfiguration configuration = new YarnConfiguration();
YarnClient yarnClient = YarnClient.createYarnClient();
yarnClient.init(configuration);
ClientService client = new ClientService(context, yarnClient, new InstallManager(FileSystem.get(configuration)));
LOG.info(Messages.RUNNING_CLIENT_SERVICE);
boolean result = client.execute();

I had thought that perhaps adding something to the effect of:

yarnClient.getRMDelegationToken(new Text(InetAddress.getLocalHost().getHostAddress()));

could assuage my woes, but that doesn't seem to help either. Any help would be greatly appreciated.


Answer 1:


Alright, well after hours and hours and hours we have this figured out. For all following generations of coders, forever plagued by Hadoop's lack of documentation:

You must grab the tokens from the UserGroupInformation object with a call to getCredentials(). Then you must set the tokens on the ContainerLaunchContext.
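A minimal sketch of what that looks like. The method name setupTokens and the variable amContainer are illustrative, not from the original answer; amContainer stands for whatever ContainerLaunchContext you build for your application master:

import java.io.IOException;
import java.nio.ByteBuffer;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.FileSystem;
import org.apache.hadoop.io.DataOutputBuffer;
import org.apache.hadoop.security.Credentials;
import org.apache.hadoop.security.UserGroupInformation;
import org.apache.hadoop.yarn.api.records.ContainerLaunchContext;

// Sketch: collect the current user's credentials, add an HDFS delegation
// token that the ResourceManager can renew, and attach everything to the
// ContainerLaunchContext before submitting the application.
static void setupTokens(ContainerLaunchContext amContainer, Configuration conf)
        throws IOException {
    Credentials credentials = UserGroupInformation.getCurrentUser().getCredentials();

    // The RM principal acts as the renewer for the HDFS delegation token.
    String renewer = conf.get("yarn.resourcemanager.principal");
    FileSystem.get(conf).addDelegationTokens(renewer, credentials);

    // Serialize the credentials and set them on the launch context.
    DataOutputBuffer dob = new DataOutputBuffer();
    credentials.writeTokenStorageToStream(dob);
    amContainer.setTokens(ByteBuffer.wrap(dob.getData(), 0, dob.getLength()));
}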




Answer 2:


You will also get this error if you use the actual NameNode address instead of the logical HA URI in any of the HDFS paths.

This is because when the client sees a concrete NameNode URI instead of the logical URI, it creates a non-HA filesystem, which then tries to use a simple UGI instead of the Kerberos UGI.
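For illustration (the hostnames here are hypothetical; "nameservice1" stands for whatever dfs.nameservices is set to in your hdfs-site.xml):

// Triggers the error on a Kerberized HA cluster: a concrete NameNode host
// yields a non-HA filesystem with a simple UGI.
Path bad = new Path("hdfs://nn1.example.com:8020/user/app/input");

// Works: the logical HA URI goes through the failover proxy provider and
// keeps the Kerberos UGI.
Path good = new Path("hdfs://nameservice1/user/app/input");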




Answer 3:


The same error can also be caused by incompatible Hadoop artifact versions on the classpath.

Working example:

import java.io.File;
import java.io.FileNotFoundException;
import java.io.IOException;

import org.apache.hadoop.conf.Configuration;
import org.apache.hadoop.fs.Path;
import org.apache.hadoop.security.UserGroupInformation;

public static final String CONF_CORE_SITE = "/etc/hadoop/conf/core-site.xml";
public static final String CONF_HDFS_SITE = "/etc/hadoop/conf/hdfs-site.xml";

/**
 * Load core-site.xml and hdfs-site.xml from /etc/hadoop/conf.
 */
private static Configuration getHdfsConfiguration() throws IOException {
    Configuration configuration = new Configuration();

    // Pin the file:// and hdfs:// implementations explicitly, so conflicting
    // service entries from mixed Hadoop artifacts on the classpath cannot
    // break scheme resolution.
    configuration.set("fs.file.impl", org.apache.hadoop.fs.LocalFileSystem.class.getName());
    configuration.set("fs.hdfs.impl", org.apache.hadoop.hdfs.DistributedFileSystem.class.getName());

    File hadoopCoreConfig = new File(CONF_CORE_SITE);
    File hadoopHdfsConfig = new File(CONF_HDFS_SITE);

    if (!hadoopCoreConfig.exists() || !hadoopHdfsConfig.exists()) {
        throw new FileNotFoundException("Files core-site.xml or hdfs-site.xml are not found. Check the /etc/hadoop/conf/ path.");
    }

    configuration.addResource(new Path(hadoopCoreConfig.toURI()));
    configuration.addResource(new Path(hadoopHdfsConfig.toURI()));

    // Use the existing security context created by `kinit`.
    UserGroupInformation.setConfiguration(configuration);
    UserGroupInformation.loginUserFromSubject(null);
    return configuration;
}
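A hypothetical usage sketch (assumes you have already run kinit and hold a valid ticket; the path is illustrative):

Configuration conf = getHdfsConfiguration();
FileSystem fs = FileSystem.get(conf);            // authenticates as the kinit'ed principal
System.out.println(fs.exists(new Path("/tmp"))); // simple smoke test against the cluster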

pom.xml

<properties>
  <hadoop.version>2.6.0</hadoop.version>
  <hadoop.release>cdh5.14.2</hadoop.release>
</properties>


<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-core</artifactId>
  <version>${hadoop.version}-mr1-${hadoop.release}</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-hdfs</artifactId>
  <version>${hadoop.version}-${hadoop.release}</version>
</dependency>
<dependency>
  <groupId>org.apache.hadoop</groupId>
  <artifactId>hadoop-common</artifactId>
  <version>${hadoop.version}-${hadoop.release}</version>
</dependency>


Source: https://stackoverflow.com/questions/25755479/client-cannot-authenticate-viatoken-kerberos
