hortonworks-data-platform

Hive Warehouse Connector + Spark = signer information does not match signer information of other classes in the same package

Submitted by 旧时模样 on 2020-01-15 19:13:56
Question: I'm trying to use the Hive Warehouse Connector and Spark on HDP 3.1 and am getting an exception even with the simplest example (below). The class causing problems, JaninoRuntimeException, is present both in org.codehaus.janino:janino:jar:3.0.8 (a dependency of spark_sql) and in com.hortonworks.hive:hive-warehouse-connector_2.11:jar. I've tried excluding the janino library from spark_sql, but that resulted in other janino classes going missing. And I need HWC for the new functionality. Has anyone had the same error? Any ideas
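A "signer information does not match" error typically means classes from the same package are being loaded from two different jars, one signed and one not. A quick way to confirm exactly which classes collide is to list the entries the two jars have in common. The sketch below is a hypothetical diagnostic, not from the original question; the jar file names are placeholders for the real artifacts on your cluster:

```python
import zipfile

def overlapping_classes(jar_a, jar_b):
    """Return the .class entries that appear in both jar files."""
    def classes(path):
        with zipfile.ZipFile(path) as zf:
            return {n for n in zf.namelist() if n.endswith(".class")}
    return sorted(classes(jar_a) & classes(jar_b))

# Example (placeholder paths):
#   overlapping_classes("janino-3.0.8.jar",
#                       "hive-warehouse-connector-assembly.jar")
```

Any class name printed by this check is shaded into both artifacts, which is what produces the signer mismatch when both jars end up on the same classpath.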

Log4j RollingFileAppender not adding mapper and reducer logs to file

Submitted by 爱⌒轻易说出口 on 2020-01-04 21:41:05
Question: We would like our application logs to be printed to files on the local nodes. We're using Log4j's RollingFileAppender. Our log4j.properties file is as follows:

ODS.LOG.DIR=/var/log/appLogs
ODS.LOG.INFO.FILE=application.log
ODS.LOG.ERROR.FILE=application_error.log
# Root logger option
log4j.rootLogger=ERROR, console
log4j.logger.com.ournamespace=ERROR, APP_APPENDER, ERROR_APPENDER
#
# console
# Add "console" to rootLogger above if you want to use this
#
log4j.appender.console=org.apache.log4j
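For reference, a complete RollingFileAppender definition in log4j 1.x properties syntax looks like the sketch below. The appender name and file variables follow the question's own conventions; the size and backup-count values are illustrative, not from the original. Note also that mapper and reducer tasks run in separate YARN containers with their own container log4j configuration, so a client-side log4j.properties will not capture task logs unless it is shipped to and picked up by the containers.

```properties
log4j.appender.APP_APPENDER=org.apache.log4j.RollingFileAppender
log4j.appender.APP_APPENDER.File=${ODS.LOG.DIR}/${ODS.LOG.INFO.FILE}
log4j.appender.APP_APPENDER.MaxFileSize=10MB
log4j.appender.APP_APPENDER.MaxBackupIndex=10
log4j.appender.APP_APPENDER.layout=org.apache.log4j.PatternLayout
log4j.appender.APP_APPENDER.layout.ConversionPattern=%d{ISO8601} %-5p [%t] %c{2}: %m%n
```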

Workflow error logs disabled in Oozie 4.2

Submitted by 半城伤御伤魂 on 2020-01-02 03:41:37
Question: I am using Oozie 4.2, which comes bundled with HDP 2.3. While working with a few of the example workflows that come with the Oozie package, I noticed that the job error log is disabled, and this makes debugging really difficult in the event of a failure. I tried running the commands below:

# oozie job -config /home/santhosh/examples/apps/hive/job.properties -run
job: 0000063-150904123805993-oozie-oozi-W
# oozie job -errorlog 0000063-150904123805993-oozie-oozi-W
Error Log is disabled!!

Can someone

How to install libraries to python in zeppelin-spark2 in HDP

Submitted by 懵懂的女人 on 2020-01-01 07:27:31
Question: I am using HDP version 2.6.4. Can you provide step-by-step instructions on how to install libraries into the following Python directory under spark2? sc.version (the Spark version) returns res0: String = 2.2.0.2.6.4.0-91. The spark2 interpreter name and value are as follows: zeppelin.pyspark.python: /usr/local/Python-3.4.8/bin/python3.4. The Python version and current libraries are:

%spark2.pyspark
import pip
import sys
sorted(["%s==%s" % (i.key, i.version) for i in pip.get_installed
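A side note on the snippet in the question: pip.get_installed_distributions() was a private pip API and has been removed from modern pip releases. On Python 3.8+ the supported way to enumerate installed packages is importlib.metadata, as in the hedged sketch below (on the question's Python 3.4 interpreter, pkg_resources.working_set from setuptools is the older equivalent). To install a library into that specific interpreter, running its pip module directly, e.g. /usr/local/Python-3.4.8/bin/python3.4 -m pip install <package>, targets the right site-packages.

```python
# List installed distributions without relying on pip internals.
from importlib import metadata

def installed_packages():
    """Return sorted 'name==version' strings for every installed distribution."""
    return sorted(
        "%s==%s" % (dist.metadata["Name"], dist.version)
        for dist in metadata.distributions()
    )

# print("\n".join(installed_packages()))
```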

Why does HDP install MySQL when I have chosen an existing MySQL?

Submitted by ∥☆過路亽.° on 2019-12-25 08:58:53
Question: I am installing HDP 2.6 via Ambari 2.5.0.3; the error shows up during the Hive client install. The error log:

resource_management.core.exceptions.ExecutionFailed: Execution of '/usr/bin/zypper --quiet install --auto-agree-with-licenses --no-confirm mysql-client' returned 4.
Problem: mysql-community-client-5.7.17-1.sles11.x86_64 conflicts with namespace:otherproviders(mysql-client) provided by mysql-client-5.5.31-0.7.10.x86_64
Solution 1: Following actions will be done: deinstallation of mysql-community

NiFi GenerateTableFetch does not store state per database.name

Submitted by 不问归期 on 2019-12-25 03:14:41
Question: I am testing out NiFi to replace our current ingestion setup, which imports data from multiple MySQL shards of a table and stores it in HDFS. I am using GenerateTableFetch and ExecuteSQL to achieve this. Each incoming flow file has a database.name attribute, which is used by DBCPConnectionPoolLookup to select the relevant shard. The issue is this: say I have two shards to pull data from, shard_1 and shard_2, for the table accounts, and I also have updated_at as the Maximum Value Columns; it
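As the question describes, a single GenerateTableFetch processor keeps one max-value watermark, so two shards flowing through the same processor overwrite each other's updated_at state. The behaviour the poster needs is state keyed by the database.name attribute. The standalone Python sketch below only illustrates that desired bookkeeping; the class and method names are invented for illustration, not NiFi APIs:

```python
class ShardStateStore:
    """Toy model of max-value state kept per shard, keyed by database.name."""

    def __init__(self):
        self._state = {}  # {database.name: highest updated_at seen so far}

    def next_fetch_floor(self, database_name):
        """Watermark to fetch from for this shard (None = full load)."""
        return self._state.get(database_name)

    def record(self, database_name, updated_at):
        """Advance this shard's watermark, never moving it backwards."""
        prev = self._state.get(database_name)
        if prev is None or updated_at > prev:
            self._state[database_name] = updated_at

store = ShardStateStore()
store.record("shard_1", "2019-12-01 00:00:00")
store.record("shard_2", "2019-12-20 00:00:00")
# Each shard keeps its own watermark instead of sharing one.
```

With a shared watermark, shard_1 would start its next fetch from shard_2's later timestamp and silently skip rows; keying the state per shard avoids that.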

Hortonworks CREATE-USER FAILURE The password does not meet the password policy requirements

Submitted by 白昼怎懂夜的黑 on 2019-12-25 01:53:57
Question: Trying to install the Hadoop Hortonworks 2.0.6.0 GA release. The installation failed; the installation log file contains the following error:

CREATE-USER FAILURE: Exception calling "SetInfo" with "0" argument(s): "The password does not meet the password policy requirements. Check the minimum password length, password complexity and password history requirements."

I have taken care that the password is not similar to the username. The password is 1Lifepo4. The full log is

WINPKG: Logging to existing log C:
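Windows' default complexity policy requires a password to contain characters from at least three of four categories (uppercase, lowercase, digit, special), to meet the minimum length, and not to contain the account name. A small checker along those lines is sketched below; it is an approximation of the documented policy, not Microsoft's actual implementation, and the username is a placeholder:

```python
import string

def meets_windows_complexity(password, username, min_length=8):
    """Approximate check of the default Windows password complexity policy."""
    if len(password) < min_length:
        return False
    if username and username.lower() in password.lower():
        return False
    categories = [
        any(c.isupper() for c in password),   # uppercase letter
        any(c.islower() for c in password),   # lowercase letter
        any(c.isdigit() for c in password),   # digit
        any(c in string.punctuation for c in password),  # special character
    ]
    return sum(categories) >= 3

print(meets_windows_complexity("1Lifepo4", "hadoop"))  # → True
```

The poster's password passes this default check (upper, lower, and digit), so the rejection likely comes from a stricter domain-level policy, such as a longer minimum length, a required special character, or password history.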

Ambari shows service as stopped

Submitted by  ̄綄美尐妖づ on 2019-12-24 15:31:08
Question: We are using Hortonworks HDP 2.1 with Ambari 1.6.1. After a crash in our underlying hardware we restarted our cluster a few days ago. We got everything back up again; however, Ambari shows that two services are still down: the YARN ResourceManager and the MapReduce History Server. Both of these services are in fact running, verified both by checking running processes on the server and by checking the functionality they provide. Nagios health checks are also OK. Still, Ambari shows the services as being

Ambari server setup: OSError: [Errno 2] No such file or directory

Submitted by 旧巷老猫 on 2019-12-24 15:24:39
Question: I'm trying to set up Hadoop on my EC2 instance using this tutorial. I'm trying to set up the Ambari server when I get this error:

[root@ip-xxx-xxx-xxx-xxx ec2-user]# ambari-server setup
Using python /usr/bin/python2.6
Setup ambari-server
Checking SELinux...
WARNING: Could not run /usr/sbin/sestatus: OK
Ambari-server daemon is configured to run under user 'root'. Change this setting [y/n] (n)?
Adjusting ambari-server permissions and ownership...
Checking iptables...
Checking JDK...
JCE Policy