cloudera-cdh

FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask. com/yammer/metrics/core/MetricsRegistry

Submitted by 随声附和 on 2020-04-17 22:11:50
Question: We are facing an issue in Beeline when connecting to an HBase-backed table. We have two HiveServer2 instances, and on one of the nodes we get an error like:

INFO : Query ID = hive_20190719154444_babd2ce5-4d41-400b-9be5-313acaffc9bf
INFO : Total jobs = 1
INFO : Launching Job 1 out of 1
INFO : Starting task [Stage-0:MAPRED] in serial mode
INFO : Number of reduce tasks is set to 0 since there's no reduce operator
ERROR : FAILED: Execution Error, return code -101 from org.apache.hadoop.hive.ql.exec.mr
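The `com/yammer/metrics/core/MetricsRegistry` in the error is a NoClassDefFoundError: the HBase client inside the MR task cannot see the Yammer metrics-core jar. A common remedy is to add that jar to HiveServer2's auxiliary jars path. The sketch below is illustrative only: the parcel location varies by install, so a stand-in directory and jar are created here to make the path composition runnable; on a real cluster you would point at the actual metrics-core jar (e.g. under the CDH parcel's jars directory) via the hive-env.sh safety valve in Cloudera Manager.

```shell
# Stand-in for the real CDH jar directory (assumption for demonstration).
mkdir -p /tmp/cdh_jars
touch /tmp/cdh_jars/metrics-core-2.2.0.jar

# Locate the metrics-core jar.
METRICS_JAR=$(ls /tmp/cdh_jars/metrics-core-*.jar | head -n 1)

# Append it to HIVE_AUX_JARS_PATH (comma-separated), preserving any
# existing value. On a real cluster this line goes in hive-env.sh.
export HIVE_AUX_JARS_PATH="${HIVE_AUX_JARS_PATH:+$HIVE_AUX_JARS_PATH,}$METRICS_JAR"
echo "$HIVE_AUX_JARS_PATH"
```

After changing the aux jars path, HiveServer2 must be restarted for MR tasks to pick up the jar.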

Is there any way to run impala-shell with a SQL script with parameters?

Submitted by |▌冷眼眸甩不掉的悲伤 on 2020-02-03 08:59:27
Question: Is there any way to run impala-shell with a SQL script that takes parameters? For example:

impala-shell -f /home/john/sql/load.sql /dir1/dir2/dir3/data_file

I got this error:

Error, could not parse arguments "-f /home/john/sql/load.sql /dir1/dir2/dir3/data_file"

Answer 1: No. You can specify a file of SQL statements with -f, but it does not take a file of parameters. See the impala-shell documentation for more details: http://www.cloudera.com/content/cloudera/en/documentation/core/latest/topics/impala_impala
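A common workaround is to do the parameter substitution yourself before handing the script to `impala-shell -f`. The sketch below assumes a `${DATA_FILE}` placeholder convention in the SQL template (that name is an assumption, not anything impala-shell defines); newer impala-shell releases also added a `--var` option, but with an older version text substitution is the portable route.

```shell
# Illustrative template with a ${DATA_FILE} placeholder (assumed convention).
printf 'LOAD DATA INPATH "${DATA_FILE}" INTO TABLE t;\n' > /tmp/load.sql

# Substitute the placeholder with the actual path.
DATA_FILE=/dir1/dir2/dir3/data_file
sed "s#\${DATA_FILE}#${DATA_FILE}#g" /tmp/load.sql > /tmp/load.resolved.sql
cat /tmp/load.resolved.sql

# Then run the resolved script:
#   impala-shell -f /tmp/load.resolved.sql
```

Using `#` as the sed delimiter avoids clashing with the slashes in the path being substituted.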

HBase MapReduce dependency issue when using TableMapper

Submitted by 空扰寡人 on 2020-01-23 11:41:46
Question: I am using CDH 5.3 and I am trying to write a MapReduce program to scan a table and do some processing. I have created a mapper which extends TableMapper, and the exception I am getting is:

java.io.FileNotFoundException: File does not exist: hdfs://localhost:54310/usr/local/hadoop-2.5-cdh-3.0/share/hadoop/common/lib/protobuf-java-2.5.0.jar
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall(DistributedFileSystem.java:1093)
    at org.apache.hadoop.hdfs.DistributedFileSystem$17.doCall
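The error typically means the job is looking for local dependency jars at an HDFS path. The usual fix is to ship HBase's dependencies with the job, either via `TableMapReduceUtil.addDependencyJars` in the driver or on the command line using `hbase mapredcp`, which prints the minimal HBase classpath for MR jobs. `-libjars` wants a comma-separated list while classpaths are colon-separated, so a small transformation is needed; the sketch below uses a stand-in value so that the transformation itself is demonstrable without a cluster (job jar and class names are hypothetical).

```shell
# Normally: MAPREDCP=$(hbase mapredcp). A stand-in is used here.
MAPREDCP="/opt/lib/hbase-client.jar:/opt/lib/protobuf-java-2.5.0.jar"

# -libjars expects commas, not colons.
LIBJARS=$(echo "$MAPREDCP" | tr ':' ',')
echo "$LIBJARS"

# Then submit with the dependencies shipped alongside the job:
#   export HADOOP_CLASSPATH="$MAPREDCP"
#   hadoop jar myjob.jar com.example.MyScanJob -libjars "$LIBJARS"
```

`HADOOP_CLASSPATH` covers the client JVM; `-libjars` distributes the same jars to the task containers.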

java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0

Submitted by 梦想的初衷 on 2020-01-15 09:30:33
Question: I cannot solve this exception; I've read the Hadoop documentation and all related Stack Overflow questions that I could find. My fileSystem.mkdirs(***) throws:

Exception in thread "main" java.lang.UnsatisfiedLinkError: org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Ljava/lang/String;I)V
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode0(Native Method)
    at org.apache.hadoop.io.nativeio.NativeIO$Windows.createDirectoryWithMode(NativeIO.java:524)
    at org
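This UnsatisfiedLinkError on Windows usually means the native helpers (`winutils.exe` and `hadoop.dll`) are missing, not on the library path, or built for a different Hadoop version than the jars on the classpath. A configuration sketch, with paths that are assumptions for a Git Bash style environment:

```shell
# HADOOP_HOME must contain bin/winutils.exe and bin/hadoop.dll,
# matching the Hadoop version of the jars on the classpath.
export HADOOP_HOME=/c/hadoop-2.6.0
export PATH="$HADOOP_HOME/bin:$PATH"

# Alternative: tell the JVM directly where the native library lives:
#   -Djava.library.path="$HADOOP_HOME/bin"
# or set it in code before touching the filesystem:
#   System.setProperty("hadoop.home.dir", "C:\\hadoop-2.6.0");
```

A version mismatch between `hadoop.dll` and the Hadoop jars produces exactly this symptom, because the native method signature (`createDirectoryWithMode0`) does not exist in older DLLs.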

CDH5.2: MR, Unable to initialize any output collector

Submitted by 守給你的承諾、 on 2020-01-13 04:42:07
Question: Cloudera CDH 5.2 Quickstart VM; Cloudera Manager shows all nodes state = GREEN. In Eclipse I built a jar for an MR job, including all the relevant Cloudera jars on the build path: avro-1.7.6-cdh5.2.0.jar, avro-mapred-1.7.6-cdh5.2.0-hadoop2.jar, hadoop-common-2.5.0-cdh5.2.0.jar, hadoop-mapreduce-client-core-2.5.0-cdh5.2.0.jar. I ran the following job:

hadoop jar jproject1.jar avro00.AvroUserPrefCount -libjars ${LIBJARS} avro/00/in avro/00/out

I get the following error. Is it a Java heap problem, any
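For commands like the one above, `${LIBJARS}` is typically assembled from the jar directory and mirrored into `HADOOP_CLASSPATH` (comma list for `-libjars`, colon list for the client classpath). A runnable sketch, using dummy jars in a temp directory as stand-ins for the real Avro jars (the real location on a parcel install would be something like the CDH jars directory):

```shell
# Stand-in jar directory (assumption for demonstration).
mkdir -p /tmp/libjars_demo
touch /tmp/libjars_demo/avro-1.7.6-cdh5.2.0.jar \
      /tmp/libjars_demo/avro-mapred-1.7.6-cdh5.2.0-hadoop2.jar

# Comma-separated list for -libjars.
LIBJARS=$(ls /tmp/libjars_demo/*.jar | paste -sd, -)

# Colon-separated mirror for the submitting JVM's classpath.
export HADOOP_CLASSPATH=$(echo "$LIBJARS" | tr ',' ':')
echo "$LIBJARS"

# hadoop jar jproject1.jar avro00.AvroUserPrefCount -libjars "$LIBJARS" avro/00/in avro/00/out
```

If the jars are correct and the error persists, "Unable to initialize any output collector" is also worth checking against `mapreduce.task.io.sort.mb` exceeding the task heap.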

How are Hive SQL queries submitted as MR jobs from the Hive CLI

Submitted by 时光总嘲笑我的痴心妄想 on 2020-01-11 09:41:32
Question: I have deployed a CDH 5.9 cluster with MR as the Hive execution engine. I have a Hive table named "users" with 50 rows. Executing select * from users works fine:

hive> select * from users;
OK
Adam      1  38  ATK093  CHEF
Benjamin  2  24  ATK032  SERVANT
Charles   3  45  ATK107  CASHIER
Ivy       4  30  ATK384  SERVANT
Linda     5  23  ATK132  ASSISTANT
.
.
.
Time taken: 0.059 seconds, Fetched: 50 row(s)

But select max(age) from users failed after being submitted as an MR job. The container log
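The asymmetry itself is expected behavior: with `hive.fetch.task.conversion` at its usual setting, a plain `SELECT *` is served by a local fetch task and never touches MR, while an aggregate like `max(age)` always compiles to a real MR job. So a broken MR/YARN setup only surfaces on the second query. `EXPLAIN` makes the difference visible (shown for illustration; it requires a working Hive CLI):

```shell
# The first plan should contain only a Fetch Operator; the second
# should contain a Map Reduce stage.
hive -e 'EXPLAIN SELECT * FROM users;'
hive -e 'EXPLAIN SELECT max(age) FROM users;'

# Inspect the setting that governs fetch-task conversion:
hive -e 'SET hive.fetch.task.conversion;'
```

The actual failure should therefore be diagnosed from the YARN container log, not from the query text.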

Using Hive UDF in Impala gives erroneous results in Impala 1.2.4

Submitted by 血红的双手。 on 2020-01-06 14:54:47
Question: I have two Hive UDFs in Java which work perfectly well in Hive. The two functions are complementary to each other:

String myUDF(BigInt)
BigInt myUDFReverso(String)

myUDF("myInput") gives some output, and myUDFReverso(myUDF("myInput")) should give back myInput. This works in Hive, but when I try it in Impala (version 1.2.4), myUDF(BigInt) gives the expected answer (the printed value is correct), but passing that result to myUDFReverso(String) does not give back the original answer. I
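One thing worth double-checking in Impala is how the two functions were registered: unlike Hive, Impala requires the argument and return types to be declared in the `CREATE FUNCTION` statement, and a mismatch there can silently corrupt a round trip. A sketch of the registration DDL (the jar path and class names are hypothetical placeholders):

```shell
# Register both Hive UDFs in Impala with explicit signatures.
impala-shell -q "CREATE FUNCTION myUDF(BIGINT) RETURNS STRING
  LOCATION '/user/hive/udfs/myudfs.jar' SYMBOL='com.example.MyUDF';"
impala-shell -q "CREATE FUNCTION myUDFReverso(STRING) RETURNS BIGINT
  LOCATION '/user/hive/udfs/myudfs.jar' SYMBOL='com.example.MyUDFReverso';"
```

If the signatures are correct, the behavior may instead stem from how Impala 1.2.x passes string values to Hive UDFs (reused buffers), so making the UDF copy its input rather than hold a reference to it is another avenue to test.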

Cloudera/CDH v6.1.x + Python HappyBase v1.1.0: TTransportException(type=4, message='TSocket read 0 bytes')

Submitted by 烈酒焚心 on 2020-01-02 00:15:24
Question (EDIT): This question and answer apply to anyone experiencing the exception stated in the subject line, TTransportException(type=4, message='TSocket read 0 bytes'), whether or not Cloudera and/or HappyBase is involved. The root issue (as it turned out) stems from the client-side protocol and/or transport format not matching what the server side is implementing, and this can happen with any client/server pairing. Mine just happened to be Cloudera and HappyBase, but yours needn't
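Since the root cause is a protocol/transport mismatch, the fix is to read the HBase Thrift server's settings and configure the client to match. A sketch (requires an HBase client install; the property names are the standard HBase Thrift server ones):

```shell
# Print the effective server-side Thrift settings:
#   framed transport on/off, compact protocol on/off.
hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.regionserver.thrift.framed
hbase org.apache.hadoop.hbase.util.HBaseConfTool hbase.regionserver.thrift.compact

# Then mirror them on the HappyBase side, e.g. if both are true:
#   happybase.Connection(host, transport='framed', protocol='compact')
# (HappyBase 1.1.0 accepts transport='buffered'|'framed' and
#  protocol='binary'|'compact'; the defaults are buffered/binary.)
```

If the server says `false` for both, the HappyBase defaults already match and the TSocket error points elsewhere (e.g. the Thrift service not running on the port the client targets).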