Hive

Insert overwrite on partitioned table is not deleting the existing data

被刻印的时光 ゝ submitted on 2020-06-08 20:01:28
Question: I am trying to run INSERT OVERWRITE on a partitioned table. The SELECT query behind the INSERT OVERWRITE omits one partition completely. Is this the expected behavior?

Table definition:

    CREATE TABLE `cities_red`(
      `cityid` int,
      `city` string)
    PARTITIONED BY (
      `state` string)
    ROW FORMAT SERDE
      'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
    STORED AS INPUTFORMAT
      'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
    OUTPUTFORMAT
      'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
    TBLPROPERTIES (
      'auto.purge'='true
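This matches how dynamic-partition INSERT OVERWRITE behaves in Hive: only the partitions whose values appear in the SELECT output are rewritten, and every other existing partition is left untouched. A minimal HiveQL sketch of that behavior, assuming a hypothetical staging table cities_staging with the same columns:

    -- Allow the partition value to come from the SELECT output
    SET hive.exec.dynamic.partition=true;
    SET hive.exec.dynamic.partition.mode=nonstrict;

    -- Only partitions whose `state` value appears in this result set are overwritten;
    -- e.g. if the SELECT returns no rows for state='CA', the old CA partition survives.
    INSERT OVERWRITE TABLE cities_red PARTITION (state)
    SELECT cityid, city, state
    FROM cities_staging;

    -- To also remove partitions the SELECT no longer produces, drop or truncate first:
    -- ALTER TABLE cities_red DROP IF EXISTS PARTITION (state='CA');
    -- TRUNCATE TABLE cities_red;  -- clears all partitions of a managed table

If the goal is a full refresh of the table, truncating (or dropping the stale partitions) before the insert is the usual pattern.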

On HDFS, I want to display normal text for a hive table stored in ORC format

拟墨画扇 submitted on 2020-05-31 04:45:08
Question: I have saved a JSON DataFrame to Hive in ORC format:

    jsonDF.write.format("orc").saveAsTable("hiveExamples.jsonTest")

Now I need to display the file as normal text on HDFS. Is there a way to do this? I have used hdfs dfs -text /path-of-table, but it displays the raw ORC content rather than readable text.

Answer 1: From the Linux shell there is a utility called hive --orcfiledump. To see the metadata of an ORC file in HDFS you can invoke the command like:

    [@localhost ~ ]$ hive --orcfiledump <path to HDFS ORC
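To see the rows themselves rather than just the metadata, the same utility accepts a -d (data) flag in Hive 1.2 and later, which prints each row as a JSON line. A hedged sketch with a hypothetical warehouse path (check hive --orcfiledump --help on your version for the exact options):

    # Print ORC metadata: schema, stripe and column statistics
    hive --orcfiledump /apps/hive/warehouse/hiveexamples.db/jsontest/part-00000

    # Print the row contents as JSON text
    hive --orcfiledump -d /apps/hive/warehouse/hiveexamples.db/jsontest/part-00000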

Hive crashing with java.lang.IncompatibleClassChangeError

只愿长相守 submitted on 2020-05-29 09:58:44
Question: Running Hive 3.1.1 against Hadoop 3.2.0 crashes when running 'select * from employee' with:

    java.lang.IncompatibleClassChangeError: Class com.google.common.collect.ImmutableSortedMap does not implement the requested interface java.util.NavigableMap

Commands like 'show tables' all run fine, and data loads OK from the CLI as well. I checked various other commands, and e.g. data is loaded etc. The setup uses MySQL as the metastore with MySQL-connector-java-5.1.47.jar. The only other observation is that
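This particular IncompatibleClassChangeError on com.google.common classes is commonly reported as a Guava version conflict: Hive 3.1.1 bundles an older Guava than the one Hadoop 3.2.0 puts on the shared classpath, and in the older release ImmutableSortedMap does not yet implement NavigableMap. A frequently cited workaround is to replace Hive's Guava jar with Hadoop's; the exact jar versions below are assumptions, so check what your installation actually contains:

    # Inspect which Guava versions each component bundles
    ls $HIVE_HOME/lib/guava-*.jar                        # e.g. guava-19.0.jar (assumed)
    ls $HADOOP_HOME/share/hadoop/common/lib/guava-*.jar  # e.g. guava-27.0-jre.jar (assumed)

    # Remove Hive's older Guava and copy in the one Hadoop uses
    rm $HIVE_HOME/lib/guava-19.0.jar
    cp $HADOOP_HOME/share/hadoop/common/lib/guava-27.0-jre.jar $HIVE_HOME/lib/

After swapping the jar, restart HiveServer2 and the metastore before retrying the query.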

Creating sample Avro data for bytes type

╄→尐↘猪︶ㄣ submitted on 2020-05-28 07:19:13
Question: I am trying to create a sample .avro file containing a field with type bytes and logicalType decimal, but when the Avro file is loaded into a Hive table it shows a different value. What could be the reason?

schema.avsc:

    {
      "type" : "record",
      "name" : "example",
      "namespace" : "com.xyz.avro",
      "fields" : [ {
        "name" : "cost",
        "type" : {
          "type" : "bytes",
          "logicalType" : "decimal",
          "precision" : 38,
          "scale" : 10
        }
      } ]
    }

data.json:

    { "cost" : "0.0" }

Converted to .avro using avro-tools:

    java -jar avro-tools-1.8
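A likely cause is the JSON encoding of the bytes value. For the decimal logical type, the bytes must hold the two's-complement big-endian representation of the unscaled integer (value × 10^scale), and in Avro's JSON encoding a bytes field is written as a string whose characters are the raw byte values, not as the human-readable number "0.0". A sketch of what data.json would look like under that encoding for the value 0.0 (byte values written as \u00XX escapes; verify the round trip with avro-tools' fromjson and tojson subcommands):

    { "cost" : "\u0000" }

Here the unscaled value of 0.0 at scale 10 is 0, which serializes to a single zero byte. For a value such as 2.5 the unscaled integer is 25000000000 (hex 0x05D21DBA00), so the string would be "\u0005\u00D2\u001D\u00BA\u0000". Rather than hand-writing these escapes, it is usually easier to produce the record programmatically (for example with the Avro Java or Python libraries from a BigDecimal/decimal value) and then inspect the output with avro-tools tojson.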

message:Hive Schema version 1.2.0 does not match metastore's schema version 2.1.0 Metastore is not upgraded or corrupt

故事扮演 submitted on 2020-05-26 04:30:46
Question: Environment: spark2.11, hive2.2, hadoop2.8.2. The Hive shell runs successfully, with no errors or warnings, but when I run application.sh the job fails to start:

    /usr/local/spark/bin/spark-submit \
      --class cn.spark.sql.Demo \
      --num-executors 3 \
      --driver-memory 512m \
      --executor-memory 512m \
      --executor-cores 3 \
      --files /usr/local/hive/conf/hive-site.xml \
      --driver-class-path /usr/local/hive/lib/mysql-connector-java.jar \
      /usr/local/java/sql/sparkstudyjava.jar

and the error says: Exception in thread "main"
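The error in the title usually means the Hive client libraries bundled with Spark (version 1.2.x by default in Spark 2.x) are talking to a metastore whose schema was initialized by a newer Hive (2.1.x here). Two commonly suggested workarounds, sketched below with assumed paths and version numbers, are to relax the schema verification check in hive-site.xml, or to point Spark at Hive client jars matching the metastore via spark.sql.hive.metastore.version and spark.sql.hive.metastore.jars:

    <!-- hive-site.xml: relax the strict version check (use with care) -->
    <property>
      <name>hive.metastore.schema.verification</name>
      <value>false</value>
    </property>

    # Or tell Spark which metastore client version and jars to use
    # (pick a version your Spark release supports and that matches the metastore)
    /usr/local/spark/bin/spark-submit \
      --conf spark.sql.hive.metastore.version=2.1.1 \
      --conf spark.sql.hive.metastore.jars=/usr/local/hive/lib/* \
      ...

The first option only silences the check, so it is best treated as a stopgap; aligning the Hive client version used by Spark with the metastore schema is the cleaner fix.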
