How to handle fields enclosed in quotes (CSV) when importing data from S3 into DynamoDB using EMR/Hive

Submitted by 泪湿孤枕 on 2019-11-30 03:23:42

If you're stuck with the CSV file format, you'll have to use a custom SerDe; here's some work based on the opencsv library.

But if you can modify the source files, you can either choose a new delimiter so that the quoted fields aren't necessary (good luck), or rewrite the data to escape any embedded commas with a single escape character, e.g. '\', which can be specified in the ROW FORMAT clause with ESCAPED BY:

CREATE EXTERNAL TABLE emrS3_import_1(col1 string, col2 string, col3 string, col4 string)  ROW FORMAT DELIMITED FIELDS TERMINATED BY ',' ESCAPED BY '\\' LOCATION 's3://emrTest/folder';
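As a rough sketch of that rewrite step (the helper name and file paths are hypothetical, not from the answer above), a small preprocessing script could strip the quoting and backslash-escape embedded commas so the file matches the ESCAPED BY '\\' table:

```python
import csv

def escape_commas(in_path, out_path):
    """Rewrite a quoted CSV so embedded commas are backslash-escaped.

    A quoted field like "a, b" becomes a\, b which Hive's
    ROW FORMAT DELIMITED ... ESCAPED BY '\\' can then parse,
    with no quote characters left in the data.
    """
    with open(in_path, newline="") as src, open(out_path, "w") as dst:
        for row in csv.reader(src):
            dst.write(",".join(f.replace(",", r"\,") for f in row) + "\n")
```

The `csv` module does the quote handling here, so fields without embedded commas pass through unchanged.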

I was also stuck with the same issue: my fields are enclosed in double quotes and separated by semicolons (;). My table name is employee1.

I searched around and found a solution that worked perfectly for this.

We have to use a SerDe for this. Download the SerDe jar from this link: https://github.com/downloads/IllyaYalovyy/csv-serde/csv-serde-0.9.1.jar

Then follow the steps below at the Hive prompt:

add jar path/to/csv-serde.jar;

create table employee1(id string, name string, addr string)
row format serde 'com.bizo.hive.serde.csv.CSVSerde'
with serdeproperties(
"separatorChar" = "\;",
"quoteChar" = "\"")
stored as textfile
;

Then load the data from your path using the query below:

load data local inpath 'path/xyz.csv' into table employee1;

and then run:

select * from employee1;

Now you will see the quoted fields parsed correctly. Thanks.

Shankar

The following code solved the same type of problem:

CREATE TABLE TableRowCSV2(
    CODE STRING,
    PRODUCTCODE STRING,
    PRICE STRING
)
COMMENT 'row data csv'
ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'
WITH SERDEPROPERTIES (
   "separatorChar" = ",",
   "quoteChar"     = "\""
)
STORED AS TEXTFILE
TBLPROPERTIES("skip.header.line.count"="1");

Hive now includes an OpenCSVSerde which will properly parse those quoted fields without adding additional jars or error-prone and slow regexes.

ROW FORMAT SERDE 'org.apache.hadoop.hive.serde2.OpenCSVSerde'

Hive doesn't support quoted strings right out of the box. There are two approaches to solving this:

  1. Use a different field separator (e.g. a pipe).
  2. Write a custom InputFormat based on OpenCSV.

The faster (and arguably more sane) approach is to modify your initial export process to use a different delimiter so you can avoid quoted strings altogether. Then you can tell Hive to use an external table with a tab or pipe delimiter:

CREATE TABLE foo (
  col1 INT,
  col2 STRING
) ROW FORMAT DELIMITED FIELDS TERMINATED BY '|';
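If you can't change the export itself, the same effect can be had with a one-off conversion pass. As a sketch (the helper name and paths are illustrative, not part of the original answer), the quoted CSV could be rewritten as pipe-delimited before loading:

```python
import csv

def csv_to_pipe(in_path, out_path):
    """Convert a quoted CSV file to pipe-delimited output.

    The csv module resolves quoted fields and embedded commas,
    so the resulting file needs no quoting and Hive's
    FIELDS TERMINATED BY '|' can split it directly.
    """
    with open(in_path, newline="") as src, open(out_path, "w") as dst:
        for row in csv.reader(src):
            dst.write("|".join(row) + "\n")
```

This assumes the data itself contains no pipe characters; pick a delimiter you know never occurs in the fields.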
Amit

Use the csv-serde-0.9.1.jar file in your Hive query; see http://illyayalovyy.github.io/csv-serde/

add jar /path/to/jar_file;

create external table emrS3_import_1(col1 string, col2 string, col3 string, col4 string)
row format serde 'com.bizo.hive.serde.csv.CSVSerde'
with serdeproperties
(
  "separatorChar" = "\;",
  "quoteChar" = "\""
) stored as textfile
LOCATION 's3://emrTest/folder'
tblproperties("skip.header.line.count"="1"); -- skips the header line, if the file has one

There can be multiple solutions to this problem.

  1. Write a custom SerDe class
  2. Use RegexSerde
  3. Remove escaped delimiter characters from the data

Read more at http://grokbase.com/t/hive/user/117t2c6zhe/urgent-hive-not-respecting-escaped-delimiter-characters
