Import from MySQL to Hive using Sqoop


Question


I have to import more than 400 million rows from a MySQL table (which has a composite primary key) into a partitioned Hive table via Sqoop. The table holds two years of data, with a departure-date column ranging from 20120605 to 20140605 and thousands of records per day. I need to partition the data by departure date.
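For context, the partitioned target table would be declared along these lines; flight_data and its columns are placeholder names, not from the original question:

hive -e "
CREATE TABLE flight_data (                      -- hypothetical table name
  booking_id BIGINT,                            -- placeholder data columns
  fare       DOUBLE
)
PARTITIONED BY (departure_date STRING)          -- one partition per day, e.g. 20120605
ROW FORMAT DELIMITED FIELDS TERMINATED BY ',';"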

The versions:

Apache Hadoop - 1.0.4

Apache Hive - 0.9.0

Apache Sqoop - sqoop-1.4.2.bin__hadoop-1.0.0

To my knowledge, there are three approaches:

  1. MySQL -> Non-partitioned Hive table -> INSERT from Non-partitioned Hive table into Partitioned Hive table
  2. MySQL -> Partitioned Hive table
  3. MySQL -> Non-partitioned Hive table -> ALTER Non-partitioned Hive table to add PARTITION

    1. This is the painful approach I am currently following.

    2. I read that support for this was added in later(?) versions of Hive and Sqoop, but I could not find an example.

    3. The syntax requires partitions to be specified as key-value pairs, which is not feasible with millions of records where one cannot think of all the partition key-value pairs (see the sketch below).
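To make the pain points concrete: approach 3's static syntax is one statement per partition, while the second step of approach 1 can be collapsed into a single statement with Hive's dynamic partitioning (available since Hive 0.6, so it works on 0.9.0). A minimal sketch, assuming the hypothetical flight_data table above plus a non-partitioned staging table flight_data_stg:

# Approach 3: one ALTER per day -- roughly 730 daily partitions to enumerate by hand
hive -e "ALTER TABLE flight_data ADD PARTITION (departure_date='20120605');"

# Approach 1, step 2: a single dynamic-partition INSERT covers every date at once
hive -e "
SET hive.exec.dynamic.partition=true;
SET hive.exec.dynamic.partition.mode=nonstrict;
INSERT OVERWRITE TABLE flight_data PARTITION (departure_date)
SELECT booking_id, fare, departure_date         -- partition column must come last
FROM flight_data_stg;"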

Can anyone provide inputs for approaches 2 and 3?


Answer 1:


I guess you can create a partitioned Hive table first, then write the Sqoop import command for it. For example:

sqoop import \
  --connect jdbc:mysql://<host>/<database> \
  --username xxxx --password xxxx \
  --table <mysql_table> \
  --hive-import --hive-overwrite --hive-drop-import-delims \
  --warehouse-dir "/warehouse" \
  --hive-table <hive_table> \
  --hive-partition-key <partition_column> \
  --hive-partition-value <partition_value> \
  --num-mappers 1 \
  --fields-terminated-by ',' --lines-terminated-by '\n'
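Note that --hive-partition-value takes a fixed literal, so each run of the command above loads exactly one partition. One way to cover the full two-year range is to drive it from a shell loop; a sketch, assuming the MySQL table stores the date as a number in a departure_date column and reusing the placeholder names from above:

for d in 20120605 20120606 20120607; do           # in practice, generate all ~730 dates
  sqoop import \
    --connect jdbc:mysql://<host>/<database> \
    --username xxxx --password xxxx \
    --table <mysql_table> \
    --where "departure_date = ${d}" \
    --hive-import --hive-table flight_data \
    --hive-partition-key departure_date \
    --hive-partition-value "${d}" \
    --num-mappers 1
done

If departure_date is also imported as a regular column, exclude it with --columns, since Hive does not allow a data column and a partition key to share a name.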




Answer 2:


You have to create the partitioned table structure first, before you move your data into the partitioned table. When running Sqoop, there is no need to specify --hive-partition-key and --hive-partition-value; use --hcatalog-table instead of --hive-table.
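A minimal sketch of that HCatalog route, with placeholder connection details; note that Sqoop's HCatalog support (--hcatalog-table and related options) was only added in Sqoop 1.4.4, so it is not available on the 1.4.2 build from the question:

sqoop import \
  --connect jdbc:mysql://<host>/<database> \
  --username xxxx --password xxxx \
  --table <mysql_table> \
  --hcatalog-database default \
  --hcatalog-table flight_data \
  --num-mappers 4

Because HCatalog picks the target partition from the departure_date value in each row (dynamic partitioning), a single run can populate all the daily partitions.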

Manu




Answer 3:


If this is still something people want to understand, they can use the following (here $departure_date is a shell variable holding the partition value for that run):

sqoop import --driver <driver name> --connect <connection url> --username <user name> -P --table employee --num-mappers <numeral> --warehouse-dir <hdfs dir> --hive-import --hive-table table_name --hive-partition-key departure_date --hive-partition-value $departure_date

Notes from the patch:

sqoop import [all other normal command line options] --hive-partition-key ds --hive-partition-value "value"

Some limitations:

  • It only allows for one partition key/value pair.
  • The partition key's type is hardcoded to be a string.
  • With auto partitioning in Hive 0.7, we may want to adjust this to just one command-line option for the key name, and use that column from the database table to partition.
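Whichever approach is used, the result can be checked afterwards; flight_data is again the hypothetical table name from the sketches above:

hive -e "SHOW PARTITIONS flight_data;"           # expect lines like departure_date=20120605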


Source: https://stackoverflow.com/questions/17334509/import-from-mysql-to-hive-using-sqoop
