spark streaming write data to Hbase with python blocked on saveAsNewAPIHadoopDataset


Question


I'm using Spark Streaming in Python to read from Kafka and write to HBase, and I found that the job very easily gets blocked at the saveAsNewAPIHadoopDataset stage. As the picture below shows, the duration of this stage is 8 hours. Does Spark write the data through the HBase API, or directly via the HDFS API?


Answer 1:


A bit late, but here is a similar example of saving an RDD to HBase:

Consider an RDD containing a single line:

{"id":3,"name":"Moony","color":"grey","description":"Monochrome kitty"}

Transform the RDD
We need to transform the RDD into a (key, value) pair with the following contents:

(rowkey, [row key, column family, column name, value])

import json

datamap = rdd.map(lambda x: (str(json.loads(x)["id"]), [str(json.loads(x)["id"]), "cfamily", "cats_json", x]))
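For the sample record above, this yields the pair ('3', ['3', 'cfamily', 'cats_json', x]), where x is the original JSON line: the id becomes the row key, cfamily is the column family, cats_json is the column name, and the raw JSON string is the cell value.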

Save to HBase
We can make use of the RDD.saveAsNewAPIHadoopDataset function, as used in this example: PySpark Hbase example to save the RDD to HBase

datamap.saveAsNewAPIHadoopDataset(conf=conf,keyConverter=keyConv,valueConverter=valueConv)
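The conf dictionary, keyConv, and valueConv used in this call are not defined in the snippet above. Here is a minimal sketch of what they typically look like, assuming a ZooKeeper quorum on localhost and a target table named cats (both placeholders), and using the string-to-Put converter classes that ship with Spark's examples jar:

import json

# Hadoop/HBase output configuration for saveAsNewAPIHadoopDataset
conf = {
    "hbase.zookeeper.quorum": "localhost",  # assumption: your ZooKeeper quorum
    "hbase.mapred.outputtable": "cats",     # assumption: your target HBase table
    "mapreduce.outputformat.class": "org.apache.hadoop.hbase.mapreduce.TableOutputFormat",
    "mapreduce.job.output.key.class": "org.apache.hadoop.hbase.io.ImmutableBytesWritable",
    "mapreduce.job.output.value.class": "org.apache.hadoop.io.Writable",
}
# Converters from Spark's examples that map Python strings to HBase types
keyConv = "org.apache.spark.examples.pythonconverters.StringToImmutableBytesWritableConverter"
valueConv = "org.apache.spark.examples.pythonconverters.StringListToPutConverter"

These converters turn each (rowkey, [row key, column family, column name, value]) pair into an HBase Put. The target table must already exist, and the jar containing the converters has to be on the classpath (for example via --jars when submitting the job).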

You can refer to my blog: pyspark-sparkstreaming hbase for the complete code of the working example.



Source: https://stackoverflow.com/questions/29853879/spark-streaming-write-data-to-hbase-with-python-blocked-on-saveasnewapihadoopdat
