Decompressing LZ4 compressed data in Spark

Submitted by 荒凉一梦 on 2019-12-10 17:39:13

Question


I have LZ4-compressed data in HDFS and I'm trying to decompress it in Apache Spark into an RDD. As far as I can tell, the only method in JavaSparkContext for reading data from HDFS is textFile, which reads the data exactly as it is stored in HDFS. I have come across articles on CompressionCodec, but they all explain how to compress output written to HDFS, whereas I need to decompress what is already on HDFS.

I am new to Spark, so I apologize in advance if I missed something obvious or if my conceptual understanding is incorrect, but it would be great if someone could point me in the right direction.


Answer 1:


Spark 1.1.0 supports reading LZ4-compressed files via sc.textFile. I got it working by using a Spark build that is linked against a Hadoop version with LZ4 support (2.4.1 in my case).
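With such a build, no special API is needed; the codec is selected from the file extension. A minimal spark-shell session might look like this (the HDFS path is a placeholder, not from the original answer):

```shell
# Start a Spark shell; the LZ4 codec is picked automatically
# from the .lz4 extension when textFile reads the path.
spark-shell
# scala> val lines = sc.textFile("hdfs:///data/events.lz4")
# scala> lines.count()   // forces the file to be decompressed and read
```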

After that, I built the native libraries for my platform as described in the Hadoop docs and linked them to Spark via the --driver-library-path option.
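The build-and-link steps could be sketched as follows; the source directory, install path, class name, and jar are illustrative placeholders, not details from the answer:

```shell
# Build the Hadoop native libraries from source (version illustrative).
cd hadoop-2.4.1-src
mvn package -Pdist,native -DskipTests -Dtar

# Point the Spark driver at the resulting native libs when submitting a job.
# The class name, jar, and paths below are placeholders.
spark-submit \
  --driver-library-path /opt/hadoop-2.4.1/lib/native \
  --class com.example.MyJob \
  myjob.jar
```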

Without that linking step, I got "native lz4 library not loaded" exceptions.

Depending on the Hadoop distribution you are using, the native-library build step may be optional.
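One way to check whether your Hadoop distribution already ships a working native LZ4 codec is the hadoop checknative command:

```shell
# Lists which native codecs this Hadoop build can actually load;
# if the lz4 line reports "true", no extra native build is needed.
hadoop checknative -a
```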



Source: https://stackoverflow.com/questions/24985704/decompressing-lz4-compressed-data-in-spark
