How to transfer binary file into rdd in spark?

Submitted by 馋奶兔 on 2019-12-11 04:06:28

Question


I am trying to load SEG-Y files into Spark and convert them into an RDD for MapReduce operations, but I have not been able to make the conversion work. Can anyone offer help?


Answer 1:


You could use the binaryRecords() PySpark call to convert a binary file's contents into an RDD:

http://spark.apache.org/docs/latest/api/python/pyspark.html#pyspark.SparkContext.binaryRecords

binaryRecords(path, recordLength)

Load data from a flat binary file, assuming each record is a set of numbers with the specified numerical format (see ByteBuffer), and the number of bytes per record is constant.

Parameters:

- path – Directory to the input data files
- recordLength – The length at which to split the records

Then you could map() that RDD into a structure using, for example, struct.unpack():

https://docs.python.org/2/library/struct.html

We use this approach to ingest proprietary binary files with fixed-width records. There is a bit of Python code that generates the format string (the first argument to struct.unpack), but if your file layout is static, it's fairly simple to write one by hand.
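A minimal sketch of this approach. The 16-byte record layout, the format string, and the input path below are hypothetical placeholders; a real SEG-Y trace record would need a format string matching its actual byte layout:

```python
import struct

# Hypothetical record layout: each 16-byte record holds one 4-byte
# big-endian int (a record id) followed by three 4-byte big-endian floats.
RECORD_LENGTH = 16
RECORD_FORMAT = ">i3f"  # struct.calcsize(">i3f") == 16, matching RECORD_LENGTH

def parse_record(raw):
    """Unpack one fixed-width binary record into a Python tuple."""
    return struct.unpack(RECORD_FORMAT, raw)

def parse_file_on_spark(sc, path):
    """Driver sketch -- requires a PySpark SparkContext; not run here.

    binaryRecords() splits the file into RECORD_LENGTH-byte chunks,
    and map() applies the unpacking to each chunk.
    """
    return sc.binaryRecords(path, RECORD_LENGTH).map(parse_record)
```

The parsing function is pure Python, so it can be unit-tested locally before submitting the job to a cluster.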

Similarly, this is possible in pure Scala:

http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext@binaryRecords(path:String,recordLength:Int,conf:org.apache.hadoop.conf.Configuration):org.apache.spark.rdd.RDD[Array[Byte]]




Answer 2:


You haven't really given much detail, but you can start with the SparkContext.binaryFiles() API:

http://spark.apache.org/docs/latest/api/scala/index.html#org.apache.spark.SparkContext
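For illustration, a sketch of how binaryFiles() could be used to read SEG-Y headers. It assumes the standard SEG-Y layout (a 3200-byte EBCDIC textual header at the start of each file); the helper names and path are hypothetical:

```python
import codecs

# A SEG-Y file begins with a 3200-byte EBCDIC-encoded textual header,
# followed by a 400-byte binary file header; trace data starts at byte 3600.
TEXT_HEADER_LEN = 3200

def textual_header(content):
    """Decode the leading 3200-byte EBCDIC textual header of a SEG-Y file."""
    return codecs.decode(content[:TEXT_HEADER_LEN], "cp037")

def load_segy_headers(sc, path):
    """Driver sketch -- requires a PySpark SparkContext; not run here.

    binaryFiles() returns an RDD of (filename, whole-file-bytes) pairs,
    which suits SEG-Y since each file must be parsed from its start.
    """
    return sc.binaryFiles(path).mapValues(textual_header)
```

Note that binaryFiles() loads each file whole, so it fits files small enough for one executor's memory; for splitting very large files into fixed-size records, binaryRecords() from the other answer is the better fit.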



Source: https://stackoverflow.com/questions/32602489/how-to-transfer-binary-file-into-rdd-in-spark
