Is it possible to read PDF/audio/video files (unstructured data) using Apache Spark?


Question


Is it possible to read PDF/audio/video files (unstructured data) using Apache Spark? For example, I have thousands of PDF invoices and I want to read the data from them and perform some analytics on it. What steps do I need to take to process unstructured data?


Answer 1:


Yes, it is. Use sparkContext.binaryFiles to load the files in binary format, then use map to transform each value into some other format, for example by parsing the binary content with Apache Tika or Apache POI.

Pseudocode:

val rawFile = sparkContext.binaryFiles(...)   // RDD of (path, PortableDataStream) pairs
val ready = rawFile.map { case (path, stream) =>
  // parse stream.open() here with another framework, e.g. Apache Tika or POI
}

Importantly, the parsing itself must be done with another framework, as mentioned above. The map function receives each file as a (path, PortableDataStream) pair; calling open() on the stream returns the InputStream that the parser consumes.
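
For example, here is a minimal PySpark sketch of the same idea. It assumes the tika-python bindings are installed (pip install tika), and the input path and function names are hypothetical:

from tika import parser  # Python bindings for Apache Tika

def extract_text(path_and_bytes):
    path, content = path_and_bytes          # binaryFiles yields (path, bytes) in PySpark
    parsed = parser.from_buffer(content)    # run Tika over the raw bytes
    return (path, parsed.get('content') or '')

# each record becomes (file path, extracted plain text), ready for analytics
pdf_texts = sc.binaryFiles('hdfs:///invoices/*.pdf').map(extract_text)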




Answer 2:


We had a scenario where we needed to run a custom decryption algorithm on the input files, and we didn't want to rewrite that code in Scala or Python. The PySpark code follows:

import socket
import subprocess

from pyspark import SparkContext, SparkConf, AccumulatorParam
from pyspark.sql import HiveContext

def decryptUncompressAndParseFile(filePathAndContents):
    '''each line of the file becomes an RDD record'''
    global acc_errCount, acc_errLog
    proc = subprocess.Popen(['custom_decrypt_program','--decrypt'], 
             stdin=subprocess.PIPE, stdout=subprocess.PIPE, stderr=subprocess.PIPE)
    (unzippedData, err) = proc.communicate(input=filePathAndContents[1])
    if len(err) > 0:  # problem reading the file
        acc_errCount.add(1)
        acc_errLog.add('Error: '+str(err)+' in file: '+filePathAndContents[0]+
            ', on host: '+socket.gethostname()+', return code: '+str(proc.returncode))
        return []  # this is okay with flatMap
    records   = list()
    iterLines = iter(unzippedData.splitlines())
    for line in iterLines:
        #sys.stderr.write('Line: '+str(line)+'\n')
        values = [x.strip() for x in line.split('|')]
        ...
        records.append( (... extract data as appropriate from values into this tuple ...) )
    return records

class StringAccumulator(AccumulatorParam):
    ''' custom accumulator to hold strings '''
    def zero(self,initValue=""):
        return initValue
    def addInPlace(self,str1,str2):
        return str1.strip()+'\n'+str2.strip()

def main():
    ...
    global acc_errCount, acc_errLog
    acc_errCount  = sc.accumulator(0)
    acc_errLog    = sc.accumulator('',StringAccumulator())
    binaryFileTup = sc.binaryFiles(args.inputDir)
    # use flatMap instead of map, to handle corrupt files
    linesRdd = binaryFileTup.flatMap(decryptUncompressAndParseFile, preservesPartitioning=True)
    df = sqlContext.createDataFrame(linesRdd, ourSchema())
    df.registerTempTable("dataTable")
    ...

The custom string accumulator was very useful in identifying corrupt input files.
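
As a hedged follow-up sketch, the driver can surface those accumulators once an action has run (accumulator values are only guaranteed to be populated after an action such as count()):

df.count()  # force an action so the accumulators are populated
if acc_errCount.value > 0:
    print('Skipped %d corrupt files:\n%s' % (acc_errCount.value, acc_errLog.value))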



Source: https://stackoverflow.com/questions/44890381/is-it-possible-to-read-pdf-audio-video-filesunstructured-data-using-apache-spa
