Spark: fastest way to create an RDD of numpy arrays

长情又很酷 2020-12-18 11:28

My Spark application uses RDDs of numpy arrays.
At the moment I'm reading my data from AWS S3, where it is represented as a simple text file in which each line is a vector of space-separated float values.

3 Answers
  • 2020-12-18 11:40

    It would be a little bit more idiomatic and slightly faster to simply map with numpy.fromstring as follows:

    import numpy as np
    
    path = ...
    initial_num_of_partitions = ...
    
    data = (sc.textFile(path, initial_num_of_partitions)
       .map(lambda s: np.fromstring(s, dtype=np.float64, sep=" ")))
    

    But ignoring that, there is nothing particularly wrong with your approach. As far as I can tell, with a basic configuration it is roughly twice as slow as simply reading the data and slightly slower than creating dummy numpy arrays.

    So it looks like the problem is somewhere else. It could be cluster misconfiguration, the cost of fetching data from S3, or even unrealistic expectations.
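
    An equivalent way to build the same arrays without np.fromstring, as a minimal sketch (the path and partition count are placeholders, as above; this is just another option, not necessarily faster):

    import numpy as np

    path = ...
    initial_num_of_partitions = ...

    # Split each line on whitespace and let numpy convert the tokens to a float64 array.
    data = (sc.textFile(path, initial_num_of_partitions)
        .map(lambda s: np.array(s.split(), dtype=np.float64)))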

  • 2020-12-18 12:00

    You shouldn't use numpy while working with Spark. Spark has its own way of processing data that ensures your sometimes very large files aren't loaded into memory all at once, exceeding the memory limit. You should load your file with Spark like this:

    data = sc.textFile("s3_url", initial_num_of_partitions) \
        .map(lambda row: [float(x) for x in row.split(' ')])
    

    Now this will output an RDD like this, based on your example:

    >>> print(data.collect())
    [[1.0, 2.0, 3.0], [5.1, 3.6, 2.1], [3.0, 0.24, 1.333]]
    

    @edit Some suggestions on file formats and numpy usage:

    Text files are just as good as CSV, TSV, Parquet, or anything else you feel comfortable with. Binary files are not preferred, according to the Spark docs on binary file loading:

    binaryFiles(path, minPartitions=None)

    Note: Experimental

    Read a directory of binary files from HDFS, a local file system (available on all nodes), or any Hadoop-supported file system URI as a byte array. Each file is read as a single record and returned in a key-value pair, where the key is the path of each file, the value is the content of each file.

    Note: Small files are preferred, large file is also allowable, but may cause bad performance.
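
    For reference, a minimal sketch of what that quoted API gives you (the directory path is a placeholder, and treating the contents as raw float64 bytes is purely an assumption for illustration): each element is a (file path, file bytes) pair that you decode yourself:

    import numpy as np

    # Each element of pairs is (path, bytes); here the bytes are assumed to be raw float64 values.
    pairs = sc.binaryFiles("s3_dir_url")
    arrays = pairs.map(lambda kv: np.frombuffer(kv[1], dtype=np.float64))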

    As for numpy usage, if I were you I'd definitely try to replace any external package with native Spark, for example pyspark.mllib.random for randomization: http://spark.apache.org/docs/latest/api/python/pyspark.mllib.html#module-pyspark.mllib.random
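
    For example, a minimal sketch of generating random vectors natively with Spark (the row count, column count, and seed are made up for illustration):

    from pyspark.mllib.random import RandomRDDs

    # Generate a 1000 x 3 RDD of uniformly random vectors directly on the cluster,
    # instead of building numpy arrays on the driver and parallelizing them.
    random_vectors = RandomRDDs.uniformVectorRDD(sc, 1000, 3, seed=42)
    print(random_vectors.take(2))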

  • 2020-12-18 12:01

    The best thing to do in these circumstances is to use the pandas library for IO. Please refer to this question: pandas read_csv() and python iterator as input. There you will see how to replace the np.loadtxt() function so that creating an RDD of numpy arrays is much faster.
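
    For illustration, a minimal sketch of that idea, assuming space-separated floats as in the question (parse_partition is a hypothetical helper, not taken from the linked answer): each partition's text is handed to pandas' fast C parser in one batch and yielded back as numpy rows:

    import io
    import numpy as np
    import pandas as pd

    def parse_partition(lines):
        # Hypothetical helper: feed a whole partition's text to pandas at once,
        # then yield each parsed row as a numpy float64 array.
        text = "\n".join(lines)
        if not text:
            return
        df = pd.read_csv(io.StringIO(text), sep=" ", header=None, dtype=np.float64)
        for row in df.values:
            yield row

    data = sc.textFile("s3_url", initial_num_of_partitions).mapPartitions(parse_partition)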
