How to determine “preferred location” for partitions of PySpark dataframe?

Submitted by 牧云@^-^@ on 2019-12-13 03:41:36

Question


I'm trying to understand how coalesce determines how to merge the initial partitions into the final partitions, and apparently the "preferred location" has something to do with it.

According to this question, Scala Spark has a function preferredLocations(split: Partition) that can identify this. But I'm not at all familiar with the Scala side of Spark. Is there a way to determine the preferred location of a given row or partition ID at the PySpark level?
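For example (an illustrative sketch, not my exact job): suppose an RDD with 8 initial partitions is coalesced down to 2, and I'd like to see which hosts the resulting partitions prefer.

rdd = sc.range(100).map(lambda x: (x % 4, None)).partitionBy(8)
coalesced = rdd.coalesce(2)   # 8 initial partitions merged into 2

rdd.getNumPartitions()        # 8
coalesced.getNumPartitions()  # 2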


Answer 1:


Yes, it is theoretically possible. First, some example data to force some form of preference (there could be a simpler example):

rdd1 = sc.range(10).map(lambda x: (x % 4, None)).partitionBy(8)
rdd2 = sc.range(10).map(lambda x: (x % 4, None)).partitionBy(8)

# Force caching so downstream plan has preferences
rdd1.cache().count()

rdd3 = rdd1.union(rdd2)

Now you can define a helper:

from pyspark import SparkContext

def preferred_locations(rdd):
    def to_py_generator(xs):
        """Convert a Scala collection to a Python generator"""
        j_iter = xs.iterator()
        while j_iter.hasNext():
            yield j_iter.next()

    # Get JVM
    jvm = SparkContext._active_spark_context._jvm
    # Get Scala RDD
    srdd = jvm.org.apache.spark.api.java.JavaRDD.toRDD(rdd._jrdd)
    # Get partitions
    partitions = srdd.partitions()
    return {
        p.index(): list(to_py_generator(srdd.preferredLocations(p)))
        for p in partitions
    }

Applied:

preferred_locations(rdd3)

# {0: ['...'],
#  1: ['...'],
#  2: ['...'],
#  3: ['...'],
#  4: [],
#  5: [],
#  6: [],
#  7: []}
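
Since the question title mentions DataFrames: df.rdd is an ordinary PySpark RDD, so the same helper can be applied to a DataFrame's underlying RDD as well. A minimal sketch, assuming an existing SparkSession named spark (note that the returned lists may well be empty unless the underlying plan exposes locality, e.g. after a shuffle or with cached / HDFS-backed input):

df = spark.range(10).repartition(4)
df.cache().count()   # materialize so the plan has a chance to expose block locations

preferred_locations(df.rdd)
# Output depends on your cluster; each partition index maps to a (possibly empty) list of hosts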


Source: https://stackoverflow.com/questions/50872579/how-to-determine-preferred-location-for-partitions-of-pyspark-dataframe
