Explain the aggregate functionality in Spark


I am looking for some better explanation of the aggregate functionality that is available via spark in python.

The example I have is as follows (using pyspark from

9 Answers
  • 2020-12-07 12:59

    I wasn't fully convinced by the accepted answer, and JohnKnight's answer helped, so here's my point of view:

    First, let's explain aggregate() in my own words:

    Prototype:

    aggregate(zeroValue, seqOp, combOp)

    Description:

    aggregate() lets you take an RDD and generate a single value that is of a different type than what was stored in the original RDD.

    Parameters:

    1. zeroValue: The initialization value for your result, in the desired format.
    2. seqOp: The operation you want to apply to RDD records. Runs once for every record in a partition.
    3. combOp: Defines how the resulting objects (one per partition) get combined (see the sketch just after this list).
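    To make these three pieces concrete before the Spark example, here is a minimal, pure-Python sketch of the semantics (no Spark involved; the helper name aggregate_like and the explicit list-of-partitions input are just for illustration):

    def aggregate_like(partitions, zeroValue, seqOp, combOp):
        # seqOp folds the elements of each partition, starting from zeroValue
        local_results = []
        for partition in partitions:
            local_result = zeroValue
            for element in partition:
                local_result = seqOp(local_result, element)
            local_results.append(local_result)
        # combOp then merges the per-partition results, again starting from zeroValue
        global_result = zeroValue
        for local_result in local_results:
            global_result = combOp(global_result, local_result)
        return global_result

    # sum and length of [1, 2, 3, 4] split into two "partitions"
    aggregate_like([[1, 2], [3, 4]],
                   (0, 0),
                   lambda acc, x: (acc[0] + x, acc[1] + 1),
                   lambda a, b: (a[0] + b[0], a[1] + b[1]))
    # -> (10, 4)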

    Example:

    Compute the sum of a list and the length of that list. Return the result in a pair of (sum, length).

    In a Spark shell, I first created a list with 4 elements, with 2 partitions:

    listRDD = sc.parallelize([1,2,3,4], 2)
    

    then I defined my seqOp:

    seqOp = (lambda local_result, list_element: (local_result[0] + list_element, local_result[1] + 1) )
    

    and my combOp:

    combOp = (lambda some_local_result, another_local_result: (some_local_result[0] + another_local_result[0], some_local_result[1] + another_local_result[1]) )
    

    and then I aggregated:

    listRDD.aggregate( (0, 0), seqOp, combOp)
    Out[8]: (10, 4)
    

    As you can see, I gave descriptive names to my variables, but let me explain it further:

    The first partition has the sublist [1, 2]. We will apply the seqOp to each element of that list and this will produce a local result, a pair of (sum, length), that will reflect the result locally, only in that first partition.

    So, let's start: local_result gets initialized to the zeroValue parameter we provided the aggregate() with, i.e. (0, 0) and list_element is the first element of the list, i.e. 1. As a result this is what happens:

    0 + 1 = 1
    0 + 1 = 1
    

    Now the local result is (1, 1), meaning that so far, for the 1st partition, after processing only the first element, the sum is 1 and the length is 1. Notice that local_result gets updated from (0, 0) to (1, 1). Next, local_result is (1, 1) and list_element is the second element, 2, so:

    1 + 2 = 3
    1 + 1 = 2
    

    and now the local result is (3, 2), which will be the final result from the 1st partition, since there are no other elements in the sublist of the 1st partition.

    Doing the same for the 2nd partition, we get (7, 2).

    Now we apply the combOp to each local result, so that we can form the final, global result, like this: (3, 2) + (7, 2) = (10, 4)


    Example described in 'figure':

                (0, 0) <-- zeroValue
    
    [1, 2]                  [3, 4]
    
    0 + 1 = 1               0 + 3 = 3
    0 + 1 = 1               0 + 1 = 1
    
    1 + 2 = 3               3 + 4 = 7
    1 + 1 = 2               1 + 1 = 2       
        |                       |
        v                       v
      (3, 2)                  (7, 2)
          \                    / 
           \                  /
            \                /
             \              /
              \            /
               \          / 
               ------------
               |  combOp  |
               ------------
                    |
                    v
                 (10, 4)
    

    Inspired by this great example.


    So now, if the zeroValue were not (0, 0) but (1, 0), one would expect to get (8 + 4, 2 + 2) = (12, 4), which doesn't explain what you experience, and even altering the number of partitions of my example won't reproduce your result.

    The key here is JohnKnight's answer, which states that the zeroValue is not just applied once per partition; it may be applied more times than you expect.
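    You can see the extra applications directly in pyspark (a sketch, reusing the seqOp and combOp defined above; the exact number of extra applications depends on the partitioning, but the zeroValue is reused at least once more when the partition results are combined):

    listRDD = sc.parallelize([1, 2, 3, 4], 2)
    # (1, 0) is the starting value for each of the 2 partitions *and* for the final combine,
    # so the sum picks up 3 extra 1's: 10 + 3 = 13, not the "expected" 12
    listRDD.aggregate((1, 0), seqOp, combOp)
    # Out: (13, 4)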

  • 2020-12-07 12:59

    For people looking for the Scala equivalent of the above example, here it is. Same logic, same input and result.

    scala> val listRDD = sc.parallelize(List(1,2,3,4), 2)
    listRDD: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[2] at parallelize at <console>:21
    
    scala> listRDD.collect()
    res7: Array[Int] = Array(1, 2, 3, 4)
    
    scala> listRDD.aggregate((0,0))((acc, value) => (acc._1+value,acc._2+1),(acc1,acc2) => (acc1._1+acc2._1,acc1._2+acc2._2))
    res10: (Int, Int) = (10,4)
    
  • 2020-12-07 12:59

    I will explain the concept of the aggregate operation in Spark as follows:

    Definition of the aggregate function

    def aggregate(initial value)(intra-partition sequence operation, inter-partition combination operation)
    

    val flowers = sc.parallelize(List(11, 12, 13, 24, 25, 26, 35, 36, 37, 24, 25, 16), 4)

    Here the second argument, 4, is the number of partitions the data will be split into. Hence, the RDD is distributed into 4 partitions as:

    11, 12, 13
    24, 25, 26
    35, 36, 37
    24, 25, 16
    

    We divide the problem into two parts. The first part is to aggregate the total number of flowers picked in each quadrant; that's the intra-partition sequence aggregation:

    11+12+13 = 36
    24+25+26 = 75
    35+36+37 = 108
    24+25+16 = 65
    

    The second part of the problem is to sum these individual aggregates across the partitions; that's the inter-partition aggregation.

    36 + 75 + 108 + 65 = 284
    

    The resulting sum is returned to the driver as a plain value and can be used for any further processing or computation.

    So the code becomes like:

    val sum = flowers.aggregate(0)((acc, value) => acc + value, (x, y) => x + y)
    // or equivalently: val sum = flowers.aggregate(0)(_ + _, _ + _)

    Answer: 284

    Explanation: 0 is the initial accumulator value. The first + is the intra-partition sum, adding the total number of flowers picked by each picker in each quadrant of the garden. The second + is the inter-partition sum, which aggregates the total sums from each quadrant.
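    For readers following along in pyspark, here is a sketch of the same flowers example (the variable names are mine, not from the original answer):

    flowers = sc.parallelize([11, 12, 13, 24, 25, 26, 35, 36, 37, 24, 25, 16], 4)
    # first lambda: intra-partition sum, second lambda: inter-partition sum
    flowers.aggregate(0, lambda acc, value: acc + value, lambda x, y: x + y)
    # -> 284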

    Case 1:

    What would happen if the initial value weren't zero? If it were 5, for example:

    The value would be added to each intra-partition aggregate, and also to the inter-partition aggregate:

    So the first calculation would be:

    11 + 12 + 13 + 5 = 36 + 5 = 41
    24 + 25 + 26 + 5 = 75 + 5 = 80
    35 + 36 + 37 + 5 = 108 + 5 = 113
    24 + 25 + 16 + 5 = 65 + 5 = 70
    

    Here's the inter-partition aggregation calculation with the initial value of 5:

    partition1 + partition2 + partition3 + partition4 + 5 = 41 + 80 + 113 + 70 + 5 = 309
    
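    The same case can be checked in pyspark (a sketch; with 4 partitions the initial value 5 is added once per partition and once more in the final combine, so 284 + 4*5 + 5 = 309):

    flowers = sc.parallelize([11, 12, 13, 24, 25, 26, 35, 36, 37, 24, 25, 16], 4)
    flowers.aggregate(5, lambda acc, value: acc + value, lambda x, y: x + y)
    # -> 309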

    So, coming to your query: the sum is calculated based on the number of partitions the RDD data is distributed across. I think your data was distributed as shown below, and that's why you got the result (19, 4). So, when doing an aggregate operation, be specific about the number of partitions:

    val list = sc.parallelize(List(1,2,3,4))
    val list2 = list.glom().collect
    val res12 = list.aggregate((1,0))(
          (acc, value) => (acc._1 + value, acc._2 + 1),
          (acc1, acc2) => (acc1._1 + acc2._1, acc1._2 + acc2._2)
    )
    

    result:

    list: org.apache.spark.rdd.RDD[Int] = ParallelCollectionRDD[19] at parallelize at command-472682101230301:1
    list2: Array[Array[Int]] = Array(Array(), Array(1), Array(), Array(2), Array(), Array(3), Array(), Array(4))
    res12: (Int, Int) = (19,4)
    

    Explanation: As your data is distributed across 8 partitions, the result follows (by the logic explained above):

    intra-partition addition (each of the 8 partitions, including the empty ones, starts from the zeroValue 1):

    0+1=1
    1+1=2
    0+1=1
    2+1=3
    0+1=1
    3+1=4
    0+1=1
    4+1=5
    
    total=18
    

    inter-partition calculation (the zeroValue 1 is applied once more when the partition results are combined):

    18+1 (1+2+1+3+1+4+1+5+1) = 19
    
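    To reproduce the (19, 4) result deterministically in pyspark, you can force 8 partitions instead of relying on the cluster's default parallelism (a sketch; 4 of the 8 partitions end up empty):

    list8 = sc.parallelize([1, 2, 3, 4], 8)
    list8.aggregate((1, 0),
                    lambda acc, value: (acc[0] + value, acc[1] + 1),
                    lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1]))
    # -> (19, 4): the zeroValue's 1 is added once per partition (8 times) plus once in the combine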

    Thank you
