Explain the aggregate functionality in Spark

误落风尘 2020-12-07 12:23

I am looking for a better explanation of the aggregate functionality that is available via Spark in Python.

The example I have is as follows (using pyspark from

9 Answers
  • 2020-12-07 12:38

    I don't have enough reputation points to comment on the previous answer by Maasg. Actually, the zero value should be 'neutral' towards the seqOp, meaning it doesn't interfere with the seqOp's result, like 0 towards addition, or 1 towards multiplication;

    You should NEVER use a non-neutral value, as it may be applied an arbitrary number of times. This behavior is not tied only to the number of partitions.

    I tried the same experiment as stated in the question: with 1 partition, the zero value was applied 3 times; with 2 partitions, 6 times; with 3 partitions, 9 times; and so on. A quick way to make this skew visible is sketched below.
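
    Here is a minimal PySpark sketch (assuming a live SparkContext `sc`; the variable names are mine) that compares a neutral zero with a non-neutral one. The difference between the two results counts how many times the zero value was folded in; the exact count can vary across Spark versions, which is exactly why only a neutral zero is safe:

    # Compare a neutral zero (0) with a non-neutral zero (1) for a plain sum.
    for n in (1, 2, 3):
        rdd = sc.parallelize([1, 2, 3, 4], n)          # 4 elements, n partitions
        neutral = rdd.aggregate(0, lambda a, v: a + v, lambda a, b: a + b)
        skewed = rdd.aggregate(1, lambda a, v: a + v, lambda a, b: a + b)
        # (skewed - neutral) = number of times the zero value was applied
        print(n, neutral, skewed - neutral)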

  • 2020-12-07 12:42

    Aggregate lets you transform and combine the values of the RDD at will.

    It uses two functions:

    The first one transforms and adds the elements of the original collection [T] into a local aggregate [U]; it takes the form (U, T) => U. You can see it as a fold, and therefore it also requires a zero for that operation. This operation is applied locally to each partition in parallel.

    Here is where the key to the question lies: the only value that should be used here is the ZERO value for the reduction operation. This operation is executed locally on each partition; therefore, adding anything to that zero value will add to the result multiplied by the number of partitions of the RDD.

    The second operation takes two values of the result type of the previous operation [U] and combines them into one value. This operation reduces the partial results of each partition and produces the actual total.

    For example: Given an RDD of Strings:

    val rdd:RDD[String] = ???
    

    Let's say you want to aggregate the lengths of the strings in that RDD; you would do the following:

    1) The first operation transforms strings into their sizes (Int) and accumulates those sizes:

    val stringSizeCummulator: (Int, String) => Int = (total, string) => total + string.length
    

    2) provide the ZERO for the addition operation (0)

    val ZERO = 0
    

    3) an operation to add two integers together:

    val add: (Int, Int) => Int = _ + _
    

    Putting it all together:

    rdd.aggregate(ZERO)(stringSizeCummulator, add)
    

    So, why is the ZERO needed? When the cummulator function is applied to the first element of a partition, there's no running total yet; ZERO is used in its place.

    E.g., my RDD is:

    - Partition 1: ["Jump", "over"]
    - Partition 2: ["the", "wall"]

    This results in:

    P1:

    1. stringSizeCummulator(ZERO, "Jump") = 4
    2. stringSizeCummulator(4, "over") = 8

    P2:

    1. stringSizeCummulator(ZERO, "the") = 3
    2. stringSizeCummulator(3, "wall") = 7

    Reduce: add(P1, P2) = add(8, 7) = 15
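
    Since the question is about PySpark, here is the same computation as a Python sketch (assuming a SparkContext `sc`; the snake_case names are my own translation of the Scala ones above):

    # Two partitions, mirroring the ["Jump", "over"] / ["the", "wall"] layout.
    rdd = sc.parallelize(["Jump", "over", "the", "wall"], 2)

    ZERO = 0                                                  # neutral element for addition
    string_size_cumulator = lambda total, s: total + len(s)   # (U, T) => U
    add = lambda a, b: a + b                                  # (U, U) => U

    print(rdd.aggregate(ZERO, string_size_cumulator, add))   # 15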

  • 2020-12-07 12:52

    You can use the following code (in Scala) to see precisely what aggregate is doing. It builds a tree of all the addition and merge operations:

    // A tree that records every operation instead of computing a number
    sealed trait Tree[+A]
    case class Leaf[A](value: A) extends Tree[A]
    case class Branch[A](left: Tree[A], right: Tree[A]) extends Tree[A]

    val zero: Tree[Int] = Leaf(0)                    // the zero value, kept as a leaf
    val rdd = sc.parallelize(1 to 4).repartition(3)  // 4 elements over 3 partitions
    

    And then, in the shell:

    scala> rdd.glom().collect()
    res5: Array[Array[Int]] = Array(Array(4), Array(1, 2), Array(3))
    

    So, we have these 3 partitions: [4], [1,2], and [3].

    scala> rdd.aggregate(zero)((l,r)=>Branch(l, Leaf(r)), (l,r)=>Branch(l,r))
    res11: Tree[Int] = Branch(Branch(Branch(Leaf(0),Branch(Leaf(0),Leaf(4))),Branch(Leaf(0),Leaf(3))),Branch(Branch(Leaf(0),Leaf(1)),Leaf(2)))
    

    You can represent the result as a tree:

    +
    | \__________________
    +                    +
    | \________          | \
    +          +         +   2
    | \        | \       | \         
    0  +       0  3      0  1
       | \
       0  4
    

    You can see that a first zero element is created on the driver node (at the left of the tree), and then the results for all the partitions are merged one by one. You can also see that if you replace 0 with 1, as you did in your question, it will add 1 to each result on each partition, and also add 1 to the initial value on the driver. So, the total number of times the zero value you give is used is:

    number of partitions + 1.

    So, in your case, the result of

    aggregate(
      (X, Y),
      (lambda acc, value: (acc[0] + value, acc[1] + 1)),
      (lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1])))
    

    will be:

    (sum(elements) + (num_partitions + 1)*X, count(elements) + (num_partitions + 1)*Y)
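
    As a quick check of this formula, here is a small PySpark sketch (assuming a SparkContext `sc`); X = 2 and Y = 1 are arbitrary non-neutral values chosen for illustration:

    rdd = sc.parallelize([1, 2, 3, 4], 2)   # sum = 10, count = 4, 2 partitions
    result = rdd.aggregate(
        (2, 1),                                                  # (X, Y)
        lambda acc, value: (acc[0] + value, acc[1] + 1),
        lambda acc1, acc2: (acc1[0] + acc2[0], acc1[1] + acc2[1]))
    print(result)  # expected: (10 + (2+1)*2, 4 + (2+1)*1) = (16, 7)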
    

    The implementation of aggregate is quite simple. It is defined in RDD.scala, line 1107:

    def aggregate[U: ClassTag](zeroValue: U)(seqOp: (U, T) => U, combOp: (U, U) => U): U = withScope {
      // Clone the zero value since we will also be serializing it as part of tasks
      var jobResult = Utils.clone(zeroValue, sc.env.serializer.newInstance())
      val cleanSeqOp = sc.clean(seqOp)
      val cleanCombOp = sc.clean(combOp)
      val aggregatePartition = (it: Iterator[T]) => it.aggregate(zeroValue)(cleanSeqOp, cleanCombOp)
      val mergeResult = (index: Int, taskResult: U) => jobResult = combOp(jobResult, taskResult)
      sc.runJob(this, aggregatePartition, mergeResult)
      jobResult
    }
    
  • 2020-12-07 12:53

    Thanks to gsamaras.

    My diagram is below. [image not preserved in this copy]

  • 2020-12-07 12:54

    Great explanations; they really helped me understand the underlying workings of the aggregate function. I have played with it for some time and found out the following.

    • If you use (0,0) as the accumulator's initial value, it will not change the output of the function.

    • If the initial accumulator is changed, it will process the result as follows:

    [ sum of RDD elements + (initial acc value * number of RDD partitions) + initial acc value ]

    For the question here, I would suggest checking the partitions: the number of partitions should be 8, as per my understanding, because every time we process the seqOp on a partition of the RDD it starts with the initial acc value, and the combOp also uses the acc initial value once when it combines all the partitions.

    For example: List(1, 2, 3, 4) and acc (1, 0)

    Get the number of partitions in Scala with RDD.partitions.size.

    If there are 2 partitions and 4 elements, then => [ 10 + 1 * 2 + 1 ] => (13, 4)

    If there are 4 partitions and 4 elements, then => [ 10 + 1 * 4 + 1 ] => (15, 4)
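
    Both cases can be reproduced with a short PySpark sketch (assuming a SparkContext `sc`; the accumulator (1, 0) matches the acc above):

    for n in (2, 4):
        rdd = sc.parallelize([1, 2, 3, 4], n)
        print(rdd.getNumPartitions(),
              rdd.aggregate((1, 0),
                            lambda acc, v: (acc[0] + v, acc[1] + 1),
                            lambda a, b: (a[0] + b[0], a[1] + b[1])))
    # prints: 2 (13, 4), then 4 (15, 4)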

    Hope this helps. Thanks.

  • 2020-12-07 12:55

    I ran many experiments on this question, and it is better to set the number of partitions for aggregate explicitly. The seqOp processes each partition and applies the initial value; moreover, the combOp also applies the initial value when it combines all the partitions. So, I present the formula for this question:

    final result = sum(list) + num_of_partitions * initial_value + initial_value
    