reduce

Reduce, fold or scan (Left/Right)?

Submitted by 做~自己de王妃 on 2019-11-26 21:08:37
When should I use reduceLeft, reduceRight, foldLeft, foldRight, scanLeft or scanRight? I want an intuition/overview of their differences, possibly with some simple examples. In general, all six functions apply a binary operator to each element of a collection, passing the result of each step on to the next step (as input to one of the binary operator's two arguments), so that a result is accumulated. reduceLeft and reduceRight accumulate a single result. foldLeft and foldRight accumulate a single result using a start value. scanLeft and scanRight accumulate a collection of intermediate results.
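In Python terms (an illustrative translation, not taken from the question itself), the three families map onto functools.reduce with and without an initializer, and itertools.accumulate:

```python
from functools import reduce
from itertools import accumulate

xs = [1, 2, 3, 4]

# reduceLeft: combine elements left-to-right, no start value
r = reduce(lambda acc, x: acc + x, xs)            # 1+2+3+4 = 10

# foldLeft: the same, but seeded with a start value
f = reduce(lambda acc, x: acc + x, xs, 100)       # 100+1+2+3+4 = 110

# scanLeft-style: keep every intermediate result
s = list(accumulate(xs, lambda acc, x: acc + x))  # [1, 3, 6, 10]
```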

How to map/reduce/filter a Set in JavaScript?

Submitted by 非 Y 不嫁゛ on 2019-11-26 19:22:57
Question: Is there any way to map/reduce/filter (etc.) a Set in JavaScript, or will I have to write my own? Here are some sensible Set.prototype extensions: Set.prototype.map = function map(f) { var newSet = new Set(); for (var v of this.values()) newSet.add(f(v)); return newSet; }; Set.prototype.reduce = function(f, initial) { var result = initial; for (var v of this) result = f(result, v); return result; }; Set.prototype.filter = function filter(f) { var newSet = new Set(); for (var v of this) if (f(v)) newSet.add(v); return newSet; };
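As a cross-language aside (not part of the question): Python's built-in set already supports all three operations via comprehensions and functools.reduce, which is roughly what the Set.prototype extensions above reimplement for JavaScript:

```python
from functools import reduce

s = {1, 2, 3}

mapped = {x * 2 for x in s}                   # map
filtered = {x for x in s if x % 2 == 1}       # filter
total = reduce(lambda acc, x: acc + x, s, 0)  # reduce
```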

What is the 'pythonic' equivalent to the 'fold' function from functional programming?

Submitted by 时光怂恿深爱的人放手 on 2019-11-26 19:13:23
Question: What is the most idiomatic way to achieve, in Python, something like the following Haskell: foldl (+) 0 [1,2,3,4,5] --> 15. Or its equivalent in Ruby: [1,2,3,4,5].inject(0) {|m,x| m + x} #=> 15. Obviously, Python provides the reduce function, which is an implementation of fold exactly as above; however, I was told that the 'pythonic' way of programming is to avoid lambda terms and higher-order functions, preferring list comprehensions where possible. Is there, therefore, a preferred way of folding a list?
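A sketch of the usual answer: for specific folds, Python prefers a dedicated built-in (sum, math.prod, min, max), while functools.reduce remains available as the general-purpose fold:

```python
from functools import reduce
import math
import operator

xs = [1, 2, 3, 4, 5]

total = sum(xs)                       # the idiomatic fold for addition
product = math.prod(xs)               # idiomatic fold for multiplication (Python 3.8+)
folded = reduce(operator.add, xs, 0)  # functools.reduce: the general fold
```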

Reductions in parallel in logarithmic time

Submitted by 你说的曾经没有我的故事 on 2019-11-26 19:01:55
Given n partial sums, it is possible to sum them all in log2(n) parallel steps. For example, assume there are eight threads with eight partial sums: s0, s1, s2, s3, s4, s5, s6, s7. These could be reduced in log2(8) = 3 sequential steps like this:

step 1 (threads 0-3): s0 += s1; s2 += s3; s4 += s5; s6 += s7
step 2 (threads 0-1): s0 += s2; s4 += s6
step 3 (thread 0):    s0 += s4

I would like to do this with OpenMP, but I don't want to use OpenMP's reduction clause. I have come up with a solution, but I think a better one can be found, perhaps using OpenMP's task clause. This is more general than scalar addition. Let me choose a
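A sequential Python sketch of the pairwise scheme (names are mine; at each stride, every iteration of the inner loop is independent, which is what the OpenMP threads would execute concurrently):

```python
def tree_reduce(values, op):
    """Reduce in ceil(log2(n)) rounds of independent pairwise ops."""
    vals = list(values)
    stride = 1
    while stride < len(vals):
        # each iteration of this inner loop touches disjoint slots,
        # so in OpenMP every iteration could run on its own thread
        for i in range(0, len(vals) - stride, 2 * stride):
            vals[i] = op(vals[i], vals[i + stride])
        stride *= 2
    return vals[0]

partial_sums = [1, 2, 3, 4, 5, 6, 7, 8]
total = tree_reduce(partial_sums, lambda a, b: a + b)  # 3 rounds for 8 inputs
```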

Sorting Array with JavaScript reduce function

Submitted by 我怕爱的太早我们不能终老 on 2019-11-26 16:52:14
Question: I often study JavaScript interview questions, and I recently came across one about using the reduce function to sort an array. I read about reduce on MDN and in some Medium articles, but using it to sort an array is quite inventive: const arr = [91,4,6,24,8,7,59,3,13,0,11,98,54,23,52,87,4]; I have thought about it a lot, but I have no idea how to answer this question. What should the reduce callback function be? What is the initialValue of the reduce call, and what are the accumulator and currentValue?
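One way to answer (my sketch, not the interview's official answer): let the accumulator be an always-sorted list, with the empty list as the initialValue, and have the callback insert each currentValue into place, i.e. an insertion sort expressed as a reduce:

```python
from functools import reduce
import bisect

def insert_in_order(acc, x):
    # acc is the accumulator (the sorted list so far), x is the current value
    bisect.insort(acc, x)
    return acc

arr = [91, 4, 6, 24, 8, 7, 59, 3, 13, 0, 11, 98, 54, 23, 52, 87, 4]
result = reduce(insert_in_order, arr, [])  # the initialValue is the empty list
```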

Is there any real benefit for using javascript Array reduce() method?

Submitted by 巧了我就是萌 on 2019-11-26 16:41:57
Question: Most use cases of the reduce() method can easily be rewritten with a for loop, and testing on JSPerf shows that reduce() is usually 60%-75% slower, depending on the operations performed in each iteration. Is there any real reason to use reduce(), other than being able to write code in a 'functional style'? If you can get a 60% performance gain by writing just a little more code, why would you ever use reduce()? EDIT: In fact, other functional methods like forEach() and map() all
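The usual counterargument, sketched here in Python: reduce turns a multi-statement loop into a single expression over an explicit combining function, which many find easier to read and compose; whether that outweighs the loop's raw speed is a judgment call:

```python
from functools import reduce

nums = [3, 1, 4, 1, 5]

# explicit loop: the accumulator is threaded by hand
acc = 0
for n in nums:
    acc += n * n

# reduce: the same computation as one expression
acc2 = reduce(lambda a, n: a + n * n, nums, 0)
```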

How to generalize outer to n dimensions?

Submitted by 给你一囗甜甜゛ on 2019-11-26 16:39:30
Question: The standard R expression outer(X, Y, f) evaluates to a matrix whose (i, j)-th entry has the value f(X[i], Y[j]). I would like to implement the function multi.outer, an n-dimensional generalization of outer: multi.outer(f, X_1, ..., X_n), where f is some n-ary function, would produce a (length(X_1) * ... * length(X_n)) array whose (i_1, ..., i_n)-th entry has the value f(X_1[i_1], ..., X_n[i_n]) for all valid index tuples (i_1, ..., i_n). Clearly, for each i in {1, ..., n}, all the elements of
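A hypothetical Python analogue of the requested multi.outer (my sketch, using a dict keyed by index tuples in place of an R array):

```python
from itertools import product

def multi_outer(f, *xs):
    """Map each index tuple (i_1, ..., i_n) to f(X_1[i_1], ..., X_n[i_n])."""
    return {
        idx: f(*(x[i] for x, i in zip(xs, idx)))
        for idx in product(*(range(len(x)) for x in xs))
    }

# 2 x 1 x 2 "array" of products
table = multi_outer(lambda a, b, c: a * b * c, [1, 2], [3], [4, 5])
```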

Why is the fold action necessary in Spark?

Submitted by 谁说胖子不能爱 on 2019-11-26 16:19:09
Question: I have a silly question involving fold and reduce in PySpark. I understand the difference between these two methods, but if both require the applied function to be a commutative monoid, I cannot figure out an example in which fold cannot be substituted by reduce. Besides, the PySpark implementation of fold uses acc = op(obj, acc); why is this operation order used instead of acc = op(acc, obj)? (This second order sounds closer to a leftFold to me.) Cheers, Tomas. Answer 1: Empty RDD It
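The standard illustration of the difference (plain Python standing in for the RDD methods): a reduce over an empty collection has nothing to return, while a fold can fall back on its zero value:

```python
from functools import reduce
import operator

def fold(op, zero, xs):
    # a minimal local fold: seed the accumulator with the zero value
    acc = zero
    for x in xs:
        acc = op(acc, x)
    return acc

empty = []
folded = fold(operator.add, 0, empty)  # the zero value saves us

try:
    reduce(operator.add, empty)        # no seed, so nothing to return
    reduce_failed = False
except TypeError:
    reduce_failed = True
```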

Scala : fold vs foldLeft

Submitted by 好久不见. on 2019-11-26 16:04:54
Question: I am trying to understand how fold and foldLeft (and, respectively, reduce and reduceLeft) work. I used fold and foldLeft as my example:

scala> val r = List((ArrayBuffer(1, 2, 3, 4), 10))
scala> r.foldLeft(ArrayBuffer(1,2,4,5))((x,y) => x -- y._1)
res28: scala.collection.mutable.ArrayBuffer[Int] = ArrayBuffer(5)
scala> r.fold(ArrayBuffer(1,2,4,5))((x,y) => x -- y._1)
<console>:11: error: value _1 is not a member of Serializable with Equals
       r.fold(ArrayBuffer(1,2,4,5))((x,y) => x -- y._1)

Spark groupByKey alternative

Submitted by 主宰稳场 on 2019-11-26 12:27:15
Question: According to Databricks best practices, Spark groupByKey should be avoided, because groupByKey first shuffles all the information across the workers and only then does the processing. So my question is: what are the alternatives to groupByKey that will return the following in a distributed and fast way?
// want this
{"key1": "1", "key1": "2", "key1": "3", "key2": "55", "key2": "66"}
// to become this
{"key1": ["1
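A local sketch of the combine-per-key alternative (the idea behind Spark's reduceByKey/aggregateByKey, here simulated with a plain dict; the names are mine):

```python
def reduce_by_key(pairs, combine):
    """Combine values per key as they arrive, instead of grouping first."""
    out = {}
    for key, value in pairs:
        out[key] = combine(out[key], value) if key in out else value
    return out

pairs = [("key1", "1"), ("key1", "2"), ("key1", "3"),
         ("key2", "55"), ("key2", "66")]

# wrap each value in a list so that "combine" is list concatenation
grouped = reduce_by_key(((k, [v]) for k, v in pairs), lambda a, b: a + b)
```

In PySpark itself this corresponds to something like rdd.aggregateByKey([], lambda acc, v: acc + [v], lambda a, b: a + b), which combines within each partition before shuffling.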