reduce

Swift generic array function to find all indexes of elements not matching item

Submitted by 你。 on 2019-12-06 07:18:25
Question: In Swift 3, I'm trying to write a generic array extension that gets all indexes of the items that DON'T equal a value. Example:

    let arr: [String] = ["Empty", "Empty", "Full", "Empty", "Full"]
    let result: [Int] = arr.indexes(ofItemsNotEqualTo: "Empty") // returns [2, 4]

I tried to make a generic function:

    extension Array {
        func indexes<T: Equatable>(ofItemsNotEqualTo item: T) -> [Int]? {
            var result: [Int] = []
            for (n, elem) in self.enumerated() {
                if elem != item {
                    result.append(n)
                }
            }
            return result
        }
    }

[…]
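For comparison, a minimal sketch of the same lookup in Java; the helper name and the use of IntStream are illustrative, not from the original thread:

    import java.util.Arrays;
    import java.util.List;
    import java.util.stream.IntStream;

    public class Indexes {
        // Return the indexes of every element that does not equal the given item.
        static <T> int[] indexesOfItemsNotEqualTo(List<T> list, T item) {
            return IntStream.range(0, list.size())
                            .filter(i -> !list.get(i).equals(item))
                            .toArray();
        }

        public static void main(String[] args) {
            List<String> arr = List.of("Empty", "Empty", "Full", "Empty", "Full");
            System.out.println(Arrays.toString(indexesOfItemsNotEqualTo(arr, "Empty"))); // [2, 4]
        }
    }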

Inversion sum is not correct using Stream.reduce

Submitted by 邮差的信 on 2019-12-06 07:11:32
The inversion sum is not correct using Stream.reduce; what is going wrong here?

    double[] array = {1.0, 2.0};
    double inversionSum = Arrays.stream(array).reduce(0.0, (a, b) -> Double.sum(1.0 / a, 1.0 / b));

The output is 0.5, but the expected result is 1.5 (1/1 + 1/2).

Answer: I think using map() it could be simpler:

    double inversionSum = Arrays.stream(array).map(val -> 1 / val).sum();

The error in your reduce is Double.sum(1.0 / a, 1.0 / b) with 0.0 as the starting value: the running total a is inverted again on every step along with the next element. That is why the outcome is 0.5: first 1.0 / 0.0 gives Infinity, then 1.0 / Infinity gives 0.0, so only the final 1.0 / 2.0 = 0.5 survives. Use Double.sum(a, 1.0 / b) if you want to use reduce.

Source: https://stackoverflow.com/questions/54144110/inversion-sum
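A minimal runnable sketch of the suggested fix (the class name is illustrative):

    import java.util.Arrays;

    public class InversionSum {
        public static void main(String[] args) {
            double[] array = {1.0, 2.0};
            // Only the stream element is inverted; the running total is carried through unchanged.
            double sum = Arrays.stream(array).reduce(0.0, (acc, v) -> Double.sum(acc, 1.0 / v));
            System.out.println(sum); // 1.5
        }
    }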

Reduce function doesn't handle an empty list

Submitted by ∥☆過路亽.° on 2019-12-06 05:02:24
Question: I previously created a recursive function to find the product of a list. Now I've created the same function using reduce and a lambda. When I run this code, I get the correct answer:

    items = [1, 2, 3, 4, 10]
    print(reduce(lambda x, y: x * y, items))

However, when I give it an empty list, an error occurs: "reduce() of empty sequence with no initial value". Why is this? When I created my recursive function, I wrote code to handle an empty list; is the issue with the reduce function […]
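functools.reduce raises that TypeError because, with no initial value, there is nothing it can return for an empty sequence; passing an initializer (here 1, the multiplicative identity) fixes it: reduce(lambda x, y: x * y, items, 1). For comparison, a minimal Java sketch of the same idea, where reduce with an explicit identity simply returns it for an empty stream:

    import java.util.List;

    public class EmptyProduct {
        public static void main(String[] args) {
            List<Integer> items = List.of();
            // With an explicit identity, reduce on an empty stream returns the identity.
            int product = items.stream().reduce(1, (x, y) -> x * y);
            System.out.println(product); // 1
        }
    }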

When calculating a^b, why does parallel() not work but parallelStream() does?

Submitted by 微笑、不失礼 on 2019-12-06 02:41:05
I want to calculate a^b, e.g. 2^30:

    public long pow(final int a, final int b)

First I used this manner:

    return LongStream.range(0, b).reduce(1, (acc, x) -> a * acc); // 1073741824

and got the right result. Then I wanted to calculate it in parallel, so naturally I changed it to:

    return LongStream.range(0, b).parallel().reduce(1, (acc, x) -> a * acc); // 32

but in this case the result is just 32. Why? So, to support parallelism, I changed it again:

    return Collections.nCopies(b, a).parallelStream().reduce(1, (acc, x) -> acc * x); // 1073741824

and in this case it works. So what's wrong with the parallel manner?
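This excerpt cuts off before any answer, but the standard explanation is that the accumulator passed to reduce must be associative and must actually use both of its arguments: (acc, x) -> a * acc ignores the stream element, so when a parallel stream splits the range into chunks, each chunk restarts from the identity 1 and the partial results cannot be combined correctly. Mapping each element to the base first leaves a plain associative multiplication, which parallelizes safely. A minimal sketch (method and class names are illustrative):

    import java.util.stream.LongStream;

    public class ParallelPow {
        static long pow(final long a, final int b) {
            // Map every index to the base, then reduce with an associative product.
            return LongStream.range(0, b).parallel().map(i -> a).reduce(1, (x, y) -> x * y);
        }

        public static void main(String[] args) {
            System.out.println(pow(2, 30)); // 1073741824
        }
    }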

Hadoop Spill failure

Submitted by 好久不见. on 2019-12-06 02:30:13
Question: I'm currently working on a project using Hadoop 0.21.0 (build 985326) and a cluster of 6 worker nodes plus a head node. Submitting a regular MapReduce job fails, but I have no idea why. Has anybody seen this exception before?

    org.apache.hadoop.mapred.Child: Exception running child : java.io.IOException: Spill failed
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.checkSpillException(MapTask.java:1379)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.access$200(MapTask.java:711)
        at org.apache […]

How to compute letter frequency in a string using Python's built-in map and reduce functions

Submitted by 爷,独闯天下 on 2019-12-06 01:54:18
I would like to compute the frequency of letters in a string using Python's built-in map and reduce functions. Could anyone offer some insight into how I might do this? What I've got so far:

    s = "the quick brown fox jumped over the lazy dog"
    # Map function
    m = lambda x: (x, 1)
    # Reduce
    # Add the two frequencies if they are the same
    # else.... Not sure how to put both back in the list
    # in the case where they are not the same.
    r = lambda x, y: (x[0], x[1] + y[1]) if x[0] == y[0] else ????
    freq = reduce(r, map(m, s))

This works great when all the letters are the same:

    >>> s
    'aaaaaaa'
    >>> map(m, s) […]
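For comparison, a sketch of the same letter-frequency count in Java streams, where the per-key merging the question is reaching for is handled by grouping characters and counting each group (names are illustrative):

    import java.util.Map;
    import java.util.function.Function;
    import java.util.stream.Collectors;

    public class LetterFrequency {
        public static void main(String[] args) {
            String s = "the quick brown fox jumped over the lazy dog";
            // Group each character by itself and count the occurrences per group.
            Map<Character, Long> freq = s.chars()
                    .mapToObj(c -> (char) c)
                    .collect(Collectors.groupingBy(Function.identity(), Collectors.counting()));
            System.out.println(freq.get('e')); // 4
        }
    }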

Group by and sum, generating a new object for each result (JavaScript)

Submitted by 假如想象 on 2019-12-06 01:37:20
I need help with this: I need to group by id and sum, but I need a new object for each result.

    let data = [
        {"id":"2018", "name":"test", "total":1200},
        {"id":"2019", "name":"wath", "total":1500},
        {"id":"2019", "name":"wath", "total":1800},
        {"id":"2020", "name":"zooi", "total":1000},
    ]

I have this code, but it returns just one object with the result:

    let result = data.reduce(function (r, o) {
        (r[o.id]) ? r[o.id] += o.total : r[o.id] = o.total;
        return r;
    });

But I need something like this:

    [
        {"id":"2018", "name":"test", "total":1200},
        {"id":"2019", "name":"wath", "total":2300},
        {"id":"2020", "name":"zooi", "total":1000}
    ]
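A sketch of the same group-and-sum in Java, assuming a small Item record standing in for the JavaScript objects; entries that share an id are merged by summing their totals:

    import java.util.LinkedHashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.stream.Collectors;

    public class GroupAndSum {
        record Item(String id, String name, int total) {}

        public static void main(String[] args) {
            List<Item> data = List.of(
                    new Item("2018", "test", 1200),
                    new Item("2019", "wath", 1500),
                    new Item("2019", "wath", 1800),
                    new Item("2020", "zooi", 1000));
            // toMap merges entries sharing an id by summing their totals, keeping one object per id.
            Map<String, Item> grouped = data.stream().collect(Collectors.toMap(
                    Item::id,
                    i -> i,
                    (a, b) -> new Item(a.id(), a.name(), a.total() + b.total()),
                    LinkedHashMap::new));
            grouped.values().forEach(System.out::println); // one summed Item per id
        }
    }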

Hadoop - Reducer is waiting for Mapper inputs?

Submitted by 烂漫一生 on 2019-12-05 21:43:06
As explained in the title, when I execute my Hadoop program (and debug it in local mode), the following happens:

1. All 10 CSV lines in my test data are handled correctly by the Mapper, the Partitioner, and the RawComparator (OutputKeyComparatorClass) that is called after the map step. But the OutputValueGroupingComparatorClass's and the ReduceClass's functions do NOT get executed afterwards.

2. My application looks like the following (due to space constraints I omit the implementation of the classes I used as configuration parameters, until somebody has an idea that involves them):

    public class […]

Swift: Reduce Function with a closure

Submitted by 放肆的年华 on 2019-12-05 21:23:59
Below is the code that I'm struggling to understand:

    let rectToDisplay = self.treasures.reduce(MKMapRectNull) { //1
        (mapRect: MKMapRect, treasure: Treasure) -> MKMapRect in //2
        let treasurePointRect = MKMapRect(origin: treasure.location.mapPoint, size: MKMapSize(width: 0, height: 0)) //3
        return MKMapRectUnion(mapRect, treasurePointRect)
    }

My understanding of the reduce function is:

    var people = [] // an array of objects
    var ageSum = 0
    ageSum = people.reduce(0) { $0 + $1.age }
    // (0) = initial value
    // $0  = running total
    // $1  = an object in the array

My understanding of a closure is:

    { (params) -> […]
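The same fold shape, sketched in Java for comparison: each point becomes a zero-sized rectangle, and reduce unions them into one bounding rectangle, just as the Swift code unions zero-sized MKMapRects starting from MKMapRectNull. Here java.awt.Rectangle stands in for MKMapRect:

    import java.awt.Point;
    import java.awt.Rectangle;
    import java.util.List;
    import java.util.Optional;

    public class BoundingRect {
        public static void main(String[] args) {
            List<Point> points = List.of(new Point(1, 2), new Point(5, 3), new Point(-2, 7));
            // Each point becomes a zero-sized rectangle; union folds them into one bounds.
            Optional<Rectangle> bounds = points.stream()
                    .map(Rectangle::new)       // Rectangle(Point) = zero-sized rect at that point
                    .reduce(Rectangle::union); // plays the role of MKMapRectUnion
            bounds.ifPresent(System.out::println);
        }
    }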

Spark DataFrame reduceByKey

Submitted by 左心房为你撑大大i on 2019-12-05 18:44:24
I am using Spark 1.5/1.6, where I want to do a reduceByKey-style operation on a DataFrame; I don't want to convert the df to an RDD. Each row looks like the following, and I have multiple rows for id1:

    id1, id2, score, time

I want to have something like:

    id1, [ (id21, score21, time21), (id22, score22, time22), (id23, score23, time23) ]

So, for each "id1", I want all records in a list. By the way, the reason I don't want to convert the df to an RDD is that I have to join this (reduced) dataframe with another dataframe, and I am doing re-partitioning on the join key, which makes it faster; I guess the same cannot be done […]
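One way to stay in the DataFrame API is to group by id1 and collect the remaining columns as a list of structs. A sketch in Java, assuming a DataFrame df with the four columns named as in the question; note that on Spark 1.5/1.6 collect_list requires a HiveContext, and it may not accept struct-typed columns on those versions, in which case concatenating the fields into a single column first is a common workaround:

    import org.apache.spark.sql.DataFrame;
    import static org.apache.spark.sql.functions.*;

    public class GroupRecords {
        // Group rows by id1 and gather (id2, score, time) tuples into one list per key.
        static DataFrame collectPerKey(DataFrame df) {
            return df.groupBy(col("id1"))
                     .agg(collect_list(struct(col("id2"), col("score"), col("time"))).as("records"));
        }
    }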