reduce

Swift 3 - Reduce a collection of objects by an Int property

Submitted by 北城以北 on 2019-12-07 07:38:46
Question: I have an array containing 3 objects, like so:

class AClass { var distance: Int? }

let obj0 = AClass()
obj0.distance = 0
let obj1 = AClass()
obj1.distance = 1
let obj2 = AClass()
obj2.distance = 2
let arr = [obj0, obj1, obj2]

When I reduce the array and assign the result to a variable, I can only sum the last element in the array:

let total = arr.reduce(0, { $1.distance! + $1.distance! }) // returns 4

If I try $0.distance!, it errors with "expression is ambiguous without more context". I tried being more
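The error comes from reduce's closure arguments: $0 is the running total (an Int, not an AClass) and $1 is the current element, so the working Swift form would be along the lines of `arr.reduce(0) { $0 + $1.distance! }`. A minimal Python sketch of the same accumulator/element split, reusing the question's names:

```python
from functools import reduce

class AClass:
    def __init__(self, distance):
        self.distance = distance

arr = [AClass(0), AClass(1), AClass(2)]

# The first lambda argument is the accumulator (an int), the second is
# the current element -- mirroring Swift's $0 and $1 respectively.
total = reduce(lambda acc, obj: acc + obj.distance, arr, 0)
print(total)  # 3
```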

Sorting (small) arrays by key in CUDA

Submitted by 左心房为你撑大大i on 2019-12-07 02:59:27
Question: I'm trying to write a function that takes a block of unsorted key/value pairs such as

<7, 4> <2, 8> <3, 1> <2, 2> <1, 5> <7, 1> <3, 8> <7, 2>

and sorts them by key while reducing the values of pairs with the same key:

<1, 5> <2, 10> <3, 9> <7, 7>

Currently, I'm using a __device__ function like the one below, which is essentially a bitonic sort that combines the values of matching keys and sets the old data to an infinitely large value (just using 99 for now) so that a subsequent bitonic sort
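Before reaching for a __device__ kernel, the intended result can be pinned down with a small host-side model. This is a sequential Python sketch of the sort-and-combine step, not CUDA, and the function name is made up for illustration:

```python
pairs = [(7, 4), (2, 8), (3, 1), (2, 2), (1, 5), (7, 1), (3, 8), (7, 2)]

def sort_reduce_by_key(pairs):
    """Sort pairs by key, summing the values of duplicate keys."""
    out = []
    for k, v in sorted(pairs):
        if out and out[-1][0] == k:
            # Same key as the previous pair: fold the value in.
            out[-1] = (k, out[-1][1] + v)
        else:
            out.append((k, v))
    return out

print(sort_reduce_by_key(pairs))  # [(1, 5), (2, 10), (3, 9), (7, 7)]
```

A parallel bitonic version has to express the same merge step without a growing output list, which is where the "overwrite merged slots with a sentinel, then re-sort" trick in the question comes from.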

How to control and monitor the number of concurrent map/reduce tasks in YARN

Submitted by 戏子无情 on 2019-12-06 18:24:08
Configuration advice:

1. In MR1, the mapred.tasktracker.map.tasks.maximum and mapred.tasktracker.reduce.tasks.maximum properties dictated how many map and reduce slots each TaskTracker had. These properties no longer exist in YARN. Instead, YARN uses yarn.nodemanager.resource.memory-mb and yarn.nodemanager.resource.cpu-vcores, which control the amount of memory and CPU available on each node to both maps and reduces. Essentially: YARN has no TaskTrackers, just generic NodeManagers, so there is no longer a separation between map slots and reduce slots. Everything depends on the amount of memory in use/demanded
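A minimal yarn-site.xml sketch of the two properties named above; the values are illustrative placeholders, not tuning recommendations:

```xml
<configuration>
  <!-- Total memory (MB) a NodeManager offers to all containers,
       map and reduce alike -->
  <property>
    <name>yarn.nodemanager.resource.memory-mb</name>
    <value>8192</value>
  </property>
  <!-- Total vcores a NodeManager offers to all containers -->
  <property>
    <name>yarn.nodemanager.resource.cpu-vcores</name>
    <value>8</value>
  </property>
</configuration>
```

The effective per-node concurrency then falls out of these totals divided by each container's memory/vcore request, rather than from fixed slot counts.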

How to reduce App's CPU usage in Android phone?

Submitted by 吃可爱长大的小学妹 on 2019-12-06 16:05:55
I developed an auto-call application. The app reads a text file containing a phone number list, calls each number for a few seconds, ends the call, and then repeats. My problem is that the app stops sending calls after 10~16 hours. I don't know the exact reason, but I suspect CPU usage: my app's CPU usage is almost 50%! How do I reduce it? Here is part of the source code:

if (r_count.compareTo("0") != 0) {
    while (index < repeat_count) {
        count = 1;
        time_count = 2;
        while (count < map.length) {
            performDial();     // start call
            reject();          // end call
            finishActivity(1);
            TimeDelay("60");   //

Difference between fold and reduce revisited

Submitted by 三世轮回 on 2019-12-06 13:40:40
I've been reading a nice answer to "Difference between reduce and foldLeft/fold in functional programming (particularly Scala and Scala APIs)?" provided by samthebest, and I am not sure I understand all the details. According to the answer (reduce vs foldLeft):

A big big difference (...) is that reduce should be given a commutative monoid, (...) This distinction is very important for Big Data / MPP / distributed computing, and the entire reason why reduce even exists.

and

Reduce is defined formally as part of the MapReduce paradigm,

I am not sure how these two statements fit together. Can anyone
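One way to see how the two statements connect: a distributed reduce may combine partial results from different partitions in any order, so the combining function needs commutativity on top of associativity. A small Python simulation (the chunking and the reversed combine order are artificial, chosen only to expose the issue):

```python
from functools import reduce

def parallel_reduce(xs, f, chunks=2):
    # Simulate a distributed reduce: partition the data, reduce each
    # partition, then combine the partials. A real cluster may combine
    # partials in an arbitrary order, which is why reduce's f should be
    # commutative; a sequential foldLeft has no such requirement.
    n = max(1, len(xs) // chunks)
    parts = [xs[i:i + n] for i in range(0, len(xs), n)]
    partials = [reduce(f, p) for p in parts]
    return reduce(f, reversed(partials))  # deliberately "wrong" order

add = lambda a, b: a + b      # associative AND commutative on ints
concat = lambda a, b: a + b   # associative, NOT commutative on strings

nums = [1, 2, 3, 4, 5, 6, 7, 8]
print(parallel_reduce(nums, add) == reduce(add, nums))              # True
letters = list("abcdef")
print(parallel_reduce(letters, concat) == reduce(concat, letters))  # False
```

With addition the combine order is invisible; with string concatenation the out-of-order combine yields "defabc" instead of "abcdef", which is exactly the failure a commutative monoid rules out.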

Thrust: How to directly control where an algorithm invocation executes?

Submitted by 荒凉一梦 on 2019-12-06 13:25:21
Question: The following code has no information that may lead it to run on the CPU or the GPU. I wonder where the "reduce" operation is executed.

#include <thrust/iterator/counting_iterator.h>
...
// create iterators
thrust::counting_iterator<int> first(10);
thrust::counting_iterator<int> last = first + 3;

first[0]   // returns 10
first[1]   // returns 11
first[100] // returns 110

// sum of [first, last)
thrust::reduce(first, last); // returns 33 (i.e. 10 + 11 + 12)

Furthermore, thrust::transform_reduce( thrust:

PySpark Dataframe cast two columns into new column of tuples based value of a third column

Submitted by 允我心安 on 2019-12-06 12:28:36
Question: As the subject describes, I have a PySpark Dataframe in which I need to cast two columns into a new column that is a list of tuples, based on the value of a third column. This cast will reduce or flatten the dataframe by a key value (product id in this case), with one resulting row per key. There are hundreds of millions of rows in this dataframe, with 37M unique product ids, so I need a way to do the transformation on the Spark cluster without bringing any data back to the driver (Jupyter
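As a plain-Python model of the desired reshaping (the column names here are invented stand-ins, since the real schema isn't shown): group by product id and collect (price, quantity) tuples from rows whose third column matches a condition. On an actual cluster the same shape is typically reached without collecting to the driver, e.g. with groupBy plus collect_list(struct(...)) from pyspark.sql.functions, filtering on the third column first.

```python
from collections import defaultdict

# Hypothetical rows: (product_id, price, quantity, source)
rows = [
    (1, 9.99, 2, "web"),
    (1, 8.49, 1, "store"),
    (2, 4.25, 5, "web"),
]

def flatten_by_key(rows, wanted_source="web"):
    """One output row per product id, collecting (price, quantity)
    tuples only from rows whose third-column value matches."""
    grouped = defaultdict(list)
    for pid, price, qty, source in rows:
        if source == wanted_source:
            grouped[pid].append((price, qty))
    return dict(grouped)

print(flatten_by_key(rows))  # {1: [(9.99, 2)], 2: [(4.25, 5)]}
```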

How to re-create Underscore.js _.reduce method?

Submitted by 三世轮回 on 2019-12-06 09:11:47
For education purposes, I was trying to re-create Underscore.js's _.reduce() method. I was able to do this in an explicit style using for loops, but that is far from ideal because it mutates the original list supplied as an argument, which is dangerous. I also realized that creating such a method in a functional programming style is harder, since it is not possible to explicitly set the i value for looping.

// Explicit style
var reduce = function(list, iteratee, initial) {
  if (Array.isArray(list)) {
    var start;
    if (arguments.length === 3) {
      start = initial;
      for (var i = 0; i < list
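For comparison, here is a non-mutating re-creation of the same idea in Python (the name my_reduce is arbitrary). It walks an iterator instead of indexing, so the input list is never modified, and, like _.reduce, the first element seeds the accumulator when no initial value is supplied:

```python
def my_reduce(seq, iteratee, *initial):
    # Fold left-to-right without mutating seq.
    it = iter(seq)
    if initial:
        acc = initial[0]
    else:
        try:
            acc = next(it)  # first element seeds the accumulator
        except StopIteration:
            raise TypeError("reduce of empty sequence with no initial value")
    for x in it:
        acc = iteratee(acc, x)
    return acc

nums = [1, 2, 3, 4]
print(my_reduce(nums, lambda a, b: a + b, 10))  # 20
print(nums)  # [1, 2, 3, 4] -- the input is untouched
```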

UIImage reduce byte size

Submitted by 别说谁变了你拦得住时间么 on 2019-12-06 07:50:31
I am using the following code to resize an image, and it all works as expected: "Resize UIImage the right way". I use kCGInterpolationLow as the interpolation quality and UIImageJPEGRepresentation(image, 0.0) to get the NSData of that image. The problem is that the image is still fairly large, at around 100 KB. My question is: can I reduce it further? The images originate from the iPhone Photo Album and are selected via an imagePickerController. Many thanks.

Source: https://stackoverflow.com/questions/4013871/uiimage-reduce-byte-size

Native implementation of reduceRight in JavaScript is wrong

Submitted by 狂风中的少年 on 2019-12-06 07:35:45
For an associative operation f over the elements of array a, the following relation should hold: a.reduce(f) should be equivalent to a.reduceRight(f). Indeed, it holds for operations that are both associative and commutative. For example:

var a = [1,2,3,4,5,6,7,8,9,0];
alert(a.reduce(add) === a.reduceRight(add));

function add(a, b) { return a + b; }

However, it doesn't hold for operations that are associative but not commutative. For example:

var a = [[1,2],[3,4],[5,6],[7,8],[9,0]];
alert(equals(a.reduce(concat), a.reduceRight(concat)));

function concat(a, b) { return a
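The distinction can be modeled outside JavaScript as well. A Python sketch with list concatenation (associative, not commutative): naively folding the reversed array flips the operand order and breaks the relation, while a right fold that keeps the f(a[0], f(a[1], ...)) nesting preserves it:

```python
from functools import reduce

concat = lambda x, y: x + y  # associative, not commutative on lists

a = [[1, 2], [3, 4], [5, 6], [7, 8], [9, 0]]
left = reduce(concat, a)  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 0]

# Naive "right fold": iterate from the right but keep the accumulator
# as the LEFT operand -- this flips operand order for each application.
naive = reduce(concat, reversed(a))  # [9, 0, 7, 8, 5, 6, 3, 4, 1, 2]

# Operand-preserving right fold: f(element, accumulator).
right = reduce(lambda acc, x: concat(x, acc), reversed(a))

print(left == naive)  # False
print(left == right)  # True
```

For a commutative f such as addition, all three agree, which is why the discrepancy only surfaces with operations like concatenation.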