reduce

Requesting insight on Ruby reduce code

有些话、适合烂在心里 submitted on 2019-12-13 20:21:28
Question: I started solving exercises on HackerRank in the Enumerable section. The exercise asks me to complete a sum method that takes an integer n and returns the sum of the first n terms of the series. I found a solution from another source, but I don't quite understand how reduce works in this case or how it produces the output.

    def sum_terms(n)
      series = []
      1.upto(n) do |i|
        series.push(i ** 2 + 1)
      end
      series.reduce(0, :+)
    end

    puts sum_terms(5) # outputs 60

Answer 1: We can write this method as follows:
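For the part the asker found unclear: series.reduce(0, :+) is a fold — it starts from 0 and applies + between the running total and each element of the series. A minimal sketch of the same computation in Scala (the sumTerms name is mine, not from the thread):

    object SumTerms {
      // Build the series i*i + 1 for i = 1..n, then fold it with + starting
      // from 0, mirroring Ruby's series.reduce(0, :+).
      def sumTerms(n: Int): Int =
        (1 to n).map(i => i * i + 1).foldLeft(0)(_ + _)

      def main(args: Array[String]): Unit =
        println(sumTerms(5)) // (1+1)+(4+1)+(9+1)+(16+1)+(25+1) = 60
    }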

Finding the minimum in an array (but skipping some elements) using reduction in CUDA

瘦欲@ submitted on 2019-12-13 19:41:46
Question: I have a large array of floating-point numbers and I want to find the minimum value of the array (ignoring any -1s present) as well as its index, using reduction in CUDA. I have written the following code to do this, which in my opinion should work:

    __global__ void get_min_cost(float *d_Cost, int n, int *last_block_number, int *number_in_last_block, int *d_index){
        int tid = threadIdx.x;
        int myid = blockDim.x * blockIdx.x + threadIdx.x;
        int s;
        if(result == (*last_block_number)-1){
            s = (
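The kernel is cut off above, so here is only the reduction logic itself, sketched sequentially in Scala: carry a (value, index) pair and skip the -1 sentinels entirely. It illustrates what the kernel must compute, not how to parallelize it; the names are mine:

    object MaskedArgMin {
      // Track the (value, index) pair while skipping the -1 sentinel entries;
      // the strict < keeps the earliest index on ties.
      def minIgnoringSentinels(cost: Array[Float]): Option[(Float, Int)] =
        cost.zipWithIndex
          .filter { case (v, _) => v != -1f }
          .reduceOption { (a, b) => if (b._1 < a._1) b else a }

      def main(args: Array[String]): Unit =
        println(minIgnoringSentinels(Array(3f, -1f, 0.5f, 7f, -1f))) // Some((0.5,2))
    }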

CouchDB Erlang reduce - aggregate object

橙三吉。 submitted on 2019-12-13 18:12:04
Question: Say I have a map that emits the following objects:

    {"basePoints": 2000, "bonusPoints": 1000}
    {"basePoints": 1000, "bonusPoints": 50}
    {"basePoints": 10000, "bonusPoints": 5000}

How could I write a reduce in Erlang (not JavaScript) that would return an aggregate object like this:

    {"basePoints": 13000, "bonusPoints": 6050}

(I would rather not write two separate views that emit each value separately, if I can help it.) Many thanks!

Answer 1: You actually do not need a special reduce, in this case
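The Erlang answer is truncated, but the aggregation it needs is just a field-wise sum over the emitted objects. A sketch of that reduce shape in Scala, treating each object as a plain map (names are mine, not from the thread):

    object FieldwiseSum {
      // Sum every key across records: exactly what the desired reduce does.
      def aggregate(rows: Seq[Map[String, Int]]): Map[String, Int] =
        rows.foldLeft(Map.empty[String, Int]) { (acc, row) =>
          row.foldLeft(acc) { case (a, (k, v)) =>
            a.updated(k, a.getOrElse(k, 0) + v)
          }
        }

      def main(args: Array[String]): Unit =
        println(aggregate(Seq(
          Map("basePoints" -> 2000, "bonusPoints" -> 1000),
          Map("basePoints" -> 1000, "bonusPoints" -> 50),
          Map("basePoints" -> 10000, "bonusPoints" -> 5000)
        ))) // Map(basePoints -> 13000, bonusPoints -> 6050)
    }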

Hadoop - What does globally sorted mean, and when does it happen in MapReduce?

早过忘川 submitted on 2019-12-13 13:22:39
Question: I am using the Hadoop streaming JAR for WordCount, and I want to know how I can get a globally sorted result. According to an answer to another question on SO, when we use just one reducer we get a globally sorted output, but in my result with numReduceTasks=1 (one reducer) it is not sorted. For example, my input to the mapper is:

    file 1: A long time ago in a galaxy far far away
    file 2: Another episode for Star Wars

The result is:

    A 1
    a 1
    Star 1
    ago 1
    for 1
    far 2
    away 1
    time 1
    Wars 1
    long 1
    Another 1
    in 1
    episode
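For reference, Hadoop's default shuffle compares Text keys by raw byte order, so whatever went wrong in this job, a single reducer should receive every uppercase word before any lowercase one. A quick Scala sketch of the order one reducer ought to see for this input (helper names are mine):

    object ShuffleOrder {
      def main(args: Array[String]): Unit = {
        val words = ("A long time ago in a galaxy far far away " +
          "Another episode for Star Wars").split(" ")
        val counts = words.groupBy(identity).map { case (w, ws) => (w, ws.length) }
        // sortBy on String uses lexicographic char order, which matches raw
        // byte order for ASCII: A, Another, Star, Wars, a, ago, away, ...
        counts.toSeq.sortBy(_._1).foreach { case (w, c) => println(s"$w\t$c") }
      }
    }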

Function using reduce not working; returns false when it should be true

只谈情不闲聊 submitted on 2019-12-13 09:22:35
Question:

    var atLeast = function (tab, n, k) {
      var frequencyK = tab.reduce(function (acc, val, array) {
        if (val == k) {
          return acc + 1;
        }
      });
      return frequencyK >= n;
    };

    console.log(atLeast([1, 2, 3, 2, 2, 4, 2, 2], 4, 2));

This function is meant to return true if the argument k is repeated in the array tab at least n times. To do this I used reduce, incrementing the accumulator by 1 each time the current value was equal to k. I then compared the frequency of k calculated with the reduce function against n.
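Two things go wrong above: the callback returns undefined whenever val != k (a reduce callback must return the accumulator on every path), and with no initial value the first array element seeds acc, after which acc + 1 becomes NaN and NaN >= n is false. The corrected counting fold, sketched in Scala:

    object AtLeast {
      // The non-matching branch must pass the accumulator through unchanged.
      def atLeast(tab: Seq[Int], n: Int, k: Int): Boolean = {
        val frequencyK = tab.foldLeft(0)((acc, v) => if (v == k) acc + 1 else acc)
        frequencyK >= n
      }

      def main(args: Array[String]): Unit =
        println(atLeast(Seq(1, 2, 3, 2, 2, 4, 2, 2), 4, 2)) // true: 2 appears 5 times
    }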

Hadoop: Output file has double output

六月ゝ 毕业季﹏ submitted on 2019-12-13 06:09:33
Question: I am running a Hadoop program and have the following as my input file, input.txt:

    1
    2

mapper.py:

    import sys
    for line in sys.stdin:
        print line,
    print "Test"

reducer.py:

    import sys
    for line in sys.stdin:
        print line,

When I run it without Hadoop, $ cat ./input.txt | ./mapper.py | ./reducer.py, the output is as expected:

    1
    2
    Test

However, running it through Hadoop via the streaming API (as described here), the latter part of the output seems somewhat "doubled":

    1
    2
    Test
    Test

Additionally, when
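One detail worth knowing here: Hadoop streaming runs mapper.py once per map task, so a print "Test" outside the per-line loop executes once per input split, not once per job. Assuming the job produced two map tasks (an assumption on my part; the question is cut off), a Scala sketch reproduces the doubling:

    object StreamingDouble {
      // Each map task runs the whole script, so the trailing marker is
      // emitted once per split; the shuffle then sorts all mapper output.
      def mapTask(split: Seq[String]): Seq[String] = split.map(_.trim) :+ "Test"

      def main(args: Array[String]): Unit = {
        val splits = Seq(Seq("1"), Seq("2"))          // two splits => two map tasks
        val shuffled = splits.flatMap(mapTask).sorted // shuffle sorts keys
        shuffled.foreach(println)                     // 1, 2, Test, Test
      }
    }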

Scala assumes wrong type when using foldLeft

ⅰ亾dé卋堺 submitted on 2019-12-13 03:07:54
Question: I am trying to create a cross-product function in Scala, where k is the number of times I build the cross product.

    val l = List(List(1), List(2), List(3))
    (1 to k).foldLeft[List[List[Int]]](l) { (acc: List[List[Int]], _) =>
      for (x <- acc; y <- l) yield x ::: l
    }

However, this code does not compile:

    test.scala:9: error: type mismatch;
     found   : List[List[Any]]
     required: List[List[Int]]
        for (x <- acc; y <- l)
            ^

Why does it think I have List[Any]'s there? Clearly everything I am dealing
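The culprit is the yield: x ::: l concatenates a List[Int] with the whole List[List[Int]], so the compiler widens the element type to their least upper bound, Any. Concatenating with y, the value bound by the inner generator, keeps everything List[Int]. A corrected sketch (the cross name is mine):

    object CrossProduct {
      // Pair each accumulated list with each base list and concatenate them;
      // both x and y are List[Int], so the result stays List[List[Int]].
      def cross(l: List[List[Int]], k: Int): List[List[Int]] =
        (1 to k).foldLeft(l) { (acc, _) =>
          for (x <- acc; y <- l) yield x ::: y
        }

      def main(args: Array[String]): Unit =
        println(cross(List(List(1), List(2), List(3)), 1))
        // List(List(1, 1), List(1, 2), List(1, 3), List(2, 1), ...)
    }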

Faster implementation for reduceByKey on Seq of pairs possible?

馋奶兔 submitted on 2019-12-12 18:42:02
Question: The code below contains various single-threaded implementations of reduceByKeyXXX methods and a few helper methods to create input sets and measure execution times. (Feel free to run the main method.) The main purpose of reduceByKey (as in Spark) is to reduce key-value pairs that share the same key. Example:

    scala> val xs = Seq( "a" -> 2, "b" -> 3, "a" -> 5)
    xs: Seq[(String, Int)] = List((a,2), (b,3), (a,5))

    scala> ReduceByKeyComparison.reduceByKey(xs, (x:Int, y:Int) ⇒ x+y )
    res8: Seq[(String, Int)
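The benchmark code itself is cut off, but one common shape for a fast single-threaded reduceByKey is a single pass that folds values into a mutable HashMap instead of grouping first. A sketch of that approach (not the asker's code):

    import scala.collection.mutable

    object ReduceByKeySketch {
      // One pass over the pairs, merging each value into the map entry for
      // its key; avoids building intermediate per-key collections.
      def reduceByKey[K, V](xs: Seq[(K, V)], f: (V, V) => V): Seq[(K, V)] = {
        val acc = mutable.HashMap.empty[K, V]
        for ((k, v) <- xs)
          acc(k) = acc.get(k) match {
            case Some(prev) => f(prev, v)
            case None       => v
          }
        acc.toSeq
      }

      def main(args: Array[String]): Unit =
        println(reduceByKey(Seq("a" -> 2, "b" -> 3, "a" -> 5), (x: Int, y: Int) => x + y))
        // e.g. List((a,7), (b,3)) — HashMap ordering is unspecified
    }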

Is there a better way to do a reduce operation on RDD[Array[Double]]?

走远了吗. submitted on 2019-12-12 18:22:03
Question: I want to reduce an RDD[Array[Double]] so that each element of an array is added to the corresponding element of the next array. For the moment I use this code:

    var rdd1 = RDD[Array[Double]]
    var coord = rdd1.reduce( (x,y) => { (x, y).zipped.map(_+_) })

Is there a better way to make this more efficient, because it is costly?

Answer 1: Using zipped.map is very inefficient, because it creates a lot of temporary objects and boxes the doubles. If you use spire, you can just do this

    > import spire
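The spire answer is truncated above. A plain-Scala sketch of the same idea, avoiding the tuples and boxing that zipped.map introduces by summing over raw indices in a while loop; the addInto name is mine, and the function would drop into rdd1.reduce(...) unchanged:

    object ElementwiseSum {
      // Copy x once, then add y into it index by index: no tuples, no boxing.
      // Copying keeps the inputs unmutated, which matters inside a reduce.
      def addInto(x: Array[Double], y: Array[Double]): Array[Double] = {
        val out = x.clone()
        var i = 0
        while (i < out.length) { out(i) += y(i); i += 1 }
        out
      }

      def main(args: Array[String]): Unit = {
        val coord = Seq(Array(1.0, 2.0), Array(3.0, 4.0), Array(5.0, 6.0)).reduce(addInto)
        println(coord.mkString(", ")) // 9.0, 12.0
      }
    }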

RuntimeException: java.lang.NoSuchMethodException: tfidf$Reduce.<init>()

不打扰是莪最后的温柔 submitted on 2019-12-12 11:48:21
Question: How do I solve this problem? tfidf is my main class. Why does this error come up after running the jar file?

    java.lang.RuntimeException: java.lang.NoSuchMethodException: tfidf$Reduce.<init>()
        at org.apache.hadoop.util.ReflectionUtils.newInstance(ReflectionUtils.java:115)
        at org.apache.hadoop.mapred.Task$OldCombinerRunner.combine(Task.java:1423)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.sortAndSpill(MapTask.java:1436)
        at org.apache.hadoop.mapred.MapTask$MapOutputBuffer.flush(MapTask.java:1298)
        at
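tfidf$Reduce names a nested class, and Hadoop's ReflectionUtils instantiates mappers, combiners, and reducers through a no-argument constructor. A non-static nested class carries a hidden reference to its enclosing instance, so no such constructor exists; in Java the usual fix is to declare Reduce as a public static nested class. A small Scala sketch of the failing lookup (Scala inner classes behave like Java's non-static ones):

    object NoArgCtorDemo {
      class Outer {
        class Inner // ctor takes a hidden Outer reference, like non-static Java nesting
      }

      def main(args: Array[String]): Unit =
        // ReflectionUtils.newInstance does essentially this lookup, which fails
        // unless the class has a real no-arg constructor.
        try classOf[Outer#Inner].getDeclaredConstructor().newInstance()
        catch { case e: NoSuchMethodException => println(e) }
    }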