Question
I have this simple code:
test("0153") {
val c = Seq(1,8,4,2,7)
val max = (x:Int, y:Int)=> if (x > y) x else y
c.reduce(max)
}
It works fine. But when I follow the same approach with Dataset.reduce,
test("SparkSQLTest") {
def max(x: Int, y: Int) = if (x > y) x else y
val spark = SparkSession.builder().master("local").appName("SparkSQLTest").enableHiveSupport().getOrCreate()
val ds = spark.range(1, 100).map(_.toInt)
ds.reduce(max) // compiling error: Error:(20, 15) missing argument list for method max
}
The compiler complains that there is a missing argument list for method max; I don't understand what's going on here.
Answer 1:
Change to a function instead of a method and it should work, i.e. instead of
def max(x: Int, y: Int) = if (x > y) x else y
use
val max = (x: Int, y: Int) => if (x > y) x else y
With the function value, ds.reduce(max) works directly. More about the differences between methods and functions can be found in the Scala documentation.
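The method-vs-function distinction can be illustrated without Spark at all. In this sketch (names like MethodVsFunction are illustrative, not from the original post), a def is a method and is not itself a value, while a val with a function literal is a first-class function object; a method can be turned into a function with explicit eta-expansion (a trailing underscore in Scala 2):

```scala
object MethodVsFunction {
  // A method: not a value by itself, it needs eta-expansion to become one.
  def maxMethod(x: Int, y: Int): Int = if (x > y) x else y

  // A function value: can be passed around directly.
  val maxFunction: (Int, Int) => Int = (x, y) => if (x > y) x else y

  def main(args: Array[String]): Unit = {
    // The function value is passed as-is:
    println(Seq(1, 8, 4, 2, 7).reduce(maxFunction)) // 8

    // The method is converted to a function value explicitly:
    val f: (Int, Int) => Int = maxMethod _
    println(Seq(1, 8, 4, 2, 7).reduce(f))           // 8
  }
}
```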
Otherwise, as hadooper pointed out, you can use the method by supplying the arguments explicitly:
def max(x: Int, y: Int) = if (x > y) x else y
ds.reduce((x, y) => max(x,y))
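A likely reason the shorthand works for Seq but not for Dataset is that Dataset.reduce is overloaded (it also accepts a Java ReduceFunction[T]), so the compiler will not eta-expand the bare method name automatically; a trailing underscore forces the expansion before overload resolution. A minimal sketch of the same situation without Spark, using a hypothetical overloaded reduce helper (assuming Scala 2):

```scala
object OverloadedReduce {
  // Stand-in for org.apache.spark.api.java.function.ReduceFunction[Int]:
  trait JavaStyleFn { def call(x: Int, y: Int): Int }

  // Two overloads, mirroring Dataset.reduce(ReduceFunction[T]) and
  // Dataset.reduce((T, T) => T):
  def reduce(xs: Seq[Int], f: JavaStyleFn): Int = xs.reduce((a, b) => f.call(a, b))
  def reduce(xs: Seq[Int], f: (Int, Int) => Int): Int = xs.reduce(f)

  def max(x: Int, y: Int): Int = if (x > y) x else y

  def main(args: Array[String]): Unit = {
    // reduce(Seq(1, 8, 4, 2, 7), max) does not compile here: with two
    // overloads in play, the compiler refuses to eta-expand the method.
    // A trailing underscore forces eta-expansion, picking the function overload:
    println(reduce(Seq(1, 8, 4, 2, 7), max _)) // 8
  }
}
```

By the same reasoning, ds.reduce(max _) should also compile in the original example.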
Answer 2:
As per the Spark Scala doc, reduce has two signatures, reduce(func: ReduceFunction[T]): T and reduce(func: (T, T) ⇒ T): T, so either of the following will work.
Approach 1:
scala> val ds = spark.range(1, 100).map(_.toInt)
ds: org.apache.spark.sql.Dataset[Int] = [value: int]
scala> def max(x: Int, y: Int) = if (x > y) x else y
max: (x: Int, y: Int)Int
scala> ds.reduce((x, y) => max(x,y))
res1: Int = 99
Approach 2 [if you insist on shorthand notation like reduce(max)]:
scala> val ds = spark.range(1, 100).map(_.toInt)
ds: org.apache.spark.sql.Dataset[Int] = [value: int]
scala> object max extends org.apache.spark.api.java.function.ReduceFunction[Int]{
| def call(x:Int, y:Int) = {if (x > y) x else y}
| }
defined object max
scala> ds.reduce(max)
res3: Int = 99
Hope this helps!
Source: https://stackoverflow.com/questions/51296655/dataset-reduce-doesnt-support-shorthand-function