udf

UDF returns the same value everywhere

元气小坏坏 submitted on 2019-11-26 21:58:22
Question: I am trying to code a moving average in VBA, but the following returns the same value everywhere.

    Function trial1(a As Integer) As Variant
        Application.Volatile
        Dim rng As Range
        Set rng = Range(Cells(ActiveCell.Row, 2), Cells(ActiveCell.Row - a + 1, 2))
        trial1 = (Application.Sum(rng)) * (1 / a)
    End Function

Answer 1: The ActiveCell property does not belong in a UDF, because it changes. Sometimes it is not even on the same worksheet. If you need to refer to the cell in which the custom UDF function …
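For reference, the arithmetic the UDF is meant to perform (sum of the last a column values, divided by a) can be sketched outside the worksheet. The sketch below is in Scala with made-up data, not the asker's actual column:

```scala
// Trailing moving average over a window of `a` values, mirroring what
// trial1 computes per row: Sum(rng) * (1 / a). The data is hypothetical.
def movingAvg(xs: Seq[Double], a: Int): Vector[Double] =
  xs.sliding(a).map(w => w.sum / a).toVector

val values = Vector(1.0, 2.0, 3.0, 4.0, 5.0)
println(movingAvg(values, 2)) // Vector(1.5, 2.5, 3.5, 4.5)
```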

Spark UDF with varargs

好久不见. submitted on 2019-11-26 20:21:25
Question: Is the only option to list all the arguments, up to 22, as shown in the documentation? https://spark.apache.org/docs/1.5.0/api/scala/index.html#org.apache.spark.sql.UDFRegistration

Has anyone figured out how to do something similar to this?

    sc.udf.register("func", (s: String*) => s......

(writing a custom concat function that skips nulls; limited to 2 arguments at the time) Thanks

Answer 1: UDFs don't support varargs*, but you can pass an arbitrary number of columns wrapped using an array function:

    import org.apache.spark.sql.functions.{udf, array, lit}
    val myConcatFunc = (xs: Seq[Any], sep: String) => xs.filter(_ !=
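The answer's snippet is cut off above. Its core idea (filter out nulls, then join with a separator) can be sketched in plain Scala; the Spark wrapping is shown only in comments, since it needs a live SparkSession and the column names are hypothetical:

```scala
// Null-skipping concatenation: drop nulls, join the rest with a separator.
val concatSkipNulls: (Seq[Any], String) => String =
  (xs, sep) => xs.filter(_ != null).map(_.toString).mkString(sep)

// In Spark, the function would be wrapped and applied roughly as:
//   val myConcat = udf(concatSkipNulls)
//   df.select(myConcat(array($"a", $"b", $"c"), lit("-")))

println(concatSkipNulls(Seq("foo", null, "bar"), "-")) // foo-bar
```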

Excel2011: Vlookup and Combine

这一生的挚爱 submitted on 2019-11-26 18:38:04
Question: I'm having some difficulty combining several functions to do what I want in a 70,000+ line Excel file. Any tips, pointers, or advice are greatly appreciated. I have two columns (about 70,000 lines' worth). In column 1 I have account numbers for clients (there are duplicates); next to it, in column 2, I have the data I want to extract. I also have a third column (column 3), which is a list of the account numbers but stripped of duplicates. I am trying to use Vlookup to look at line one of column …
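Though the question is truncated, the stated goal — collecting every column-2 value for each unique account number — amounts to a group-and-combine. A minimal Scala sketch with hypothetical data (the real fix would be an Excel formula or VBA):

```scala
// Hypothetical stand-in for columns 1 and 2: (account number, data value).
val rows = Seq(("A001", "x"), ("A002", "y"), ("A001", "z"))

// Group the values by account number (the keys correspond to column 3's
// deduplicated list) and combine each account's values into one string.
val combined: Map[String, String] =
  rows.groupBy(_._1).map { case (acct, vs) => acct -> vs.map(_._2).mkString(", ") }

println(combined("A001")) // x, z
```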

Spark SQL nested withColumn

被刻印的时光 ゝ submitted on 2019-11-26 16:43:17
Question: I have a DataFrame that has multiple columns, some of which are structs. Something like this:

    root
     |-- foo: struct (nullable = true)
     |    |-- bar: string (nullable = true)
     |    |-- baz: string (nullable = true)
     |-- abc: array (nullable = true)
     |    |-- element: struct (containsNull = true)
     |    |    |-- def: struct (nullable = true)
     |    |    |    |-- a: string (nullable = true)
     |    |    |    |-- b: integer (nullable = true)
     |    |    |    |-- c: string (nullable = true)

I want to apply a UserDefinedFunction on the column baz …
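The question breaks off above; the usual pattern for transforming one field of a struct is to rebuild the struct with that field replaced. A minimal sketch with a case class standing in for the foo struct (the Spark-specific wiring, e.g. withColumn with struct(...), is only hinted at in the comments):

```scala
// Stand-in for the `foo` struct from the schema above.
case class Foo(bar: String, baz: String)

// Rebuild the struct with `baz` transformed; in Spark the analogous move is
// roughly: df.withColumn("foo", struct($"foo.bar", myUdf($"foo.baz").as("baz")))
def updateBaz(foo: Foo, f: String => String): Foo = foo.copy(baz = f(foo.baz))

println(updateBaz(Foo("b", "hello"), _.toUpperCase).baz) // HELLO
```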
