Question
How can I implement a custom explode function using UDFs, so that extra information is carried along with the items? For example, along with the items themselves, I want to have each item's index.
The part I do not know how to do is how a UDF can return multiple values and have those values placed as separate rows.
Answer 1:
If you need a custom explode function, you need to write a UDF that takes an array and returns an array. For example, for this DataFrame:
df = spark.createDataFrame([(['a', 'b', 'c'], ), (['d', 'e'],)], ['array'])
df.show()
+---------+
| array|
+---------+
|[a, b, c]|
| [d, e]|
+---------+
The function that adds indices, whose result can then be exploded into one row per element, can look like this:
from pyspark.sql.functions import explode, udf
from pyspark.sql.types import ArrayType, IntegerType, StringType, StructField, StructType

# Schema of each element returned by the UDF: an (index, letter) struct
value_with_index = StructType([
    StructField('index', IntegerType()),
    StructField('letter', StringType())
])

# Pair every array element with its position in the array
add_indices = udf(lambda arr: list(zip(range(len(arr)), arr)), ArrayType(value_with_index))

df.select(explode(add_indices('array'))).select('col.index', 'col.letter').show()
+-----+------+
|index|letter|
+-----+------+
| 0| a|
| 1| b|
| 2| c|
| 0| d|
| 1| e|
+-----+------+
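As a side note, the default col name of the exploded struct can be replaced by aliasing it; a minimal sketch, assuming the df and the add_indices UDF defined above (the name item is arbitrary):
# Alias the exploded struct, then pull its fields out as top-level columns
df.select(explode(add_indices('array')).alias('item')) \
    .select('item.index', 'item.letter') \
    .show()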
Answer 2:
In Spark 2.1+, there is pyspark.sql.functions.posexplode(), which explodes the array and also provides the position of each element:
Using the same example DataFrame as @Mariusz:
df.show()
#+---------+
#| array|
#+---------+
#|[a, b, c]|
#| [d, e]|
#+---------+
import pyspark.sql.functions as f

df.select(f.posexplode('array')).show()
#+---+---+
#|pos|col|
#+---+---+
#| 0| a|
#| 1| b|
#| 2| c|
#| 0| d|
#| 1| e|
#+---+---+
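If the index/letter column names from Answer 1 are preferred over the default pos/col, both output columns of posexplode can be renamed in a single alias call; a minimal sketch, assuming the same df and the f alias for pyspark.sql.functions:
# alias() accepts multiple names for expressions that return more than one column
df.select(f.posexplode('array').alias('index', 'letter')).show()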
Source: https://stackoverflow.com/questions/46183908/pyspark-dataframe-custom-explode-function