PySpark DataFrame: Custom Explode Function

[亡魂溺海] Submitted on 2020-12-31 05:08:41

Question


How can I implement a custom explode function using UDFs, so that I can attach extra information to the items? For example, along with each item, I want its index.

The part I do not know how to do is handling a UDF that returns multiple values which should become separate rows.


Answer 1:


If you need a custom explode function, you need to write a UDF that takes an array and returns an array. For example, for this DataFrame:

df = spark.createDataFrame([(['a', 'b', 'c'], ), (['d', 'e'],)], ['array'])
df.show()
+---------+
|    array|
+---------+
|[a, b, c]|
|   [d, e]|
+---------+

A function that adds the index, together with the explode that flattens the results, can look like this:

from pyspark.sql.functions import explode, udf
from pyspark.sql.types import StructType, StructField, ArrayType, IntegerType, StringType

value_with_index = StructType([
    StructField('index', IntegerType()),
    StructField('letter', StringType())
])
# Pair each element with its position, producing an array of (index, letter) structs
add_indices = udf(lambda arr: list(zip(range(len(arr)), arr)), ArrayType(value_with_index))
df.select(explode(add_indices('array'))).select('col.index', 'col.letter').show()
+-----+------+
|index|letter|
+-----+------+
|    0|     a|
|    1|     b|
|    2|     c|
|    0|     d|
|    1|     e|
+-----+------+
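The heart of the UDF above is plain Python: zipping an array with its range of indices. A minimal sketch of that pairing logic outside Spark (the name pair_with_index is made up for illustration, not part of PySpark):

```python
# Plain-Python sketch of the pairing logic inside the UDF above;
# pair_with_index is a hypothetical helper, not a Spark API.
def pair_with_index(arr):
    # Zip each element with its position: ['a', 'b'] -> [(0, 'a'), (1, 'b')]
    return list(zip(range(len(arr)), arr))

print(pair_with_index(['a', 'b', 'c']))  # -> [(0, 'a'), (1, 'b'), (2, 'c')]
```

Spark then turns each (index, letter) pair in the returned array into a struct, and explode turns each struct into its own row.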



Answer 2:


In Spark 2.1+, there is pyspark.sql.functions.posexplode(), which explodes the array and also provides the index:

Using the same example as @Mariusz:

df.show()
#+---------+
#|    array|
#+---------+
#|[a, b, c]|
#|   [d, e]|
#+---------+

import pyspark.sql.functions as f

df.select(f.posexplode('array')).show()
#+---+---+
#|pos|col|
#+---+---+
#|  0|  a|
#|  1|  b|
#|  2|  c|
#|  0|  d|
#|  1|  e|
#+---+---+
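For intuition, posexplode's pos/col output corresponds to flattening each row's array with enumerate. A small plain-Python sketch, using a list of lists in place of the DataFrame:

```python
# Plain-Python analogue of posexplode, for intuition only; not a Spark API.
rows = [['a', 'b', 'c'], ['d', 'e']]
# Each element becomes a (pos, col) row; pos restarts at 0 for every array
exploded = [(pos, col) for arr in rows for pos, col in enumerate(arr)]
print(exploded)  # -> [(0, 'a'), (1, 'b'), (2, 'c'), (0, 'd'), (1, 'e')]
```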


Source: https://stackoverflow.com/questions/46183908/pyspark-dataframe-custom-explode-function
