Question
I currently have a UDF that takes a column of XML strings and parses it into lists of dictionaries. I then want to explode that list-of-dictionaries column into additional columns based on the key-value pairs.
Input looks like this:
   id  type  length  parsed
0   1     A     144  [{'key1': 'value1'}, {'key1': 'value2', 'key2': 'value3'}, ...]
1   1     B      20  [{'key1': 'value4'}, {'key2': 'value5'}, ...]
2   4     A      54  [{'key3': 'value6'}, ...]
And I want the output to look like this:
   id  type  length  key1              key2    key3
0   1     A     144  [value1, value2]  value3
1   1     B      20  value4            value5
2   4     A      54                            value6
I have been able to do this in Pandas like so:
import pandas as pd

# explode the list column so each dictionary gets its own row
s = data['parsed'].explode()

# expand each dictionary into columns, collect values per (row, key),
# unwrap single-element lists, and fill missing keys with ''
df_join = (pd.DataFrame(s.tolist(), index=s.index)
           .stack()
           .groupby(level=[0, 1])
           .agg(list)
           .apply(lambda x: x[0] if len(x) == 1 else x)
           .unstack(fill_value='')
           )

t = data.join(df_join, lsuffix='_x', rsuffix='_y')
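For reference, a minimal sketch of the sample data the snippet above assumes (the values are copied from the example input, minus the trailing ellipses; the construction itself is my assumption):

# sample frame matching the example input (assumed structure and dtypes)
data = pd.DataFrame({
    'id': [1, 1, 4],
    'type': ['A', 'B', 'A'],
    'length': [144, 20, 54],
    'parsed': [
        [{'key1': 'value1'}, {'key1': 'value2', 'key2': 'value3'}],
        [{'key1': 'value4'}, {'key2': 'value5'}],
        [{'key3': 'value6'}],
    ],
})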
The issue is that I am having trouble converting this Pandas code into Spark (I won't have Pandas available to me) so that it gives the same result.
The Spark version I will have available is 1.6.0.
Answer 1:
You can do this by using `explode` twice: once to explode the array and once to explode the map elements of the array. Thereafter, you can use `pivot` with a `collect_list` aggregation.
from pyspark.sql.functions import explode, collect_list

# explode the array so each map gets its own row
df_1 = df.withColumn('exploded_arr', explode('parsed'))

# explode the maps; exploding a map yields the default column names
# `key` and `value` - rename them as needed
df_2 = df_1.select(df_1.columns + [explode('exploded_arr')])

# pivot on the map key, collecting the values for each key into a list
df_2.groupBy("id", "length", "type").pivot("key").agg(collect_list("value")).show()
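Note that `collect_list` leaves every pivoted cell as an array, even when it holds a single value, so the result shows [value4] rather than value4. To match the desired output exactly, one option is a small UDF that unwraps one-element arrays; a minimal sketch, assuming the pivoted columns come out as key1, key2, key3 as in the example (on Spark 1.6, `collect_list` may also require a HiveContext):

from pyspark.sql.functions import udf, col
from pyspark.sql.types import StringType

# unwrap one-element arrays; render longer arrays as their string form
# and missing/empty cells as '' (assumes string values, as in the example)
unwrap = udf(lambda xs: xs[0] if xs and len(xs) == 1 else (str(xs) if xs else ''),
             StringType())

pivoted = df_2.groupBy("id", "length", "type").pivot("key").agg(collect_list("value"))
result = pivoted.select("id", "type", "length",
                        *[unwrap(col(k)).alias(k) for k in ("key1", "key2", "key3")])
result.show()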
Source: https://stackoverflow.com/questions/62646542/explode-list-of-dictionaries-into-additional-columns-in-spark