Question
We have a DataFrame that looks like this:
DataFrame[event: string, properties: map<string,string>]
Notice that there are two columns: event and properties. How do we split or flatten the properties column into multiple columns based on the key values in the map?
I notice I can do something like this:
newDf = df.withColumn("foo", col("properties")["foo"])
which produces a DataFrame of
DataFrame[event: string, properties: map<string,string>, foo: string]
But then I would have to do this for every key, one by one. Is there a way to do them all automatically? For example, if foo, bar, and baz are the keys in properties, can we flatten the map to:
DataFrame[event: string, foo: string, bar: string, baz: string]
Answer 1:
You can use the explode() function: it expands the map into one row per entry, adding two columns, key and value:
>>> df.printSchema()
root
|-- event: string (nullable = true)
|-- properties: map (nullable = true)
| |-- key: string
| |-- value: string (valueContainsNull = true)
>>> df.select('event', explode('properties')).printSchema()
root
|-- event: string (nullable = true)
|-- key: string (nullable = false)
|-- value: string (nullable = true)
You can then use pivot to turn those rows back into columns, provided you have a column of unique values to group by. For example:
from pyspark.sql.functions import explode, first, monotonically_increasing_id

df.withColumn('id', monotonically_increasing_id()) \
  .select('id', 'event', explode('properties')) \
  .groupBy('id', 'event').pivot('key').agg(first('value'))
Source: https://stackoverflow.com/questions/48993176/flatten-spark-dataframe-column-of-map-dictionary-into-multiple-columns