Question
I am using PySpark and I have a DataFrame object df.
This is what the output of df.printSchema() looks like:
root
 |-- M_MRN: string (nullable = true)
 |-- measurements: array (nullable = true)
 |    |-- element: struct (containsNull = true)
 |    |    |-- Observation_ID: string (nullable = true)
 |    |    |-- Observation_Name: string (nullable = true)
 |    |    |-- Observation_Result: string (nullable = true)
I would like to filter out all the elements of the 'measurements' arrays whose Observation_ID is not '5' or '10'. Currently, when I run df.select('measurements').take(2)
I get:
[Row(measurements=[Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='108/72'),
Row(Observation_ID='11', Observation_Name='ABC', Observation_Result='70'),
Row(Observation_ID='10', Observation_Name='ABC', Observation_Result='73.029'),
Row(Observation_ID='14', Observation_Name='XYZ', Observation_Result='23.1')]),
Row(measurements=[Row(Observation_ID='2', Observation_Name='ZZZ', Observation_Result='3/4'),
Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='7')])]
After the filtering, I would like df.select('measurements').take(2)
to instead return:
[Row(measurements=[Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='108/72'),
Row(Observation_ID='10', Observation_Name='ABC', Observation_Result='73.029')]),
Row(measurements=[Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='7')])]
Is there a way to do this in PySpark? Thanks in advance for your help!
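For reference, here is a minimal sketch that reconstructs a DataFrame of this shape from the sample rows above (it assumes an active SparkSession named spark; the M_MRN values are made up for illustration):

from pyspark.sql import Row

# Rebuild the sample data shown above; the M_MRN values are placeholders.
data = [
    Row(M_MRN='A', measurements=[
        Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='108/72'),
        Row(Observation_ID='11', Observation_Name='ABC', Observation_Result='70'),
        Row(Observation_ID='10', Observation_Name='ABC', Observation_Result='73.029'),
        Row(Observation_ID='14', Observation_Name='XYZ', Observation_Result='23.1')]),
    Row(M_MRN='B', measurements=[
        Row(Observation_ID='2', Observation_Name='ZZZ', Observation_Result='3/4'),
        Row(Observation_ID='5', Observation_Name='ABC', Observation_Result='7')]),
]
df = spark.createDataFrame(data)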
Answer 1:
Since Spark 2.4, you can use the higher-order function FILTER
to drop elements from an array. So if you want to remove all elements whose Observation_ID
is not '5' or '10' (note that Observation_ID is a string in your schema, so compare against string literals), you can do it as follows:
from pyspark.sql.functions import expr

# Keep only elements whose Observation_ID is '5' or '10'; assign the result back, since withColumn returns a new DataFrame.
df = df.withColumn('measurements', expr("FILTER(measurements, x -> x.Observation_ID IN ('5', '10'))"))
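On Spark 3.1 and later, the same filter can also be written with the native pyspark.sql.functions.filter function and a Python lambda instead of a SQL expression string; a sketch of that variant:

from pyspark.sql import functions as F

# Spark 3.1+ exposes the FILTER higher-order function directly in Python,
# taking a lambda over the array elements.
df = df.withColumn(
    'measurements',
    F.filter('measurements', lambda x: x['Observation_ID'].isin('5', '10'))
)

Either way, df.select('measurements').take(2) should now return only the elements with Observation_ID '5' or '10', as in the desired output above.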
Source: https://stackoverflow.com/questions/61495448/removing-rows-in-a-nested-struct-in-a-spark-dataframe-using-pyspark-details-in