I am trying to filter a DataFrame in PySpark using a list: I want to keep only those records whose value in a given column appears in the list. My code is below.
I found the join implementation to be significantly faster than where for large DataFrames:
from pyspark.sql import SparkSession

def filter_spark_dataframe_by_list(df, column_name, filter_list):
    """Return the subset of df where df[column_name] is in filter_list."""
    spark = SparkSession.builder.getOrCreate()
    # Passing a DataType as the schema turns the list of scalars into a
    # single-column DataFrame whose column is named "value".
    filter_df = spark.createDataFrame(filter_list, df.schema[column_name].dataType)
    # A left-semi join keeps only the matching rows of df, without adding
    # the extra "value" column that a plain inner join would.
    return df.join(filter_df, df[column_name] == filter_df["value"], "leftsemi")