Question
I am trying to bucketize the columns whose names contain the word "road" in a 5k-row dataset, and to create a new dataframe. I am not sure how to do that; here is what I have tried so far:
from pyspark.ml.feature import Bucketizer

spike_cols = [col for col in df.columns if "road" in col]

for x in spike_cols:
    bucketizer = Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
                            inputCol=x, outputCol=x + "bucket")
    bucketedData = bucketizer.transform(df)
Answer 1:
Your loop transforms the original df on every pass, so bucketedData ends up holding only the last column's buckets. Either modify df in the loop, so each pass builds on the previous result:
from pyspark.ml.feature import Bucketizer

for x in spike_cols:
    bucketizer = Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
                            inputCol=x, outputCol=x + "bucket")
    df = bucketizer.transform(df)
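After the loop, df carries one extra column named <column>bucket per matched input column; a quick sanity check (a sketch, assuming df and the loop above):

bucket_cols = [c for c in df.columns if c.endswith("bucket")]
print(bucket_cols)  # one new bucketized column per "road" column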
or use a Pipeline:
from pyspark.ml import Pipeline
from pyspark.ml.feature import Bucketizer

model = Pipeline(stages=[
    Bucketizer(
        splits=[-float("inf"), 10, 100, float("inf")],
        inputCol=x, outputCol=x + "bucket")
    for x in spike_cols
]).fit(df)

model.transform(df)
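For reference, a minimal end-to-end sketch of the Pipeline approach; the SparkSession setup, the column names road_a/road_b, and the sample values are made up for illustration:

from pyspark.sql import SparkSession
from pyspark.ml import Pipeline
from pyspark.ml.feature import Bucketizer

spark = SparkSession.builder.master("local[*]").getOrCreate()

# hypothetical toy data: two columns whose names contain "road"
df = spark.createDataFrame(
    [(5.0, 250.0), (42.0, 7.0), (120.0, 60.0)],
    ["road_a", "road_b"])

spike_cols = [col for col in df.columns if "road" in col]

model = Pipeline(stages=[
    Bucketizer(
        splits=[-float("inf"), 10, 100, float("inf")],
        inputCol=x, outputCol=x + "bucket")
    for x in spike_cols
]).fit(df)

# each value maps to the index of its bucket:
# [-inf, 10) -> 0.0, [10, 100) -> 1.0, [100, inf) -> 2.0
model.transform(df).show()

A side benefit of the Pipeline version is that the fitted PipelineModel can be persisted and reloaded (model.save(path)), which the plain loop does not give you.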
Source: https://stackoverflow.com/questions/51402369/how-to-bucketize-a-group-of-columns-in-pyspark