How to bucketize a group of columns in pyspark?

Submitted by 时光怂恿深爱的人 on 2019-12-24 17:04:24

Question


I am trying to bucketize the columns that contain the word "road" in a 5k dataset, and create a new dataframe.

I am not sure how to do that; here is what I have tried so far:

from pyspark.ml.feature import Bucketizer

spike_cols = [col for col in df.columns if "road" in col]

for x in spike_cols:
    bucketizer = Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
                            inputCol=x, outputCol=x + "bucket")

bucketedData = bucketizer.transform(df)

Answer 1:


Your loop overwrites bucketizer on every iteration and calls transform only once, after the loop, so only the last "road" column actually gets bucketed. Either modify df inside the loop:

from pyspark.ml.feature import Bucketizer

for x in spike_cols:
    bucketizer = Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
                            inputCol=x, outputCol=x + "bucket")
    df = bucketizer.transform(df)
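
For reference, here is a minimal self-contained run of the loop variant; the toy dataframe and the column names road_a / road_b are invented for illustration:

from pyspark.sql import SparkSession
from pyspark.ml.feature import Bucketizer

spark = SparkSession.builder.getOrCreate()

# toy data: any column whose name contains "road" gets bucketized
df = spark.createDataFrame([(5.0, 150.0), (20.0, 50.0)],
                           ["road_a", "road_b"])

spike_cols = [c for c in df.columns if "road" in c]
for x in spike_cols:
    bucketizer = Bucketizer(splits=[-float("inf"), 10, 100, float("inf")],
                            inputCol=x, outputCol=x + "bucket")
    df = bucketizer.transform(df)

df.show()
# 5.0 lands in [-inf, 10) -> bucket 0.0, 150.0 in [100, inf) -> 2.0,
# 20.0 and 50.0 in [10, 100) -> 1.0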

or build a Pipeline with one Bucketizer stage per column:

from pyspark.ml import Pipeline
from pyspark.ml.feature import Bucketizer

model = Pipeline(stages=[
    Bucketizer(
        splits=[-float("inf"), 10, 100, float("inf")],
        inputCol=x, outputCol=x + "bucket") for x in spike_cols
]).fit(df)

bucketedData = model.transform(df)
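
If only the bucketed columns are needed afterwards, they can be selected by name (a sketch, reusing spike_cols from the question):

bucketedData.select([x + "bucket" for x in spike_cols]).show()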


Source: https://stackoverflow.com/questions/51402369/how-to-bucketize-a-group-of-columns-in-pyspark
