maxCategories not working as expected in VectorIndexer when using RandomForestClassifier in pyspark.ml


It looks like, contrary to the documentation, which lists:

Preserve metadata in transform; if a feature's metadata is already present, do not recompute.

among the TODO items, metadata is in fact already preserved.

from pyspark.sql.functions import col
from pyspark.ml import Pipeline
from pyspark.ml.feature import StringIndexer, VectorAssembler, VectorIndexer

df = spark.range(10)

stages = [
    StringIndexer(inputCol="id", outputCol="idx"),
    VectorAssembler(inputCols=["idx"], outputCol="features"),
    VectorIndexer(inputCol="features", outputCol="features_indexed", maxCategories=5),
]
Pipeline(stages=stages).fit(df).transform(df).schema["features"].metadata
# {'ml_attr': {'attrs': {'nominal': [{'vals': ['8',
#       '4',
#       '9',
#       '5',
#       '6',
#       '1',
#       '0',
#       '2',
#       '7',
#       '3'],
#      'idx': 0,
#      'name': 'idx'}]},
#   'num_attrs': 1}}

Pipeline(stages=stages).fit(df).transform(df).schema["features_indexed"].metadata

# {'ml_attr': {'attrs': {'nominal': [{'ord': False,
#      'vals': ['0.0',
#       '1.0',
#       '2.0',
#       '3.0',
#       '4.0',
#       '5.0',
#       '6.0',
#       '7.0',
#       '8.0',
#       '9.0'],
#      'idx': 0,
#      'name': 'idx'}]},
#   'num_attrs': 1}}

Because features already carries nominal metadata from the StringIndexer, VectorIndexer preserves it as-is and never recomputes the categories, so maxCategories has no effect here. Under normal circumstances this is desired behavior: you shouldn't use indexed categorical features as continuous variables.

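Since the question involves RandomForestClassifier: tree-based models in pyspark.ml use exactly this per-feature metadata to decide which features are categorical. A minimal sketch of that (the label column below is a made-up toy target, not part of the original example):

from pyspark.ml.classification import RandomForestClassifier

# Toy binary label, for illustration only
dft = Pipeline(stages=stages).fit(df).transform(df) \
    .withColumn("label", (col("id") % 2).cast("double"))

rf = RandomForestClassifier(featuresCol="features_indexed", labelCol="label", numTrees=3)
model = rf.fit(dft)
print(model.toDebugString)
# Splits on the nominal feature should appear as "feature 0 in {...}"
# rather than threshold comparisons.
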
But if you still want to circumvent this behavior, you'll have to reset the metadata, for example:

pipeline1 = Pipeline(stages=stages[:1])  # StringIndexer only
pipeline2 = Pipeline(stages=stages[1:])  # VectorAssembler + VectorIndexer

# Strip the metadata StringIndexer attached to idx, so VectorIndexer
# has to detect the categories itself (and applies maxCategories)
dft1 = pipeline1.fit(df).transform(df).withColumn("idx", col("idx").alias("idx", metadata={}))
dft2 = pipeline2.fit(dft1).transform(dft1)

dft2.schema["features_indexed"].metadata

# {'ml_attr': {'attrs': {'numeric': [{'idx': 0, 'name': 'idx'}]},
#   'num_attrs': 1}}
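
As a quick sanity check (an assumed follow-up, not part of the original answer), with the metadata cleared maxCategories is honored again: raising it to at least the 10 distinct values of idx should make VectorIndexer mark the feature as nominal once more.

pipeline3 = Pipeline(stages=[
    VectorAssembler(inputCols=["idx"], outputCol="features"),
    VectorIndexer(inputCol="features", outputCol="features_indexed", maxCategories=11),
])
pipeline3.fit(dft1).transform(dft1).schema["features_indexed"].metadata
# Expected: 'nominal' attrs again, since 10 distinct values <= maxCategories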