How to standardize ONE column in Spark using StandardScaler?

Submitted by ぐ巨炮叔叔 on 2019-12-06 01:13:21

Just use plain aggregation:

from pyspark.sql.functions import stddev, mean, col

sample17 = spark.createDataFrame([(1, ), (2, ), (3, )]).toDF("age")

# compute the column statistics once, then attach them to every row via a cross join
(sample17
  .select(mean("age").alias("mean_age"), stddev("age").alias("stddev_age"))
  .crossJoin(sample17)
  .withColumn("age_scaled", (col("age") - col("mean_age")) / col("stddev_age")))

# +--------+----------+---+----------+
# |mean_age|stddev_age|age|age_scaled|
# +--------+----------+---+----------+
# |     2.0|       1.0|  1|      -1.0|
# |     2.0|       1.0|  2|       0.0|
# |     2.0|       1.0|  3|       1.0|
# +--------+----------+---+----------+

or

# collect the statistics to the driver and use them as plain Python literals
mean_age, stddev_age = sample17.select(mean("age"), stddev("age")).first()
sample17.withColumn("age_scaled", (col("age") - mean_age) / stddev_age)

# +---+----------+
# |age|age_scaled|
# +---+----------+
# |  1|      -1.0|
# |  2|       0.0|
# |  3|       1.0|
# +---+----------+

If you want a Transformer, you can apply StandardScaler to a single-column vector and then split the resulting vector back into columns, as sketched below.
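A minimal sketch of that approach, assuming Spark 3.0+ for pyspark.ml.functions.vector_to_array (on older versions a small UDF can unpack the vector instead); the intermediate column names age_vec and age_scaled_vec are arbitrary:

from pyspark.ml.feature import VectorAssembler, StandardScaler
from pyspark.ml.functions import vector_to_array  # Spark 3.0+

sample17 = spark.createDataFrame([(1, ), (2, ), (3, )]).toDF("age")

# StandardScaler operates on a vector column, so assemble the single column first
assembler = VectorAssembler(inputCols=["age"], outputCol="age_vec")
scaler = StandardScaler(inputCol="age_vec", outputCol="age_scaled_vec",
                        withMean=True, withStd=True)

assembled = assembler.transform(sample17)
scaled = scaler.fit(assembled).transform(assembled)

# unpack the one-element vector back into a plain numeric column
(scaled
  .withColumn("age_scaled", vector_to_array("age_scaled_vec")[0])
  .select("age", "age_scaled")
  .show())

With withMean=True and withStd=True this reproduces the same (x - mean) / stddev scaling as the aggregation above, since StandardScaler divides by the corrected sample standard deviation.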
