Scikit-learn: preprocessing.scale() vs preprocessing.StandardScaler()

Submitted by 安稳与你 on 2019-12-18 20:48:53

Question


I understand that scaling means centering the data (mean = 0) and scaling it to unit variance (variance = 1).

But what is the difference between preprocessing.scale(x) and preprocessing.StandardScaler() in scikit-learn?


Answer 1:


Those do exactly the same thing, but:

  • preprocessing.scale(x) is just a function that transforms some data
  • preprocessing.StandardScaler() is a class supporting the Transformer API

I would always use the latter, even if I did not need inverse_transform and co., which are supported by StandardScaler().
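A minimal sketch of the two APIs side by side (the dataset is made up for illustration) — both produce identical output, but only the class-based scaler remembers the fitted statistics:

```python
import numpy as np
from sklearn.preprocessing import StandardScaler, scale

# Small illustrative dataset (hypothetical values).
X = np.array([[1.0, 10.0],
              [2.0, 20.0],
              [3.0, 30.0]])

# Function API: a one-shot transformation of a single array.
X_func = scale(X)

# Transformer API: the scaler object stores mean_ and scale_,
# so the same transformation can be reapplied or inverted later.
scaler = StandardScaler()
X_cls = scaler.fit_transform(X)

print(np.allclose(X_func, X_cls))   # True: identical results
print(X_cls.mean(axis=0))           # columns centered near 0
print(X_cls.std(axis=0))            # columns scaled to unit variance
print(scaler.inverse_transform(X_cls))  # recovers the original X
```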

Excerpt from the docs:

The function scale provides a quick and easy way to perform this operation on a single array-like dataset

The preprocessing module further provides a utility class StandardScaler that implements the Transformer API to compute the mean and standard deviation on a training set so as to be able to later reapply the same transformation on the testing set. This class is hence suitable for use in the early steps of a sklearn.pipeline.Pipeline
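The train/test reuse described in the docs can be sketched as follows (the data and the LogisticRegression step are arbitrary choices for illustration):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline
from sklearn.preprocessing import StandardScaler

# Synthetic data, just for illustration.
rng = np.random.default_rng(0)
X_train = rng.normal(loc=5.0, scale=2.0, size=(100, 3))
X_test = rng.normal(loc=5.0, scale=2.0, size=(20, 3))
y_train = (X_train[:, 0] > 5.0).astype(int)

# Compute mean and std on the training set only...
scaler = StandardScaler().fit(X_train)
# ...then reapply the SAME transformation to the test set,
# so no information from the test set leaks into preprocessing.
X_test_scaled = scaler.transform(X_test)

# Or let a Pipeline manage the fit/transform bookkeeping:
pipe = Pipeline([("scale", StandardScaler()),
                 ("clf", LogisticRegression())])
pipe.fit(X_train, y_train)
preds = pipe.predict(X_test)
```

With the plain `scale` function there is no object to carry the training-set statistics over to the test set, which is exactly why the class is the right tool here.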




Answer 2:


A common misunderstanding is that scale transforms data to the data's min-max range while StandardScaler transforms it to the range [-1, 1]. Neither is accurate: both standardize to zero mean and unit variance and do not bound the output to any fixed range. Scaling to the data's min-max range is done by MinMaxScaler, and scaling into [-1, 1] by the maximum absolute value is done by MaxAbsScaler.
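A short sketch contrasting the three scalers on a tiny synthetic column makes the difference concrete:

```python
import numpy as np
from sklearn.preprocessing import MaxAbsScaler, MinMaxScaler, StandardScaler

# One feature column, values chosen for easy mental arithmetic.
X = np.array([[-2.0], [0.0], [4.0]])

# Zero mean, unit variance; output is NOT bounded to a fixed range.
print(StandardScaler().fit_transform(X).ravel())

# Linearly maps min -> 0 and max -> 1.
print(MinMaxScaler().fit_transform(X).ravel())   # [0.0, 0.333..., 1.0]

# Divides by max(|x|) = 4, so output lies within [-1, 1].
print(MaxAbsScaler().fit_transform(X).ravel())   # [-0.5, 0.0, 1.0]
```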



Source: https://stackoverflow.com/questions/46257627/scikit-learn-preprocessing-scale-vs-preprocessing-standardscalar
