How to use sklearn fit_transform with pandas and return dataframe instead of numpy array?

Asked by 陌清茗 on 2020-12-04 08:38

I want to apply scaling (using StandardScaler() from sklearn.preprocessing) to a pandas DataFrame. The following code returns a numpy array, so I lose all the column names and indices.
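
For reference, a minimal sketch of the kind of code being described (the original snippet isn't shown; the DataFrame name and columns here are made up for illustration):

    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    # a hypothetical DataFrame with a custom index and named columns
    df = pd.DataFrame({"col1": [1.0, 2.0, 3.0], "col2": [10.0, 20.0, 30.0]},
                      index=["a", "b", "c"])

    scaled = StandardScaler().fit_transform(df)
    print(type(scaled))  # <class 'numpy.ndarray'> -- column names and index are gone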

5 Answers
  • 2020-12-04 08:58

    You can try this code; it will give you a DataFrame that keeps its index and column names:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from sklearn.datasets import load_boston  # Boston housing dataset (removed in newer scikit-learn versions)

    boston = load_boston()
    dt = boston.data
    col = boston.feature_names

    # Make a dataframe
    df = pd.DataFrame(data=dt, columns=col)

    # define a method to scale data, looping through the columns and passing a scaler
    def scale_data(data, columns, scaler):
        for c in columns:
            data[c] = scaler.fit_transform(data[c].values.reshape(-1, 1))
        return data

    # specify a scaler, and call the method on the Boston data
    scaler = StandardScaler()
    df_scaled = scale_data(df, col, scaler)
    
    # view first 10 rows of the scaled dataframe
    df_scaled[0:10]
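
    As a side note, StandardScaler standardizes each column independently, so scaling column by column as above gives the same numbers as fitting on the whole frame at once. A quick self-contained check (a sketch with made-up data, not part of the original answer):

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    demo = pd.DataFrame({"a": [1.0, 2.0, 3.0], "b": [10.0, 40.0, 70.0]})

    # scale one column at a time
    per_column = demo.copy()
    for c in per_column.columns:
        per_column[c] = StandardScaler().fit_transform(per_column[[c]]).ravel()

    # scale the whole frame in one call
    whole_frame = StandardScaler().fit_transform(demo)

    print(np.allclose(per_column.values, whole_frame))  # True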
    
  • 2020-12-04 09:01

    You can mix multiple data types in scikit-learn using Neuraxle:

    Option 1: discard the row names and column names

    from neuraxle.pipeline import Pipeline
    from neuraxle.base import NonFittableMixin, BaseStep
    from sklearn.preprocessing import StandardScaler
    
    class PandasToNumpy(NonFittableMixin, BaseStep):
        def transform(self, data_inputs, expected_outputs): 
            return data_inputs.values
    
    pipeline = Pipeline([
        PandasToNumpy(),
        StandardScaler(),
    ])
    

    Then, you proceed as you intended:

    features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
    pipeline, scaled_features = pipeline.fit_transform(features)
    

    Option 2: keep the original column names and row names

    You could even do this with a wrapper as such:

    import pandas as pd
    from sklearn.preprocessing import StandardScaler
    from neuraxle.base import MetaStepMixin, BaseStep
    
    class PandasValuesChangerOf(MetaStepMixin, BaseStep):
        def transform(self, data_inputs, expected_outputs): 
            new_data_inputs = self.wrapped.transform(data_inputs.values)
            new_data_inputs = self._merge(data_inputs, new_data_inputs)
            return new_data_inputs
    
        def fit_transform(self, data_inputs, expected_outputs): 
            self.wrapped, new_data_inputs = self.wrapped.fit_transform(data_inputs.values)
            new_data_inputs = self._merge(data_inputs, new_data_inputs)
            return self, new_data_inputs
    
        def _merge(self, data_inputs, new_data_inputs): 
            new_data_inputs = pd.DataFrame(
                new_data_inputs,
                index=data_inputs.index,
                columns=data_inputs.columns
            )
            return new_data_inputs
    
    df_scaler = PandasValuesChangerOf(StandardScaler())
    

    Then, you proceed as you intended:

    features = df[["col1", "col2", "col3", "col4"]]  # ... your df data
    df_scaler, scaled_features = df_scaler.fit_transform(features)
    
  • 2020-12-04 09:03
    features = ["col1", "col2", "col3", "col4"]
    autoscaler = StandardScaler()
    df[features] = autoscaler.fit_transform(df[features])
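
    This works because assigning the scaled array back into df[features] writes the values into the existing columns, so the DataFrame keeps its index and column names. A minimal self-contained sketch (the data and column names are made up for illustration):

    import numpy as np
    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    # hypothetical data with a custom index and four numeric columns
    df = pd.DataFrame(np.random.rand(5, 4),
                      index=list("abcde"),
                      columns=["col1", "col2", "col3", "col4"])

    features = ["col1", "col2", "col3", "col4"]
    df[features] = StandardScaler().fit_transform(df[features])

    print(df.index.tolist())    # ['a', 'b', 'c', 'd', 'e'] -- index preserved
    print(df.columns.tolist())  # ['col1', 'col2', 'col3', 'col4'] -- names preserved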
    
  • 2020-12-04 09:07
    import pandas as pd    
    from sklearn.preprocessing import StandardScaler
    
    df = pd.read_csv('your file here')
    ss = StandardScaler()
    # pass the original index and columns so neither is lost
    df_scaled = pd.DataFrame(ss.fit_transform(df), index=df.index, columns=df.columns)
    

    df_scaled will be the 'same' dataframe, only now holding the scaled values.
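
    One caveat: this scales every column in the file, so it assumes the CSV is entirely numeric; StandardScaler will raise an error on string columns. A sketch of one way to scale only the numeric columns (the file path is a placeholder, as above):

    import pandas as pd
    from sklearn.preprocessing import StandardScaler

    df = pd.read_csv('your file here')  # placeholder path
    numeric_cols = df.select_dtypes(include='number').columns

    # scale only the numeric columns, leaving the rest untouched
    df_scaled = df.copy()
    df_scaled[numeric_cols] = StandardScaler().fit_transform(df[numeric_cols])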

  • 2020-12-04 09:15

    You could convert the DataFrame to a NumPy array using as_matrix(). Example on a random dataset:

    Edit: Changing as_matrix() to values (it doesn't change the result), per the last sentence of the as_matrix() docs:

    Generally, it is recommended to use ‘.values’.

    import pandas as pd
    import numpy as np #for the random integer example
    df = pd.DataFrame(np.random.randint(0, 100, size=(10, 4)),
                      index=range(10, 20),
                      columns=['col1', 'col2', 'col3', 'col4'],
                      dtype='float64')
    

    Note, indices are 10-19:

    In [14]: df.head(3)
    Out[14]:
        col1  col2  col3  col4
    10   3.0  38.0  86.0  65.0
    11  98.0   3.0  66.0  68.0
    12  88.0  46.0  35.0  68.0
    

    Now fit_transform the DataFrame to get the scaled_features array:

    from sklearn.preprocessing import StandardScaler
    scaled_features = StandardScaler().fit_transform(df.values)
    
    In [15]: scaled_features[:3,:] #lost the indices
    Out[15]:
    array([[-1.89007341,  0.05636005,  1.74514417,  0.46669562],
           [ 1.26558518, -1.35264122,  0.82178747,  0.59282958],
           [ 0.93341059,  0.37841748, -0.60941542,  0.59282958]])
    

    Assign the scaled data to a DataFrame (note: use the index and columns keyword arguments to keep your original indices and column names):

    scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
    
    In [17]:  scaled_features_df.head(3)
    Out[17]:
        col1    col2    col3    col4
    10  -1.890073   0.056360    1.745144    0.466696
    11  1.265585    -1.352641   0.821787    0.592830
    12  0.933411    0.378417    -0.609415   0.592830
    

    Edit 2:

    I came across the sklearn-pandas package. It's focused on making scikit-learn easier to use with pandas. sklearn-pandas is especially useful when you need to apply more than one type of transformation to column subsets of the DataFrame, a common scenario. It's documented, but this is how you'd achieve the transformation we just performed:

    from sklearn_pandas import DataFrameMapper
    
    mapper = DataFrameMapper([(df.columns, StandardScaler())])
    scaled_features = mapper.fit_transform(df.copy())
    scaled_features_df = pd.DataFrame(scaled_features, index=df.index, columns=df.columns)
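
    A version note: on newer scikit-learn (1.2 and later), the set_output API lets a transformer return a pandas DataFrame directly, keeping the input's column names and index, for example:

    from sklearn.preprocessing import StandardScaler

    # ask the scaler to emit pandas output instead of a numpy array
    scaler = StandardScaler().set_output(transform="pandas")
    scaled_features_df = scaler.fit_transform(df)  # same index and columns as df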
    