Issue with OneHotEncoder for categorical features

耶瑟儿~ 2020-12-05 13:20

I want to encode 3 categorical features out of 10 features in my dataset. I use preprocessing from sklearn.preprocessing to do so, as follows:

    from sklearn import preprocessing

    cat_features = ['color', 'director_name', 'actor_2_name']
    enc = preprocessing.OneHotEncoder(categorical_features=cat_features)
    enc.fit(dataset.values)
7 Answers
  • 2020-12-05 13:42

    If the dataset is in a pandas DataFrame, using pandas.get_dummies will be more straightforward.

    *corrected from pandas.get_getdummies to pandas.get_dummies
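
    For example, a minimal sketch (the frame and column names are made up to mirror the question):

    import pandas as pd

    # Hypothetical frame mirroring the question's three categorical columns
    df = pd.DataFrame({'color': ['red', 'blue'],
                       'director_name': ['x', 'y'],
                       'actor_2_name': ['a', 'b'],
                       'budget': [1.0, 2.0]})

    # One-hot encodes only the listed columns; 'budget' passes through untouched
    encoded = pd.get_dummies(df, columns=['color', 'director_name', 'actor_2_name'])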

  • 2020-12-05 13:42

    From the documentation:

    categorical_features : “all” or array of indices or mask
    Specify what features are treated as categorical.
    ‘all’ (default): All features are treated as categorical.
    array of indices: Array of categorical feature indices.
    mask: Array of length n_features and with dtype=bool.
    

    Column names of a pandas DataFrame won't work. If your categorical features are at column indices 0, 2 and 6, use:

    from sklearn import preprocessing
    cat_features = [0, 2, 6]
    enc = preprocessing.OneHotEncoder(categorical_features=cat_features)
    enc.fit(dataset.values)
    

    It must also be noted that if these categorical features are not label encoded, you need to use LabelEncoder on these features before using OneHotEncoder; see the sketch below.
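
    A minimal sketch of that two-step flow (assuming the pre-0.20 API, and that dataset is the questioner's DataFrame whose remaining columns are numeric):

    from sklearn import preprocessing

    cat_features = [0, 2, 6]
    X = dataset.values.copy()

    # Label-encode each categorical column in place first...
    for i in cat_features:
        X[:, i] = preprocessing.LabelEncoder().fit_transform(X[:, i])

    # ...then one-hot encode just those columns
    enc = preprocessing.OneHotEncoder(categorical_features=cat_features)
    X_encoded = enc.fit_transform(X)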

  • 2020-12-05 13:43

    If you read the docs for OneHotEncoder you'll see the input for fit is "Input array of type int". So you need two steps to get your one-hot encoded data:

    from sklearn import preprocessing

    cat_features = ['color', 'director_name', 'actor_2_name']
    enc = preprocessing.LabelEncoder()
    enc.fit(cat_features)
    new_cat_features = enc.transform(cat_features)
    print(new_cat_features)  # [1 2 0]
    new_cat_features = new_cat_features.reshape(-1, 1)  # Needs to be the correct shape
    ohe = preprocessing.OneHotEncoder(sparse=False)  # Easier to read
    print(ohe.fit_transform(new_cat_features))
    

    Output:

    [[ 0.  1.  0.]
     [ 0.  0.  1.]
     [ 1.  0.  0.]]
    

    EDIT

    As of 0.20 this became a bit easier, not only because OneHotEncoder now handles strings nicely, but also because we can transform multiple columns easily using ColumnTransformer; see below for an example.

    from sklearn.compose import ColumnTransformer
    from sklearn.preprocessing import OneHotEncoder
    import numpy as np

    X = np.array([['apple', 'red', 1, 'round', 0],
                  ['orange', 'orange', 2, 'round', 0.1],
                  ['banana', 'yellow', 2, 'long', 0],
                  ['apple', 'green', 1, 'round', 0.2]])
    ct = ColumnTransformer(
        [('oh_enc', OneHotEncoder(sparse=False), [0, 1, 3])],  # the column numbers I want to apply this to
        remainder='passthrough'  # This leaves the rest of my columns in place
    )
    print(ct.fit_transform(X))  # Notice the output is all strings, since X is a string array
    

    Output:

    [['1.0' '0.0' '0.0' '0.0' '0.0' '1.0' '0.0' '0.0' '1.0' '1' '0']
     ['0.0' '0.0' '1.0' '0.0' '1.0' '0.0' '0.0' '0.0' '1.0' '2' '0.1']
     ['0.0' '1.0' '0.0' '0.0' '0.0' '0.0' '1.0' '1.0' '0.0' '2' '0']
     ['1.0' '0.0' '0.0' '1.0' '0.0' '0.0' '0.0' '0.0' '1.0' '1' '0.2']]
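
    If you need the numbers back as floats, note the strings come from X itself being a (homogeneous) NumPy string array; assuming every passthrough column is numeric, a plain cast is enough:

    result = ct.fit_transform(X).astype(float)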
    
  • 2020-12-05 13:46

    @Medo,

    I encountered the same behavior and found it frustrating. As others have pointed out, Scikit-Learn requires all data to be numerical before it even considers selecting the columns provided in the categorical_features parameter.

    Specifically, the column selection is handled by the _transform_selected() method in /sklearn/preprocessing/data.py and the very first line of that method is

    X = check_array(X, accept_sparse='csc', copy=copy, dtype=FLOAT_DTYPES).

    This check fails if any of the data in the provided dataframe X cannot be successfully converted to a float.
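
    You can reproduce that failure in isolation with the same validator; a minimal sketch (the toy values are mine):

    import numpy as np
    from sklearn.utils import check_array

    X = np.array([['red', 10], ['blue', 20]], dtype=object)
    # Raises ValueError: could not convert string to float: 'red'
    check_array(X, dtype=np.float64)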

    I agree that the documentation of sklearn.preprocessing.OneHotEncoder is very misleading in that regard.

  • 2020-12-05 13:46

    There is a simple fix if, like me, you get frustrated by this. Simply use Category Encoders' OneHotEncoder. This is a scikit-learn-contrib package, so it plays super nicely with the scikit-learn API.

    This works as a near drop-in replacement and does the boring label encoding for you. Note that its column selector is the cols parameter (not categorical_features), and it accepts the DataFrame directly:

    from category_encoders import OneHotEncoder
    cat_features = ['color', 'director_name', 'actor_2_name']
    enc = OneHotEncoder(cols=cat_features)
    enc.fit(dataset)
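
    A self-contained sketch on a toy frame (the values are made up; use_cat_names just makes the output columns readable):

    import pandas as pd
    from category_encoders import OneHotEncoder

    df = pd.DataFrame({'color': ['red', 'blue', 'red'],
                       'budget': [1.0, 2.0, 3.0]})

    enc = OneHotEncoder(cols=['color'], use_cat_names=True)
    print(enc.fit_transform(df))  # one column per color; 'budget' passes through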
    
  • 2020-12-05 13:59

    You can apply both transformations (from text categories to integer categories, then from integer categories to one-hot vectors) in one shot using the LabelBinarizer class:

    from sklearn.preprocessing import LabelBinarizer

    cat_features = ['color', 'director_name', 'actor_2_name']
    encoder = LabelBinarizer()
    new_cat_features = encoder.fit_transform(cat_features)
    print(new_cat_features)
    

    Note that this returns a dense NumPy array by default. You can get a sparse matrix instead by passing sparse_output=True to the LabelBinarizer constructor.
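
    A minimal sketch:

    from sklearn.preprocessing import LabelBinarizer

    encoder = LabelBinarizer(sparse_output=True)
    sparse_matrix = encoder.fit_transform(['red', 'green', 'blue'])
    # sparse_matrix is a SciPy sparse matrix rather than a dense NumPy array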

    Source: Hands-On Machine Learning with Scikit-Learn and TensorFlow
