Using a transformer (estimator) to transform the target labels in sklearn.pipeline

天涯浪人 2020-12-10 14:38

I understand that one can chain several estimators that implement the transform method to transform X (the feature set) in sklearn.pipeline. However, I have a use case where

3 Answers
  • 2020-12-10 15:21

    There is now a nicer way to do this built into scikit-learn: compose.TransformedTargetRegressor.

    When constructing these objects you give them a regressor and a transformer. When you .fit() them they transform the targets before regressing, and when you .predict() them they transform their predicted targets back to the original space.

    It's important to note that you can pass them a pipeline object, so they should interface nicely with your existing setup. For example, take the following setup where I train a ridge regression to predict 1 target given 2 features:

    # Imports
    import numpy as np
    from sklearn import compose, linear_model, metrics, pipeline, preprocessing
    
    # Generate some training and test features and targets
    X_train = np.random.rand(200).reshape(100,2)
    y_train = 1.2*X_train[:, 0]+3.4*X_train[:, 1]+5.6
    X_test = np.random.rand(20).reshape(10,2)
    y_test = 1.2*X_test[:, 0]+3.4*X_test[:, 1]+5.6
    
    # Define my model and scalers
    ridge = linear_model.Ridge(alpha=1e-2)
    scaler = preprocessing.StandardScaler()
    minmax = preprocessing.MinMaxScaler(feature_range=(-1,1))
    
    # Construct a pipeline using these methods
    pipe = pipeline.make_pipeline(scaler, ridge)
    
    # Construct a TransformedTargetRegressor using this pipeline
    # ** So far the set-up has been standard **
    regr = compose.TransformedTargetRegressor(regressor=pipe, transformer=minmax)
    
    # Fit and train the regr like you would a pipeline
    regr.fit(X_train, y_train)
    y_pred = regr.predict(X_test)
    print("MAE: {}".format(metrics.mean_absolute_error(y_test, y_pred)))
    

    This still isn't quite as smooth as I'd like it to be. For example, you can access the regressor contained by a TransformedTargetRegressor via .regressor_, but the coefficients stored there apply to the transformed targets rather than the original space. This means there are some extra hoops to jump through if you want to work your way back to the equation that generated the data.
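    As a sketch of those "extra hoops": since the inner ridge maps standardized features to min-max-scaled targets, you can invert both scalings by hand to recover coefficients in the original space. This assumes the exact setup above (StandardScaler on X, MinMaxScaler on y, default make_pipeline step names); the algebra is y = (X_std @ w + b - min_) / scale_.

    ```python
    import numpy as np
    from sklearn import compose, linear_model, pipeline, preprocessing

    # Recreate the training data from the example above
    X_train = np.random.rand(200).reshape(100, 2)
    y_train = 1.2*X_train[:, 0] + 3.4*X_train[:, 1] + 5.6

    pipe = pipeline.make_pipeline(preprocessing.StandardScaler(),
                                  linear_model.Ridge(alpha=1e-2))
    regr = compose.TransformedTargetRegressor(
        regressor=pipe,
        transformer=preprocessing.MinMaxScaler(feature_range=(-1, 1)))
    regr.fit(X_train, y_train)

    # Fitted pieces: the inner pipeline steps and the fitted target transformer
    x_scaler = regr.regressor_.named_steps["standardscaler"]
    ridge = regr.regressor_.named_steps["ridge"]
    y_scaler = regr.transformer_  # fitted on y reshaped to (n, 1)

    # The ridge maps standardized X to min-max-scaled y:
    #   y_scaled = X_std @ w + b,  with X_std = (X - mean_) / scale_
    #   y_scaled = y * s + m,      with s = y_scaler.scale_, m = y_scaler.min_
    # Invert both to express y directly in terms of the raw features:
    s, m = y_scaler.scale_[0], y_scaler.min_[0]
    w, b = ridge.coef_, ridge.intercept_
    coef_orig = w / (x_scaler.scale_ * s)
    intercept_orig = (b - m - np.sum(w * x_scaler.mean_ / x_scaler.scale_)) / s

    print(coef_orig)       # close to [1.2, 3.4]
    print(intercept_orig)  # close to 5.6
    ```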

  • 2020-12-10 15:29

    No, pipelines will always pass y through unchanged. Do the transformation outside the pipeline.

    (This is a known design flaw in scikit-learn, but it's never been pressing enough to change or extend the API.)
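    A minimal sketch of doing the transformation outside the pipeline (the data and scalers here are illustrative, not from the question): fit the target scaler yourself, train the pipeline on the scaled targets, and remember to inverse-transform predictions.

    ```python
    import numpy as np
    from sklearn import linear_model, pipeline, preprocessing

    # Toy data: 2 features, 1 target
    X = np.random.rand(100, 2)
    y = 1.2*X[:, 0] + 3.4*X[:, 1] + 5.6

    # Transform y outside the pipeline...
    y_scaler = preprocessing.MinMaxScaler(feature_range=(-1, 1))
    y_scaled = y_scaler.fit_transform(y.reshape(-1, 1)).ravel()

    # ...and let the pipeline handle X only
    pipe = pipeline.make_pipeline(preprocessing.StandardScaler(),
                                  linear_model.Ridge(alpha=1e-2))
    pipe.fit(X, y_scaled)

    # Map predictions back to the original target space by hand
    y_pred = y_scaler.inverse_transform(pipe.predict(X).reshape(-1, 1)).ravel()
    ```

    The main cost of this approach is exactly that last line: every consumer of the model has to remember the inverse transform, which is what TransformedTargetRegressor automates.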

  • 2020-12-10 15:42

    You could append the label column to the end of the training data, apply your transformation, and then drop that column before fitting your model. It's not very elegant, but it works.
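    A quick sketch of that trick, using illustrative data: stack y onto X so a single scaler sees both, then split them apart again before fitting.

    ```python
    import numpy as np
    from sklearn import preprocessing

    # Illustrative data: 2 features, 1 target
    X = np.random.rand(100, 2)
    y = 1.2*X[:, 0] + 3.4*X[:, 1] + 5.6

    # Stack the label onto the features so one transformer scales both
    Xy = np.column_stack([X, y])
    Xy_scaled = preprocessing.MinMaxScaler(feature_range=(-1, 1)).fit_transform(Xy)

    # Split again before training the model
    X_scaled, y_scaled = Xy_scaled[:, :-1], Xy_scaled[:, -1]
    ```

    Note that, as with any manual target transformation, you would still need to keep the fitted scaler around to map predictions back to the original space.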
