Multivariate polynomial regression with numpy

死守一世寂寞 2020-12-02 23:38

I have many samples (y_i, (a_i, b_i, c_i)) where y is presumed to vary as a polynomial in a,b,c up to a certain degree. For example f
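(For concreteness, a degree-2 polynomial in the three variables would take the general form y ≈ c0 + c1*a + c2*b + c3*c + c4*a^2 + c5*b^2 + c6*c^2 + c7*a*b + c8*a*c + c9*b*c; this is a generic expansion shown for illustration, not necessarily the poster's intended example.)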

3 Answers
  •  轻奢々 2020-12-02 23:52

    sklearn provides a simple way to do this.

    Building off an example posted here:

    import numpy as np
    from sklearn import linear_model
    from sklearn.preprocessing import PolynomialFeatures
    
    #X is the independent variable (bivariate in this case)
    X = np.array([[0.44, 0.68], [0.99, 0.23]])
    
    #vector is the dependent data
    vector = [109.85, 155.72]
    
    #predict is an independent variable for which we'd like to predict the value
    predict = np.array([[0.49, 0.18]])
    
    #generate a model of polynomial features
    poly = PolynomialFeatures(degree=2)
    
    #transform the x data for proper fitting (for a single variable it returns [1, x, x**2])
    X_ = poly.fit_transform(X)
    
    #transform the prediction data the same way
    predict_ = poly.transform(predict)
    
    #here we can remove polynomial orders we don't want
    #for instance I'm removing the `x` component
    X_ = np.delete(X_, (1), axis=1)
    predict_ = np.delete(predict_, (1), axis=1)
    
    #generate the regression object
    clf = linear_model.LinearRegression()
    #perform the actual regression
    clf.fit(X_, vector)
    
    print("X_ = ", X_)
    print("predict_ = ", predict_)
    print("Prediction = ", clf.predict(predict_))
    

    And here's the output:

    X_ =  [[ 0.44    0.68    0.1936  0.2992  0.4624]
     [ 0.99    0.23    0.9801  0.2277  0.0529]]
    predict_ =  [[ 0.49    0.18    0.2401  0.0882  0.0324]]
    Prediction =  [ 126.84247142]
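
    If you also want to see which coefficient the fit assigned to each polynomial term, you can read them off the fitted estimator. Here is a minimal sketch on top of the code above (not part of the original answer), assuming scikit-learn >= 1.0 for get_feature_names_out; older versions expose get_feature_names instead.

    #map each remaining column of X_ to its fitted coefficient
    #get_feature_names_out needs scikit-learn >= 1.0
    names = poly.get_feature_names_out(["a", "b"])   #['1', 'a', 'b', 'a^2', 'a b', 'b^2']
    names = np.delete(names, 1)                      #mirror the column removed above
    for name, coef in zip(names, clf.coef_):
        print(name, "->", coef)
    print("intercept =", clf.intercept_)

    As a side note, if the column you want to drop is only the constant term, constructing the features with PolynomialFeatures(degree=2, include_bias=False) avoids the manual np.delete step.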
    
