Polynomial Regression nonsense Predictions

寵の児 · submitted on 2019-12-04 09:45:24

m1 is the right way to do this. With m2 you are entering a whole world of pain...

To do predictions from m2, the model needs to know it was fitted to an orthogonal set of basis functions, so that it uses the same basis functions for the extrapolated new data values. Compare poly(1:10,2)[,2] with poly(1:12,2)[,2]: the first ten values are not the same. If you fit the model with poly(x,2) explicitly in the formula, then predict understands all that and does the right thing.
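For example (a minimal sketch; the particular ranges are just illustrative):

# The orthogonal basis depends on the data poly() is given,
# so the same positions get different basis values:
poly(1:10, 2)[, 2]
poly(1:12, 2)[, 2][1:10]   # not equal to the line above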

What you have to do is make sure your prediction locations are transformed using the same set of basis functions that was used to create the model in the first place. You can use predict.poly for this (note I call my explanatory variables x1 and x2 so that it's easy to match the names up):

px = poly(x, 2)   # orthogonal basis built from the original x
x1 = px[, 1]
x2 = px[, 2]

m3 = lm(y ~ x1 + x2)

newx = 90:110
pnew = predict(px, newx) # px is the previous poly object, so this calls predict.poly

prd.3 = predict(m3, newdata = data.frame(x1 = pnew[, 1], x2 = pnew[, 2]))
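As a sanity check (a sketch that assumes x and y are the data from the question and that m1 was fitted as lm(y ~ poly(x, 2)), which is what the question implies), both routes should give essentially the same extrapolated predictions:

prd.1 = predict(m1, newdata = data.frame(x = newx))
all.equal(unname(prd.1), unname(prd.3))   # should be TRUE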