regression

Error in `contrasts<-`: contrasts can be applied only to factors with 2 or more levels

爷,独闯天下 submitted on 2019-12-01 21:18:01
I have trained a model and I am attempting to use the predict function, but it returns the following error:

Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
  contrasts can be applied only to factors with 2 or more levels

There are several questions on SO and CrossValidated about this, and as I interpret it, the error means one factor in my model has only one level. This is a pretty simple model, with one continuous variable (driveTime) and one factor variable (Market.y) which has 3 levels:

   driveTime        Market.y     transfer
 Min.   : 5.100   Dallas :10   Min.   :-11.205
 1st Qu.: 6.192   McAllen: 6   [output truncated]
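A common way to diagnose this error (a minimal sketch with hypothetical data standing in for the post's; the error fires whenever a factor handed to the model carries fewer than 2 levels, often after subsetting):

# Hypothetical data: one continuous predictor and one 3-level factor
df <- data.frame(
  transfer  = rnorm(16),
  driveTime = runif(16, 5, 9),
  Market.y  = factor(rep(c("Dallas", "McAllen", "Austin"), length.out = 16))
)

# Count the levels each factor actually carries; anything below 2
# triggers "contrasts can be applied only to factors with 2 or more levels"
sapply(Filter(is.factor, df), nlevels)

# Subsetting keeps unused levels in the factor definition; droplevels()
# removes them so nlevels() reflects the data you actually model
sub <- droplevels(df[df$Market.y != "Austin", ])
fit <- lm(transfer ~ driveTime + Market.y, data = sub)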

Keras regression clip values

南楼画角 submitted on 2019-12-01 19:07:16
I want to clip values; how could I do that? I tried using this:

from keras.backend.tensorflow_backend import clip
from keras.layers.core import Lambda
...
model.add(Dense(1))
model.add(Activation('linear'))
model.add(Lambda(lambda x: clip(x, min_value=200, max_value=1000)))

But no matter where I put my Lambda + clip, it does not affect anything.

It actually has to be implemented as a loss, at the model.compile step:

from keras import backend as K

def clipped_mse(y_true, y_pred):
    return K.mean(K.square(K.clip(y_pred, 0., 1900.) - K.clip(y_true, 0., 1900.)), axis=-1)

model.compile(loss=clipped_mse, ...)
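For reference, a rough equivalent in the keras R interface (a sketch under assumptions: the keras R package is attached, a model object already exists, and the optimizer choice is mine, not the post's):

library(keras)

# Clip both predictions and targets before computing the MSE,
# mirroring the Python clipped_mse above
clipped_mse <- function(y_true, y_pred) {
  k_mean(k_square(k_clip(y_pred, 0, 1900) - k_clip(y_true, 0, 1900)))
}

model %>% compile(loss = clipped_mse, optimizer = "adam")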

Error: please supply starting values

試著忘記壹切 submitted on 2019-12-01 18:43:18
I am conducting a log binomial regression in R. I want to control for covariates in the model (age and BMI, both continuous variables); the dependent variable is Outcome (Yes or No) and the independent variable is Group (1 or 2).

fit <- glm(Outcome ~ Group, data = data.1, family = binomial(link = "log"))

and it works fine. When I try putting age in the model, it still works fine. However, when I put BMI in the model, it gives me the following:

Error: no valid set of coefficients has been found: please supply starting values

I have tried different combinations of starting values, such as:

fit <- glm( [truncated]
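One frequently suggested workaround (a sketch, assuming a data frame data.1 with columns Outcome, Group, age and BMI as in the post): pass explicit starting values through glm's start argument, e.g. the log of the overall event rate for the intercept and zeros for the slopes:

# start needs one value per coefficient:
# intercept + Group + age + BMI = 4 values here
p0 <- mean(data.1$Outcome == "Yes")   # overall event rate
fit <- glm(Outcome ~ Group + age + BMI,
           data = data.1,
           family = binomial(link = "log"),
           start = c(log(p0), 0, 0, 0))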

smooth.spline(): fitted model does not match user-specified degrees of freedom

非 Y 不嫁゛ submitted on 2019-12-01 18:17:10
Here is the code I ran:

fun <- function(x) {1 + 3*sin(4*pi*x - pi)}
set.seed(1)
num.samples <- 1000
x <- runif(num.samples)
y <- fun(x) + rnorm(num.samples) * 1.5
fit <- smooth.spline(x, y, all.knots = TRUE, df = 3)

Despite df = 3, when I checked the fitted model, the output was:

Call:
smooth.spline(x = x, y = y, df = 3, all.knots = TRUE)

Smoothing Parameter  spar= 1.499954  lambda= 0.002508571 (26 iterations)
Equivalent Degrees of Freedom (Df): 9.86422

Could someone please help? Thanks!

Answer 1: Note that from R-3.4.0 (2017-04-21), smooth.spline can accept direct specification of λ by a newly added argument lambda.
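Two ways to act on that (a sketch; the explanation of the bound is my own reading of the output: the reported spar = 1.499954 sits at smooth.spline's default upper search limit of 1.5, which is why the requested df = 3 was never reached):

# Widen the spar search interval so heavier smoothing (lower df)
# becomes reachable
fit2 <- smooth.spline(x, y, all.knots = TRUE, df = 3,
                      control.spar = list(high = 3))

# Or, from R 3.4.0 on, bypass the df/spar search and pass the
# smoothing parameter directly
fit3 <- smooth.spline(x, y, all.knots = TRUE, lambda = 0.2)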

glmnet: How do I know which factor level of my response is coded as 1 in logistic regression

こ雲淡風輕ζ submitted on 2019-12-01 18:10:41
I have a logistic regression model that I made using the glmnet package. My response variable was coded as a factor, the levels of which I will refer to as "a" and "b". The mathematics of logistic regression label one of the two classes as "0" and the other as "1". The feature coefficients of a logistic regression model are either positive, negative, or zero. If a feature "f" has a positive coefficient, then increasing the value of "f" for a test observation x increases the probability that the model classifies x as being of class "1". My question is: given a glmnet model, how do you know which factor level is coded as "1"?
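For what it's worth (a sketch, not from the post): glmnet follows the usual R convention that the second level of the response factor is treated as the "1" class, so levels() on the response gives the mapping:

library(glmnet)

set.seed(1)
y <- factor(sample(c("a", "b"), 40, replace = TRUE))
x <- matrix(rnorm(40 * 3), nrow = 40)

fit <- glmnet(x, y, family = "binomial")

# The second level is modelled as "1"; positive coefficients push
# predictions toward this class
levels(y)[2]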

R plot.gam Error “Error in 1:object$nsdf : argument of length 0”

佐手、 submitted on 2019-12-01 18:04:47
I am trying to plot a gam object in R, which I made with the gam package. I receive the same error reported in "Error in 1:object$nsdf : argument of length 0 when using plot.gam". However, the solution found there, updating to the latest versions (I think), is not working for me. I am running R 3.3.1, gam 1.12, and mgcv 1.8.12 (mgcv is where the plot.gam function is from). Unfortunately, I cannot share the data I am working with. However, the following code, pulled directly from p. 294 of Introduction to Statistical Learning with R, reproduces the error for me:

library(gam)
library(ISLR)
# [truncated]
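A hedged sketch of the usual fix (my assumption, not stated in the post): gam and mgcv both define a plot method for objects of class "gam", so with both packages attached, mgcv's method can intercept gam-package objects and fail on the missing nsdf component. Keeping mgcv detached avoids the clash:

# If mgcv is attached, detach it so its plot method stops
# intercepting gam-package objects
detach("package:mgcv", unload = TRUE)

library(gam)
library(ISLR)

# The ISLR p. 294 example, fit with the gam package's gam()
gam.m3 <- gam(wage ~ s(year, 4) + s(age, 5) + education, data = Wage)
plot(gam.m3, se = TRUE, col = "blue")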

geom_smooth on a subset of data

…衆ロ難τιáo~ submitted on 2019-12-01 15:27:23
Here is some data and a plot:

set.seed(18)
data <- data.frame(y = c(rep(0:1, 3), rnorm(18, mean = 0.5, sd = 0.1)),
                   colour = rep(1:2, 12),
                   x = rep(1:4, each = 6))
ggplot(data, aes(x = x, y = y, colour = factor(colour))) +
  geom_point() +
  geom_smooth(method = 'lm', formula = y ~ x, se = FALSE)

As you can see, the linear regression is highly influenced by the values where x = 1. Can I get linear regressions calculated for x >= 2 but still display the values for x = 1 (where y equals either 0 or 1)? The resulting graph would be exactly the same except for the linear regressions: they would not "suffer" from the influence of the values at x = 1. [truncated]
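A sketch of the standard approach (reusing the data frame above): give geom_smooth its own data argument so the fit only sees x >= 2, while geom_point still draws every point:

library(ggplot2)

ggplot(data, aes(x = x, y = y, colour = factor(colour))) +
  geom_point() +
  geom_smooth(data = subset(data, x >= 2),
              method = 'lm', formula = y ~ x, se = FALSE)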

Multivariate LSTM Forecast Loss and evaluation

时光怂恿深爱的人放手 submitted on 2019-12-01 14:12:25
I have a CNN-RNN model architecture with bidirectional LSTMs for a time-series regression problem. My loss does not converge over 50 epochs; each epoch has 20k samples. The loss keeps bouncing between 0.001 and 0.01.

batch_size = 1
epochs = 50
model.compile(loss='mean_squared_error', optimizer='adam')
trainingHistory = model.fit(trainX, trainY, epochs=epochs, batch_size=batch_size, shuffle=False)

I tried to train the model with incorrectly paired X and Y data, for which the loss stays around 0.5. Is it a reasonable conclusion that my X and Y have a non-linear relationship which can be learned by my model?
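One way to make that shuffled-pairing baseline explicit (a sketch in the keras R interface, under assumptions: a compiled model object and vector targets, with names mirroring the post's trainX/trainY):

library(keras)

set.seed(1)

# Break the X-Y pairing by permuting the targets; if the properly
# paired fit reaches a much lower loss than this baseline, the model
# is picking up genuine structure rather than memorising noise
baseline_history <- model %>% fit(
  trainX, sample(trainY),
  epochs = 50, batch_size = 1, shuffle = FALSE
)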