predict

predict with Keras fails due to faulty environment setup

北城以北, submitted 2019-12-01 22:45:49
I can't get Keras to predict anything, not even with this minimal model:

```python
from keras.models import Sequential
from keras.layers import Dense
import numpy as np

inDim = 3
outDim = 1
model = Sequential()
model.add(Dense(5, input_dim=inDim, activation='relu'))
model.add(Dense(outDim, activation='sigmoid'))
model.compile(loss='mse', optimizer='adam', metrics=['accuracy'])

test_input = np.zeros((1, inDim))
test_output = np.zeros((1, outDim))
model.fit(test_input, test_output)
prediction = model.predict(test_input)
```

Everything goes as expected until the last line:

```
Epoch 1/1
1/1 [=====================
```
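For reference, what `predict()` computes for this two-layer architecture can be written out in plain NumPy — a sketch, not the poster's model; the weights here are illustrative zeros, which is also why the output is exactly 0.5:

```python
import numpy as np

# Hand-rolled forward pass matching the question's architecture:
# Dense(5, relu) followed by Dense(1, sigmoid). Weights are illustrative.
inDim, outDim = 3, 1
W1, b1 = np.zeros((inDim, 5)), np.zeros(5)
W2, b2 = np.zeros((5, outDim)), np.zeros(outDim)

def forward(x):
    h = np.maximum(0.0, x @ W1 + b1)                 # relu layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))      # sigmoid layer

test_input = np.zeros((1, inDim))
prediction = forward(test_input)   # shape (1, 1); sigmoid(0) = 0.5
```

If this pure-NumPy version runs but `model.predict()` hangs or crashes, that points at the TensorFlow/Keras installation rather than the model code.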

Error in `contrasts<-`

爷,独闯天下, submitted 2019-12-01 21:18:01
I have trained a model and I am attempting to use the predict function, but it returns the following error:

```
Error in `contrasts<-`(`*tmp*`, value = contr.funs[1 + isOF[nn]]) :
  contrasts can be applied only to factors with 2 or more levels
```

There are several questions on SO and Cross Validated about this, and as I interpret the error, one factor in my model has only one level. This is a pretty simple model, with one continuous variable (driveTime) and one factor variable which has 3 levels:

```
   driveTime        Market.y       transfer
 Min.   : 5.100   Dallas :10   Min.   :-11.205
 1st Qu.: 6.192   McAllen: 6
```
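A minimal way to reproduce and diagnose this error — a sketch with made-up data; only the column names `driveTime` and `Market.y` are taken from the question:

```r
# Sketch: a factor column with a single level triggers the contrasts error.
set.seed(1)
df <- data.frame(y = rnorm(6),
                 driveTime = runif(6, 5, 9),
                 Market.y = factor(rep("Dallas", 6)))  # only one level

nlevels(df$Market.y)  # 1 -> lm()/glm() would raise the contrasts error
# lm(y ~ driveTime + Market.y, data = df)

# Common causes and fixes:
# - the levels collapsed after subsetting: inspect table(df$Market.y)
#   and refit on droplevels(df);
# - if only one level genuinely exists in the data used for fitting,
#   drop that variable from the formula.
```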

How to solve predict.lm() error: variable 'affinity' was fitted with type "nmatrix.1" but type "numeric" was supplied

99封情书, submitted 2019-12-01 21:15:29
I have a simple linear model:

```r
mylm = lm(formula = prodRate ~ affinity, mydf)
```

where mydf is a data frame which looks like:

```
   prodRate    affinity
1 2643.5744 0.005164040
2 2347.6923 0.004439970
3 1783.6819 0.003322830
```

When I use predict.lm(), an error comes up:

```r
my_pred = predict(mylm, newdata = data.frame(affinity = seq(0, 1, 0.1)))
Error: variable 'affinity' was fitted with type "nmatrix.1" but type "numeric" was supplied
```

Why is that? How to fix it? Thanks!

Thanks to the discussion with user20650 (see above), the bug was identified: the mydf in `mylm = lm(formula = prodRate ~ affinity, mydf)` was created by
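The "nmatrix.1" type usually means the column was a one-column matrix rather than a numeric vector, e.g. the result of scale() or cbind(). A hedged sketch of the problem and the usual fix (the values are illustrative, not the poster's data):

```r
mydf <- data.frame(prodRate = c(2643.57, 2347.69, 1783.68))
mydf$affinity <- scale(c(0.00516, 0.00444, 0.00332))  # a 3x1 matrix!
class(mydf$affinity)  # matrix -> lm() records affinity as "nmatrix.1"

# Fix: coerce the column to a plain numeric vector before fitting,
# so newdata built from a numeric vector matches the recorded type.
mydf$affinity <- as.numeric(mydf$affinity)
mylm <- lm(prodRate ~ affinity, mydf)
my_pred <- predict(mylm, newdata = data.frame(affinity = seq(0, 1, 0.1)))
```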

predict.glm() with three new categories in the test data (r)(error)

倾然丶 夕夏残阳落幕, submitted 2019-12-01 14:13:34
I have a data set called data which has 481 092 rows. I split data into two equal halves: the first half (rows 1:240 546) is called train and was used for the glm(); the second half (rows 240 547:481 092) is called test and should be used to validate the model. Then I started the regression:

```r
testreg <- glm(train$returnShipment ~ train$size + train$color + train$price +
               train$manufacturerID + train$salutation + train$state +
               train$age + train$deliverytime,
               family = binomial(link = "logit"), data = train)
```

Now the prediction:

```r
prediction <- predict.glm(testreg, newdata = test, type = "response")
```

gives
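A sketch of the usual fix for this pattern (column names taken from the question; `train` and `test` are assumed to exist): refer to columns by bare name in the formula so predict() actually uses newdata, and give the test factors the training levels so unseen categories become NA instead of breaking prediction:

```r
# 1) Bare column names, so the model's terms match test's column names.
testreg <- glm(returnShipment ~ size + color + price + manufacturerID +
               salutation + state + age + deliverytime,
               family = binomial(link = "logit"), data = train)

# 2) Align factor levels; categories absent from train turn into NA.
for (v in c("size", "color", "manufacturerID", "salutation", "state")) {
  test[[v]] <- factor(test[[v]], levels = levels(train[[v]]))
}

prediction <- predict(testreg, newdata = test, type = "response")
```

With `train$size`-style terms in the original formula, predict() silently ignores newdata and scores the training rows instead, which is why newdata-based validation fails.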

tensorflow serving prediction as b64 output top result

这一生的挚爱, submitted 2019-12-01 14:05:12
I have a Keras model I am converting to a TensorFlow Serving model. I can successfully convert my pretrained Keras model to take b64 input, preprocess that input, and feed it to my model. My problem is that I don't know how to take the prediction data I am getting (which is enormous) and export only the top result. I am doing image segmentation, so my output prediction is of shape (?, 473, 473, 3), and I'd like to get the top result and return it in b64-encoded format. What I have currently just returns the entire prediction:

```python
sess = K.get_session()
g = sess.graph
g_def = graph_util.convert
```
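A hedged sketch of the reduction step (not the poster's serving graph): take the argmax over the class channel, which collapses the (473, 473, 3) score tensor to a (473, 473) label mask and shrinks the payload three-fold before any b64 encoding:

```python
import numpy as np

# Reduce per-pixel class scores to the winning class index.
def top_result(prediction):
    # prediction: (473, 473, 3) array of per-class scores
    return np.argmax(prediction, axis=-1).astype(np.uint8)  # (473, 473)

pred = np.zeros((473, 473, 3), dtype=np.float32)
pred[..., 2] = 1.0            # make class 2 the winner at every pixel
mask = top_result(pred)
```

Inside a TensorFlow graph the same reduction is `tf.argmax(prediction, axis=-1)` applied before the export signature; the uint8 mask can then be PNG-encoded and base64-wrapped for the response.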

R explain on Lime - Feature names stored in `object` and `newdata` are different

拜拜、爱过, submitted 2019-12-01 11:17:07
Hi, I was working with R's explain() on a LIME model. All is fine when I run this portion:

```r
# Libraries
library(tm)
library(SnowballC)
library(caTools)
library(RWeka)
library(caret)
library(text2vec)
library(lime)

# Importing the dataset
dataset_original = read.delim('Restaurant_Reviews.tsv', quote = '',
                              stringsAsFactors = FALSE)
dataset_original$Liked = as.factor(dataset_original$Liked)

# Splitting the dataset into the Training set and Test set
set.seed(123)
split = sample.split(dataset_original$Liked, SplitRatio = 0.8)
training_set = subset(dataset_original, split == TRUE)
test_set = subset
```
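The "Feature names stored in `object` and `newdata` are different" error typically means the data handed to explain() went through different preprocessing than the data the explainer was built on. A hedged sketch of the expected call pattern (`model` and the text column name are assumptions, not from the question):

```r
library(lime)

# Build the explainer on the SAME object the model was trained from,
# then pass explain() data of the same kind, so both sides produce
# identical feature names after preprocessing.
explainer <- lime(training_set$Review, model)
explanation <- explain(test_set$Review[1:2], explainer,
                       n_labels = 1, n_features = 5)
```

Passing, say, a document-term matrix with columns built from the test set alone would yield feature names that differ from the training vocabulary, which is exactly what this error reports.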

Rolling regression and prediction with lm() and predict()

好久不见., submitted 2019-12-01 09:25:38
I need to apply lm() to an enlarging subset of my data frame dat, while making a prediction for the next observation. For example:

```
fit          predict
----------   ----------------
dat[1:3, ]   dat[4, ]
dat[1:4, ]   dat[5, ]
...          ...
dat[-1, ]    dat[nrow(dat), ]
```

I know what I should do for a particular subset (related to this question: predict() and newdata - How does this work?). For example, to predict the last row, I do:

```r
dat1 = dat[1:(nrow(dat)-1), ]
dat2 = dat[nrow(dat), ]
fit = lm(log(clicks) ~ log(v1) + log(v12), data = dat1)
predict.fit = predict(fit, newdata = dat2, se.fit = TRUE)
```

How can I do this
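One straightforward way to run the whole enlarging-window scheme is a loop — a sketch; the formula and the columns clicks, v1, v12 come from the question, and dat is assumed to exist:

```r
# Fit on rows 1:i, predict row i+1, for i = 3 .. nrow(dat)-1.
preds <- vector("list", nrow(dat) - 3)
for (i in 3:(nrow(dat) - 1)) {
  fit <- lm(log(clicks) ~ log(v1) + log(v12), data = dat[1:i, ])
  preds[[i - 2]] <- predict(fit, newdata = dat[i + 1, , drop = FALSE],
                            se.fit = TRUE)
}
```

The `drop = FALSE` keeps the single-row newdata as a data frame rather than letting `[` simplify it to a vector.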

How to reproduce predict.svm in R? [closed]

百般思念, submitted 2019-11-30 23:42:39
I want to train an SVM classifier in R and be able to use it in other software by exporting the relevant parameters. To do so, I first want to be able to reproduce the behavior of predict.svm() in R (using the e1071 package). I trained the model on the iris data:

```r
library(e1071)

data(iris)
# simplify the data by removing the third label
ir <- iris[1:100, ]
ir$Species <- as.factor(as.integer(ir$Species))

# train the model
m <- svm(Species ~ ., data = ir, cost = 8)

# the model internally uses a scaled version of the data, for example:
m$x.scale
# $`scaled:center`
# Sepal.Length  Sepal.Width Petal.Length
```
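For a binary SVM with the default RBF kernel, the decision value that predict.svm() thresholds can be reconstructed from the fields e1071 stores on the fitted object — a sketch, assuming the default kernel and the model `m` from above; mapping the sign back to the label requires `m$labels`:

```r
# decision(x) = sum_i coefs_i * K(SV_i, x) - rho,
# with K(u, v) = exp(-gamma * ||u - v||^2), on the SCALED input.
manual_decision <- function(m, x) {
  xs <- (x - m$x.scale$`scaled:center`) / m$x.scale$`scaled:scale`
  k <- apply(m$SV, 1, function(sv) exp(-m$gamma * sum((sv - xs)^2)))
  sum(m$coefs * k) - m$rho
}

# The sign of manual_decision(m, unlist(ir[1, 1:4])) should agree with
# predict(m, ir[1, 1:4], decision.values = TRUE).
```

Exporting m$SV, m$coefs, m$rho, m$gamma, and the two x.scale vectors is then enough to evaluate the classifier in other software.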