prediction

AttributeError: 'Model' object has no attribute 'predict_classes'

五迷三道 submitted on 2019-11-27 16:23:56
Question: I'm trying to predict on the validation data with pre-trained and fine-tuned DL models. The code follows the example available in the Keras blog on "building image classification models using very little data". Here is the code:

import numpy as np
from keras.preprocessing.image import ImageDataGenerator
import matplotlib.pyplot as plt
from keras.models import Sequential
from keras.models import Model
from keras.layers import Flatten, Dense
from sklearn.metrics import classification_report
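The error itself is a known API difference: predict_classes() exists only on the Sequential model, not on the functional Model class that fine-tuning examples typically build. A minimal sketch of the usual workaround, taking class indices from the probability output directly (model and validation_data are placeholder names, not the asker's objects):

import numpy as np

# probs has shape (n_samples, n_classes) for a softmax output
probs = model.predict(validation_data)
pred_classes = np.argmax(probs, axis=-1)

# for a single sigmoid output, threshold instead:
# pred_classes = (probs > 0.5).astype("int32").ravel()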

Circle-Circle Collision Prediction

戏子无情 submitted on 2019-11-27 15:09:16
Question: I'm aware of how to check if two circles are intersecting one another. However, sometimes the circles move too fast and end up avoiding collision on the next frame. My current solution to the problem is to check circle-circle collision an arbitrary number of times between the previous position and its current position. Is there a mathematical way to find the time it takes for the two circles to collide? If I were able to get that time value, I could move the circle to the position at that time
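One common closed-form approach (a sketch, assuming both velocities are constant over the frame): with relative position p = p1 - p2 and relative velocity v = v1 - v2, first contact happens when |p + v*t| = r1 + r2, which is a quadratic in t:

import math

def time_of_collision(p1, v1, r1, p2, v2, r2):
    # Return the earliest t >= 0 at which the two circles touch, or None.
    px, py = p1[0] - p2[0], p1[1] - p2[1]   # relative position
    vx, vy = v1[0] - v2[0], v1[1] - v2[1]   # relative velocity
    r = r1 + r2
    a = vx * vx + vy * vy
    b = 2 * (px * vx + py * vy)
    c = px * px + py * py - r * r
    if a == 0:                               # no relative motion
        return 0.0 if c <= 0 else None
    disc = b * b - 4 * a * c
    if disc < 0:                             # the paths never get close enough
        return None
    t = (-b - math.sqrt(disc)) / (2 * a)     # smaller root = moment of first contact
    return t if t >= 0 else None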

Predict next event occurrence, based on past occurrences

社会主义新天地 submitted on 2019-11-27 11:53:09
Question: I'm looking for an algorithm or example material to study for predicting future events based on known patterns. Perhaps there is a name for this and I just don't know or remember it. Something this general may not exist, but I'm not a master of math or algorithms, so I'm here asking for direction. An example, as I understand it, would be something like this: a static event occurs on January 1st, February 1st, March 3rd, April 4th. A simple solution would be to average the days/hours/minutes
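The simple averaging the asker describes can be sketched directly: average the gaps between past occurrences and project one more gap beyond the last one (the dates below are invented for illustration):

from datetime import date, timedelta

# hypothetical past occurrences of the recurring event
occurrences = [date(2019, 1, 1), date(2019, 2, 1), date(2019, 3, 3), date(2019, 4, 4)]

# average the intervals between consecutive occurrences
gaps = [(b - a).days for a, b in zip(occurrences, occurrences[1:])]
mean_gap = sum(gaps) / len(gaps)

# predict the next occurrence one mean interval after the last observed one
next_occurrence = occurrences[-1] + timedelta(days=mean_gap)
print(next_occurrence)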

xgboost in R: how does xgb.cv pass the optimal parameters into xgb.train

安稳与你 submitted on 2019-11-27 09:19:24
Question: I've been exploring the xgboost package in R and went through several demos as well as tutorials, but this still confuses me: after using xgb.cv to do cross validation, how do the optimal parameters get passed to xgb.train? Or should I calculate the ideal parameters (such as nround, max.depth) based on the output of xgb.cv?

param <- list("objective" = "multi:softprob",
              "eval_metric" = "mlogloss",
              "num_class" = 12)
cv.nround <- 11
cv.nfold <- 5
mdcv <- xgb.cv(data=dtrain, params = param
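Conceptually, xgb.cv only evaluates a parameter set; nothing is handed to xgb.train automatically, so you read the best boosting round (and any tuned parameters) off the CV output and pass them to the training call yourself. A rough sketch of that workflow, written against the Python xgboost API to keep the added examples on this page in one language (params and dtrain are placeholders):

import xgboost as xgb

# cross-validate one fixed parameter set; early stopping finds the best round count
cv_results = xgb.cv(params, dtrain, num_boost_round=500, nfold=5,
                    early_stopping_rounds=10)
best_nrounds = len(cv_results)   # one row per boosting round that survived

# nothing was passed automatically: retrain on the full data with that round count
booster = xgb.train(params, dtrain, num_boost_round=best_nrounds)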

Working with neuralnet in R for the first time: get “requires numeric/complex matrix/vector arguments”

北城余情 submitted on 2019-11-27 07:26:50
I'm in the process of attempting to learn to work with neural networks in R. As a learning problem, I've been using the following problem over at Kaggle. Don't worry, this problem is specifically designed for people to learn with; there's no reward tied to it. I started with a simple logistic regression, which was great for getting my feet wet. Now I'd like to learn to work with neural networks. My training data looks like this (Column: Value):
- survived: 1
- pclass: 3
- sex: male
- age: 22.0
- sibsp: 1
- parch: 0
- ticket: PC 17601
- fare: 7.25
- cabin: C85
- embarked: S
My starting R code
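The error in the title usually means a character or factor column is being handed to a routine that expects a numeric matrix, so fields like sex, ticket, cabin, and embarked need to be encoded numerically before any neural network sees them. A rough sketch of that encoding step, shown in Python/pandas only to illustrate the idea (the file path and column selection are assumptions based on the listing above):

import pandas as pd

train = pd.read_csv("train.csv")  # hypothetical path to the Titanic training file

# fill numeric gaps first: neural-network code cannot digest NaN
train["age"] = train["age"].fillna(train["age"].median())

# one-hot encode the categorical columns so everything becomes numeric
features = pd.get_dummies(
    train[["pclass", "sex", "age", "sibsp", "parch", "fare", "embarked"]],
    columns=["sex", "embarked"], dummy_na=True)
target = train["survived"]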

Predicting a multiple forward time step of a time series using LSTM

独自空忆成欢 submitted on 2019-11-27 05:36:13
Question: I want to predict certain values that are weekly predictable (low SNR). I need to predict the whole time series of a year, formed by the weeks of that year (52 values - Figure 1). My first idea was to develop a many-to-many LSTM model (Figure 2) using Keras over TensorFlow. I'm training the model with a 52-step input layer (the given time series of the previous year) and a 52-step predicted output layer (the time series of the next year). The shape of train_X is (X_examples, 52, 1), in other words, X_examples to
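A minimal sketch of a many-to-many architecture that matches those shapes (layer size, loss, and optimizer here are placeholders; the only assumption is 52 input steps mapping to 52 output steps with one feature each):

from keras.models import Sequential
from keras.layers import LSTM, TimeDistributed, Dense

model = Sequential()
# return_sequences=True emits one output per input time step: 52 in, 52 out
model.add(LSTM(64, return_sequences=True, input_shape=(52, 1)))
model.add(TimeDistributed(Dense(1)))
model.compile(loss="mse", optimizer="adam")

# train_X: (X_examples, 52, 1), train_Y: (X_examples, 52, 1)
# model.fit(train_X, train_Y, epochs=100, batch_size=16)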

Prediction using Recurrent Neural Network on Time series dataset

我怕爱的太早我们不能终老 submitted on 2019-11-27 05:13:32
Question: Description: Given a dataset that has 10 sequences - a sequence corresponds to a day of stock value recordings - where each constitutes 50 sample recordings of stock values, separated by 5-minute intervals starting from the morning at 9:05 am. However, there is one extra recording (the 51st sample) that is only available in the training set and is 2 hours later, not 5 minutes, than the last recorded sample in the 50-sample recordings. That 51st sample is required to be predicted for
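One natural way to frame this (a sketch, assuming the 50 in-day samples are the input sequence and the single 51st value is the regression target, i.e. many-to-one):

import numpy as np
from keras.models import Sequential
from keras.layers import LSTM, Dense

# placeholder arrays with the described shapes:
# 10 days, 50 five-minute samples each, 1 feature; one later 51st value per day
X = np.random.rand(10, 50, 1)
y = np.random.rand(10, 1)

model = Sequential()
model.add(LSTM(32, input_shape=(50, 1)))  # final hidden state summarises the day
model.add(Dense(1))                       # regression output: the 51st sample
model.compile(loss="mse", optimizer="adam")
model.fit(X, y, epochs=10, batch_size=2, verbose=0)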

Accuracy Score ValueError: Can't Handle mix of binary and continuous target

浪子不回头ぞ submitted on 2019-11-27 00:56:04
I'm using linear_model.LinearRegression from scikit-learn as a predictive model. It works and it's perfect. I have a problem evaluating the predicted results using the accuracy_score metric.
This is my true data:
array([1, 1, 0, 0, 0, 0, 1, 1, 0, 0, 1, 1, 0, 0, 0, 0, 0])
My predicted data:
array([ 0.07094605, 0.1994941 , 0.19270157, 0.13379635, 0.04654469, 0.09212494, 0.19952108, 0.12884365, 0.15685076, -0.01274453, 0.32167554, 0.32167554, -0.10023553, 0.09819648, -0.06755516, 0.25390082, 0.17248324])
My code:
accuracy_score(y_true, y_pred, normalize=False)
Error message:
ValueError: Can't
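accuracy_score is a classification metric, so it rejects the continuous output of LinearRegression; the usual choices are to threshold the predictions into hard labels or to switch to a regression metric. A small sketch of both options (using y_true and y_pred as named in the question, and an assumed 0.5 cut-off):

import numpy as np
from sklearn.metrics import accuracy_score, r2_score, mean_squared_error

# option 1: convert the continuous scores into 0/1 labels, then score accuracy
y_pred_class = (np.asarray(y_pred) > 0.5).astype(int)
print(accuracy_score(y_true, y_pred_class))

# option 2: keep the continuous output and use a regression metric instead
print(r2_score(y_true, y_pred))
print(mean_squared_error(y_true, y_pred))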

Extract prediction band from lme fit

两盒软妹~` submitted on 2019-11-26 12:37:38
Question: I have the following model:

x <- rep(seq(0, 100, by=1), 10)
y <- 15 + 2*rnorm(1010, 10, 4)*x + rnorm(1010, 20, 100)
id <- NULL
for(i in 1:10){ id <- c(id, rep(i, 101)) }
dtfr <- data.frame(x=x, y=y, id=id)
library(nlme)
with(dtfr, summary(lme(y~x, random=~1+x|id, na.action=na.omit)))
model.mx <- with(dtfr, (lme(y~x, random=~1+x|id, na.action=na.omit)))
pd <- predict(model.mx, newdata=data.frame(x=0:100), level=0)
with(dtfr, plot(x, y))
lines(0:100, predict(model.mx, newdata=data.frame(x=0:100),
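predict() for an lme fit does not return standard errors, so the band is usually assembled by hand from the fixed-effects covariance matrix. As a sketch of the standard formulas (not the asker's code): for a new design row \(x_0\), the population-level standard error and an approximate band are

\[
\operatorname{se}(\hat y_0) = \sqrt{x_0^{\top}\,\widehat{\operatorname{Var}}(\hat\beta)\,x_0},
\qquad
\hat y_0 \pm z_{1-\alpha/2}\,\operatorname{se}(\hat y_0),
\]

while a prediction band for a new observation from a new group additionally adds the random-effects and residual variances under the square root, i.e. \(x_0^{\top}\hat G\,x_0 + \hat\sigma^2\), with \(\hat G\) the estimated covariance of the random intercept and slope.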

How does predict.lm() compute confidence interval and prediction interval?

微笑、不失礼 submitted on 2019-11-26 11:22:40
I ran a regression:

CopierDataRegression <- lm(V1~V2, data=CopierData1)

and my task was to obtain a 90% confidence interval for the mean response given V2=6 and a 90% prediction interval when V2=6. I used the following code:

X6 <- data.frame(V2=6)
predict(CopierDataRegression, X6, se.fit=TRUE, interval="confidence", level=0.90)
predict(CopierDataRegression, X6, se.fit=TRUE, interval="prediction", level=0.90)

and I got (87.3, 91.9) and (74.5, 104.8), which seems to be correct since the PI should be wider. The output for both also included se.fit = 1.39, which was the same. I don't understand what
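The reason se.fit is identical in both calls is that it is only the standard error of the estimated mean response; the two intervals differ in how that error is combined with the residual variance. For a simple linear regression the textbook formulas are (a sketch, with \(\hat\sigma\) the residual standard error reported by summary(CopierDataRegression)):

\[
\text{CI: } \hat y_0 \pm t_{1-\alpha/2,\,n-2}\cdot \text{se.fit},
\qquad
\text{PI: } \hat y_0 \pm t_{1-\alpha/2,\,n-2}\cdot \sqrt{\text{se.fit}^2 + \hat\sigma^2},
\]

which is why the prediction interval is always the wider of the two.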