machine-learning

Random_state's contribution to accuracy

江枫思渺然 submitted on 2021-02-17 06:30:51
Question: OK, this is interesting: I ran the same code several times and got a different accuracy_score each time. I realized I was not setting a random_state value in train_test_split, so I used random_state=0 and got a consistent accuracy_score of 82%. But then I thought I would try a different random_state, set random_state=128, and the accuracy_score became 84%. Now I need to understand why that is, and how random_state affects the accuracy of the model.
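What follows is a minimal sketch (plain Python, made-up toy data) of the mechanism behind this: random_state only pins the shuffle order, so different seeds put different rows, including different noisy rows, into the held-out test set, and the measured accuracy moves even though the model itself is unchanged.

```python
import random

# Hypothetical toy data: feature i, label 1 when i >= 10, with a few
# deliberately noisy labels (every 7th row is flipped).
data = [(i, int(i >= 10) ^ (i % 7 == 0)) for i in range(20)]

def split_and_score(random_state):
    """Shuffle with a fixed seed, hold out 5 rows as a test set, and score
    a fixed threshold 'model' (predict 1 when the feature is >= 10)."""
    rng = random.Random(random_state)   # the seed pins the shuffle order
    rows = data[:]
    rng.shuffle(rows)
    test = rows[:5]                     # a different seed -> a different test set
    correct = sum(1 for x, y in test if int(x >= 10) == y)
    return correct / len(test)

# Same data, same 'model' -- only the split changes with the seed, so the
# two accuracies can legitimately differ.
print(split_and_score(0), split_and_score(128))
```

The takeaway: neither 82% nor 84% is "the" accuracy; both are estimates on one particular split. Cross-validation averages this split-to-split variance away.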

R error: “Error in check.data : Argument Should be Numeric”

微笑、不失礼 submitted on 2021-02-17 05:14:45
Question: I am learning about the "kohonen" library for the R programming language. I created some artificial data to try some of its functions on. Using the supersom() function on only continuous (i.e. type = as.numeric) data works fine. However, when I run supersom() on both continuous and categorical (type = as.factor) data, I start running into errors ("Argument data should be numeric"). The supersom() function has an argument called "dist.fct"

Predicting a new value with a model trained on one-hot encoded data

送分小仙女 submitted on 2021-02-17 04:44:05
Question: This might look like a trivial problem, but I am stuck on getting predictions out of a model. My situation is this: I have a dataset of shape 1000 x 19 (excluding the target feature), but after one-hot encoding it becomes 1000 x 141. Since I trained the model on data of shape 1000 x 141, I need data of shape 1 x 141 (at least) for prediction. I also know that in Python I can make a prediction with model.predict(data). But since I am getting data from an end user through a
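One common approach, sketched below in plain Python with hypothetical column names: persist the list of one-hot column names produced at training time, then expand each incoming raw user record against that same list, so a single row becomes a 1 x N vector aligned with the 141 training columns (pandas users typically get the same effect with get_dummies followed by reindex against the saved columns).

```python
# Hypothetical: the column layout produced by one-hot encoding at training
# time. In practice you would save this list (e.g. pickle it) next to the model.
train_columns = ["age", "color_red", "color_green", "color_blue", "city_NY", "city_LA"]

def encode_record(record, columns):
    """Expand one raw user record into a vector aligned with the training
    columns. Numeric fields pass through; categorical fields become 0/1
    indicator columns. Unseen categories leave all their indicators at 0."""
    row = dict.fromkeys(columns, 0)
    for key, value in record.items():
        if key in row:                     # numeric feature, used as-is
            row[key] = value
        else:                              # categorical: set the matching dummy
            dummy = f"{key}_{value}"
            if dummy in row:
                row[dummy] = 1
    return [row[c] for c in columns]       # 1 x len(columns), model-ready

vec = encode_record({"age": 34, "color": "green", "city": "LA"}, train_columns)
print(vec)  # -> [34, 0, 1, 0, 0, 1]
```

The key design point is that the encoder is driven by the saved training-time column list, never by the categories present in the incoming record, so the vector shape always matches what the model expects.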

Unable to make predictions on Google Cloud ML, whereas the same model works on the local machine

梦想与她 submitted on 2021-02-17 03:55:06
Question: I am trying to train a machine learning model using the TensorFlow library on Google Cloud. I am able to train the model in the cloud after creating a bucket, but I run into an issue when I try to make predictions with the existing model. The code and the data are available in the following GitHub repository: https://github.com/terminator172/game-price-predictions The TensorFlow version on the cloud is 1.8 and the TensorFlow version on my system is also 1.8. I tried to make predictions by

TypeError: tuple indices must be integers or slices, not list - while loading a Keras model

£可爱£侵袭症+ submitted on 2021-02-16 22:03:14
Question: In short, I have two trained models, one trained on 2 classes, the other on 3 classes. My code loads a model, loads an image, and predicts a classification result: finetune_model = tf.keras.models.load_model(modelPath) model = load_model(my_file) img = image.load_img(img_path, target_size=(img_width, img_height)) x = image.img_to_array(img) x = np.expand_dims(x, axis=0) x = preprocess_input(x) preds = model.predict(x) The model file is of .h5 type. When loading the 2-class trained model, it
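For context on the error class itself (independent of the Keras internals, which the excerpt does not show): the message comes from Python refusing a list as a tuple index, and with load_model it is frequently reported when the library versions used to save and to load the .h5 file differ, so comparing versions is a reasonable first check. A minimal reproduction of the error mechanism:

```python
# Tuples accept integer or slice indices, but not a list of indices.
shape = (224, 224, 3)
print(shape[0])       # 224 -- integer index is fine
print(shape[0:2])     # (224, 224) -- slice is fine
try:
    shape[[0, 1]]     # a list index raises the TypeError from the title
except TypeError as e:
    print(e)          # tuple indices must be integers or slices, not list
```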

Is the dropout layer still active in a frozen Keras model (i.e. trainable=False)?

|▌冷眼眸甩不掉的悲伤 submitted on 2021-02-16 21:27:25
Question: I have two trained models (model_A and model_B), and both of them have dropout layers. I have frozen model_A and model_B and merged them with a new dense layer to get model_AB (but I have not removed model_A's and model_B's dropout layers). model_AB's weights will be non-trainable, except for those of the added dense layer. Now my question is: are the dropout layers in model_A and model_B active (i.e. do they drop neurons) when I am training model_AB? Answer 1: Short answer: the dropout layers will continue
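A minimal sketch (plain Python, mimicking the Keras semantics the answer describes) of why freezing does not silence dropout: trainable only controls whether weights receive gradient updates, while dropping units is governed by the call-time training flag, and the two are independent.

```python
import random

class Dropout:
    """Toy dropout layer: 'trainable' would control weight updates (dropout
    has no weights anyway), while the call-time 'training' flag controls
    whether units are actually dropped."""
    def __init__(self, rate, trainable=True):
        self.rate = rate
        self.trainable = trainable   # irrelevant to whether units are dropped

    def __call__(self, inputs, training=False):
        if not training:             # inference: identity, regardless of trainable
            return list(inputs)
        # training: zero out units with probability `rate` (no rescaling of
        # the survivors here, for simplicity; real Keras dropout rescales)
        return [0.0 if random.random() < self.rate else x for x in inputs]

layer = Dropout(rate=0.5, trainable=False)   # a "frozen" layer
x = [1.0] * 10

print(layer(x, training=False))  # unchanged: dropout inactive at inference
print(layer(x, training=True))   # typically some zeros: drops even though frozen
```

So when model_AB is trained, Keras calls the frozen sub-models with training=True, and their dropout layers keep dropping neurons unless that behavior is explicitly overridden.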