neural-network

Keras model accuracy not improving

北城以北 submitted on 2020-12-15 05:35:38
Question: I'm trying to train a neural network to predict the ratings of players in FIFA 18 by EA Sports (ratings are between 64 and 99). I'm using their players database (https://easports.com/fifa/ultimate-team/api/fut/item?page=1) and I've processed the data into training_x, testing_x, training_y, testing_y. Each training sample is a numpy array containing 7 values: the first 6 are the player's different stats (shooting, passing, dribbling, etc.) and the last value is the position of the
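The snippet above is truncated, but a minimal sketch of the kind of model being described might look like the following, assuming 7-value input arrays and a single continuous rating target between 64 and 99. The layer sizes, hyperparameters, and the random placeholder data are illustrative assumptions, not the asker's actual code.

```python
import numpy as np
from tensorflow import keras
from tensorflow.keras import layers

# Hypothetical stand-ins for the asker's processed arrays:
# training_x has shape (n_samples, 7), training_y holds ratings in [64, 99].
training_x = np.random.rand(1000, 7).astype("float32")
training_y = np.random.uniform(64, 99, size=(1000,)).astype("float32")

# Treat the rating as a regression target rather than a class label.
model = keras.Sequential([
    keras.Input(shape=(7,)),
    layers.Dense(64, activation="relu"),
    layers.Dense(32, activation="relu"),
    layers.Dense(1),                      # linear output for a continuous rating
])
model.compile(optimizer="adam", loss="mse", metrics=["mae"])
model.fit(training_x, training_y, epochs=10, batch_size=32, validation_split=0.1)
```

One general point worth noting for a setup like this: "accuracy" is only meaningful for classification, so if a continuous rating is being predicted, mean absolute error is the metric that will actually show progress.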

ModuleNotFoundError: No module named 'six.moves.collections_abc'

ⅰ亾dé卋堺 submitted on 2020-12-15 05:34:23
Question: I am just starting with machine learning. I am following this tutorial from Weights & Biases, where they gave us some code and asked us to run it, but I am unable to run it. First I was getting the error "Keras requires TensorFlow 2.2 or higher", for which I tried this method: following the advice given here, downgrading Keras did the trick for me without having to touch any other packages, just do pip install keras==2.3.0 (from the link Error "Keras requires TensorFlow 2.2 or higher"). Then I started
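A quick way to narrow this down is to check which version of six is installed. As far as I recall from the six changelog, six.moves.collections_abc only exists in six 1.13.0 and newer, so this error usually points at an older copy of six on the path; treat that version number as an assumption and verify it against your environment. A minimal check:

```python
# Minimal sketch: confirm the installed six version before changing anything else.
import six

print(six.__version__)                    # collections_abc needs a reasonably recent six
from six.moves import collections_abc     # raises ImportError/ModuleNotFoundError on old six
print(collections_abc.Mapping)            # sanity check that the alias resolves
```

If the import fails, upgrading six (for example with pip install --upgrade six) is the usual fix rather than downgrading Keras further.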

QLearning network in a custom environment is choosing the same action every time, despite the heavy negative reward

牧云@^-^@ submitted on 2020-12-15 04:35:09
Question: So I plugged QLearningDiscreteDense into a dots-and-boxes game I made, and created a custom MDP environment for it. The problem is that it chooses action 0 every time; the first time this works, but after that the action is no longer available, so it's an illegal move. I give illegal moves a reward of Integer.MIN_VALUE, but it doesn't affect anything. Here's the MDP class: public class testEnv implements MDP<testState, Integer, DiscreteSpace> { final private int maxStep; DiscreteSpace actionSpace =
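A heavily negative reward on its own is often not enough, because the agent only learns to avoid the action after repeatedly experiencing that penalty; the more direct approach is to mask out illegal actions at selection time so the greedy choice is only taken over currently legal moves. The asker's code is DL4J/Java; the sketch below is a plain NumPy illustration of the masking idea only, with a hypothetical helper name, and is not tied to the asker's environment or to the DL4J API.

```python
import numpy as np

def masked_greedy_action(q_values, legal_actions):
    """Pick the highest-Q action among the currently legal ones.

    q_values: 1-D array of Q estimates, one per action.
    legal_actions: iterable of action indices that are valid in this state.
    """
    masked = np.full_like(q_values, -np.inf)          # illegal actions can never win argmax
    masked[list(legal_actions)] = q_values[list(legal_actions)]
    return int(np.argmax(masked))

# Example: action 0 has the highest raw Q but is no longer legal.
q = np.array([5.0, 1.2, 3.4, 0.7])
print(masked_greedy_action(q, legal_actions=[1, 2, 3]))  # -> 2
```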

Multi-Label Image Classification

假装没事ソ submitted on 2020-12-15 02:03:24
Question: I tried it myself but couldn't reach the final point, which is why I'm posting here; please guide me. I am working on multi-label image classification and have a slightly different scenario. I am confused about how to map the labels and their attributes to IDs so they can be used for training and testing. Here is the code I am working on: import os import numpy as np import pandas as pd from keras.utils import to_categorical from collections import Counter from keras.callbacks import Callback from
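On the mapping question, a common pattern is to build one multi-hot target vector per image ID, with one column per label. The sketch below uses scikit-learn's MultiLabelBinarizer on a hypothetical (id, labels) table; the column names and example labels are placeholders, not the asker's dataset.

```python
import pandas as pd
from sklearn.preprocessing import MultiLabelBinarizer

# Hypothetical label table: one row per image ID, labels stored as a list per row.
df = pd.DataFrame({
    "id": ["img_001", "img_002", "img_003"],
    "labels": [["red", "shirt"], ["blue", "jeans"], ["red", "jeans"]],
})

# Turn the label lists into a fixed-width multi-hot matrix, one column per class.
mlb = MultiLabelBinarizer()
y = mlb.fit_transform(df["labels"])
print(mlb.classes_)   # column order, e.g. ['blue' 'jeans' 'red' 'shirt']
print(y)              # multi-hot targets aligned row-for-row with df["id"]
```

Each row of y then lines up with the image referenced by the same row's id, and a Keras model trained on these targets would typically end in a sigmoid output layer with binary cross-entropy loss, since the labels are not mutually exclusive.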

PyTorch equivalent features in TensorFlow?

跟風遠走 submitted on 2020-12-13 03:37:48
Question: I was recently reading some PyTorch code and came across the loss.backward() and optimizer.step() functions. Are there any equivalents of these in TensorFlow/Keras? Answer 1: The loss.backward() equivalent in TensorFlow is tf.GradientTape(). TensorFlow provides the tf.GradientTape API for automatic differentiation, i.e. computing the gradient of a computation with respect to its input variables. TensorFlow "records" all operations executed inside the context of a tf.GradientTape onto a "tape". TensorFlow then
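To make the correspondence concrete, here is a minimal custom training step: the tape's gradient call plays the role of loss.backward(), and optimizer.apply_gradients plays the role of optimizer.step(). The model and data below are placeholders for illustration only.

```python
import tensorflow as tf

# Placeholder model and data, just to demonstrate one training step.
model = tf.keras.Sequential([tf.keras.layers.Dense(1)])
optimizer = tf.keras.optimizers.Adam()
loss_fn = tf.keras.losses.MeanSquaredError()
x = tf.random.normal((32, 4))
y = tf.random.normal((32, 1))

with tf.GradientTape() as tape:
    predictions = model(x, training=True)
    loss = loss_fn(y, predictions)

# Equivalent of loss.backward(): gradients of the loss w.r.t. the trainable weights.
grads = tape.gradient(loss, model.trainable_variables)
# Equivalent of optimizer.step(): apply those gradients to the weights.
optimizer.apply_gradients(zip(grads, model.trainable_variables))
```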

How to plot the ROC AUC curve for each fold in KFold cross-validation using a Keras neural network classifier

Deadly submitted on 2020-12-13 03:12:57
Question: I really need to plot the ROC curve for each fold of a 5-fold cross-validation using a Keras ANN. I have tried the code from the following link: https://scikit-learn.org/stable/auto_examples/model_selection/plot_roc_crossval.html#sphx-glr-auto-examples-model-selection-plot-roc-crossval-py It works perfectly fine when I use the SVM classifier as shown there, but when I use the wrapper to plug in a Keras ANN model it shows errors. I have been stuck on this for months now. Can anyone please help me
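One common way around the wrapper problems is to skip the scikit-learn wrapper entirely: loop over the folds manually, fit a fresh Keras model per fold, and feed its predicted probabilities to roc_curve. The sketch below assumes a binary problem; the architecture, epoch count, and synthetic data are placeholders, not the asker's code.

```python
import matplotlib.pyplot as plt
from sklearn.datasets import make_classification
from sklearn.metrics import auc, roc_curve
from sklearn.model_selection import StratifiedKFold
from tensorflow import keras

# Synthetic binary-classification data standing in for the real dataset.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)

def build_model():
    model = keras.Sequential([
        keras.Input(shape=(X.shape[1],)),
        keras.layers.Dense(16, activation="relu"),
        keras.layers.Dense(1, activation="sigmoid"),
    ])
    model.compile(optimizer="adam", loss="binary_crossentropy")
    return model

cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
for fold, (train_idx, test_idx) in enumerate(cv.split(X, y)):
    model = build_model()                        # fresh, untrained model per fold
    model.fit(X[train_idx], y[train_idx], epochs=20, verbose=0)
    probs = model.predict(X[test_idx]).ravel()   # positive-class probabilities
    fpr, tpr, _ = roc_curve(y[test_idx], probs)
    plt.plot(fpr, tpr, label=f"Fold {fold + 1} (AUC = {auc(fpr, tpr):.2f})")

plt.plot([0, 1], [0, 1], linestyle="--", color="grey")  # chance line
plt.xlabel("False positive rate")
plt.ylabel("True positive rate")
plt.legend()
plt.show()
```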