deep-learning

What should be the target in this deep learning image classification problem?

断了今生、忘了曾经 submitted on 2020-05-17 07:07:02
Question: I am doing an image classification project using a CNN in Keras. I have a dataset of about 900 photos of about 70 people. Each person has multiple photos taken at different ages. My goal is to predict the correct ID of a person given any one of their photos as input. Here is a glimpse of the data. My questions are: 1. What should my target column be: 'AGE' or 'ID'? 2. Do I need to do one-hot encoding of the target column? For example, if I use ID as my target, do I have to do one-hot…
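For a person-identification task like this, the natural target is the person ID, one-hot encoded for a softmax output. Below is a minimal sketch of that encoding with a standard Keras utility; the example ID values are made up, not taken from the post's data:

```python
import numpy as np
from tensorflow.keras.utils import to_categorical

# Hypothetical integer IDs, one per photo (70 distinct people overall).
ids = np.array([0, 3, 3, 69, 12])

y = to_categorical(ids, num_classes=70)   # shape (5, 70), one one-hot row per photo

# The matching model head would be Dense(70, activation='softmax'),
# compiled with loss='categorical_crossentropy'. Alternatively, keep the
# raw integer IDs and use loss='sparse_categorical_crossentropy' instead.
```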

Error "IndexError": How to predict input image using trained model in Keras?

孤者浪人 submitted on 2020-05-17 07:05:44
Question: I trained a model to classify images from 9 classes and saved it using model.save(). Here is the code I used:
from keras.applications.resnet50 import ResNet50, preprocess_input
from keras.layers import Dense, Dropout
from keras.models import Model
from keras.optimizers import Adam, SGD
from keras.preprocessing.image import ImageDataGenerator, image
from keras.callbacks import EarlyStopping, ModelCheckpoint
from sklearn.metrics import confusion_matrix, classification_report, accuracy_score
…
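The excerpt cuts off before the prediction step, but the standard pattern for single-image inference with a saved ResNet50-based classifier is sketched below. The model path, image path, and input size are assumptions, not details from the post; an IndexError at this stage often comes from indexing a class-name list with an out-of-range label, so checking that the list has 9 entries is a cheap sanity check.

```python
import numpy as np
from keras.models import load_model
from keras.preprocessing import image
from keras.applications.resnet50 import preprocess_input

model = load_model('my_model.h5')                 # assumed save path

# Preprocess one image exactly as during training (224x224 for ResNet50).
img = image.load_img('test.jpg', target_size=(224, 224))
x = preprocess_input(np.expand_dims(image.img_to_array(img), axis=0))

probs = model.predict(x)                          # shape (1, 9) for 9 classes
print('predicted class index:', int(np.argmax(probs[0])))
```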

My model doesn't seem to work, as accuracy and loss are 0

会有一股神秘感。 submitted on 2020-05-17 06:42:26
Question: I tried to design an LSTM network using Keras, but the accuracy is 0.00 while the loss value is 0.05. The code I wrote is below.
model = tf.keras.models.Sequential()
model.add(tf.keras.layers.Flatten())
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(128, activation=tf.nn.relu))
model.add(tf.keras.layers.Dense(1, activation=tf.nn.relu))
def percentage_difference(y_true, y_pred):
    return K.mean(abs(y_pred / y_true - 1) * 100)
model.compile…
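The snippet ends at model.compile, but the single-unit output and percentage-error metric indicate a regression setup, where "accuracy" (an exact-match classification metric) will naturally sit at 0; note also that the posted layers contain no LSTM, only Flatten and Dense. A sketch of a compile call consistent with a regression head follows; the optimizer and loss choices are my assumptions, not from the post:

```python
import tensorflow as tf
from tensorflow.keras import backend as K

def percentage_difference(y_true, y_pred):
    # Mean absolute percentage deviation of predictions from targets.
    return K.mean(K.abs(y_pred / y_true - 1) * 100)

model = tf.keras.models.Sequential([
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    tf.keras.layers.Dense(1),            # linear output suits regression
])

# Track a regression metric; 'accuracy' is meaningless for real-valued targets.
model.compile(optimizer='adam', loss='mse', metrics=[percentage_difference])
```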

How does inverting the dropout compensate for the effect of dropout and keep expected values unchanged?

南笙酒味 submitted on 2020-05-16 04:42:25
Question: I'm learning about regularization in neural networks from the deeplearning.ai course. In the dropout-regularization lecture, the professor says that if dropout is applied, the calculated activation values will be smaller than when dropout is not applied (at test time), so we need to scale the activations in order to keep the testing phase simpler. I understand this fact, but I don't understand how the scaling is done. Here is a code sample which is used to implement inverted dropout.
keep_prob = 0.8  # 0 <=…
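The scaling in question is the division by keep_prob: each unit survives with probability keep_prob, so dividing the surviving activations by keep_prob keeps the layer's expected output the same as without dropout, and test time then needs no adjustment. A minimal numpy sketch of inverted dropout for one layer, following the course's a3/d3 naming (the activation values here are made up):

```python
import numpy as np

keep_prob = 0.8                      # each unit is kept with probability 0.8
a3 = np.random.rand(4, 5)            # stand-in activations for layer 3

# Drop units: a boolean mask that keeps ~80% of units.
d3 = np.random.rand(*a3.shape) < keep_prob
a3 = a3 * d3

# Invert: scale the survivors by 1/keep_prob so E[a3 after dropout]
# equals E[a3 without dropout]; test time uses activations unscaled.
a3 = a3 / keep_prob
```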

Ray Tune: How do schedulers and search algorithms interact?

二次信任 submitted on 2020-05-16 03:20:51
Question: It seems to me that the natural way to integrate Hyperband with a Bayesian-optimization search is to have the search algorithm determine each bracket and have the Hyperband scheduler run the bracket; that is to say, the Bayesian-optimization search runs only once per bracket. Looking at Tune's source code for this, it's not clear to me whether the Tune library applies this strategy or not. In particular, I want to know how the Tune library handles passing between the search algorithm and…
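As I understand Tune's design (worth verifying against the version in use), the two components are composed orthogonally: the search algorithm proposes one configuration per trial and the scheduler independently pauses or stops running trials, interacting only through reported results rather than through bracket planning. A sketch of how they are combined, assuming a Ray 1.x-era API; the trainable, metric name, and sample count are placeholders:

```python
from ray import tune
from ray.tune.schedulers import AsyncHyperBandScheduler
from ray.tune.suggest.hyperopt import HyperOptSearch

def trainable(config):
    # Toy objective: report a loss that depends on the sampled lr.
    for step in range(100):
        tune.report(mean_loss=(config["lr"] - 0.01) ** 2 + 1.0 / (step + 1))

analysis = tune.run(
    trainable,
    config={"lr": tune.loguniform(1e-4, 1e-1)},
    search_alg=HyperOptSearch(),          # Bayesian-style suggester
    scheduler=AsyncHyperBandScheduler(),  # early-stopping scheduler
    metric="mean_loss",
    mode="min",
    num_samples=20,
)
```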

How to set and track weight decays?

匆匆过客 submitted on 2020-05-16 02:31:47
Question: What is a guideline for setting weight decays (e.g. the l2 penalty)? And, mainly, how do I track whether it is "working" throughout training, i.e. whether weights are actually decaying, and by how much, compared to no l2 penalty?
Answer 1: A common approach is "try a range of values, see what works", but its pitfall is a lack of orthogonality: l2=2e-4 may work best in network X but not in network Y. A workaround is to guide weight decays in a subnetwork manner: (1) group layers (e.g. Conv1D…
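One concrete way to track whether the penalty is "working" is to log each layer's weight norm at the end of every epoch and compare the curves against a run without the penalty. A minimal Keras callback sketch to that effect (my own illustration, not part of the truncated answer above):

```python
import numpy as np
import tensorflow as tf

class WeightNormLogger(tf.keras.callbacks.Callback):
    """Log the l2 norm of each layer's kernel after every epoch."""
    def on_epoch_end(self, epoch, logs=None):
        for layer in self.model.layers:
            if hasattr(layer, 'kernel'):
                norm = float(np.linalg.norm(layer.kernel.numpy()))
                print(f"epoch {epoch}: {layer.name} ||W||_2 = {norm:.4f}")

# Usage: model.fit(x, y, epochs=10, callbacks=[WeightNormLogger()])
```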

Pytorch: RuntimeError: reduce failed to synchronize: cudaErrorAssert: device-side assert triggered

亡梦爱人 submitted on 2020-05-15 05:09:25
Question: I am running into the following error when trying to train this on this dataset. Since this is the configuration published in the paper, I am assuming I am doing something incredibly wrong. This error occurs on a different image every time I try to run training.
C:/w/1/s/windows/pytorch/aten/src/THCUNN/ClassNLLCriterion.cu:106: block: [0,0,0], thread: [6,0,0] Assertion `t >= 0 && t < n_classes` failed.
Traceback (most recent call last):
  File "C:\Program Files\JetBrains\PyCharm Community…
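This device-side assert fires when a target label fed to the NLL/cross-entropy loss is negative or not less than the number of classes, which would also explain why it triggers on a different image each run. A quick sanity check, sketched below under the assumption that the dataset yields (input, target) pairs; the function and its arguments are illustrative placeholders:

```python
import torch
from torch.utils.data import DataLoader

def check_labels(dataset, n_classes, batch_size=64):
    """Report any targets outside the valid range [0, n_classes)."""
    for i, (_, targets) in enumerate(DataLoader(dataset, batch_size=batch_size)):
        bad = (targets < 0) | (targets >= n_classes)
        if bad.any():
            print(f"batch {i}: offending labels {targets[bad].tolist()}")

# Running the model on CPU, or with CUDA_LAUNCH_BLOCKING=1, turns the
# asynchronous device-side assert into a readable Python traceback.
```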