deep-learning

Problem Using Keras Sequential Model for “reinforcelearn” Package in R

大城市里の小女人 submitted on 2020-05-30 12:19:38

Question: I am trying to use a Keras (version 2.2.50) neural network / sequential model to create a simple agent in a reinforcement learning setting with the reinforcelearn package (version 0.2.1), following this vignette: https://cran.r-project.org/web/packages/reinforcelearn/vignettes/agents.html . This is the code I use: library('reinforcelearn') library('keras') model = keras_model_sequential() %>% layer_dense(units = 10, input_shape = 4, activation = "linear") %>% compile(optimizer = optimizer…
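The R snippet above is cut off at the compile() call. For reference, a minimal sketch of the same network in Python Keras (assuming a 4-dimensional state and 10 linear output units, as in the R code; the optimizer and loss choices are assumptions, not taken from the vignette):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Illustrative Python equivalent of the truncated R model:
# a 4-dimensional state mapped to 10 linear output units.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    layers.Dense(10, activation="linear"),
])
model.compile(optimizer="sgd", loss="mse")  # optimizer/loss are assumptions
```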

Convolutional Neural Network seems to be randomly guessing

旧城冷巷雨未停 submitted on 2020-05-29 06:39:26

Question: So I am currently trying to build a race recognition program using a convolutional neural network. I'm inputting 200px by 200px versions of the UTKFaceRegonition dataset (I put my dataset on a Google Drive if you want to take a look). I'm using 8 different classes (4 races × 2 genders) with Keras and TensorFlow, each class having about 700 images, but I have done it with 1000. The problem is that when I run the network it gets at best 13.5% accuracy and about 11-12.5% validation accuracy, with a loss around…
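Chance-level accuracy (~12.5% for 8 classes) is often caused by feeding unscaled [0, 255] pixel values or by a mismatched output layer/loss pairing. A minimal sketch of a CNN for this setup, assuming 200×200 RGB inputs and one-hot labels (the layer sizes are illustrative, not the asker's actual architecture):

```python
import tensorflow as tf
from tensorflow.keras import layers

num_classes = 8  # 4 races x 2 genders, as in the question

model = tf.keras.Sequential([
    tf.keras.Input(shape=(200, 200, 3)),
    layers.Rescaling(1.0 / 255),  # unscaled pixels are a common cause
                                  # of chance-level accuracy
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(num_classes, activation="softmax"),  # class probabilities
])
model.compile(optimizer="adam", loss="categorical_crossentropy",
              metrics=["accuracy"])
```

With softmax plus categorical_crossentropy (or logits plus from_logits=True, but not both), the loss matches one-hot targets; sparse_categorical_crossentropy would be the choice for integer labels.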

What does “model.trainable = False” mean in Keras?

烈酒焚心 submitted on 2020-05-29 05:23:55

Question: I want to freeze a pre-trained network in Keras. I found base.trainable = False in the documentation, but I didn't understand how it works. With len(model.trainable_weights) I found out that I have 30 trainable weights. How can that be? The network shows total trainable params: 16,812,353. After freezing I have 4 trainable weights. Maybe I don't understand the difference between params and weights. Unfortunately I am a beginner in deep learning. Maybe someone can help me. Answer 1: A Keras Model…
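The distinction is that trainable_weights counts weight *tensors* (e.g. one kernel and one bias per dense layer), while the params count tallies the scalar entries inside those tensors. A small sketch illustrating this (toy layer sizes, not the asker's network):

```python
import tensorflow as tf
from tensorflow.keras import layers

model = tf.keras.Sequential([
    tf.keras.Input(shape=(4,)),
    layers.Dense(3),  # one kernel + one bias tensor; 4*3 + 3 = 15 params
    layers.Dense(2),  # another kernel + bias;        3*2 + 2 =  8 params
])

# "weights" counts tensors, "params" counts the scalars inside them.
assert len(model.trainable_weights) == 4   # 2 layers x (kernel, bias)
assert model.count_params() == 23          # 15 + 8 scalar parameters

model.trainable = False                    # freeze the whole model
assert len(model.trainable_weights) == 0
assert len(model.non_trainable_weights) == 4
```

So 30 trainable weights simply means 30 weight tensors, which together hold the 16,812,353 scalar parameters; freezing moves tensors from the trainable to the non-trainable list without changing the total.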

Why doesn't Keras need the gradient of a custom loss function?

≯℡__Kan透↙ submitted on 2020-05-29 04:19:25

Question: To my understanding, in order to update model parameters through gradient descent, the algorithm needs to calculate at some point the derivative of the error function E with respect to the output y: dE/dy. Nevertheless, I've seen that if you want to use a custom loss function in Keras, you simply need to define E and you don't need to define its derivative. What am I missing? Each loss function will have a different derivative; for example, if the loss function is the mean squared error: dE/dy = 2…
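The missing piece is automatic differentiation: Keras builds the loss out of differentiable primitives and derives the gradient itself, so only E needs to be defined. A minimal sketch with TensorFlow's GradientTape, checking the autodiff gradient of MSE against the hand-derived dE/dy = 2(y_pred − y_true)/n:

```python
import tensorflow as tf

def mse(y_true, y_pred):
    # only the forward computation is defined; no derivative is supplied
    return tf.reduce_mean(tf.square(y_pred - y_true))

y_true = tf.constant([1.0, 2.0, 3.0])
y_pred = tf.Variable([1.5, 1.5, 2.0])

with tf.GradientTape() as tape:
    loss = mse(y_true, y_pred)

grad = tape.gradient(loss, y_pred)         # gradient obtained by autodiff
analytic = 2.0 * (y_pred - y_true) / 3.0   # hand-derived dE/dy for MSE
```

The two results agree: every primitive op (square, subtract, mean) registers its own derivative, and the chain rule composes them, so no per-loss gradient code is needed.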

What should be the Input types for Earth Mover Loss when images are rated in decimals from 0 to 9 (Keras, Tensorflow)

旧巷老猫 submitted on 2020-05-29 03:19:08

Question: I am trying to implement the NIMA research paper by Google, where they rate image quality. I am using the TID2013 data set. I have 3000 images, each one having a score from 0.00 to 9.00.

df.head() >>
   Image Name    Score
0  I01_01_1.bmp  5.51429
1  i01_01_2.bmp  5.56757
2  i01_01_3.bmp  4.94444
3  i01_01_4.bmp  4.37838
4  i01_01_5.bmp  3.86486

I found the code for the loss function given below:

def earth_mover_loss(y_true, y_pred):
    cdf_true = K.cumsum(y_true, axis=-1)
    cdf_pred = K.cumsum(y_pred, axis=-1)
    emd =…
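The snippet above is truncated at the emd = line. A NumPy sketch of how the NIMA-style EMD (with r = 2) is typically completed — note it assumes y_true and y_pred are probability distributions over score buckets, not raw scalar scores:

```python
import numpy as np

def earth_mover_loss(y_true, y_pred):
    # NIMA-style EMD with r = 2: distance between the two score CDFs.
    cdf_true = np.cumsum(y_true, axis=-1)
    cdf_pred = np.cumsum(y_pred, axis=-1)
    emd = np.sqrt(np.mean(np.square(cdf_true - cdf_pred), axis=-1))
    return np.mean(emd)
```

Because the loss compares cumulative distributions, a single mean score per image (as in this TID2013 setup) must first be expanded into a distribution over the score buckets; feeding the scalar score directly makes the cumulative sums meaningless.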

What is the difference between sparse_categorical_crossentropy and categorical_crossentropy?

别说谁变了你拦得住时间么 submitted on 2020-05-29 02:51:32

Question: What is the difference between sparse_categorical_crossentropy and categorical_crossentropy? When should one loss be used as opposed to the other? For example, are these losses suitable for linear regression? Answer 1: Simply: categorical_crossentropy (cce) uses a one-hot array to calculate the probability, while sparse_categorical_crossentropy (scce) uses a category index. Consider a classification problem with 5 categories (or classes). In the case of cce, the one-hot target may be [0, 1, 0, 0, 0…
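The two losses compute the same quantity and differ only in how the target is encoded. A NumPy sketch showing that the one-hot form and the index form agree:

```python
import numpy as np

def categorical_ce(one_hot, probs):
    # cce: target supplied as a one-hot vector
    return -np.sum(one_hot * np.log(probs))

def sparse_categorical_ce(index, probs):
    # scce: target supplied as a bare class index
    return -np.log(probs[index])

probs = np.array([0.1, 0.6, 0.1, 0.1, 0.1])  # model output over 5 classes
one_hot = np.array([0, 1, 0, 0, 0])          # class 1, one-hot encoded
```

Both reduce to −log p(correct class), so the choice is purely about label format (and memory: scce avoids materializing one-hot arrays). Neither is suitable for linear regression, which has continuous targets rather than class labels.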

How to use K.get_session in Tensorflow 2.0 or how to migrate it?

夙愿已清 submitted on 2020-05-29 02:30:10

Question:

def __init__(self, **kwargs):
    self.__dict__.update(self._defaults)  # set up default values
    self.__dict__.update(kwargs)          # and update with user overrides
    self.class_names = self._get_class()
    self.anchors = self._get_anchors()
    self.sess = K.get_session()

RuntimeError: get_session is not available when using TensorFlow 2.0.

Answer 1: TensorFlow 2.0 no longer exposes backend.get_session directly, but the code is still there, exposed for TF1 compatibility: https://github.com/tensorflow/tensorflow/blob/r2.0…
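A sketch of the usual migration path, assuming the session-style TF1 code cannot yet be rewritten for eager execution (note that on very recent TF installs where Keras 3 is the default, the compat.v1.keras path may no longer be available):

```python
import tensorflow as tf

# Session-style code requires graph mode, so eager execution is disabled first.
tf.compat.v1.disable_eager_execution()

# The TF1 get_session lives on under the compat.v1 namespace.
sess = tf.compat.v1.keras.backend.get_session()
```

The longer-term fix is to drop the session entirely and call models eagerly, which is the idiomatic TF2 style.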

Using output from one LSTM as input into another LSTM in TensorFlow

回眸只為那壹抹淺笑 submitted on 2020-05-28 07:25:06

Question: I want to build an LSTM-based neural network which takes two kinds of inputs and predicts two kinds of outputs. A rough structure can be seen in the following figure. Output 2 is dependent upon output 1, and as described in the answer to a similar question here, I have tried to implement this by setting the initial state of LSTM 2 from the hidden states of LSTM 1. I have implemented this in TensorFlow using the following code: import tensorflow as tf from tensorflow.keras.layers import Input from…
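The code above is truncated at the imports. A minimal functional-API sketch of the wiring it describes — seeding LSTM 2 with LSTM 1's final states via return_state and initial_state (shapes and unit counts are illustrative, not taken from the question):

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, LSTM, Dense
from tensorflow.keras.models import Model

# Hypothetical shapes: 10 timesteps, 3 features per input stream.
input1 = Input(shape=(10, 3))
input2 = Input(shape=(10, 3))

# return_state=True makes the LSTM also return its final hidden/cell states.
out1, state_h, state_c = LSTM(16, return_sequences=True,
                              return_state=True)(input1)

# Seed the second LSTM with the first one's final states.
out2 = LSTM(16, return_sequences=True)(input2,
                                       initial_state=[state_h, state_c])

y1 = Dense(1)(out1)   # output 1
y2 = Dense(1)(out2)   # output 2, conditioned on LSTM 1's final state
model = Model([input1, input2], [y1, y2])
```

Note the two LSTMs must have the same number of units, since initial_state expects state tensors of matching width.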
