deep-learning

How to make TensorFlow use 100% of GPU?

Deadly · submitted 2020-01-21 19:17:24
Question: I have a laptop with an RTX 2060 GPU, and I am using Keras and TF 2 to train an LSTM on it. I am also monitoring GPU use with nvidia-smi, and I noticed that the Jupyter notebook and TF use at most 35%, and usually the GPU sits between 10-25%. Under these conditions it took more than 7 hours to train this model; I want to know whether I am doing something wrong or whether it is a limitation of Keras and TF. My nvidia-smi output: Sun Nov 3 00:07:37 2019 +---------------------------------
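
Low GPU utilization with an LSTM is usually an input-pipeline or kernel issue rather than a hard Keras/TF limit: common fixes are a larger `batch_size`, feeding data through `tf.data` with `.prefetch(tf.data.AUTOTUNE)`, and keeping the LSTM's default activations so Keras can pick the fused cuDNN kernel. The core idea behind prefetching, overlapping data preparation with compute, can be sketched in pure Python (no TensorFlow here; `load_batch` and `train_step` are made-up stand-ins for illustration):

```python
import threading, queue, time

def load_batch(i):
    # stand-in for CPU-side data loading/augmentation
    time.sleep(0.01)
    return [i] * 4

def train_step(batch):
    # stand-in for the GPU compute step
    time.sleep(0.01)
    return sum(batch)

def run_prefetched(n_batches, buffer_size=2):
    """Producer fills a bounded queue while the consumer trains,
    so loading batch i+1 overlaps with computing on batch i."""
    q = queue.Queue(maxsize=buffer_size)

    def producer():
        for i in range(n_batches):
            q.put(load_batch(i))   # runs concurrently with train_step
        q.put(None)                # sentinel: no more batches

    threading.Thread(target=producer, daemon=True).start()
    losses = []
    while (batch := q.get()) is not None:
        losses.append(train_step(batch))
    return losses

losses = run_prefetched(8)
```

With this overlap, per-batch wall time approaches max(load time, compute time) instead of their sum, which is exactly what `tf.data`'s `.prefetch()` buys you; the GPU stops idling while the CPU prepares the next batch.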

Batch normalization when batch size=1

心不动则不痛 · submitted 2020-01-21 18:59:44
Question: What happens when I use batch normalization but set batch_size = 1? Because I am using 3D medical images as the training dataset, the batch size can only be set to 1 due to GPU memory limits. Normally, I know, when batch_size = 1 the variance will be 0, and (x - mean)/variance will produce a division-by-zero error. But why did no error occur when I set batch_size = 1? Why was my network trained as well as I expected? Could anyone explain it? Some people argued that: The
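
Two details explain why this does not blow up. First, batch norm divides by sqrt(variance + epsilon), not the raw variance, so even a zero variance is safe. Second, for image inputs the statistics are taken over the batch *and* spatial axes per channel, so a single sample still yields a non-degenerate variance. A minimal NumPy sketch of the training-time forward pass (an illustration of the standard formula, not Keras's actual implementation):

```python
import numpy as np

def batchnorm_train(x, gamma=1.0, beta=0.0, eps=1e-3):
    """Batch-norm forward pass for a conv-style input x of shape
    (N, H, W, C): per-channel statistics over batch AND spatial axes."""
    mean = x.mean(axis=(0, 1, 2), keepdims=True)
    var = x.var(axis=(0, 1, 2), keepdims=True)
    # eps keeps the denominator non-zero even when var == 0
    return gamma * (x - mean) / np.sqrt(var + eps) + beta

# Even with batch_size = 1 the spatial axes give a non-zero variance,
# and eps guards the degenerate all-constant case.
x = np.random.randn(1, 4, 4, 2)
y = batchnorm_train(x)
```

Note that with batch_size = 1 the "batch" statistics reduce to per-sample spatial statistics, which is why some people describe this regime as behaving like instance normalization rather than true batch normalization.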

What is the difference between the predict and predict_on_batch methods of a Keras model?

半腔热情 · submitted 2020-01-21 10:38:48
Question: According to the Keras documentation: predict_on_batch(self, x) returns predictions for a single batch of samples. However, there does not seem to be any difference from the standard predict method when it is called on a batch, whether with one or multiple elements: model.predict_on_batch(np.zeros((n, d_in))) is the same as model.predict(np.zeros((n, d_in))) (a numpy.ndarray of shape (n, d_out)). Answer 1: The difference shows up when the x you pass is larger than one batch: predict will go through the data batch by batch, while predict_on_batch runs everything you pass as a single batch.
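
The relationship between the two can be sketched with a toy stand-in for the model's forward pass (the names `predict`/`predict_on_batch` mirror the Keras API, but this linear "model" is made up for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((3, 2))   # toy "model": forward pass is x @ W

def predict_on_batch(x):
    # one forward pass over the whole array, however large it is
    return x @ W

def predict(x, batch_size=32):
    # split into batches, run each, and concatenate the results
    chunks = [predict_on_batch(x[i:i + batch_size])
              for i in range(0, len(x), batch_size)]
    return np.concatenate(chunks, axis=0)

x = rng.standard_normal((100, 3))
out_a = predict(x)            # 4 internal forward passes (32+32+32+4)
out_b = predict_on_batch(x)   # 1 forward pass of 100 samples
```

The numerical results are identical; the batching in predict only bounds per-call memory use, which is why the two look interchangeable when the input already fits in one batch.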

What is “metrics” in Keras?

不羁岁月 · submitted 2020-01-20 14:16:53
Question: It is not yet clear to me what metrics are (as given in the code below). What exactly do they evaluate? Why do we need to define them in the model? Why can we have multiple metrics in one model? And, more importantly, what are the mechanics behind all this? Any scientific reference is also appreciated. model.compile(loss='mean_squared_error', optimizer='sgd', metrics=['mae', 'acc']) Answer 1: To understand what metrics are, it is good to start by understanding what a loss function is.
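
The mechanical distinction: the loss is the quantity the optimizer differentiates and minimizes, while metrics are computed from the same (y_true, y_pred) pairs but only logged for monitoring, which is why you can attach as many as you like. What `metrics=['mae', 'acc']` evaluates can be sketched in NumPy (a simplified illustration; Keras's own implementations handle more input shapes and running averages):

```python
import numpy as np

def mse(y_true, y_pred):
    # the loss being optimized: gradients flow through this
    return np.mean((y_true - y_pred) ** 2)

def mae(y_true, y_pred):
    # reported metric 1: mean absolute error, logged but not optimized
    return np.mean(np.abs(y_true - y_pred))

def binary_accuracy(y_true, y_pred, threshold=0.5):
    # reported metric 2: fraction of thresholded predictions that match
    return np.mean((y_pred > threshold).astype(float) == y_true)

y_true = np.array([0.0, 1.0, 1.0, 0.0])
y_pred = np.array([0.1, 0.9, 0.4, 0.2])

# Keras populates a logs dict like this each epoch for progress bars,
# callbacks (e.g. EarlyStopping), and History.
logs = {"loss": mse(y_true, y_pred),
        "mae": mae(y_true, y_pred),
        "acc": binary_accuracy(y_true, y_pred)}
```

Metrics are often non-differentiable (accuracy is a step function), which is precisely why they live in `metrics=` rather than `loss=`: you train on a smooth surrogate and monitor the quantity you actually care about.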

Tensorflow's tensorflow variable_scope values parameter meaning

爷,独闯天下 · submitted 2020-01-20 08:10:46
Question: I am currently reading the source code of the slim library, which is based on TensorFlow, and it uses the values argument of the variable_scope method a lot, for example here. From the API page I can see: This context manager validates that the (optional) values are from the same graph, ensures that graph is the default graph, and pushes a name scope and a variable scope. My question is: are variables from values only checked to confirm they come from the same graph? What are the use cases for this, and why would someone use it?
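
The check matters for library code like slim: a layer function receives tensors it did not create, and before building new ops it must ensure they all live on one graph and make that graph the default, otherwise the new ops would silently land on the wrong graph. A toy pure-Python analogue of that validation (this is not the real TF API; the `Graph`, `Tensor`, and `variable_scope_like` names here are invented for illustration):

```python
class Graph:
    """Stand-in for a TF1 computation graph."""
    pass

class Tensor:
    """Stand-in for a tensor that remembers which graph owns it."""
    def __init__(self, graph):
        self.graph = graph

class variable_scope_like:
    """Toy analogue of variable_scope's `values` check: verify that all
    given tensors come from one graph before building ops against it."""
    def __init__(self, name, values=()):
        graphs = {t.graph for t in values}
        if len(graphs) > 1:
            raise ValueError("values are not all from the same graph")
        self.name = name
        # adopt the callers' graph, or start a fresh one if none given
        self.graph = graphs.pop() if graphs else Graph()

    def __enter__(self):
        return self

    def __exit__(self, *exc):
        return False

g1, g2 = Graph(), Graph()
a, b = Tensor(g1), Tensor(g1)
with variable_scope_like("layer", values=[a, b]) as vs:
    assert vs.graph is g1   # new ops built here would land on g1
```

Passing a tensor from g2 alongside one from g1 would raise immediately, turning a subtle cross-graph bug into a loud error at construction time, which is the use case for the `values` argument in reusable layer functions.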