deep-learning

Google Colab: Why is CPU faster than TPU?

情到浓时终转凉 · Submitted on 2020-07-19 06:44:06

Question: I'm using a Google Colab TPU to train a simple Keras model. Removing the distributed strategy and running the same program on the CPU is much faster than on the TPU. How is that possible?

    import timeit
    import os
    import tensorflow as tf
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split
    from tensorflow.keras.models import Sequential
    from tensorflow.keras.layers import Dense
    from tensorflow.keras.optimizers import Adam

    # Load Iris dataset
    x = load_iris().data
    y …
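The question is truncated, but the usual explanation is that for a model and dataset this tiny, the fixed per-step costs of a TPU (XLA compilation, host-to-device transfer, dispatch through the distribution strategy) dwarf the actual compute, so the CPU wins. A minimal sketch of the CPU side of the comparison (the two Dense layers are an assumption, since the original model definition is cut off):

```python
import timeit

import numpy as np
import tensorflow as tf
from sklearn.datasets import load_iris

# Iris: 150 samples, 4 features -- microseconds of math per epoch.
x = load_iris().data.astype("float32")
y = load_iris().target

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(3, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy")

# On a CPU this epoch is nearly free; on a TPU every step would also
# pay compilation and host<->device transfer overhead.
t = timeit.timeit(lambda: model.fit(x, y, epochs=1, verbose=0), number=1)
print(t > 0)
```

For workloads this small, batching the whole dataset and growing the model until the math dominates is what lets the TPU pull ahead.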

Recalling function: Tensor 'object' is not callable

*爱你&永不变心* · Submitted on 2020-07-18 21:11:38

Question: Suppose I have a function named test as follows:

    def test(X, W):
        # ..do stuff
        return stuff

which I call using model = test(X, W). When I call the function the first time, I do not get an error. But if I call the function again, I get the error 'Tensor' object is not callable. Essentially the calling code looks like this:

    model = test(X, W)
    model1 = test(X, W)

and I get the error at the call for model1. I would like to not need to redefine the function before making another call to that …
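The truncated question doesn't show the surrounding code, but a common cause of this error is rebinding the name `test` to the tensor the function returns, for example via `test = test(X, W)` somewhere between the two calls. A minimal sketch reproducing it (the `matmul` body is a hypothetical stand-in for "..do stuff"):

```python
import tensorflow as tf

def test(X, W):
    # hypothetical body standing in for "..do stuff"
    return tf.matmul(X, W)

X = tf.ones((2, 3))
W = tf.ones((3, 1))

model = test(X, W)   # first call: fine
test = test(X, W)    # bug: the name "test" now refers to a Tensor

try:
    model1 = test(X, W)
except TypeError as err:
    # TypeError: the Tensor object is not callable
    print(type(err).__name__)
```

The fix is simply to not reuse the function's name for a variable; renaming either one makes repeated calls work without redefining the function.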

Inputs to eager execution function cannot be Keras symbolic tensors

百般思念 · Submitted on 2020-07-17 10:12:36

Question: I am trying to implement sample- and pixel-dependent loss weighting in tf.keras (TensorFlow 2.0.0rc0) for a 3-D U-Net with sparse annotation data (Çiçek 2016, arXiv:1606.06650). This is my code:

    import numpy as np
    import tensorflow as tf
    from tensorflow.keras import layers, losses, models

    # disabling eager execution makes this example work:
    # tf.python.framework_ops.disable_eager_execution()

    def get_loss_fcn(w):
        def loss_fcn(y_true, y_pred):
            loss = w * losses.mse(y_true, y_pred)
            …
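The truncated snippet suggests the weight `w` is a symbolic Keras input captured by the loss closure, which the eager loss machinery rejects. One workaround (a sketch of the pattern, not the asker's full U-Net code) is to route per-sample weights through `sample_weight`, so no symbolic tensor ever reaches the loss function:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras import layers, models

# Tiny stand-in model; the real case is a 3-D U-Net.
inp = layers.Input(shape=(4,))
out = layers.Dense(1)(inp)
model = models.Model(inp, out)
model.compile(optimizer="adam", loss="mse")

x = np.random.rand(8, 4).astype("float32")
y = np.random.rand(8, 1).astype("float32")
w = np.random.rand(8).astype("float32")  # per-sample loss weights

# Keras multiplies sample_weight into the loss reduction itself, so the
# weights stay plain NumPy data instead of a symbolic Keras tensor.
hist = model.fit(x, y, sample_weight=w, epochs=1, verbose=0)
print(np.isfinite(hist.history["loss"][0]))
```

For pixel-dependent weighting, the same idea extends to passing a weight map with the same spatial shape as the labels, or to computing the weighted loss inside the model via `add_loss`.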

How to find how many Image Generated By ImageDataGenerator

强颜欢笑 · Submitted on 2020-07-17 05:50:49

Question: I have a question about the Keras ImageDataGenerator: can I determine how many augmented images it will create, or find the training-set size after augmentation? The Keras documentation describes the flow function as: "Takes numpy data & label arrays, and generates batches of augmented/normalized data. Yields batches indefinitely, in an infinite loop." But how many images are generated? For example, how many images does the following code generate? Infinitely many?

    from keras.preprocessing …
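As the documentation quoted above says, `flow` yields batches forever; there is no fixed "augmented dataset size". The number of augmented images you actually see is just how many batches you draw times the batch size (in `fit`, that is `steps_per_epoch * batch_size` per epoch). A small sketch of that arithmetic:

```python
import numpy as np
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# 10 tiny fake RGB images.
x = np.random.rand(10, 8, 8, 3)
y = np.arange(10)

gen = ImageDataGenerator(rotation_range=20)
flow = gen.flow(x, y, batch_size=5)

# Draw 4 batches -> 4 * 5 = 20 augmented images. Each of the 10
# originals is seen twice, with a fresh random transform every pass.
seen = sum(next(flow)[0].shape[0] for _ in range(4))
print(seen)  # 20
```

So each original image appears once per pass with new random transforms; the generator only stops when you stop drawing from it.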

BatchNorm momentum convention PyTorch

自闭症网瘾萝莉.ら · Submitted on 2020-07-17 05:46:04

Question: Is the BatchNorm momentum convention (default = 0.1) correct? In other libraries, e.g. TensorFlow, the default usually seems to be 0.9 or 0.99. Or are we just using a different convention?

Answer 1: The parametrization convention differs between PyTorch and TensorFlow, so that 0.1 in PyTorch is equivalent to 0.9 in TensorFlow. To be more precise:

    In TensorFlow: running_mean = decay * running_mean + (1 - decay) * new_value
    In PyTorch:    running_mean = (1 - momentum) * running_mean + momentum * new_value
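The PyTorch update rule in the answer can be checked numerically: with momentum=0.1 and a batch whose mean is 2.0, the running mean (initialized to 0) should move to 0.9 * 0 + 0.1 * 2.0 = 0.2, i.e. 90% of the old statistic is kept, matching TensorFlow's decay=0.9:

```python
import torch

# BatchNorm over a single feature; running_mean starts at 0.
bn = torch.nn.BatchNorm1d(1, momentum=0.1)
bn.train()

x = torch.full((4, 1), 2.0)  # batch mean = 2.0
bn(x)

# running_mean = (1 - 0.1) * 0 + 0.1 * 2.0 = 0.2
print(round(bn.running_mean.item(), 6))  # 0.2
```

So the defaults describe the same kind of exponential moving average; only which side of the convex combination the parameter names differs.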

keras: what is the difference between model.predict and model.predict_proba

我们两清 · Submitted on 2020-07-17 03:27:06

Question: I found that model.predict and model.predict_proba both give an identical 2-D matrix of per-category probabilities for each row. What is the difference between the two functions?

Answer 1: From the documentation:

    predict(self, x, batch_size=32, verbose=0)

Generates output predictions for the input samples, processing the samples in a batched way.

    Arguments:
        x: the input data, as a Numpy array.
        batch_size: integer.
        verbose: verbosity mode, 0 or 1.
    Returns: A Numpy array of predictions.

predict_proba

    predict …
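The identical output is expected: in old standalone Keras, `Sequential.predict_proba` was essentially a convenience alias that called `predict`, so with a softmax output both return the same probability matrix (and `predict_proba` was later removed from tf.keras entirely). A sketch showing that `predict` already yields probabilities:

```python
import numpy as np
import tensorflow as tf

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(4,)),
    tf.keras.layers.Dense(3, activation="softmax"),
])

x = np.random.rand(2, 4).astype("float32")
probs = model.predict(x, verbose=0)

print(probs.shape)                           # (2, 3)
print(bool(np.allclose(probs.sum(axis=1), 1.0)))  # True: rows are probabilities
```

In modern tf.keras, just use `predict`; whether the rows are probabilities is determined by the output activation, not by which method you call.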

In Tensorflow, what is the difference between sampled_softmax_loss and softmax_cross_entropy_with_logits

回眸只為那壹抹淺笑 · Submitted on 2020-07-16 16:11:11

Question: In TensorFlow, there are methods called softmax_cross_entropy_with_logits and sampled_softmax_loss. I read the TensorFlow documentation and searched Google for more information, but I couldn't find the difference. It looks to me like both calculate the loss using the softmax function.

Using sampled_softmax_loss to calculate the loss:

    loss = tf.reduce_mean(tf.nn.sampled_softmax_loss(...))

Using softmax_cross_entropy_with_logits to calculate the loss:

    loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with …
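The short answer: `softmax_cross_entropy_with_logits` computes the exact cross-entropy over the full class vocabulary, while `sampled_softmax_loss` approximates it by scoring the true class against only a small random sample of negative classes, which is cheaper when the vocabulary is huge (it is a training-time approximation; evaluation should use the full softmax). A sketch contrasting the two calls (the sizes and random data are illustrative):

```python
import tensorflow as tf

num_classes, dim, batch = 1000, 16, 4
weights = tf.random.normal([num_classes, dim])  # output-layer weights
biases = tf.zeros([num_classes])
inputs = tf.random.normal([batch, dim])         # pre-output activations
labels = tf.constant([[1], [2], [3], [4]], dtype=tf.int64)

# Approximate loss: true class vs. 10 sampled negatives per example.
sampled = tf.nn.sampled_softmax_loss(
    weights=weights, biases=biases, labels=labels,
    inputs=inputs, num_sampled=10, num_classes=num_classes)

# Exact loss: full logits over all 1000 classes.
logits = tf.matmul(inputs, weights, transpose_b=True) + biases
full = tf.nn.softmax_cross_entropy_with_logits(
    labels=tf.one_hot(tf.squeeze(labels, 1), num_classes), logits=logits)

print(sampled.shape, full.shape)  # (4,) (4,)
```

Note that `sampled_softmax_loss` needs the output-layer weights and the pre-output activations explicitly, because it builds the sampled logits itself rather than taking logits you computed.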