keras

How can I express this custom loss function in tensorflow?

放肆的年华 submitted on 2021-01-28 22:03:12

Question: I've got a loss function that fulfills my needs, but it is only in PyTorch. I need to implement it in my TensorFlow code; most of it can trivially be "translated", but I am stuck on one particular line:

    y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max  # to be "1" after sigmoid

You can see the whole code below, and it is indeed pretty straightforward except for that line:

    def get_loss(y_hat, y):
        # No loss on diagonal
        B, N, _ = y_hat.shape
        y_hat[:, torch…
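A possible TensorFlow equivalent of that line (a minimal sketch, not the asker's full solution): TF tensors are immutable, so instead of assigning into the diagonal you rebuild the tensor with tf.linalg.set_diag, which writes a value along the last two dimensions:

    import tensorflow as tf

    def mask_diagonal(y_hat):
        # Fill the (N, N) diagonal of a (B, N, N) tensor with the dtype's
        # max value, which saturates a later sigmoid to ~1 -- mirroring
        # the in-place PyTorch assignment.
        batch = tf.shape(y_hat)[0]
        n = tf.shape(y_hat)[-1]
        diag = tf.fill([batch, n], y_hat.dtype.max)
        return tf.linalg.set_diag(y_hat, diag)

One caveat: because nothing is mutated in place, the returned tensor must be used in place of the original y_hat inside get_loss.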

Splitting TensorFlow Dataset created with make_csv_dataset into 3 parts (X1_Train, X2_Train and Y_Train) for multi-input model

ぐ巨炮叔叔 submitted on 2021-01-28 21:58:56

Question: I am training a deep learning model with TensorFlow 2 and Keras. I read my big CSV file with tf.data.experimental.make_csv_dataset and then split it into train and test datasets. However, I need to split my train dataset into three parts, since my deep learning model takes two sets of inputs in different layers, so I need to pass [x1_train, x2_train], y_train to model.fit. My question is: how can I split train_dataset into x1_train, x2_train and y_train? (Some features shall be in x1_train…
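One common approach (a sketch; the column names here are hypothetical placeholders for the asker's CSV features): make_csv_dataset yields (features_dict, label) pairs, so a single map can regroup the feature columns into the two input structures the model expects:

    import tensorflow as tf

    # Hypothetical feature names -- replace with your CSV columns.
    X1_COLS = ['feat_a', 'feat_b']
    X2_COLS = ['feat_c', 'feat_d']

    def split_features(features, label):
        # make_csv_dataset yields an (OrderedDict of columns, label) pair;
        # regroup the columns into the two inputs of the multi-input model.
        x1 = {name: features[name] for name in X1_COLS}
        x2 = {name: features[name] for name in X2_COLS}
        return (x1, x2), label

    train_dataset = train_dataset.map(split_features)
    # model.fit(train_dataset, ...) then feeds (x1, x2) and the label
    # to the two input branches without materializing separate arrays.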

Keras Nan value when computing the loss

耗尽温柔 submitted on 2021-01-28 21:10:38

Question: My question is related to this one. I am working to implement the method described in the article https://drive.google.com/file/d/1s-qs-ivo_fJD9BU_tM5RY8Hv-opK4Z-H/view. The final algorithm to use is there (on page 6): d are unit vectors, xhi is a non-null number, and D is the loss function (sparse cross-entropy in my case). The idea is to do adversarial training by modifying the data in the direction where the network is the most sensitive to small changes, and training the network with the…
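Not the article's exact algorithm, but a minimal TF 2 sketch of the gradient-direction perturbation step as described (the names xhi and loss_fn are assumptions based on the question); a frequent cause of NaN losses in this setup is normalizing a zero gradient, so an epsilon is added to the norm:

    import tensorflow as tf

    loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

    def adversarial_example(model, x, y, xhi=1e-6):
        # Perturb x in the direction where the loss is most sensitive.
        with tf.GradientTape() as tape:
            tape.watch(x)
            loss = loss_fn(y, model(x))
        grad = tape.gradient(loss, x)
        # Normalize to a unit vector d; the epsilon guards against a
        # zero gradient, which would otherwise divide by zero -> NaN.
        d = grad / (tf.norm(grad) + 1e-12)
        return x + xhi * d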

Why do I have to call model.predict(x) instead of model(x)?

会有一股神秘感。 submitted on 2021-01-28 20:50:31

Question: I have the following Keras model:

    def model_1(vocab_size, output_dim, batch_input_dims, rnn_units,
                input_shape_LSTM, name='model_1'):
        model = Sequential(name=name)
        model.add(Embedding(input_dim=vocab_size+1, output_dim=output_dim,
                            mask_zero=True, batch_input_shape=batch_input_dims))
        model.add(LSTM(units=rnn_units, input_shape=input_shape_LSTM,
                       stateful=True, return_sequences=True,
                       recurrent_initializer='glorot_uniform',
                       recurrent_activation='sigmoid'))
        model.add(Dense(units=vocab_size))…
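For reference, a minimal illustration of the difference between the two call styles (a generic sketch, not the asker's model): model(x) calls the model like a layer and returns a tf.Tensor in one eager, differentiable pass, while model.predict(x) runs a batched inference loop and returns a NumPy array. With a stateful LSTM built with a fixed batch_input_shape, a mismatch between the fed batch size and that fixed shape is a frequent source of errors when switching between the two:

    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([tf.keras.layers.Dense(4, input_shape=(8,))])
    x = np.random.rand(2, 8).astype('float32')

    y_call = model(x)          # tf.Tensor: eager call, usable inside GradientTape
    y_pred = model.predict(x)  # np.ndarray: batched inference loop
    print(type(y_call), type(y_pred))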

Arrange each pixel of a Tensor according to another Tensor

浪子不回头ぞ submitted on 2021-01-28 20:31:19

Question: I am working on image registration using deep learning with the Keras backend. The task is to perform registration between two images, fixed and moving. In the end I get a deformation field D of shape (200, 200, 2), where 200 is the image size and 2 represents the per-pixel offsets dx, dy. I should apply D to moving and compute the loss against fixed. My problem: is there a way to rearrange the pixels of moving according to D inside a Keras model?

Answer 1: You should be able to…
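One possible approach (a nearest-neighbour sketch, not the answerer's full solution; registration models typically use a differentiable bilinear sampler, e.g. the one in VoxelMorph, since rounding kills gradients): build a base pixel grid, shift it by the offsets, and resample moving with tf.gather_nd:

    import tensorflow as tf

    def warp(moving, D):
        # moving: (H, W) or (H, W, C) image; D: (H, W, 2) offsets (dy, dx).
        h, w = tf.shape(moving)[0], tf.shape(moving)[1]
        # Base grid of integer pixel coordinates.
        gy, gx = tf.meshgrid(tf.range(h), tf.range(w), indexing='ij')
        grid = tf.cast(tf.stack([gy, gx], axis=-1), D.dtype)
        # Shift each coordinate by its offset, round to the nearest
        # pixel, and clamp to the image bounds.
        coords = tf.cast(tf.round(grid + D), tf.int32)
        coords = tf.clip_by_value(coords, 0, tf.stack([h - 1, w - 1]))
        return tf.gather_nd(moving, coords)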

Resource localhost/total/N10tensorflow3VarE does not exist

六眼飞鱼酱① submitted on 2021-01-28 20:11:43

Question: I'm working in Google Colab and trying to train a model using VGG blocks, like this:

    METRICS = [
        keras.metrics.TruePositives(name='tp'),
        keras.metrics.FalsePositives(name='fp'),
        keras.metrics.TrueNegatives(name='tn'),
        keras.metrics.FalseNegatives(name='fn'),
        keras.metrics.BinaryAccuracy(name='accuracy'),
        keras.metrics.Precision(name='precision'),
        keras.metrics.Recall(name='recall'),
        keras.metrics.AUC(name='auc'),
    ]

    # function for creating a vgg block
    def vgg_block(layer_in, n_filters, n…
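The snippet is cut off mid-signature; for context, a typical VGG-block helper of this shape (a sketch assuming the truncated parameter is n_conv, the number of convolutions, as in the widely copied pattern — not necessarily the asker's exact code):

    from tensorflow.keras import layers

    def vgg_block(layer_in, n_filters, n_conv):
        # n_conv stacked 3x3 convolutions followed by a 2x2 max-pool.
        x = layer_in
        for _ in range(n_conv):
            x = layers.Conv2D(n_filters, (3, 3), padding='same',
                              activation='relu')(x)
        return layers.MaxPooling2D((2, 2), strides=(2, 2))(x)

As for the error itself, one commonly reported cause of "Resource localhost/... does not exist" is mixing the standalone keras package with tf.keras in the same model (e.g. keras.metrics attached to tensorflow.keras layers); importing everything consistently from tensorflow.keras is a frequently suggested fix.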

Tensorflow error in Colab - ValueError: Shapes (None, 1) and (None, 10) are incompatible

倖福魔咒の submitted on 2021-01-28 19:49:31

Question: I'm trying to run a small NN example using the MNIST dataset for character recognition. When execution reaches the fit line I get: ValueError: Shapes (None, 1) and (None, 10) are incompatible.

    import numpy as np
    # Install TensorFlow
    try:
        # tensorflow_version only exists in Colab
        %tensorflow_version 2.x
    except Exception:
        pass
    import tensorflow as tf
    tf.__version__

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    print(x_train.shape)
    print(x_test.shape)…
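The compile line is not shown, but this error classically appears when integer MNIST labels (shape (None, 1)) meet categorical_crossentropy against a 10-unit softmax. A self-contained sketch of the usual fix (the architecture below is an assumption, since the question's model is truncated) is to use the sparse loss, which accepts integer labels directly; the alternative is one-hot encoding the labels with tf.keras.utils.to_categorical:

    import tensorflow as tf

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    x_train, x_test = x_train / 255.0, x_test / 255.0

    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])

    # Integer labels need the *sparse* loss; plain categorical_crossentropy
    # expects one-hot labels of shape (None, 10) and raises the shape error.
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    model.fit(x_train, y_train, epochs=1)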

Cannot convert tf.keras.layers.ConvLSTM2D layer to open vino intermediate representation

狂风中的少年 submitted on 2021-01-28 19:06:17

Question: I am trying to convert a trained TensorFlow model to the OpenVINO Intermediate Representation. I have a model of the form given below:

    class Conv3DModel(tf.keras.Model):
        def __init__(self):
            super(Conv3DModel, self).__init__()
            # Convolutions
            self.conv1 = tf.compat.v2.keras.layers.Conv3D(
                32, (3, 3, 3), activation='relu', name="conv1",
                data_format='channels_last')
            self.pool1 = tf.keras.layers.MaxPool3D(
                pool_size=(2, 2, 2), data_format='channels_last')
            self.conv2 = tf.compat.v2.keras.layers.Conv3D…
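Not a fix for the layer-support issue itself, but a subclassed model first has to be exported as a SavedModel before the Model Optimizer can see it at all; a minimal export sketch (the input shape is hypothetical — use the real one):

    import numpy as np
    import tensorflow as tf

    model = Conv3DModel()  # the class from the question
    # Trace the model once with a dummy input so its variables and a
    # serving signature exist, then export it.
    _ = model(np.zeros((1, 10, 64, 64, 1), dtype=np.float32))
    tf.saved_model.save(model, 'conv3d_saved_model')

The OpenVINO Model Optimizer can then be pointed at the resulting directory; whether ConvLSTM2D/Conv3D ops actually convert depends on the supported-operations list of the OpenVINO version in use.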

Why is the per sample prediction time on Tensorflow (and Keras) lower when predicting on batches than on individual samples?

∥☆過路亽.° submitted on 2021-01-28 18:54:04

Question: I am using my trained model to make predictions (CPU only). I observe that, both on TensorFlow and on Keras with the TensorFlow backend, the prediction time per sample is much lower when a batch of samples is used than for an individual sample. Moreover, the time per sample seems to go down with increasing batch size, up to the limits imposed by memory. As an example, on pure TensorFlow, prediction of a single sample takes ~1.5 seconds, while 100 samples take ~17 seconds (per-sample time ~0…
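The effect is easy to reproduce: each predict call pays a fixed overhead (Python dispatch, graph/function setup, data copies) that a batch amortizes across all its samples. A minimal timing sketch (a toy model; absolute numbers will vary by machine):

    import time
    import numpy as np
    import tensorflow as tf

    model = tf.keras.Sequential([
        tf.keras.layers.Dense(64, input_shape=(32,)),
        tf.keras.layers.Dense(1),
    ])
    x = np.random.rand(100, 32).astype('float32')

    t0 = time.time()
    for i in range(100):                  # 100 one-sample calls
        model.predict(x[i:i + 1], verbose=0)
    t_single = time.time() - t0

    t0 = time.time()
    model.predict(x, batch_size=100, verbose=0)  # one 100-sample call
    t_batch = time.time() - t0

    print(f'100 single calls: {t_single:.2f}s, one batched call: {t_batch:.2f}s')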

One Hot Encoding giving same number for different words in keras

大憨熊 submitted on 2021-01-28 18:15:31

Question: Why am I getting the same result for different words?

    import keras
    keras.__version__        # '1.0.0'
    import theano
    theano.__version__       # '0.8.1'
    from keras.preprocessing.text import one_hot
    one_hot('START', 43)     # [26]
    one_hot('children', 43)  # [26]

Answer 1: Uniqueness is not guaranteed in one-hot encoding; see the one_hot Keras documentation.

Answer 2: From the Keras source code, you can see that the words are hashed modulo the output dimension (43, in your case):

    def one_hot(text, n, filters='!"#$%&()*+,-./:;<=>?@[\\]^_`{|}~\t\n'…
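A quick way to see the collision and to get collision-free indices instead (a sketch against a modern tf.keras; the asker's standalone keras 1.0 API differs slightly): one_hot hashes each word modulo n, so distinct words can land on the same id, whereas Tokenizer assigns each word a unique index from its fitted vocabulary:

    from tensorflow.keras.preprocessing.text import one_hot, Tokenizer

    # Hashing mod 43: distinct words may collide, as in the question.
    print(one_hot('START', 43), one_hot('children', 43))

    # Tokenizer builds an explicit vocabulary -> unique ids per word.
    tok = Tokenizer()
    tok.fit_on_texts(['START children'])
    print(tok.texts_to_sequences(['START', 'children']))  # e.g. [[1], [2]]

Raising n well above the vocabulary size also lowers the collision probability, but only Tokenizer (or an explicit word-to-index map) guarantees uniqueness.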