tensorflow

Alternative function for tf.contrib.layers.flatten(x) in TensorFlow

孤者浪人 submitted on 2021-01-28 23:57:18

Question: I am using TensorFlow 0.8.0 on a Jetson TK1 with CUDA 6.5 on a 32-bit ARM architecture, so I can't upgrade the TensorFlow version, and I am running into trouble with the flatten function:

    x = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28])
    y = tf.placeholder(dtype=tf.int32, shape=[None])
    images_flat = tf.contrib.layers.flatten(x)

The error I get at this point is AttributeError: 'module' object has no attribute 'flatten'. Is there any alternative to this function that may be…
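A minimal sketch of one possible workaround, assuming the goal is simply to collapse the 28x28 image dimensions: flatten manually with tf.reshape, which is available in TensorFlow 0.8:

    import tensorflow as tf

    x = tf.placeholder(dtype=tf.float32, shape=[None, 28, 28])
    # Collapse everything except the batch dimension; -1 lets TensorFlow
    # infer the batch size at run time.
    images_flat = tf.reshape(x, [-1, 28 * 28])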

How can I express this custom loss function in tensorflow?

放肆的年华 submitted on 2021-01-28 22:03:12

Question: I've got a loss function that fulfills my needs, but it is only in PyTorch. I need to implement it in my TensorFlow code, and while most of it can trivially be "translated", I am stuck on one particular line:

    y_hat[:, torch.arange(N), torch.arange(N)] = torch.finfo(y_hat.dtype).max  # to be "1" after sigmoid

You can see the whole code below; it is indeed pretty straightforward except for that line:

    def get_loss(y_hat, y):
        # No loss on diagonal
        B, N, _ = y_hat.shape
        y_hat[:, torch…
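A minimal sketch of one way to mimic that in-place diagonal assignment in TensorFlow (an assumption about intent, since only this line of the PyTorch version is shown): TensorFlow tensors are immutable, so rather than assigning into y_hat, rebuild it with tf.linalg.set_diag:

    import tensorflow as tf

    def mask_diagonal(y_hat):
        # y_hat: [B, N, N]. Overwrite each matrix's diagonal with a large finite
        # value so it becomes ~1 after the sigmoid, mirroring torch.finfo(...).max.
        big = tf.constant(y_hat.dtype.max, dtype=y_hat.dtype)
        batch = tf.shape(y_hat)[0]
        n = tf.shape(y_hat)[1]
        return tf.linalg.set_diag(y_hat, tf.fill([batch, n], big))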

Changing tf.Variable value in Estimator SessionRunHook

给你一囗甜甜゛ submitted on 2021-01-28 22:02:58

Question: I have a tf.Estimator whose model_fn contains a tf.Variable initialized to 1.0. I would like to change the variable's value at every epoch based on the accuracy on the dev set. I implemented a SessionRunHook to achieve this, but when I try to change the value I receive the following error:

    raise RuntimeError("Graph is finalized and cannot be modified.")

This is the code for the hook:

    class DynamicWeightingHook(tf.train.SessionRunHook):
        def __init__(self, epoch_size, gamma_value):
            self.gamma =…
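A common pattern for this situation, sketched here with hypothetical names: create the assign op and its placeholder inside model_fn, while the graph is still mutable, and have the hook only run that pre-built op after finalization:

    import tensorflow as tf

    # Inside model_fn, before the graph is finalized:
    gamma = tf.Variable(1.0, trainable=False, name='gamma')
    gamma_ph = tf.placeholder(tf.float32, shape=[], name='gamma_ph')
    gamma_update = tf.assign(gamma, gamma_ph, name='gamma_update')

    class DynamicWeightingHook(tf.train.SessionRunHook):
        def __init__(self, new_gamma):
            self.new_gamma = new_gamma

        def after_run(self, run_context, run_values):
            # Running an existing op does not modify the finalized graph.
            run_context.session.run('gamma_update',
                                    feed_dict={'gamma_ph:0': self.new_gamma})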

Splitting TensorFlow Dataset created with make_csv_dataset into 3 parts (X1_Train, X2_Train and Y_Train) for multi-input model

ぐ巨炮叔叔 submitted on 2021-01-28 21:58:56

Question: I am training a deep learning model with TensorFlow 2 and Keras. I read my big CSV file with tf.data.experimental.make_csv_dataset and then split it into train and test datasets. However, I need to split my train dataset into three parts, since my deep learning model takes two sets of inputs in different layers, so I need to pass [x1_train, x2_train], y_train to model.fit. My question is: how can I split train_dataset into x1_train, x2_train and y_train? (Some features shall be in x1_train…
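A minimal sketch of one approach, with hypothetical column names: make_csv_dataset yields (features_dict, label) pairs, so a map call can regroup the features into the two inputs the model expects:

    import tensorflow as tf

    X1_COLS = ['feat_a', 'feat_b']   # hypothetical feature names
    X2_COLS = ['feat_c', 'feat_d']

    def split_inputs(features, label):
        # Stack the chosen columns into one tensor per model branch.
        x1 = tf.stack([features[c] for c in X1_COLS], axis=-1)
        x2 = tf.stack([features[c] for c in X2_COLS], axis=-1)
        return (x1, x2), label

    # train_dataset = tf.data.experimental.make_csv_dataset(..., label_name=...)
    # model.fit(train_dataset.map(split_inputs), ...)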

reading a protobuf created with TF2 using TF1

生来就可爱ヽ(ⅴ&lt;●) submitted on 2021-01-28 21:13:21

Question: I have a model stored as an HDF5 file which I export to a protobuf (PB) file using saved_model.save, like this:

    from tensorflow import keras
    import tensorflow as tf

    model = keras.models.load_model("model.hdf5")
    tf.saved_model.save(model, './output_dir/')

This works fine and the result is a saved_model.pb file which I can later view with other software with no issues. However, when I try to import this PB file using TensorFlow 1, my code fails. As PB is supposed to be a universal format, this…
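For reference, this is how a SavedModel is normally imported in graph-mode TensorFlow 1.x; whether a TF2-exported model actually loads this way depends on the ops it contains, so treat it as a sketch rather than a guaranteed fix:

    import tensorflow as tf  # TensorFlow 1.x

    with tf.Session(graph=tf.Graph()) as sess:
        tf.saved_model.loader.load(sess,
                                   [tf.saved_model.tag_constants.SERVING],
                                   './output_dir/')
        # Inspect the imported graph to find input/output tensor names.
        for op in sess.graph.get_operations()[:10]:
            print(op.name)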

Why do I have to call model.predict(x) instead of model(x)?

会有一股神秘感。 submitted on 2021-01-28 20:50:31

Question: I have the following Keras model:

    def model_1(vocab_size, output_dim, batch_input_dims, rnn_units, input_shape_LSTM, name='model_1'):
        model = Sequential(name=name)
        model.add(Embedding(input_dim=vocab_size+1, output_dim=output_dim, mask_zero=True, batch_input_shape=batch_input_dims))
        model.add(LSTM(units=rnn_units, input_shape=input_shape_LSTM, stateful=True, return_sequences=True, recurrent_initializer='glorot_uniform', recurrent_activation='sigmoid'))
        model.add(Dense(units=vocab_size))…
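A short illustration of the difference, assuming a built model instance (the input shape here is hypothetical): predict() accepts NumPy arrays, batches them, and returns a NumPy array, while calling the model directly treats it as a layer and expects tensor input:

    import numpy as np
    import tensorflow as tf

    x = np.zeros((64, 100), dtype=np.int32)      # hypothetical batch of token ids

    probs_np = model.predict(x)                  # NumPy in, NumPy out
    probs_t = model(tf.convert_to_tensor(x))     # tensor in, tf.Tensor out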

Arrange each pixel of a Tensor according to another Tensor

浪子不回头ぞ submitted on 2021-01-28 20:31:19

Question: I am working on a registration task using deep learning with the Keras backend. The task is to register two images, fixed and moving. In the end I get a deformation field D of shape (200, 200, 2), where 200 is the image size and 2 represents the offset of each pixel, dx, dy, dz. I should apply D to moving and compute the loss against fixed. The problem is: is there a way I can rearrange the pixels of moving according to D inside a Keras model?

Answer 1: You should be able to…
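A minimal sketch of such a warp for a single image, using a nearest-neighbour lookup with tf.gather_nd (the function name and the (dy, dx) channel order are assumptions; a real registration model would normally use differentiable bilinear sampling instead):

    import tensorflow as tf

    def warp_nearest(moving, disp):
        # moving: [H, W, C] image, disp: [H, W, 2] per-pixel (dy, dx) offsets.
        h = tf.shape(moving)[0]
        w = tf.shape(moving)[1]
        ys, xs = tf.meshgrid(tf.range(h), tf.range(w), indexing='ij')
        grid = tf.cast(tf.stack([ys, xs], axis=-1), disp.dtype)
        coords = tf.cast(tf.round(grid + disp), tf.int32)
        coords = tf.clip_by_value(coords, 0, tf.stack([h - 1, w - 1]))
        return tf.gather_nd(moving, coords)

    # Inside a Keras model this could be wrapped in a Lambda layer, e.g.
    # warped = keras.layers.Lambda(lambda t: warp_nearest(t[0], t[1]))([moving, D])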

Resource localhost/total/N10tensorflow3VarE does not exist

六眼飞鱼酱① submitted on 2021-01-28 20:11:43

Question: I'm working with Google Colab and trying to train a model using VGG blocks, like this:

    METRICS = [
        keras.metrics.TruePositives(name='tp'),
        keras.metrics.FalsePositives(name='fp'),
        keras.metrics.TrueNegatives(name='tn'),
        keras.metrics.FalseNegatives(name='fn'),
        keras.metrics.BinaryAccuracy(name='accuracy'),
        keras.metrics.Precision(name='precision'),
        keras.metrics.Recall(name='recall'),
        keras.metrics.AUC(name='auc'),
    ]

    # function for creating a vgg block
    def vgg_block(layer_in, n_filters, n…
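For context, a hypothetical sketch of the kind of VGG block the truncated function appears to build: a stack of 3x3 convolutions followed by 2x2 max pooling, using the Keras functional API:

    from tensorflow.keras import layers

    def vgg_block(layer_in, n_filters, n_conv):
        x = layer_in
        for _ in range(n_conv):
            x = layers.Conv2D(n_filters, (3, 3), padding='same', activation='relu')(x)
        return layers.MaxPooling2D((2, 2), strides=(2, 2))(x)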

Python 3.7.0 Heroku buildpack issue

我是研究僧i submitted on 2021-01-28 19:51:59

Question: I've read about other people with the same issue, but nothing suggested has worked. I'm trying to deploy a silly project to Heroku but nothing is working. Below these lines you can see the message:

    Writing objects: 100% (100/100), 55.42 MiB | 625.00 KiB/s, done.
    Total 100 (delta 19), reused 4 (delta 0)
    remote: Compressing source files... done.
    remote: Building source:
    Counting objects: 100, done.
    Delta compression using up to 4 threads.
    Compressing objects: 100% (94/94), done.
    Writing objects: 100%…

Tensorflow error in Colab - ValueError: Shapes (None, 1) and (None, 10) are incompatible

倖福魔咒の submitted on 2021-01-28 19:49:31

Question: I'm trying to execute a small piece of NN code using the MNIST dataset for character recognition. When it reaches the fit line I get ValueError: Shapes (None, 1) and (None, 10) are incompatible.

    import numpy as np

    # Install TensorFlow
    try:
        # tensorflow_version only exists in Colab
        %tensorflow_version 2.x
    except Exception:
        pass

    import tensorflow as tf
    tf.__version__

    mnist = tf.keras.datasets.mnist
    (x_train, y_train), (x_test, y_test) = mnist.load_data()
    print(x_train.shape)
    print(x_test.shape)…
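A common cause of this exact shape mismatch (an assumption, since the compile/fit lines are not shown): integer class labels fed to a model compiled with categorical_crossentropy and a 10-unit output layer. Either one-hot the labels or switch to the sparse loss:

    import tensorflow as tf

    # Option 1: one-hot the integer labels to match a 10-way softmax output.
    y_train_oh = tf.keras.utils.to_categorical(y_train, num_classes=10)
    y_test_oh = tf.keras.utils.to_categorical(y_test, num_classes=10)

    # Option 2: keep integer labels and use the sparse loss instead.
    # model.compile(optimizer='adam',
    #               loss='sparse_categorical_crossentropy',
    #               metrics=['accuracy'])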