tensorflow

How to compare two arrays using TensorFlow?

南楼画角 submitted on 2021-02-08 05:08:34

Question: I need to compare two arrays and get a single True or False, not an element-wise result. My code is

X = tf.constant([0.05, 0.10], dtype=tf.float32, shape=[1, 2])
y = tf.constant([0.01, 0.99], dtype=tf.float32, shape=[1, 2])
equality = tf.equal(X, y)

which prints [False, False]. My requirement is to get True or False, not an array.

Answer 1: Assuming that you want to return False if any of your values are not equal, you can use the reduce_all operation:

equality = tf.math.reduce_all(tf.equal(X, y))

Answer 2: I
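
A minimal end-to-end sketch of the accepted approach (TF 2.x eager execution is assumed for the .numpy() calls; the tolerance check at the end is an addition, not part of the original answer):

```python
import tensorflow as tf

X = tf.constant([0.05, 0.10], dtype=tf.float32, shape=[1, 2])
y = tf.constant([0.01, 0.99], dtype=tf.float32, shape=[1, 2])

# tf.equal gives an element-wise boolean tensor; reduce_all collapses it to a
# single scalar that is True only if every element matches.
all_equal = tf.math.reduce_all(tf.equal(X, y))
print(all_equal.numpy())  # False

# Exact equality is often too strict for floats; a tolerance-based comparison
# (hypothetical threshold) is usually safer.
all_close = tf.math.reduce_all(tf.abs(X - y) < 1e-6)
print(all_close.numpy())  # False
```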

How to find the intersection of two bounding boxes in TensorFlow?

拜拜、爱过 submitted on 2021-02-08 04:52:08

Question: The boxes are given in boundary coordinates (x_min, y_min, x_max, y_max), and I want to find the intersection of two sets of boxes, set1 and set2:

set1 -> (n1, 4)
set2 -> (n2, 4)

Example:

set_1 -> tensor([[0.2400, 0.2342, 0.8500, 0.8048],
                 [0.1420, 0.5075, 0.2440, 0.5856],
                 [0.0000, 0.5075, 0.1420, 0.5976]], device='cuda:0')
set_2 -> tensor([[-0.0368, -0.0368, 0.0632, 0.0632],
                 [-0.0576, -0.0576, 0.0839, 0.0839],
                 [-0.0576, -0.0222, 0.0839, 0.0485],
                 ...,
                 [ 0.0000, 0.0000, 1.0000, 1.0000],
                 [ 0.0000, 0
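
A common way to compute all pairwise intersections is to broadcast the two sets against each other, take the element-wise maximum of the mins and minimum of the maxes, and clamp negative extents to zero. The sample tensors above are PyTorch tensors, so the TensorFlow translation below, including the function name find_intersection, is an assumption for illustration:

```python
import tensorflow as tf

def find_intersection(set_1, set_2):
    """Pairwise intersection areas for boxes in (x_min, y_min, x_max, y_max) format.

    set_1: (n1, 4), set_2: (n2, 4)  ->  returns (n1, n2)
    """
    # Broadcast to (n1, n2, 2): the larger of the two mins, the smaller of the two maxes.
    lower = tf.maximum(set_1[:, None, :2], set_2[None, :, :2])
    upper = tf.minimum(set_1[:, None, 2:], set_2[None, :, 2:])

    # Negative width/height means the boxes do not overlap, so clamp at zero.
    wh = tf.maximum(upper - lower, 0.0)
    return wh[..., 0] * wh[..., 1]

set_1 = tf.constant([[0.2400, 0.2342, 0.8500, 0.8048],
                     [0.1420, 0.5075, 0.2440, 0.5856]])
set_2 = tf.constant([[0.0000, 0.0000, 1.0000, 1.0000]])
print(find_intersection(set_1, set_2))  # shape (2, 1)
```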

Is there a nice output of Keras model.summary()?

我的未来我决定 submitted on 2021-02-08 04:51:32

Question: Is it possible to get a nicer output of Keras model.summary(), something that can be included in a paper or plotted as a clean table?

Answer 1: You need to install Graphviz and pydot, but you might like the results from this. It doesn't make a table, but in my opinion the graph is much better:

from keras.utils import plot_model
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)

But you would have to make properly named sub-models if you want to nest the several
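
A self-contained sketch of the answer's suggestion (it assumes pydot is pip-installed and Graphviz is installed on the system; the tiny two-layer model is only a placeholder so the snippet runs on its own):

```python
from tensorflow import keras
from tensorflow.keras import layers
from tensorflow.keras.utils import plot_model

model = keras.Sequential([
    layers.Dense(10, activation='relu', input_shape=(8,), name='hidden'),
    layers.Dense(1, activation='sigmoid', name='output'),
])

# Writes a layer-by-layer diagram, including tensor shapes, to model.png.
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)
```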

Why the gradient of tanh in TensorFlow is `grad = dy * (1 - y*y)`

倖福魔咒の submitted on 2021-02-08 04:46:24

Question: tf.raw_ops.TanhGrad says that grad = dy * (1 - y*y), where y = tanh(x). But I think that since dy/dx = 1 - y*y, where y = tanh(x), grad should be dy / (1 - y*y). Where am I wrong?

Answer 1: An expression like dy/dx is mathematical notation for the derivative; it is not an actual fraction, so it is meaningless to move dy or dx around individually as you would with a numerator and denominator. Mathematically, it is known that d(tanh(x))/dx = 1 - (tanh(x))^2. TensorFlow computes gradients
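
The key point is that in TanhGrad the argument dy is the upstream gradient dL/dy flowing in from the rest of the graph, so the chain rule gives dL/dx = dL/dy * (1 - y*y). A small check with tf.GradientTape (the factor 3.0 is arbitrary, chosen so the upstream gradient is not 1):

```python
import tensorflow as tf

x = tf.constant([0.5, -1.2, 2.0])

with tf.GradientTape() as tape:
    tape.watch(x)
    y = tf.tanh(x)
    loss = tf.reduce_sum(3.0 * y)   # upstream gradient dL/dy = 3 for every element

grad = tape.gradient(loss, x)

# Backprop computes dL/dx = dL/dy * (1 - y*y): the upstream gradient times the
# local tanh derivative, matching grad = dy * (1 - y*y) in tf.raw_ops.TanhGrad.
manual = 3.0 * (1.0 - y * y)
print(grad.numpy())
print(manual.numpy())  # same values
```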

TensorFlow per-channel quantization

六眼飞鱼酱① submitted on 2021-02-08 04:41:59

Question: Using the current TensorFlow quantization ops, how would I go about simulating per-channel quantization during inference? This paper defines per-layer quantization as:

"We can specify a single quantizer (defined by the scale and zero-point) for an entire tensor, referred to as per-layer quantization"

and per-channel quantization as:

"Per-channel quantization has a different scale and offset for each convolutional kernel."

Let's assume we have this subgraph:

import tensorflow as tf
x = np.random
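
One way to simulate per-channel quantization with stock TensorFlow ops is tf.quantization.fake_quant_with_min_max_vars_per_channel, which accepts one (min, max) pair per slice of the last axis instead of a single range for the whole tensor. The weight shape and the choice of ranges below are assumptions for illustration, not the question's actual subgraph:

```python
import numpy as np
import tensorflow as tf

# Convolution weights in HWIO layout: the last axis (4 output channels) is the
# per-channel axis.
w = tf.constant(np.random.randn(3, 3, 16, 4), dtype=tf.float32)

# One quantization range per output channel; per-layer quantization would use a
# single scalar (min, max) for the entire tensor instead.
ch_min = tf.reduce_min(w, axis=[0, 1, 2])   # shape (4,)
ch_max = tf.reduce_max(w, axis=[0, 1, 2])   # shape (4,)

w_fq = tf.quantization.fake_quant_with_min_max_vars_per_channel(
    w, ch_min, ch_max, num_bits=8, narrow_range=True)

print(w_fq.shape)  # (3, 3, 16, 4), values snapped to each channel's 8-bit grid
```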

Inference on GPU with Keras

…衆ロ難τιáo~ submitted on 2021-02-08 04:00:55

Question: I'm trying to make predictions with Keras using my RTX 2060 Super. For some reason, it appears to be running on my CPU instead. Here's the test script I was using for debugging:

import numpy as np
import tensorflow as tf
from keras import Sequential
from keras.layers import Conv2D, Flatten, Dense

def get_model():
    model = Sequential()
    model.add(Conv2D(32, (3, 3), input_shape=(6, 7, 3), activation='relu'))
    model.add(Conv2D(32, (3, 3), activation='relu'))
    model.add(Flatten())
    model.add(Dense(16,
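
A common first diagnostic is simply to confirm that TensorFlow can see the GPU at all and to log where ops are placed; a minimal sketch:

```python
import tensorflow as tf

# An empty list here means this TensorFlow build has no usable GPU (CPU-only
# wheel or mismatched CUDA/cuDNN), so Keras silently falls back to the CPU.
print(tf.config.list_physical_devices('GPU'))

# Logs the device for every op, making it obvious whether model.predict()
# actually runs on /GPU:0 or on the CPU.
tf.debugging.set_log_device_placement(True)
```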

How does tf.map_fn work?

风流意气都作罢 submitted on 2021-02-08 03:59:53

Question: Look at the demo:

elems = np.array([1, 2, 3, 4, 5, 6])
squares = map_fn(lambda x: x * x, elems)
# squares == [1, 4, 9, 16, 25, 36]

elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = map_fn(lambda x: x[0] * x[1], elems, dtype=tf.int64)
# alternate == [-1, 2, -3]

elems = np.array([1, 2, 3])
alternates = map_fn(lambda x: (x, -x), elems, dtype=(tf.int64, tf.int64))
# alternates[0] == [1, 2, 3]
# alternates[1] == [-1, -2, -3]

I can't understand the second and third examples. For the second: I
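
The behaviour the demo relies on is that tf.map_fn slices every tensor in elems along axis 0, hands fn one slice from each, and stacks whatever structure fn returns back together. A small sketch of the second and third cases (TF 2.x, where fn_output_signature is the current name for the dtype argument used in the demo):

```python
import numpy as np
import tensorflow as tf

# Second case: elems is a tuple, so fn receives a tuple of scalars, one slice
# from each array per step: (1, -1), (2, 1), (3, -1).
elems = (np.array([1, 2, 3]), np.array([-1, 1, -1]))
alternate = tf.map_fn(lambda x: x[0] * x[1], elems,
                      fn_output_signature=tf.int64)
print(alternate.numpy())      # [-1  2 -3]

# Third case: fn returns a tuple, so map_fn stacks all the first elements into
# one tensor and all the second elements into another.
elems = np.array([1, 2, 3])
alternates = tf.map_fn(lambda x: (x, -x), elems,
                       fn_output_signature=(tf.int64, tf.int64))
print(alternates[0].numpy())  # [1 2 3]
print(alternates[1].numpy())  # [-1 -2 -3]
```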

TensorFlow: Neural Network accuracy always 100% on train and test sets

回眸只為那壹抹淺笑 submitted on 2021-02-08 03:39:57

Question: I created a TensorFlow neural network that has 2 hidden layers with 10 units each, using ReLU activations and Xavier initialization for the weights. The output layer has 1 unit outputting a binary classification (0 or 1) via the sigmoid activation function, to classify whether it believes a passenger on the Titanic survived based on the input features. (The only code omitted is the load_data function, which populates the variables X_train, Y_train, X_test, Y_test used later in the program.)
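
The question's own code is not shown here, but a Keras sketch of the architecture it describes (two hidden layers of 10 ReLU units with Glorot/Xavier initialization and one sigmoid output) would look roughly like the following; the input width n_features is a hypothetical placeholder, since the Titanic feature count used in the original program is not given:

```python
import tensorflow as tf
from tensorflow.keras import layers

n_features = 7   # hypothetical; depends on how load_data() preprocesses the Titanic data

model = tf.keras.Sequential([
    layers.Dense(10, activation='relu',
                 kernel_initializer='glorot_uniform', input_shape=(n_features,)),
    layers.Dense(10, activation='relu', kernel_initializer='glorot_uniform'),
    layers.Dense(1, activation='sigmoid'),   # P(survived)
])

# With a sigmoid output the matching loss is binary cross-entropy; a constant
# 100% accuracy is usually worth double-checking against label leakage or a
# metric computed on the wrong tensor.
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.summary()
```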

IndexError: list index out of range in model.fit()

∥☆過路亽.° submitted on 2021-02-08 03:33:30

Question: I am new to using TensorFlow. I am trying to train my network with images of shape (16, 16). I have divided 3 grayscale images of 512x512 into 16x16 patches and appended them all, so I have a 3072x16x16 array. While training I am getting an error. I am using a Jupyter notebook. Can anyone please help me? Here is the code:

import tensorflow as tf
import numpy as np
from numpy import newaxis
import glob
import os
from PIL import Image, ImageOps
import random
from os.path import join
import matplotlib.pyplot as plt
from
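
The rest of the code and the traceback are not shown, but the patch-extraction step described (three 512x512 images split into 16x16 tiles gives 3 * 32 * 32 = 3072 patches) can be written as a reshape. The trailing channel axis added at the end is an assumption so that Conv2D layers and model.fit() receive the 4-D (batch, height, width, channels) input they expect:

```python
import numpy as np

def to_patches(img, patch=16):
    """Split a (512, 512) grayscale image into non-overlapping (patch, patch) tiles."""
    h, w = img.shape
    tiles = img.reshape(h // patch, patch, w // patch, patch).swapaxes(1, 2)
    return tiles.reshape(-1, patch, patch)                 # (1024, 16, 16) per image

images = [np.random.rand(512, 512) for _ in range(3)]      # stand-ins for the 3 loaded images
X = np.concatenate([to_patches(im) for im in images])      # (3072, 16, 16)

X = X[..., np.newaxis]                                     # (3072, 16, 16, 1)
print(X.shape)
```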