tensorflow

Can't import tensorflow in PyCharm

陌路散爱 submitted on 2021-01-28 08:42:16
Question: I'm trying to import tensorflow in PyCharm, but I get an error saying the module is not found. I used pip install to install tensorflow. When I look at the interpreter in PyCharm, it says I have pip version 9.0.1 and the latest is 10.0.1. I upgraded to 10.0.1 using the pip commands, and when I run pip --version it reports 10.0.1. I have tried both reinstalling PyCharm and creating new projects, with no luck. Answer 1: Go to Files -> Settings -> Project:projectname -> Project Interpreter
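
A frequent cause is that pip installed tensorflow into a different interpreter than the one the PyCharm project is configured to use. A small sketch for checking which interpreter the project actually runs, so the package can be installed into that one (the path in the comment is illustrative, not from the question):

```python
# Run this inside PyCharm's Python console (or a script launched from PyCharm)
# to see which interpreter the IDE is using; pip must install into this one.
import sys
print(sys.executable)

# Then install with that exact interpreter to avoid mismatched environments:
#   /path/to/that/python -m pip install tensorflow
```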

Expected a callable, found non-callable tensorflow_federated.python.learning.model_utils.EnhancedTrainableModel

寵の児 submitted on 2021-01-28 08:38:47
Question: I am unable to use TFF's build_federated_averaging_process(). I followed the tutorial from the TFF federated learning documentation. Here's my model code: X_train = <valuex> Y_train = <valuey> def model_fn(): model = tf.keras.models.Sequential([ tf.keras.layers.Conv1D(32,dtype="float64",kernel_size=3,padding='same',activation=tf.nn.relu,input_shape=(X_train.shape[1], X_train.shape[2])), tf.keras.layers.MaxPooling1D(pool_size=3), tf.keras.layers.Conv1D(64,kernel_size=3,padding='same',activation=tf.nn.relu),
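
A hedged sketch of the usual fix, assuming a reasonably recent TFF release: build_federated_averaging_process expects the model-building function itself (a no-argument callable), not the result of calling it, and model_fn should return a tff.learning.Model built from an uncompiled Keras model. The shapes, layer sizes, and optimizer below are placeholders, not the original code:

```python
import tensorflow as tf
import tensorflow_federated as tff

# Placeholder spec describing one client batch (features, labels);
# replace with the element_spec of your actual federated data.
input_spec = (
    tf.TensorSpec(shape=[None, 20, 1], dtype=tf.float32),
    tf.TensorSpec(shape=[None], dtype=tf.int64),
)

def model_fn():
    # Build a fresh, *uncompiled* Keras model on every call.
    keras_model = tf.keras.models.Sequential([
        tf.keras.layers.Conv1D(32, kernel_size=3, padding='same',
                               activation='relu', input_shape=(20, 1)),
        tf.keras.layers.MaxPooling1D(pool_size=3),
        tf.keras.layers.Conv1D(64, kernel_size=3, padding='same', activation='relu'),
        tf.keras.layers.GlobalMaxPooling1D(),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    return tff.learning.from_keras_model(
        keras_model,
        input_spec=input_spec,
        loss=tf.keras.losses.SparseCategoricalCrossentropy(),
        metrics=[tf.keras.metrics.SparseCategoricalAccuracy()])

# Pass the callable itself -- not model_fn() and not a model instance.
iterative_process = tff.learning.build_federated_averaging_process(
    model_fn,
    client_optimizer_fn=lambda: tf.keras.optimizers.SGD(learning_rate=0.02))
```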

Keras CNN with 1D data

淺唱寂寞╮ submitted on 2021-01-28 08:31:25
Question: Every instance of my data is an array with 72 elements. I am trying to construct a 1D CNN to do some classification, but I get this error: Error when checking target: expected dense_31 to have 3 dimensions, but got array with shape (3560, 1) This is my code: training_features = np.load('features.npy') training_labels = np.load('labels.npy') training_features = training_features.reshape(-1, 72, 1) model = Sequential() model.add(Conv1D(64, 3, activation='relu', input_shape=(72, 1))) model.add
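
The error means the final Dense layer is being applied to the 3-D output of Conv1D, so the model's output has shape (batch, timesteps, units) while the labels have shape (3560, 1). A minimal sketch of one fix, assuming binary labels and tensorflow.keras (layer sizes and the random data are illustrative): add a Flatten (or global pooling) layer before the Dense classifier.

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Conv1D, MaxPooling1D, Flatten, Dense

# Hypothetical stand-in data: 3560 samples of 72 features, binary labels.
training_features = np.random.rand(3560, 72, 1).astype('float32')
training_labels = np.random.randint(0, 2, size=(3560, 1))

model = Sequential([
    Conv1D(64, 3, activation='relu', input_shape=(72, 1)),
    MaxPooling1D(2),
    # Flatten collapses the (timesteps, channels) Conv1D output to 2-D,
    # so the final Dense layer produces a (batch, 1) output matching the labels.
    Flatten(),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),
])
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(training_features, training_labels, epochs=1, batch_size=32)
```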

How to shift values in a tensor

╄→гoц情女王★ submitted on 2021-01-28 08:21:21
Question: I have a tensor T of shape [batch_size, A] with values and a tensor S of shape [batch_size] with shift parameters. I would like to shift the values in T[b] by S[b] positions to the right; the last S[b] elements of T[b] should be dropped and the new elements should be set to 0. So basically I want to do something like: for i in range(batch_size): T[i] = zeros[:S[i]] + T[i, :A-S[i]] Example: For: T = [[1, 2, 3], [4, 5, 6]] S = [1, 2] Return: T' = [[0, 1, 2], [0, 0, 4]] Is there some easy way to do it? Answer 1:
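
The original answer is truncated above. As a sketch of one possible approach (the helper name shift_right_and_zero is made up for illustration), the per-row shift can be expressed with tf.gather using batch_dims, plus a mask for the positions that should become zero:

```python
import tensorflow as tf

def shift_right_and_zero(T, S):
    """Shift each row T[b] right by S[b] positions, filling vacated slots with 0."""
    A = tf.shape(T)[1]
    # For each output position i, the source position in the original row is i - S[b].
    src = tf.range(A)[tf.newaxis, :] - S[:, tf.newaxis]        # shape [batch, A]
    valid = src >= 0                                           # positions that came from T
    gathered = tf.gather(T, tf.maximum(src, 0), batch_dims=1, axis=1)
    return tf.where(valid, gathered, tf.zeros_like(gathered))

T = tf.constant([[1, 2, 3], [4, 5, 6]])
S = tf.constant([1, 2])
print(shift_right_and_zero(T, S))   # [[0, 1, 2], [0, 0, 4]]
```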

Reducing .tflite model size

吃可爱长大的小学妹 submitted on 2021-01-28 08:17:25
Question: All of the zoo .tflite models I see are no more than 3MB in size, and they run fine on an Edge TPU. However, when I train my own object detection model, the .pb file is 60MB and the .tflite is also huge at 20MB! It is also quantized, as per below. The end result is segmentation faults with an Edge TPU object_detection model. What is causing this file to be so large? Do non-resized images being fed into the model cause the model to be large (some photos were 4096×2160 and not resized)? From object
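
The size of the exported file is determined by the number and precision of the weights in the graph, not by the resolution of the training photos, so un-resized images are not the cause; a larger architecture or a float (non-quantized) export is. A hedged sketch of post-training full-integer quantization with the TF2 converter, assuming the model was exported as a SavedModel (paths and the 300x300 input size are placeholders); this typically shrinks a float model by roughly 4x and produces the int8 tensors the Edge TPU compiler expects:

```python
import numpy as np
import tensorflow as tf

# Hypothetical path; adjust to your exported SavedModel directory.
converter = tf.lite.TFLiteConverter.from_saved_model('exported_model/saved_model')

# A representative dataset lets the converter calibrate activation ranges,
# so weights *and* activations can be stored as 8-bit integers.
def representative_dataset():
    for _ in range(100):
        # Replace with real, preprocessed images at the model's input size.
        yield [np.random.rand(1, 300, 300, 3).astype(np.float32)]

converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.uint8   # or tf.int8, depending on TF version
converter.inference_output_type = tf.uint8

tflite_model = converter.convert()
with open('model_quant.tflite', 'wb') as f:
    f.write(tflite_model)
```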

How can you map values in a tf.data.Dataset using a dictionary

戏子无情 submitted on 2021-01-28 07:52:58
Question: Here is a simple use-case of a desired mapping: mapping integer labels to one-hot encodings. I should mention that for this particular case one would normally use tf.one_hot, but I want to understand how you could map a dataset using a dictionary anyway. import tensorflow as tf import numpy as np #CREATE A ONE-HOT ENCODING MAPPING mike_labels = [164, 117, 132, 37, 66, 177, 225, 33, 28, 75, 7] num_classes = len(mike_labels) one_hots = np.eye(len(mike_labels)) one_hots = one_hots.tolist() #used
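
One way to do a dictionary-style mapping inside dataset.map is a tf.lookup.StaticHashTable, which maps each raw label to a dense index in graph mode; the one-hot vector can then be produced from that index. A minimal sketch reusing the labels from the question (the sample dataset values are made up). A plain Python dict lookup would not work inside map, because the mapped function receives symbolic tensors, not Python ints:

```python
import tensorflow as tf

mike_labels = [164, 117, 132, 37, 66, 177, 225, 33, 28, 75, 7]
num_classes = len(mike_labels)

# Graph-compatible "dictionary": raw label -> dense class index.
table = tf.lookup.StaticHashTable(
    tf.lookup.KeyValueTensorInitializer(
        keys=tf.constant(mike_labels, dtype=tf.int64),
        values=tf.constant(list(range(num_classes)), dtype=tf.int64)),
    default_value=-1)

dataset = tf.data.Dataset.from_tensor_slices(
    tf.constant([164, 7, 225], dtype=tf.int64))

# Look up the dense index, then expand it to a one-hot vector.
dataset = dataset.map(lambda label: tf.one_hot(table.lookup(label), num_classes))

for one_hot in dataset:
    print(one_hot.numpy())
```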

Keras multiple input, output, loss model

强颜欢笑 submitted on 2021-01-28 07:40:25
Question: I am working on a super-resolution GAN and have some doubts about code I found on GitHub. In particular, the model has multiple inputs and multiple outputs, and I have two different loss functions. In the following code, will the mse loss be applied to img_hr and fake_features? # Build and compile the discriminator self.discriminator = self.build_discriminator() self.discriminator.compile(loss='mse', optimizer=optimizer, metrics=['accuracy']) # Build the generator self.generator =
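
In Keras, each compile call governs only that model's own outputs: the discriminator's loss='mse' applies to the discriminator's single validity output, while a multi-output model takes a list (or dict) of losses matched to its outputs by position (or by name). A minimal sketch illustrating that positional pairing, with made-up layer names and weights; in the SRGAN-style combined model the question refers to, this pairing is what decides which output the mse is applied to:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

# Tiny two-output model just to show how Keras pairs losses with outputs.
inp = layers.Input(shape=(8,))
out_a = layers.Dense(1, activation='sigmoid', name='validity')(inp)
out_b = layers.Dense(4, name='features')(inp)
model = Model(inp, [out_a, out_b])

# Losses are matched to outputs by position (or by name with a dict):
# 'binary_crossentropy' -> validity, 'mse' -> features.
model.compile(optimizer='adam',
              loss=['binary_crossentropy', 'mse'],
              loss_weights=[1e-3, 1.0])
```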

Tensorflow: uninitialized variables despite running global_variables_initializer

泄露秘密 submitted on 2021-01-28 07:30:42
Question: I'm new to Tensorflow; I worked with Caffe previously. I'm trying to implement http://cvlab.cse.msu.edu/pdfs/Tai_Yang_Liu_CVPR2017.pdf in Tensorflow. I'm having trouble with variables in Tensorflow, despite having initialized them. I tried using tf.get_variable instead of tf.Variable, but this didn't work, and setting initializer=tf.contrib.layers.xavier_initializer() did nothing. My code: import tensorflow as tf import sys, os import numpy as np global xseed def get_model(inp, train): #create
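
In TF1 graph mode, tf.global_variables_initializer() only covers the variables that already exist in the graph at the moment the op is created and run; variables created afterwards, or in a different graph or session, stay uninitialized. A minimal sketch of the usual ordering, assuming TF 1.x (the names and shapes are illustrative, not the paper's model):

```python
import tensorflow as tf  # assumes TensorFlow 1.x

# Build the whole graph first...
x = tf.placeholder(tf.float32, shape=[None, 4])
w = tf.get_variable('w', shape=[4, 2],
                    initializer=tf.contrib.layers.xavier_initializer())
y = tf.matmul(x, w)

# ...then create and run the initializer *after* every variable exists.
init_op = tf.global_variables_initializer()

with tf.Session() as sess:
    sess.run(init_op)
    print(sess.run(y, feed_dict={x: [[1., 2., 3., 4.]]}))
```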

How to predict a single image with Keras ImageDataGenerator?

∥☆過路亽.° submitted on 2021-01-28 07:22:15
Question: I have trained a CNN to classify images into 3 classes. While training the model I used the ImageDataGenerator class from Keras to apply a preprocessing function to each image and rescale it. Now my network is trained with good accuracy on the test set, but I don't know how to apply the preprocessing function and rescaling when predicting on a single image; if I use ImageDataGenerator it expects a directory. Please suggest some alternatives for preprocessing and rescaling a single image. See my code below TRAINING SET:
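
One common approach is to apply the same rescaling and preprocessing_function by hand and add a batch dimension before calling model.predict. A hedged sketch, where the file names, target size, and my_preprocessing_function are placeholders for whatever the training ImageDataGenerator actually used:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.preprocessing import image

# Hypothetical path to the trained 3-class CNN.
model = tf.keras.models.load_model('my_cnn.h5')

# Load a single image at the same size the generator used during training.
img = image.load_img('test.jpg', target_size=(224, 224))
x = image.img_to_array(img)
x = x * (1.0 / 255)                  # same rescale as ImageDataGenerator(rescale=1./255)
# x = my_preprocessing_function(x)   # apply the same preprocessing_function, if any
x = np.expand_dims(x, axis=0)        # model.predict expects a batch dimension

preds = model.predict(x)
class_idx = np.argmax(preds[0])
print(class_idx, preds[0])
```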

Cannot import tensorflow inside the project folder

你。 submitted on 2021-01-28 07:16:17
Question: I can import tensorflow in my home directory, but when I change to the project directory I get an import error, as you can see in the screenshot below. You can also see the project folder contents in the screenshot; even if I remove the __pycache__ folder, it is recreated with the same error. Last line of the error: ImportError: cannot import name 'constant' from partially initialized module 'tensorflow.python.framework.constant_op' (most likely due to a circular import) (/home/prakhar/.local/lib
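
This kind of "partially initialized module" error often appears when something in the project folder shadows the real package or one of its submodules (for example a local file or directory named tensorflow), so Python resolves part of the import against the project instead of site-packages. A small diagnostic sketch, assuming that is the cause:

```python
# Run this from inside the project directory to see which 'tensorflow'
# Python would load, without actually importing (and re-triggering) it.
import importlib.util

spec = importlib.util.find_spec('tensorflow')
print(spec.origin if spec else 'tensorflow not found')
# If this prints a path inside the project instead of site-packages
# (or ~/.local/lib), a local file or folder is shadowing the real
# package; rename or remove it and delete the stale __pycache__.
```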