
AttributeError: module 'tensorflow.python.framework.ops' has no attribute 'RegisterShape'

Submitted by 萝らか妹 on 2021-02-11 17:41:54
Question: I am using TensorFlow 2.1.0-dev20191125. Unfortunately I can't run a simple example; it fails with: "AttributeError: module 'tensorflow.python.framework.ops' has no attribute 'RegisterShape'". My source code:

    from tensorflow.python.framework import ops as _ops
    _ops.RegisterShape("GRUBlockCell")(None)

Does this mean TF is installed incorrectly?

Answer 1: No. Shape functions for core ops were moved to C++ and are now registered via REGISTER_OP(...).SetShapeFn(...), so ops.RegisterShape no longer exists on the Python side. You may have to create/register your operation's shape function in C++ first.

“TypeError: 'Session' object is not callable” error running sess = tf.compat.v1.Session()(graph=tf.compat.v1.get_default_graph(), config=session_conf)

Submitted by |▌冷眼眸甩不掉的悲伤 on 2021-02-11 17:37:20
Question: I'm trying to set seeds and configure Keras settings to ensure my experiments are reproducible. When I run the following (based on code in an answer to this question):

    # Import libraries
    import numpy as np
    import pandas as pd
    import tensorflow as tf
    from tensorflow.keras.models import load_model
    from tensorflow.keras.regularizers import l2
    # for setting seeds and configuring keras so that experiments are reproducible
    from numpy.random import seed
    import random as rn
    import os
    from tensorflow
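The error in the title comes from the extra pair of parentheses: tf.compat.v1.Session() already constructs the session, and the second (graph=..., config=...) then tries to call the resulting Session object. A minimal pure-Python reproduction of the pattern (the Session class here is a stand-in for illustration, not TensorFlow's):

```python
class Session:
    """Stand-in for tf.compat.v1.Session: configured via the constructor only."""
    def __init__(self, graph=None, config=None):
        self.graph = graph
        self.config = config

# Buggy pattern: Session()(...) builds an instance, then tries to *call* it.
try:
    sess = Session()(graph="g", config="c")
except TypeError as e:
    print(e)  # 'Session' object is not callable

# Fixed pattern: pass graph and config straight to the constructor.
sess = Session(graph="g", config="c")
```

The fix for the real code is the same shape: sess = tf.compat.v1.Session(graph=tf.compat.v1.get_default_graph(), config=session_conf), with only one pair of parentheses after Session.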

Is there a queue-like dataset?

Submitted by 谁说我不能喝 on 2021-02-11 17:09:02
Question: It seems that tf.data.Dataset provides a more flexible and sophisticated alternative to TF queues (subclasses of QueueBase). (E.g. a TF queue cannot really be reopened after it has been closed; see here and here.) (There also seem to be some downsides to Dataset, such as that it runs (mostly) on the CPU.) I liked FIFOQueue. Is there an equivalent Dataset? More specifically, I have one (or multiple) background threads which would get data from somewhere (might not be TF related), and this
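One common workaround (a sketch, not the only option) is to let the background thread feed a standard queue.Queue and drain it from a Python generator; on the TF side that generator could then be wrapped with tf.data.Dataset.from_generator. The TF-free core of the pattern, with a sentinel standing in for the queue's missing close() operation:

```python
import queue
import threading

q = queue.Queue(maxsize=10)  # FIFOQueue-like bounded buffer between threads
SENTINEL = object()          # signals "queue closed", since queue.Queue has no close()

def producer():
    """Background thread: fetch data from anywhere and enqueue it."""
    for item in range(5):
        q.put(item)          # blocks when the buffer is full
    q.put(SENTINEL)          # announce end of stream

def records():
    """Generator draining the queue. tf.data.Dataset.from_generator(records, ...)
    could consume this (assumption: items would be arrays/tensors in real use)."""
    while True:
        item = q.get()
        if item is SENTINEL:
            return
        yield item

t = threading.Thread(target=producer)
t.start()
print(list(records()))  # [0, 1, 2, 3, 4]
t.join()
```

Because the generator is plain Python, reopening the stream is just creating a fresh queue and generator, which sidesteps the "cannot reopen a closed queue" limitation mentioned above.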

Print() or not to print() a tensorflow.python.framework.ops.EagerTensor - difference between print(eagertensor) & eagertensor in a notebook cell

Submitted by 让人想犯罪 __ on 2021-02-11 15:59:29
Question: When I work with Python, I am used to working in a notebook environment (either Jupyter Notebook locally or Google Colab, where my examples below are tested). I sometimes omit the print() command when I am interested in a variable. Example:

    for each in ['a','b','c']:
        print(each)

This prints a, b, c, as expected. However,

    for each in ['a','b','c']:
        each

doesn't print anything. If I later type each in a new cell, the notebook prints 'c', as expected. If I write print(each), I get c as
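The difference is between a bare expression statement and an explicit print(): in a script (or mid-cell), the value of a bare expression is evaluated and silently discarded, while a notebook echoes the repr() of the last expression in a cell, and print() writes str() to stdout. A stdlib-only illustration of why you see c in one case and 'c' in the other:

```python
each = 'c'

each               # bare expression statement: evaluated, result discarded in a script

print(each)        # writes str(each)  -> c
print(repr(each))  # writes repr(each) -> 'c'   (what a notebook cell echoes)
```

The same split explains the EagerTensor case in the title: print(tensor) goes through str(), while echoing the tensor at the end of a cell shows its repr(), which includes extra metadata such as shape and dtype.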

Why are TensorFlow results different in Python versions 3.5 and 3.7?

Submitted by 雨燕双飞 on 2021-02-11 15:58:58
Question: Why are TensorFlow results different in Python 3.5 (SQL Server, Machine Learning Services) and 3.7 (local machine, Anaconda)? I found out that it depends on four parameter values:

- dataset size
- number of epochs
- number of 1st-layer (input) neurons
- number of 2nd-layer (hidden) neurons

Here is the example. Identical results: dataset size 50 000; number of epochs 5/3/2; number of 1st-layer (input) neurons 300; number of 2nd-layer (hidden) neurons 80% of the 1st layer. Different results:
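Differences like these usually trace back to seeding and floating-point/threading nondeterminism in the two environments rather than the Python version itself. As a baseline, every random source must be seeded identically in both environments; for TensorFlow that means also calling numpy.random.seed and tf.random.set_seed and pinning PYTHONHASHSEED before the interpreter starts. This stdlib-only sketch shows just the seeding principle:

```python
import random

def run_experiment(seed):
    """Seed every random source up front, then draw the 'training' randomness.
    (In a real TF setup you would also seed numpy and tf.random here.)"""
    random.seed(seed)
    return [random.random() for _ in range(3)]

a = run_experiment(42)
b = run_experiment(42)
print(a == b)  # True: identical seeds give identical draws within one environment
```

Even with identical seeds, results can still diverge across machines because of different BLAS/cuDNN builds, thread scheduling, and float summation order, which is consistent with the divergence appearing only for larger layer and dataset sizes.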

Predict image angle with RotNet and Python

Submitted by  ̄綄美尐妖づ on 2021-02-11 15:37:35
Question: Hi, I am working on a classification model to predict the angle of an image in Python. For that I am following the "Correcting Image Orientation Using Convolutional Neural Networks" tutorial with RotNet. The tutorial is well explained, but training gets stuck after 5 to 7 steps with an angle error of 101, whereas the tutorial says we only need to wait about 10 epochs to get an average angle error of 1-2 degrees! I am not getting there. Does anyone have any idea what I am doing wrong? Or is this happening because I am

Cannot load model weights saved to GCP with keras.save_weights; need to transfer to a new bucket to load weights

Submitted by 余生颓废 on 2021-02-11 15:36:00
Question: I am training on Google Colab with data and model weights loaded from / saved to GCP. I am using a Keras callback to save the weights to GCP. This is what the callback looks like:

    callbacks = [tf.keras.callbacks.ModelCheckpoint(
        filepath='gs://mybucket/' + 'savename' + '_loss_{loss:.2f}',
        monitor='loss', verbose=1, save_weights_only=True, save_freq='epoch')]

The training saves the model weights successfully to my GCP bucket, but when I try to load those weights in a new session, the cell just hangs,

Output of Keras predict method has the wrong shape when using Google Colab's tpu strategy

Submitted by ぃ、小莉子 on 2021-02-11 15:14:34
Question: I made the following architecture:

    Layer (type)                 Output Shape              Param #
    =================================================================
    embedding_7 (Embedding)      (None, 50, 64)            512000
    _________________________________________________________________
    bidirectional_5 (Bidirection (None, 200)               132000
    _________________________________________________________________
    dense_9 (Dense)              (None, 1)                 201
    =================================================================
    Total params: 644,201
    Trainable params: