tensorflow

How to input multiple features for TensorFlow model inference

Submitted by 和自甴很熟 on 2021-01-29 12:59:09
Question: I'm trying to test model serving. I'm following this example: "https://www.tensorflow.org/beta/guide/saved_model". The example works fine, but in my case I have multiple input features.

loaded = tf.saved_model.load(export_path)
infer = loaded.signatures["serving_default"]
print(infer.structured_input_signature)
# => ((), {'input1': TensorSpec(shape=(None, 1), dtype=tf.int32, name='input1'),
#          'input2': TensorSpec(shape=(None, 1), dtype=tf.int32, name='input2')})

In the example, for a single input
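Not part of the original excerpt, but for context: a concrete function obtained from loaded.signatures accepts each named input from structured_input_signature as a keyword argument. A minimal sketch of a multi-input inference call, assuming the two int32 inputs of shape (None, 1) shown above:

import tensorflow as tf

loaded = tf.saved_model.load(export_path)   # export_path as in the excerpt above
infer = loaded.signatures["serving_default"]

# Pass one keyword argument per named input in the signature.
result = infer(
    input1=tf.constant([[3]], dtype=tf.int32),
    input2=tf.constant([[7]], dtype=tf.int32),
)
print(result)   # dict mapping output names to tensors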

Error in Keras: “AttributeError: 'Tensor' object has no attribute '_keras_history'”?

Submitted by 走远了吗. on 2021-01-29 12:50:50
Question: I am using Google Colab; it's a pretty simple network that uses an LSTM-BiLSTM and a CRF. But I get the error "AttributeError: 'Tensor' object has no attribute '_keras_history'" when model.fit() is called. I understand that I shouldn't have any + operations or numpy.add() calls and should replace them with Add(), but this is not my case. I also tried wrapping it in a Lambda function, but it didn't work out (I think I wasn't doing it right). Any help would be highly appreciated. This is my code:

input = Input(shape=
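As general background (not from the original post): this error usually means a raw TensorFlow or NumPy operation was applied to a Keras tensor outside of a layer, so Keras loses the layer history it needs to build the model. A hypothetical sketch of the usual fix, keeping every operation inside a layer such as Add() or Lambda:

from tensorflow.keras.layers import Input, Dense, Lambda, Add
from tensorflow.keras.models import Model

inp_a = Input(shape=(16,))
inp_b = Input(shape=(16,))

# Add() and Lambda keep the operations inside Keras layers,
# so the resulting tensors retain their _keras_history metadata.
summed = Add()([inp_a, inp_b])
scaled = Lambda(lambda t: t * 0.5)(summed)

out = Dense(1)(scaled)
model = Model([inp_a, inp_b], out)
model.summary()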

TensorFlow2-tf.keras: Loss and model weights suddenly become 'nan' when training MTCNN PNet

Submitted by 安稳与你 on 2021-01-29 12:21:01
Question: I was trying to use TFRecords to train the PNet of MTCNN. At first the loss decreased smoothly for the first few epochs, and then it became 'nan', and so did the model weights. Below are my model structure and training results:

def pnet_train1(train_with_landmark = False):
    X = Input(shape = (12, 12, 3), name = 'Pnet_input')
    M = Conv2D(10, 3, strides = 1, padding = 'valid', kernel_initializer = glorot_normal, kernel_regularizer = l2(0.00001), name = 'Pnet_conv1')(X)
    M = PReLU(shared_axes =
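Not from the original question, but as general context: when a loss that was decreasing suddenly turns into NaN, a common first mitigation is to bound the gradients and/or lower the learning rate. A hypothetical sketch using Keras' built-in clipnorm option, assuming a compiled model such as the PNet above:

import tensorflow as tf

# clipnorm caps gradient norms, so one bad batch is less likely to push
# the weights to NaN; the learning rate is also kept small.
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4, clipnorm=1.0)
model.compile(optimizer=optimizer, loss='binary_crossentropy')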

How to create a TensorFlow input pipeline for a local dataset available as JSON in this format?

Submitted by 风格不统一 on 2021-01-29 12:01:08
Question:

examples, metadata = tfds.load('ted_hrlr_translate/pt_to_en', with_info=True, as_supervised=True)
train_examples, val_examples = examples['train'], examples['validation']
type(train_examples)
# Output: tensorflow.python.data.ops.dataset_ops._OptionsDataset
for i in train_examples:
    print(type(i), i)
# <class 'tuple'> (<tf.Tensor: shape=(), dtype=string, numpy=b'os astr\xc3\xb3nomos acreditam que cada estrela da gal\xc3\xa1xia tem um planeta , e especulam que at\xc3\xa9 um quinto deles tem um
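Not from the original post: to build the same kind of (source, target) string dataset from a local JSON file instead of tfds.load, one option is to read the file and use tf.data.Dataset.from_tensor_slices. The file name and the "pt"/"en" keys below are hypothetical placeholders for whatever layout the local JSON actually has:

import json
import tensorflow as tf

# Hypothetical layout: a JSON list of {"pt": "...", "en": "..."} records.
with open('translations.json', 'r', encoding='utf-8') as f:
    records = json.load(f)

pt_sentences = [r['pt'] for r in records]
en_sentences = [r['en'] for r in records]

# Yields (source, target) string tensor pairs, mirroring the tuples
# produced by tfds.load(..., as_supervised=True) above.
dataset = tf.data.Dataset.from_tensor_slices((pt_sentences, en_sentences))
for pt, en in dataset.take(1):
    print(pt.numpy(), en.numpy())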

tensorflow variable printing as NaN

Submitted by 巧了我就是萌 on 2021-01-29 11:15:39
Question:

import tensorflow as tf
import numpy as np

x = tf.constant(0, name='x')
n = tf.constant(0, name='n')
y = tf.Variable(x/n, name='y')
model = tf.global_variables_initializer()

with tf.Session() as session:
    session.run(model)
    for i in range(5):
        x = x + np.random.randint(1000)
        n = n + 1
        print(session.run(x))
        print(session.run(n))
        print(session.run(y))

I'm trying to print a rolling average of the random numbers generated by np.random.randint(1000). Here's the output:

378 1 nan 1242 2 nan 2020 3
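Not part of the original post, but for context: the NaN comes from tf.Variable(x/n) being initialized exactly once with x/n = 0/0; it never sees the later reassignments of the Python names x and n. A hypothetical TF1-style rework in which the average is an ordinary tensor recomputed from the current variable values:

import numpy as np
import tensorflow as tf

x = tf.Variable(0.0, name='x')
n = tf.Variable(0.0, name='n')
sample = tf.placeholder(tf.float32, shape=[])
update_x = x.assign_add(sample)
update_n = n.assign_add(1.0)
# y is a plain tensor, so it is re-evaluated from the current x and n each run.
y = x / n

with tf.Session() as session:
    session.run(tf.global_variables_initializer())
    for _ in range(5):
        session.run([update_x, update_n],
                    feed_dict={sample: np.random.randint(1000)})
        print(session.run([x, n, y]))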

Input multiple datasets to tensorflow model

Submitted by ぃ、小莉子 on 2021-01-29 10:30:33
Question: Hi, I'm trying to input multiple datasets into a model. This is an example of my problem; in my case, one of my models has 2 input parameters while the other one has one. The error I get is:

Failed to find data adapter that can handle input: (<class 'list'> containing values of types {"<class 'tensorflow.python.data.ops.dataset_ops.BatchDataset'>", "<class 'tensorflow.python.data.ops.dataset_ops.TakeDataset'>"}), <class 'NoneType'>

Code:

import tensorflow as tf
# Create first
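Not from the original post: model.fit generally cannot take a Python list of separate tf.data datasets; a multi-input Keras model expects a single dataset whose elements are a tuple (or dict) of inputs plus the labels. A hypothetical sketch using tf.data.Dataset.zip with stand-in data:

import numpy as np
import tensorflow as tf

# Stand-in data for a two-input model.
ds_a = tf.data.Dataset.from_tensor_slices(np.random.rand(100, 8).astype('float32'))
ds_b = tf.data.Dataset.from_tensor_slices(np.random.rand(100, 4).astype('float32'))
labels = tf.data.Dataset.from_tensor_slices(
    np.random.randint(0, 2, size=(100, 1)).astype('float32'))

# One dataset of ((input_a, input_b), label) elements.
train_ds = tf.data.Dataset.zip(((ds_a, ds_b), labels)).batch(16)

in_a = tf.keras.Input(shape=(8,))
in_b = tf.keras.Input(shape=(4,))
x = tf.keras.layers.Concatenate()([in_a, in_b])
out = tf.keras.layers.Dense(1, activation='sigmoid')(x)
model = tf.keras.Model([in_a, in_b], out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.fit(train_ds, epochs=1)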

How do I get reproducible results with Tensorflow 2.0?

Submitted by 时光总嘲笑我的痴心妄想 on 2021-01-29 10:23:12
Question: I have seen this FAQ and this Stack Overflow question about reproducibility in Keras and TF 1.x. How do I do something similar in TF 2.0, since it no longer has tf.Session? I know I could still set the graph seed and the seed for each initializer in a layer by passing something like tf.keras.initializers.GlorotNormal(seed=10). However, I am wondering if there is something more convenient.

Answer 1: Consider using tf.random.set_seed(seed) at startup. In my use cases it provides reproducible
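Not part of the original answer, but a minimal sketch of the seeding it suggests, together with the Python and NumPy seeds that are commonly set alongside it:

import random
import numpy as np
import tensorflow as tf

SEED = 10
random.seed(SEED)         # Python's built-in RNG
np.random.seed(SEED)      # NumPy RNG used by much preprocessing code
tf.random.set_seed(SEED)  # global TensorFlow seed (replaces the TF1 graph seed)

# Per-layer initializer seeds can still be set on top of this if desired:
layer = tf.keras.layers.Dense(
    8, kernel_initializer=tf.keras.initializers.GlorotNormal(seed=SEED))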

Getting true labels for keras predictions

Submitted by 时光怂恿深爱的人放手 on 2021-01-29 10:15:17
Question: I have a standard CNN for image classification and use the following generator to get the dataset:

generator = validation_image_generator.flow_from_directory(batch_size=BATCH_SIZE,
                                                           directory=val_dir,
                                                           shuffle=False,
                                                           target_size=(100, 100),
                                                           class_mode='categorical')

I can easily get the predicted labels with:

predictions = model.predict(dataset)

Now I want to get the (original) true labels and images for all the predictions, in the same order as the predictions, in order to compare them. I am sure
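Not from the original post: because the generator above is created with shuffle=False, its files are yielded in a fixed order, so its classes and filenames attributes line up one-to-one with the rows returned by model.predict. A hypothetical sketch of the comparison:

import numpy as np

predictions = model.predict(generator)
predicted_labels = np.argmax(predictions, axis=1)

true_labels = generator.classes                 # integer class indices, same order
filenames = generator.filenames                 # file paths, same order
class_names = list(generator.class_indices.keys())

accuracy = np.mean(predicted_labels == true_labels)
print(accuracy)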

Graph disconnected: cannot obtain value for tensor Tensor(“conv2d_1_input:0”, shape=(?, 128, 128, 1), dtype=float32)

Submitted by 时间秒杀一切 on 2021-01-29 10:01:26
Question: I'm trying to implement an autoencoder that gets 3 different inputs and fuses these three images. I want to take the output of a layer in the encoder and concatenate it with a layer in the decoder, but when I run it I get a graph disconnected error. Here is my code:

def create_model(input_shape):
    input_1 = keras.layers.Input(input_shape)
    input_2 = keras.layers.Input(input_shape)
    input_3 = keras.layers.Input(input_shape)
    network = keras.models.Sequential([
        keras.layers.Conv2D(32, (7, 7), activation
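Not from the original post: a "graph disconnected" error usually means an output is requested from a tensor whose inputs were created in a different graph, for example a layer buried inside a separate Sequential model. A hypothetical functional-API sketch in which the encoder tensor used for the skip connection and the decoder belong to the same graph:

from tensorflow import keras

def create_autoencoder(input_shape=(128, 128, 1)):
    inp = keras.layers.Input(shape=input_shape)

    # Encoder: keep a handle on the tensor we want to reuse later.
    skip = keras.layers.Conv2D(32, (3, 3), padding='same', activation='relu')(inp)
    x = keras.layers.MaxPooling2D()(skip)
    x = keras.layers.Conv2D(64, (3, 3), padding='same', activation='relu')(x)

    # Decoder: upsample and concatenate with the encoder tensor (skip connection).
    x = keras.layers.UpSampling2D()(x)
    x = keras.layers.Concatenate()([x, skip])
    out = keras.layers.Conv2D(1, (3, 3), padding='same', activation='sigmoid')(x)

    # Every tensor between inp and out lives in one connected graph.
    return keras.Model(inputs=inp, outputs=out)

model = create_autoencoder()
model.summary()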