keras

Are Keras custom layer parameters non-trainable by default?

Submitted by 喜夏-厌秋 on 2021-02-08 08:42:11
Question: I built a simple custom layer in Keras and was surprised to find that its parameters were not set to trainable by default. I can get it to work by explicitly setting the trainable attribute, but I can't explain why this is by looking at the documentation or code. Is this how it is supposed to be, or am I doing something wrong that makes the parameters non-trainable by default? Code: import tensorflow as tf class MyDense(tf.keras.layers.Layer): def __init__(self, **kwargs): super(MyDense, self)._
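
A minimal sketch of the expected behaviour (assuming TF 2.x; the layer below is my own illustration, not the asker's code): weights created through `Layer.add_weight` default to `trainable=True`, so no explicit setting should be needed. Parameters usually end up non-trainable only when they are created some other way that Keras does not track.

```python
import tensorflow as tf

class MyDense(tf.keras.layers.Layer):
    """Minimal custom dense layer for illustration."""
    def __init__(self, units=4, **kwargs):
        super().__init__(**kwargs)
        self.units = units

    def build(self, input_shape):
        # trainable defaults to True here; omitting it does NOT
        # make the kernel non-trainable.
        self.kernel = self.add_weight(
            name="kernel",
            shape=(int(input_shape[-1]), self.units),
            initializer="glorot_uniform",
        )

    def call(self, inputs):
        return tf.matmul(inputs, self.kernel)

layer = MyDense(units=4)
layer.build((None, 3))
print(len(layer.trainable_weights))  # the kernel shows up as trainable
```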

Integer Series prediction using Keras

Submitted by 巧了我就是萌 on 2021-02-08 08:11:09
Question: I'm trying to code an RNN model that will predict the next number in an integer series. The model loss gets smaller with each epoch, but the predictions never get quite accurate. I've tried many training-set sizes and numbers of epochs, but my predicted value is always off from the expected one by a few digits. Can you give me some hints on what to improve, or what I'm doing wrong? This is the code: from keras.models import Sequential from keras.layers import Dense, Dropout, LSTM from keras.callbacks
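
A hedged sketch of one common setup for this task (the toy data and layer sizes below are my own assumptions, not from the question): predictions that are consistently "off by a few digits" often come from feeding raw unscaled integers, or from using a non-linear output head. Normalising the series and using a linear regression head with MSE is the usual starting point:

```python
import numpy as np
import tensorflow as tf

SEQ_LEN, MAX_N = 5, 100
series = np.arange(MAX_N, dtype="float32")
# sliding windows: [0..4] -> 5, [1..5] -> 6, ...
X = np.array([series[i:i + SEQ_LEN] for i in range(MAX_N - SEQ_LEN)])
y = series[SEQ_LEN:]
X, y = X / MAX_N, y / MAX_N          # scale into [0, 1]
X = X[..., None]                     # LSTM expects (samples, timesteps, features)

model = tf.keras.Sequential([
    tf.keras.Input(shape=(SEQ_LEN, 1)),
    tf.keras.layers.LSTM(32),
    tf.keras.layers.Dense(1),        # linear head: regression, not classification
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=5, verbose=0)
pred = model.predict(X[:1], verbose=0) * MAX_N   # undo the scaling
```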

How to design a shared weight, multi input/output Auto-Encoder network?

Submitted by 自古美人都是妖i on 2021-02-08 07:47:09
Question: I have two different types of images (a camera image and its corresponding sketch). The goal of the network is to find the similarity between the two images. The network consists of a single encoder and a single decoder; the motivation behind the single encoder-decoder is to share the weights between them. input_img = Input(shape=(img_width, img_height, channels)) def encoder(input_img): # Photo-Encoder Code pe = Conv2D(96, kernel_size=11, strides=(4,4), padding = 'SAME')(left_input) # (?, 64, 64,
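
A sketch of the weight-sharing pattern in the Keras functional API (shapes and layer sizes below are my own stand-ins, not the asker's): instantiate the encoder and decoder models once, then call the same objects on both inputs; every call reuses one set of weights.

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

IMG = (64, 64, 1)  # hypothetical input shape

def make_encoder():
    inp = tf.keras.Input(shape=IMG)
    x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)
    x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
    return Model(inp, x, name="shared_encoder")

def make_decoder():
    inp = tf.keras.Input(shape=(16, 16, 64))
    x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(inp)
    out = layers.Conv2DTranspose(1, 3, strides=2, padding="same", activation="sigmoid")(x)
    return Model(inp, out, name="shared_decoder")

encoder, decoder = make_encoder(), make_decoder()  # built ONCE

photo_in = tf.keras.Input(shape=IMG, name="photo")
sketch_in = tf.keras.Input(shape=IMG, name="sketch")
photo_out = decoder(encoder(photo_in))    # same weights...
sketch_out = decoder(encoder(sketch_in))  # ...reused on the second input
model = Model([photo_in, sketch_in], [photo_out, sketch_out])
```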

Is it possible to create multiple instances of the same CNN that take in multiple images and are concatenated into a dense layer? (keras)

Submitted by 岁酱吖の on 2021-02-08 07:21:35
Question: Similar to this question, I'm looking to have several image input layers that go through one larger CNN (e.g. Xception minus the dense layers), and then have the output of the one CNN across all images be concatenated into a dense layer. Is this possible with Keras, and is it even possible to train a network from the ground up with this architecture? I'm essentially looking to train a model that takes in a larger but fixed number of images per sample (i.e. 3+ image inputs with similar visual
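
This is possible with the functional API; a lightweight sketch (a small stand-in CNN instead of Xception, and made-up shapes, to keep it self-contained — the same pattern works with `tf.keras.applications.Xception(include_top=False)` as the shared base):

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

N_IMAGES, IMG = 3, (32, 32, 3)  # hypothetical: 3 images per sample

# Build the shared CNN once; calling it repeatedly reuses its weights.
base_inp = tf.keras.Input(shape=IMG)
x = layers.Conv2D(16, 3, activation="relu")(base_inp)
x = layers.GlobalAveragePooling2D()(x)
shared_cnn = Model(base_inp, x, name="shared_cnn")

inputs = [tf.keras.Input(shape=IMG, name=f"img_{i}") for i in range(N_IMAGES)]
features = [shared_cnn(inp) for inp in inputs]   # same CNN on each image
merged = layers.Concatenate()(features)
output = layers.Dense(1, activation="sigmoid")(merged)
model = Model(inputs, output)
```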

Validation data batch_size of size one? (Keras)

Submitted by ≯℡__Kan透↙ on 2021-02-08 07:19:47
Question: In Keras there is an option to set the size of the validation-set batches to one: valid_batches = ImageDataGenerator().flow_from_directory(valid_path, ... batch_size=1) Is it correct that the model then just uses one object from the validation data to validate the model after each training epoch? If that were the case, my model should not get a very good validation score. But when I run the model it runs without any problems, keeps improving, and seems to be using many validation
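
On the mechanics: `batch_size` only controls how many samples are fed per validation step; Keras still walks the entire validation generator each epoch, in `ceil(n / batch_size)` steps. A small illustrative sketch (the sample counts are hypothetical):

```python
import math

def validation_steps(num_samples: int, batch_size: int) -> int:
    # Keras consumes the whole validation set each epoch regardless of
    # batch size; only the number of steps per epoch changes.
    return math.ceil(num_samples / batch_size)

print(validation_steps(500, 1))    # 500 steps of one sample each
print(validation_steps(500, 32))   # 16 steps
```

So `batch_size=1` trades speed for memory, but the validation score is still computed over all validation samples.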

NotFoundError: [_Derived_]No gradient defined for op: Einsum on Tensorflow 1.15.2

Submitted by 末鹿安然 on 2021-02-08 06:55:20
Question: I'm using Tensorflow 1.15.2 to build a WSD (word-sense disambiguation) system, with BERT in the embedding layer. This is the code that I use for the model: input_word_ids = tf.keras.layers.Input(shape=(64,), dtype=tf.int32, name="input_word_ids") input_mask = tf.keras.layers.Input(shape=(64,), dtype=tf.int32, name="input_mask") segment_ids = tf.keras.layers.Input(shape=(64,), dtype=tf.int32, name="segment_ids") # BERt = BERtLayer()([input_word_ids, input_mask, segment_ids]) bert = hub.KerasLayer("https://tfhub

How to implement n times repeated k-folds cross validation that yields n*k folds in sklearn?

Submitted by 只愿长相守 on 2021-02-08 06:23:11
Question: I ran into trouble implementing a cross-validation setting that I saw in a paper. Basically, it is explained in the attached picture. It says that they use 5 folds, which means k = 5. But then the authors say that they repeat the cross-validation 20 times, which creates 100 folds in total. Does that mean that I can just use this piece of code: kfold = StratifiedKFold(n_splits=100, shuffle=True, random_state=seed) Basically, my code also yields 100 folds. Any recommendations?
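
The two setups are not equivalent: `StratifiedKFold(n_splits=100)` makes each test fold 1/100th of the data, whereas the paper's protocol (5 folds repeated 20 times) keeps every test fold at 1/5 of the data. scikit-learn provides `RepeatedStratifiedKFold` for exactly this; a sketch with toy data:

```python
import numpy as np
from sklearn.model_selection import RepeatedStratifiedKFold

X = np.zeros((40, 2))          # toy features
y = np.array([0, 1] * 20)      # balanced binary labels

# 5-fold CV repeated 20 times -> 100 train/test splits,
# but each test fold is still 1/5 (not 1/100) of the data.
cv = RepeatedStratifiedKFold(n_splits=5, n_repeats=20, random_state=42)
folds = list(cv.split(X, y))
print(len(folds))          # 100
print(len(folds[0][1]))    # 8 test samples = 40 / 5
```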

Use SSIM loss function with Keras

Submitted by 不想你离开。 on 2021-02-08 06:20:56
Question: I need to use the SSIM from Sewar as a loss function in order to compare images for my model, but I am getting errors when I try to compile the model. I import the function and compile the model like this: from sewar.full_ref import ssim ... model.compile('ssim', optimizer=my_optimizer, metrics=[ssim]) and I get this: File "/media/merry/merry32/train.py", line 19, in train model.compile(loss='ssim', optimizer=opt, metrics=[ssim]) File "/home/merry/anaconda3/envs/merry_env/lib/python3.7/site
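
A hedged sketch of why this fails and one workaround (assuming TF 2.x; the tiny model below is my own placeholder): `sewar.full_ref.ssim` operates on NumPy arrays, so it cannot run inside a Keras/TensorFlow graph, and `'ssim'` is not a built-in loss string. A differentiable alternative is `tf.image.ssim`; since SSIM is a similarity (1.0 = identical), minimise `1 - SSIM`:

```python
import tensorflow as tf

def ssim_loss(y_true, y_pred):
    # tf.image.ssim is graph-compatible and differentiable;
    # max_val=1.0 assumes images scaled into [0, 1].
    return 1.0 - tf.reduce_mean(tf.image.ssim(y_true, y_pred, max_val=1.0))

model = tf.keras.Sequential([
    tf.keras.Input(shape=(64, 64, 1)),
    tf.keras.layers.Conv2D(1, 3, padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss=ssim_loss, metrics=[ssim_loss])
```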