keras-layer

Sentence embedding in keras

最后都变了- submitted on 2019-12-04 21:37:53
I am trying simple document classification using sentence embeddings in Keras. I know how to feed word vectors to a network, but I am having trouble using sentence embeddings. In my case, I have a simple representation of sentences (summing the word vectors along an axis, for example np.sum(sequences, axis=0)). My question is: what should I replace the Embedding layer with in the code below to feed sentence embeddings instead?

    model = Sequential()
    model.add(Embedding(len(embedding_weights), len(embedding_weights[0]),
                        weights=[embedding_weights], mask_zero=True,
                        input_length=MAX_SEQUENCE_LENGTH,
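One common approach, sketched below under the assumption that each document has already been reduced to a single fixed-length vector (EMBEDDING_DIM and the layer sizes are hypothetical): since the sentence vectors are precomputed, the Embedding layer is dropped entirely and the vectors are fed straight into a Dense layer.

    from keras.models import Sequential
    from keras.layers import Dense

    EMBEDDING_DIM = 300  # dimensionality of the precomputed sentence vectors (assumed)

    # Each sample is already a fixed-length vector, e.g. np.sum(word_vectors, axis=0),
    # so the first Dense layer takes the vectors directly; no Embedding layer is needed.
    model = Sequential()
    model.add(Dense(128, activation='relu', input_shape=(EMBEDDING_DIM,)))
    model.add(Dense(1, activation='sigmoid'))
    model.compile(loss='binary_crossentropy', optimizer='adam', metrics=['accuracy'])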

Keras retrieve value of node before activation function

ぐ巨炮叔叔 submitted on 2019-12-04 17:23:33
Question: Imagine a fully-connected neural network whose last two layers have the following structure:

    [Dense] units = 612, activation = softplus
    [Dense] units = 1,   activation = sigmoid

The output value of the net is 1, but I'd like to know what the input x to the sigmoid function was (it must be some high number, since sigm(x) is 1 here). Following indraforyou's answer, I managed to retrieve the outputs and weights of Keras layers:

    outputs = [layer.output for layer in model.layers[-2:]]
    functors = [K
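A minimal sketch of one way to recover that pre-activation value, assuming `model` is the trained network and `x_batch` is an input batch (both names hypothetical): evaluate the activations feeding into the last Dense layer with a backend function, then apply that layer's weights and bias by hand, skipping the sigmoid.

    from keras import backend as K

    # Evaluate the activations that flow into the final Dense layer.
    penultimate = K.function([model.input], [model.layers[-2].output])
    h = penultimate([x_batch])[0]

    # Apply the last layer's affine transform manually, without the sigmoid.
    W, b = model.layers[-1].get_weights()
    z = h.dot(W) + b  # z is the pre-activation input to the sigmoid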

Why does a binary Keras CNN always predict 1?

旧城冷巷雨未停 submitted on 2019-12-04 11:41:27
Question: I want to build a binary classifier using a Keras CNN. I have about 6000 rows of input data, which look like this:

    >> print(X_train[0])
    [[[-1.06405307 -1.06685851 -1.05989663 -1.06273152]
      [-1.06295958 -1.06655996 -1.05969803 -1.06382503]
      [-1.06415248 -1.06735609 -1.05999593 -1.06302975]
      [-1.06295958 -1.06755513 -1.05949944 -1.06362621]
      [-1.06355603 -1.06636092 -1.05959873 -1.06173742]
      [-1.0619655  -1.06655996 -1.06039312 -1.06412326]
      [-1.06415248 -1.06725658 -1.05940014 -1.06322857]
      [-1
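When a binary classifier collapses to predicting a single class, two common culprits are label imbalance and a loss that effectively ignores the minority class. A short diagnostic sketch, assuming y_train holds the 0/1 labels and model is the compiled CNN (the weighting scheme is one standard choice, not the question's code):

    import numpy as np

    # 1. Check the class balance; a heavy skew alone can explain constant predictions.
    classes, counts = np.unique(y_train, return_counts=True)
    print(dict(zip(classes, counts)))

    # 2. Re-weight the loss so errors on the rarer class cost proportionally more.
    class_weight = {0: len(y_train) / (2.0 * counts[0]),
                    1: len(y_train) / (2.0 * counts[1])}
    model.fit(X_train, y_train, epochs=10, class_weight=class_weight)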

skipping layer in backpropagation in keras

与世无争的帅哥 submitted on 2019-12-04 08:31:57
I am using Keras with the TensorFlow backend, and I am curious whether it is possible to skip a layer during backpropagation while still executing it in the forward pass. Here is what I mean:

    Lambda(lambda x: a(x))

I want to apply a to x in the forward pass, but I do not want a to be included in the derivative when the backprop takes place. I was trying to find a solution but I could not find anything. Can somebody help me out here?

UPDATE 2: In addition to tf.py_func, there is now an official guide on how to add a custom op.

UPDATE: See this question for an example of writing a custom op with
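A well-known trick for exactly this is the "straight-through" pattern built on stop_gradient. A minimal sketch, assuming a is the shape-preserving function from the question: the layer computes a(x) in the forward pass, but because a(x) - x sits inside stop_gradient, the only term backprop differentiates is the plain x, so the gradient passes through as identity.

    from keras import backend as K
    from keras.layers import Lambda

    # Forward pass: x + (a(x) - x) == a(x).
    # Backward pass: the stop_gradient term contributes zero, leaving d/dx x = 1.
    skip_backprop = Lambda(lambda x: x + K.stop_gradient(a(x) - x))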

Cannot add layers to saved Keras Model. 'Model' object has no attribute 'add'

血红的双手。 submitted on 2019-12-04 06:49:39
I have saved a model using model.save(). I'm trying to reload the model, add a few layers, and tune some hyper-parameters; however, it throws the AttributeError. The model is loaded using load_model(). I guess I'm misunderstanding how to add layers to a saved model. If someone can guide me here, it would be great. I'm a novice to deep learning and to Keras, so my request is probably silly. Snippet:

    prev_model = load_model('final_model.h5')  # loading the previously saved model
    prev_model.add(Dense(256, activation='relu'))
    prev_model.add(Dropout(0.5))
    prev_model.add(Dense(1
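load_model() returns a functional Model, which has no .add() method (that is a Sequential-only API), hence the AttributeError. A sketch of the usual workaround, reusing the question's layers (the sigmoid on the final Dense is an assumption): chain new layers onto the loaded model's output tensor and wrap the combined graph in a fresh Model.

    from keras.models import load_model, Model
    from keras.layers import Dense, Dropout

    prev_model = load_model('final_model.h5')

    # Extend the graph through the functional API instead of .add().
    x = prev_model.output
    x = Dense(256, activation='relu')(x)
    x = Dropout(0.5)(x)
    out = Dense(1, activation='sigmoid')(x)  # final activation assumed

    new_model = Model(inputs=prev_model.input, outputs=out)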

How to implement Merge from Keras.layers

荒凉一梦 submitted on 2019-12-04 03:24:55
Question: I have been trying to merge the following sequential models but haven't been able to. Could somebody please point out my mistake? Thank you. The code compiles while using "merge" but gives the following error: "TypeError: 'module' object is not callable". However, it doesn't even compile while using "Merge". I am using Keras version 2.2.0 and Python 3.6.

    from keras.layers import merge

    def linear_model_combined(optimizer='Adadelta'):
        modela = Sequential()
        modela.add(Flatten(input_shape=(100, 34)))
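In Keras 2 the old Merge layer was removed, and in some versions `from keras.layers import merge` actually imports the keras.layers.merge submodule, which explains the "'module' object is not callable" error. A sketch of the Keras 2 way, assuming two branches with the question's (100, 34) input shape (the Dense head and loss are hypothetical): build both branches with the functional API and join them with concatenate.

    from keras.models import Model
    from keras.layers import Input, Flatten, Dense, concatenate

    input_a = Input(shape=(100, 34))
    input_b = Input(shape=(100, 34))
    flat_a = Flatten()(input_a)
    flat_b = Flatten()(input_b)

    merged = concatenate([flat_a, flat_b])  # Keras 2 replacement for Merge
    output = Dense(1, activation='sigmoid')(merged)

    model = Model(inputs=[input_a, input_b], outputs=output)
    model.compile(optimizer='Adadelta', loss='binary_crossentropy')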

Add a resizing layer to a keras sequential model

回眸只為那壹抹淺笑 submitted on 2019-12-04 02:02:44
How can I add a resizing layer to model = Sequential() using model.add(...), to resize an image from shape (160, 320, 3) to (224, 224, 3)?

Normally you would use the Reshape layer for this:

    model.add(Reshape((224, 224, 3), input_shape=(160, 320, 3)))

but since your target dimensions can't hold all the data from the input dimensions (224*224 != 160*320), this won't work. You can only use Reshape if the number of elements does not change. If you are fine with losing some data in your image, you can specify your own lossy reshape:

    model.add(Reshape((-1, 3), input_shape=(160, 320, 3)))
    model.add
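For genuine resizing (interpolation rather than reshaping), the usual approach is to wrap a backend resize op in a Lambda layer. A sketch assuming the TensorFlow 1.x backend of that era (in TF 2.x the op is tf.image.resize instead):

    import tensorflow as tf
    from keras.models import Sequential
    from keras.layers import Lambda

    model = Sequential()
    # Interpolates each image to 224x224 inside the model itself.
    model.add(Lambda(lambda img: tf.image.resize_images(img, (224, 224)),
                     input_shape=(160, 320, 3)))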

Keras Concatenate Layers: Difference between different types of concatenate functions

你说的曾经没有我的故事 submitted on 2019-12-04 00:12:47
Question: I just recently started playing around with Keras and got into making custom layers. However, I am rather confused by the many different types of layers with slightly different names but the same functionality. For example, there are three different forms of the concatenate function from https://keras.io/layers/merge/ and https://www.tensorflow.org/api_docs/python/tf/keras/backend/concatenate :

    keras.layers.Concatenate(axis=-1)
    keras.layers.concatenate(inputs, axis=-1)
    tf.keras.backend.concatenate(tensors, axis=-1)
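The first two forms are two faces of the same thing, sketched below with hypothetical 8-dimensional inputs: Concatenate is the layer class, concatenate() is a functional shortcut that instantiates the layer and immediately calls it, and the backend version joins raw tensors directly, intended for use inside custom layers rather than as a model layer.

    from keras.layers import Input, Concatenate, concatenate

    a = Input(shape=(8,))
    b = Input(shape=(8,))

    merged_1 = Concatenate(axis=-1)([a, b])  # instantiate the layer, then call it
    merged_2 = concatenate([a, b], axis=-1)  # one-step functional shortcut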

Add dense layer before LSTM layer in keras or Tensorflow?

人盡茶涼 submitted on 2019-12-03 17:27:19
I am trying to implement a denoising autoencoder with an LSTM layer in between. The architecture is as follows: FC layer -> FC layer -> LSTM cell -> FC layer -> FC layer. I am unable to work out what my input dimensions should be to implement this architecture. I tried the following code:

    batch_size = 1
    model = Sequential()
    model.add(Dense(5, input_shape=(1,)))
    model.add(Dense(10))
    model.add(LSTM(32))
    model.add(Dropout(0.3))
    model.add(Dense(5))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    model.fit(trainX, trainY, nb_epoch=100, batch_size=batch_size, verbose=2)
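The stack above fails because an LSTM expects 3-D input of shape (batch, timesteps, features), while a Dense layer on 2-D input produces 2-D output. A sketch of one way to make the FC -> FC -> LSTM -> FC -> FC stack line up, assuming samples shaped (timesteps, 1) with a hypothetical sequence length: wrap the Dense layers before the LSTM in TimeDistributed, so they are applied per timestep and the time axis survives until the LSTM.

    from keras.models import Sequential
    from keras.layers import Dense, LSTM, Dropout, TimeDistributed

    timesteps = 10  # hypothetical sequence length

    model = Sequential()
    model.add(TimeDistributed(Dense(5), input_shape=(timesteps, 1)))
    model.add(TimeDistributed(Dense(10)))
    model.add(LSTM(32))        # consumes the time axis -> output shape (batch, 32)
    model.add(Dropout(0.3))
    model.add(Dense(5))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')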