keras-layer

Dimension mismatch in Keras during model.fit

Submitted by 萝らか妹 on 2020-01-10 04:38:33
Question: I put together a VAE using dense neural networks in Keras. During model.fit I get a dimension mismatch, but I am not sure what is throwing the code off. Below is what my code looks like:

    from keras.layers import Lambda, Input, Dense
    from keras.models import Model
    from keras.datasets import mnist
    from keras.losses import mse, binary_crossentropy
    from keras.utils import plot_model
    from keras import backend as K
    import keras
    import numpy as np
    import matplotlib.pyplot as plt
    import argparse
    import os
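
For reference, here is a minimal sketch of a dense VAE on MNIST in which every dimension lines up (the layer sizes and latent dimension are illustrative, not taken from the question). The most common cause of this kind of mismatch is a decoder whose final Dense does not match the flattened 784-pixel input that model.fit receives.

    import numpy as np
    from keras.layers import Lambda, Input, Dense
    from keras.models import Model
    from keras.datasets import mnist
    from keras.losses import binary_crossentropy
    from keras import backend as K

    original_dim = 784      # 28 * 28 flattened
    latent_dim = 2

    def sampling(args):
        z_mean, z_log_var = args
        eps = K.random_normal(shape=(K.shape(z_mean)[0], latent_dim))
        return z_mean + K.exp(0.5 * z_log_var) * eps

    # Encoder
    inputs = Input(shape=(original_dim,))
    h = Dense(512, activation='relu')(inputs)
    z_mean = Dense(latent_dim)(h)
    z_log_var = Dense(latent_dim)(h)
    z = Lambda(sampling)([z_mean, z_log_var])

    # Decoder: the final Dense must be original_dim, not the hidden size
    h_dec = Dense(512, activation='relu')(z)
    outputs = Dense(original_dim, activation='sigmoid')(h_dec)

    vae = Model(inputs, outputs)
    recon = original_dim * binary_crossentropy(inputs, outputs)
    kl = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    vae.add_loss(K.mean(recon + kl))
    vae.compile(optimizer='adam')

    # Inputs must be flattened to (n, 784) to match the Input shape above
    (x_train, _), _ = mnist.load_data()
    x_train = x_train.reshape(-1, original_dim).astype('float32') / 255.0
    vae.fit(x_train, epochs=1, batch_size=128)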

Keras: Categorical vs Continuous input to a LSTM

Submitted by ▼魔方 西西 on 2020-01-06 09:56:10
Question: I am new to Keras and deep learning, and after going through several tutorials and answers on Stack Overflow I am still unclear about how the input is manipulated once it enters the network. I am using the functional API of Keras to develop complex models, so my first layer is always an input layer, something like:

    Input()
    LSTM()
    Dense()

Now let's say I have two training datasets, A and B. Each dataset is an identical 10,000 by 6,000 matrix with 200 distinct values in it, i.e. 10,000 rows each
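
A short sketch of the distinction, with illustrative shapes rather than the question's 10,000 by 6,000 matrices: the network only sees the raw numbers it is given, so categorical ids need an explicit encoding such as an Embedding layer, while continuous values can be fed to the LSTM directly.

    from keras.layers import Input, LSTM, Dense, Embedding
    from keras.models import Model

    timesteps, n_features, n_categories = 50, 1, 200

    # Continuous input: the values are used directly as real numbers.
    cont_in = Input(shape=(timesteps, n_features))
    x = LSTM(32)(cont_in)
    cont_out = Dense(1)(x)
    continuous_model = Model(cont_in, cont_out)

    # Categorical input: integer ids are mapped to learned vectors first.
    cat_in = Input(shape=(timesteps,), dtype='int32')
    e = Embedding(input_dim=n_categories, output_dim=16)(cat_in)  # (timesteps, 16)
    y = LSTM(32)(e)
    cat_out = Dense(1)(y)
    categorical_model = Model(cat_in, cat_out)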

ValueError: None values not supported when training a network with a simple custom layer in Keras

Submitted by 不羁的心 on 2020-01-05 04:25:13
Question: I've implemented a very simple custom layer. It just multiplies the input by a weight. When I try to train the network I get ValueError: None values not supported. I checked my input and output for Nones but couldn't find anything. I also tried adding a bias to the result, which didn't change anything, and trying different weight initializers had no effect either. When I just build the model and predict some results it works, and the output doesn't contain any Nones. Has anyone an idea
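
For comparison, here is a minimal custom layer of this kind that does train (the shapes and names are assumptions, not the asker's code). The weight is registered through add_weight in build() and compute_output_shape is defined, two details that are easy to miss and that can leave None values in the training graph.

    import keras.backend as K
    from keras.layers import Layer, Input, Dense
    from keras.models import Model

    class ScaleLayer(Layer):
        def build(self, input_shape):
            # add_weight registers the variable so gradients can flow through it
            self.kernel = self.add_weight(name='kernel',
                                          shape=(input_shape[-1],),
                                          initializer='ones',
                                          trainable=True)
            super(ScaleLayer, self).build(input_shape)

        def call(self, inputs):
            return inputs * self.kernel   # element-wise scaling

        def compute_output_shape(self, input_shape):
            return input_shape

    inp = Input(shape=(4,))
    out = Dense(1)(ScaleLayer()(inp))
    model = Model(inp, out)
    model.compile(optimizer='sgd', loss='mse')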

How to set the input of a Keras layer of a functional model, with a Tensorflow tensor?

Submitted by 倖福魔咒の on 2020-01-05 02:24:29
Question: I have two packages I'd like to use: one is written in Keras 1.2, and the other in TensorFlow. I'd like to use a part of the architecture that is built in TensorFlow inside a Keras model. A partial solution is suggested here, but it's for a sequential model. The suggestion regarding functional models - wrapping the pre-processing in a Lambda layer - didn't work. The following code worked:

    inp = Input(shape=input_shape)

    def ID(x):
        return x

    lam = Lambda(ID)
    flatten = Flatten(name='flatten')
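
One approach that I believe works for functional models, sketched below with placeholder shapes: pass the TensorFlow tensor to Input through its tensor argument, so the Keras graph is built directly on top of the external tensor instead of on a new placeholder.

    import tensorflow as tf
    from keras.layers import Input, Flatten, Dense
    from keras.models import Model

    # Stand-in for the output tensor of the TensorFlow-built part of the architecture
    tf_output = tf.placeholder(tf.float32, shape=(None, 8, 8, 3))

    inp = Input(tensor=tf_output)          # Keras Input fed by an existing TF tensor
    x = Flatten(name='flatten')(inp)
    out = Dense(10, activation='softmax')(x)
    model = Model(inp, out)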

In what ways are the output of neural network layers useful?

Submitted by 假如想象 on 2020-01-04 07:25:07
Question: I'm currently working with Keras and want to visualize the output of each layer, as in visualizations like the example below for MNIST handwritten digit recognition. What information or insight does a researcher gain from these images? How are these images interpreted? If you would choose to see the output of a layer, what are your criteria for selection? Any comment or suggestion is greatly appreciated. Thank you.

Answer 1: Preface: A
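
For context, a small hypothetical helper showing how such layer-output images are usually produced in Keras (the model and layer name are assumptions): a second Model is built that stops at the chosen layer, and its predictions are plotted channel by channel.

    import numpy as np
    import matplotlib.pyplot as plt
    from keras.models import Model

    def show_feature_maps(model, layer_name, sample):
        """Plot the activations of one layer for a single input sample."""
        viewer = Model(inputs=model.input,
                       outputs=model.get_layer(layer_name).output)
        maps = viewer.predict(sample[np.newaxis, ...])[0]   # e.g. (h, w, channels)
        for i in range(maps.shape[-1]):
            plt.subplot(1, maps.shape[-1], i + 1)
            plt.imshow(maps[..., i], cmap='gray')
            plt.axis('off')
        plt.show()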

element-wise multiplication with broadcasting in keras custom layer

Submitted by 六眼飞鱼酱① on 2020-01-03 11:17:10
Question: I am creating a custom layer with weights that need to be multiplied element-wise with the input before activation. I can get it to work when the output and input have the same shape. The problem occurs when I have a first-order array as input and a second-order array as output. tensorflow.multiply supports broadcasting, but when I try to use it in Layer.call(x, self.kernel) to multiply x by the self.kernel Variable, it complains that they are different shapes, saying: ValueError: Dimensions must be equal,
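
A sketch of one way to make the broadcast work inside call(), under the assumption that each 1-D input of length E should be multiplied against an (S, E) kernel to give an (S, E) output per sample: add an explicit axis of size 1 so the backend can broadcast the shapes.

    import keras.backend as K
    from keras.layers import Layer

    class BroadcastMul(Layer):
        def __init__(self, s_dim, **kwargs):
            self.s_dim = s_dim
            super(BroadcastMul, self).__init__(**kwargs)

        def build(self, input_shape):
            e_dim = input_shape[-1]
            self.kernel = self.add_weight(name='kernel',
                                          shape=(self.s_dim, e_dim),
                                          initializer='glorot_uniform',
                                          trainable=True)
            super(BroadcastMul, self).build(input_shape)

        def call(self, inputs):
            # (batch, E) -> (batch, 1, E); broadcasts against (S, E) -> (batch, S, E)
            return K.expand_dims(inputs, axis=1) * self.kernel

        def compute_output_shape(self, input_shape):
            return (input_shape[0], self.s_dim, input_shape[-1])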

Using Subtract layer in Keras

Submitted by 孤人 on 2020-01-02 15:54:56
Question: I'm implementing in Keras the LSTM architecture described here. I think I am really close, though I still have a problem with the combination of the shared and language-specific layers. Here is the formula (approximately):

    y = g * y^s + (1 - g) * y^u

And here is the code I tried:

    ### Linear Layers ###
    univ_linear = Dense(50, activation=None, name='univ_linear')
    univ_linear_en = univ_linear(en_encoded)
    univ_linear_es = univ_linear(es_encoded)
    print(univ_linear_en)

    # Gate >> g
    gate_en = Dense
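
A sketch of one way to wire the gate y = g * y^s + (1 - g) * y^u with Keras merge layers, assuming the shared encoding y^s and the language-specific encoding y^u have the same dimensionality (the tensor and layer names are placeholders, not the asker's variables):

    from keras.layers import Dense, Multiply, Add, Lambda

    def gated_combination(y_s, y_u, units=50):
        g = Dense(units, activation='sigmoid', name='gate')(y_u)
        gated_s = Multiply()([g, y_s])                       # g * y^s
        one_minus_g = Lambda(lambda t: 1.0 - t)(g)           # (1 - g)
        gated_u = Multiply()([one_minus_g, y_u])             # (1 - g) * y^u
        return Add()([gated_s, gated_u])                     # g * y^s + (1 - g) * y^u

The Subtract layer from the question title can replace the Lambda if a tensor of ones is built first, but computing 1 - g in a Lambda keeps the wiring shorter.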

Keras Custom Layer 2D input -> 2D output

Submitted by 不想你离开。 on 2020-01-02 05:21:09
Question: I have a 2D input (or 3D if one considers the number of samples) and I want to apply a Keras layer that takes this input and outputs another 2D matrix. So, for example, if I have an input of size (E x V), the learned weight matrix would be (S x E) and the output (S x V). Can I do this with a Dense layer? EDIT (Nassim's request): The first layer is doing nothing. It's just there to give an input to the Lambda layer:

    from keras.models import Sequential
    from keras.layers.core import Reshape, Lambda
    from
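
A sketch under the stated shapes, with illustrative sizes for E, V and S: Dense only contracts the last axis, so one simple option is to permute the input so that E is last, apply Dense(S), and permute back.

    from keras.layers import Input, Dense, Permute
    from keras.models import Model

    E, V, S = 16, 32, 8

    inp = Input(shape=(E, V))            # (batch, E, V)
    x = Permute((2, 1))(inp)             # (batch, V, E)
    x = Dense(S, use_bias=False)(x)      # contracts E: (batch, V, S), weight is (E, S)
    out = Permute((2, 1))(x)             # (batch, S, V)
    model = Model(inp, out)

The same contraction could also be written as a custom layer around K.dot, but reusing Dense keeps the trainable-weight handling for free.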

ValueError: Tensor Tensor(…) is not an element of this graph, when using a global-variable Keras model

Submitted by 半城伤御伤魂 on 2020-01-02 05:21:09
Question: I'm running a web server using Flask, and the error comes up when I try to use vgg16, which is the global variable for Keras' pre-trained VGG16 model. I have no idea why this error arises or whether it has anything to do with the TensorFlow backend. Here is my code:

    vgg16 = VGG16(weights='imagenet', include_top=True)

    def getVGG16Prediction(img_path):
        global vgg16
        img = image.load_img(img_path, target_size=(224, 224))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input
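
A commonly used workaround sketch for this Flask setup (the helper below mirrors the question's function but is illustrative): load the model once, remember the TensorFlow graph it was created in, and run every prediction inside that graph, because Flask may serve requests from threads where that graph is not the default.

    import numpy as np
    import tensorflow as tf
    from keras.applications.vgg16 import VGG16, preprocess_input
    from keras.preprocessing import image

    vgg16 = VGG16(weights='imagenet', include_top=True)
    graph = tf.get_default_graph()        # graph the model was loaded into

    def getVGG16Prediction(img_path):
        global vgg16, graph
        img = image.load_img(img_path, target_size=(224, 224))
        x = image.img_to_array(img)
        x = np.expand_dims(x, axis=0)
        x = preprocess_input(x)
        with graph.as_default():          # run in the graph that owns the weights
            return vgg16.predict(x)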