keras-layer

Is there any way to get variable importance with Keras?

好久不见 · Submitted on 2019-12-31 09:03:23

Question: I am looking for a proper or best way to get variable importance in a neural network created with Keras. The way I currently do it is to take the weights (not the biases) of the variables in the first layer, on the assumption that more important variables will have higher weights in the first layer. Is there another/better way of doing it? Answer 1: Since everything gets mixed up along the network, the first layer alone can't tell you about the importance of each variable. The following
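The answer is cut off above, but its point stands: once features mix through hidden layers, first-layer weights are unreliable. A common model-agnostic alternative is permutation importance. Here is a minimal numpy sketch; the `predict` function below is a hypothetical stand-in for a trained model's `model.predict`:

```python
import numpy as np

def permutation_importance(predict_fn, X, y, metric, n_repeats=5, seed=0):
    """Model-agnostic permutation importance.

    Shuffle one feature column at a time and measure how much the metric
    degrades; a larger increase in error means the feature matters more.
    """
    rng = np.random.default_rng(seed)
    baseline = metric(y, predict_fn(X))
    importances = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            rng.shuffle(Xp[:, j])  # destroy feature j's information
            scores.append(metric(y, predict_fn(Xp)))
        importances[j] = np.mean(scores) - baseline  # increase in error
    return importances

# Toy "model": the target depends only on feature 0.
X = np.random.default_rng(1).normal(size=(200, 3))
y = 3.0 * X[:, 0]
predict = lambda X: 3.0 * X[:, 0]
mse = lambda y, p: float(np.mean((y - p) ** 2))
imp = permutation_importance(predict, X, y, mse)
```

Because the toy model ignores features 1 and 2, their importance comes out near zero while feature 0 dominates; with a real Keras model you would pass `model.predict` as `predict_fn`.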

keras layers tutorial and samples

断了今生、忘了曾经 · Submitted on 2019-12-31 07:18:09

Question: I am trying to code and learn different neural network models, and I am having a lot of difficulty with input dimensionality. I am looking for a tutorial that shows the differences between layers and how to set the inputs and outputs for each layer. Answer 1: The Keras documentation shows you all the input_shape values expected by each layer. In Keras, you'll see input shapes in these forms: the input_shape defined by the user in layers; shapes shown in summaries and elsewhere; array shapes; tensor shapes. Input shape defined by
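The key convention behind most shape errors can be shown with plain numpy arrays: Keras layers always receive batched tensors, while `input_shape` excludes the batch axis. A small sketch (the sizes here are arbitrary examples):

```python
import numpy as np

# Typical per-layer batch shapes (channels_last convention):
# Dense:  (batch, features)                -> input_shape=(features,)
# LSTM:   (batch, timesteps, features)     -> input_shape=(timesteps, features)
# Conv2D: (batch, height, width, channels) -> input_shape=(height, width, channels)
dense_batch = np.zeros((32, 500))        # 32 samples, 500 features each
lstm_batch = np.zeros((32, 10, 8))       # 32 sequences, 10 timesteps, 8 features
conv_batch = np.zeros((32, 28, 28, 3))   # 32 RGB images of 28x28 pixels

# What you pass to the first layer drops the batch dimension:
dense_input_shape = dense_batch.shape[1:]  # (500,)
lstm_input_shape = lstm_batch.shape[1:]    # (10, 8)
```

Model summaries show the batch axis as `None` because it is not fixed at build time.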

AttributeError: 'NoneType' object has no attribute '_inbound_nodes' in Keras

送分小仙女 · Submitted on 2019-12-31 04:17:31

Question: I want to define my own LSTM model as follows: from keras import backend as K from keras.callbacks import ModelCheckpoint from keras.layers.core import Dense, Activation, Flatten, Dropout from keras.layers import Input, Concatenate, Average, Maximum from keras.layers.normalization import BatchNormalization from keras.layers import LSTM, Bidirectional from keras.models import Model from keras.optimizers import Adam class LSTMModel(object): def __init__(self, config): self.num_batch = config[
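The question's code is truncated, but this error almost always has the same cause: a raw backend/TensorFlow operation was applied to a Keras tensor and its result was fed to `Model`, so the graph loses its inbound-node bookkeeping. A hedged sketch of the failure pattern and the usual fix (wrapping the op in a `Lambda` layer), written against `tensorflow.keras`:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Dense, Lambda
from tensorflow.keras.models import Model

inputs = Input(shape=(8,))
h = Dense(4)(inputs)

# This pattern raises AttributeError: 'NoneType' object has no attribute
# '_inbound_nodes' in older standalone Keras, because the output of a raw
# backend op is not tracked as a layer output:
#   bad = tf.reduce_mean(h, axis=1, keepdims=True)
#   Model(inputs, bad)

# Fix: wrap every backend operation in a Lambda layer so the functional
# graph records the connection:
good = Lambda(lambda x: tf.reduce_mean(x, axis=1, keepdims=True))(h)
model = Model(inputs, good)
```

The same applies to `K.mean`, `K.concatenate`, slicing, and similar ops: if it is not a `Layer` call, put it inside a `Lambda`.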

How to change input shape in Sequential model in Keras

大兔子大兔子 · Submitted on 2019-12-30 04:37:09

Question: I have a sequential model that I built in Keras, and I am trying to figure out how to change the shape of the input. In the following example model = Sequential() model.add(Dense(32, input_shape=(500,))) model.add(Dense(10, activation='softmax')) model.compile(optimizer='rmsprop', loss='categorical_crossentropy', metrics=['accuracy']) let's say that I want to build a new model with a different input shape; conceptually it should look like this: model1 = model model1.layers[0] = Dense(32, input_shape=
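Overwriting `model1.layers[0]` does not work, because layers are nodes in a built graph. One common workaround, sketched here with `tensorflow.keras` and illustrative sizes: build a fresh model with the new input shape and copy the weights of every layer whose shape still matches.

```python
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Original model, as in the question.
model = Sequential()
model.add(Dense(32, input_shape=(500,)))
model.add(Dense(10, activation='softmax'))

# New model with a different input size; only the first layer's kernel
# shape changes, so the later layers' weights can be reused directly.
model1 = Sequential()
model1.add(Dense(32, input_shape=(250,)))  # hypothetical new input size
model1.add(Dense(10, activation='softmax'))
model1.layers[1].set_weights(model.layers[1].get_weights())
```

The first layer's weights cannot be copied (its kernel is 500x32 vs. 250x32), so it starts from a fresh initialization.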

Resizing an input image in a Keras Lambda layer

风流意气都作罢 · Submitted on 2019-12-28 13:08:06

Question: I would like my Keras model to resize the input image using cv2 or similar. I have seen the use of ImageDataGenerator, but I would prefer to write my own generator and simply resize the image in the first layer with keras.layers.core.Lambda. How would I do this? Answer 1: If you are using the TensorFlow backend, you can use the tf.image.resize_images() function to resize the images in a Lambda layer. Here is a small example to demonstrate the same: import numpy as np import scipy.ndimage import matplotlib
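The answer's example is cut off above; here is a hedged minimal version of the same idea using `tensorflow.keras` (in TF 2.x the function is `tf.image.resize`; `tf.image.resize_images` is its TF 1.x name). The 64x64 target size is an arbitrary example:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

# Accept images of any height/width and let the model itself resize them.
inputs = Input(shape=(None, None, 3))
resized = Lambda(lambda img: tf.image.resize(img, (64, 64)))(inputs)
model = Model(inputs, resized)

# Whatever the generator yields comes out at the fixed size.
out = model.predict(np.zeros((1, 100, 80, 3), dtype='float32'))
```

Note this is TensorFlow-side resizing; cv2 cannot run inside the graph, so cv2-based resizing belongs in the data generator instead.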

How do you create a custom activation function with Keras?

我的梦境 · Submitted on 2019-12-27 12:17:41

Question: Sometimes the default standard activations like ReLU, tanh, softmax, ... and the advanced activations like LeakyReLU aren't enough, and what you need might also not be in keras-contrib. How do you create your own activation function? Answer 1: Credits to this GitHub issue comment by Ritchie Ng. # Creating a model from keras.models import Sequential from keras.layers import Dense # Custom activation function from keras.layers import Activation from keras import backend as K from keras.utils.generic_utils
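The answer's code is truncated above; the complete pattern is short: any differentiable tensor function can serve as an activation, and registering it with `get_custom_objects` lets you refer to it by name. A sketch with `tensorflow.keras`, using a swish-like function as the example:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
from tensorflow.keras.utils import get_custom_objects

# Any differentiable function of a tensor works as an activation.
def custom_activation(x):
    return x * tf.sigmoid(x)  # swish-like, just as an example

# Register it so layers can reference it by name, e.g.
# Dense(8, activation='custom_activation').
get_custom_objects().update({'custom_activation': Activation(custom_activation)})

model = Sequential([
    Dense(8, input_shape=(4,)),
    Activation(custom_activation),
])
```

With a zero input, the freshly initialized Dense layer (zero bias) outputs zeros, and the activation maps 0 to 0, which is an easy sanity check.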

Keras - passing different parameter for different data point onto Lambda Layer

只愿长相守 · Submitted on 2019-12-24 09:06:58

Question: I am working on a CNN model with a Keras/TF backend. At the end of the final convolutional layer, I need to pool the output maps from the filters. Instead of using GlobalAveragePooling or any other sort of pooling, I have to pool according to time frames which exist along the width of the output map. So if a sample output from one filter is, say, n x m, with n time frames and m outputs along the features, I just need to pool the output from frames n1 to n2, where n1 and n2 <= n. So my output
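The core operation (slice the time axis, then average) is easiest to see in plain numpy; inside Keras the same body would live in a `Lambda` layer, e.g. `Lambda(lambda x: K.mean(x[:, n1:n2, :], axis=1))`. A sketch with illustrative sizes:

```python
import numpy as np

def pool_time_frames(feature_map, n1, n2):
    """Average-pool only frames n1..n2-1 along the time axis.

    feature_map has shape (batch, n, m): n time frames, m features.
    This is what a Keras Lambda layer would compute, e.g.
    Lambda(lambda x: K.mean(x[:, n1:n2, :], axis=1)).
    """
    return feature_map[:, n1:n2, :].mean(axis=1)

fmap = np.arange(24, dtype=float).reshape(1, 6, 4)  # 6 frames, 4 features
pooled = pool_time_frames(fmap, 2, 5)               # pool frames 2, 3, 4
```

If n1/n2 differ per data point, fixed constants in the Lambda no longer suffice; one approach is to pass the frame bounds as a second model input and gather/mask inside the Lambda.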

how to know which node is dropped after using keras dropout layer

佐手、 · Submitted on 2019-12-24 08:40:02

Question: From Nick's blog it is clear that in the dropout layer of a CNN model we drop some nodes on the basis of a Bernoulli distribution. But how can we verify this, i.e., how do we check which nodes were not selected? In DropConnect we drop individual weights instead, so I think we could verify that with the help of model.get_weights(); but how do we do it in the case of a dropout layer? model = Sequential() model.add(Conv2D(2, kernel_size=(3, 3), activation='relu', input_shape=input_shape)) model.add(Conv2D(4, (3, 3), activation='relu')) model.add(MaxPooling2D(pool
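Dropped nodes never show up in `model.get_weights()`, because dropout zeroes activations, not weights. You can see which units were dropped by comparing a layer's output in training mode to its input: exact zeros mark the dropped units. A reference numpy sketch of what an (inverted) Dropout layer computes during training:

```python
import numpy as np

def inverted_dropout(activations, rate, rng):
    """Reference implementation of a Dropout layer in training mode.

    Each unit is kept with probability (1 - rate) via a Bernoulli draw;
    survivors are scaled by 1 / (1 - rate) so the expected activation
    is unchanged.
    """
    keep = rng.binomial(1, 1.0 - rate, size=activations.shape)
    return activations * keep / (1.0 - rate), keep

rng = np.random.RandomState(0)
acts = np.ones((1, 10))
out, keep = inverted_dropout(acts, rate=0.5, rng=rng)
dropped_nodes = np.where(keep[0] == 0)[0]  # indices of dropped units
```

With a real Keras model, calling the layer with `training=True` (or building a backend function with the learning phase set to 1) and looking for exact zeros in the output gives the same information.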

Keras: Feeding in part of previous layer to next layer, in CNN

六眼飞鱼酱① · Submitted on 2019-12-24 06:58:12

Question: I am trying to feed the individual kernel outputs of the previous layer into a new conv filter, to get the next layer. To do that, I tried passing each of the kernel outputs through a Conv2D by calling them by their index. The function I used is: def modification(weights_path=None, classes=2): ########### ## Input ## ########### ### 224x224x3 sized RGB Input inputs = Input(shape=(224,224,3)) ################################# ## Conv2D Layer with 5 kernels ## ###############################
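The question's function is truncated above, but the usual working pattern is: slice each kernel's output channel inside a `Lambda` (indexing a Keras tensor directly breaks the graph in older Keras), run each single-channel map through its own `Conv2D`, then merge. A hedged sketch with `tensorflow.keras` and illustrative filter counts:

```python
import tensorflow as tf
from tensorflow.keras.layers import Input, Conv2D, Lambda, Concatenate
from tensorflow.keras.models import Model

inputs = Input(shape=(224, 224, 3))
conv1 = Conv2D(5, (3, 3), padding='same', activation='relu')(inputs)

# Slice out each of the 5 kernel outputs as a (h, w, 1) map and give it
# its own Conv2D. The i=i default argument pins the loop variable.
branches = []
for i in range(5):
    channel = Lambda(lambda x, i=i: x[:, :, :, i:i + 1])(conv1)
    branches.append(Conv2D(2, (3, 3), padding='same')(channel))

model = Model(inputs, Concatenate()(branches))  # 5 branches x 2 filters = 10
```

The slice keeps the tensor 4-D (`i:i + 1` rather than `i`), which is required for the downstream `Conv2D` calls.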

keras layer that computes logarithms?

风流意气都作罢 · Submitted on 2019-12-24 06:48:04

Question: I'd like to set up a Keras layer in which each node simply computes the logarithm of the corresponding node in the preceding layer. I see from the Keras documentation that there is a "log" function in its backend module, but somehow I don't understand how to use it. Thanks in advance for any hints you can offer! Answer 1: You can use any backend function inside a Lambda layer: from keras.layers import Lambda import keras.backend as K Define just any function taking the input tensor: def
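The answer cuts off above; the whole recipe fits in a few lines. A sketch with `tensorflow.keras`, where `tf.math.log` plays the role of `K.log` from standalone Keras:

```python
import numpy as np
import tensorflow as tf
from tensorflow.keras.layers import Input, Lambda
from tensorflow.keras.models import Model

# An element-wise log "layer": wrap the backend function in a Lambda.
inputs = Input(shape=(3,))
log_out = Lambda(lambda x: tf.math.log(x))(inputs)  # K.log in standalone Keras
model = Model(inputs, log_out)

x = np.array([[1.0, np.e, np.e ** 2]], dtype='float32')
y = model.predict(x)  # log of each node's value
```

Since log is undefined at zero and for negative inputs, in practice this layer is usually preceded by a positivity-preserving activation (e.g. softplus) or applied as `log(x + epsilon)`.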