keras-layer

Keras CNN model output shape doesn't match model summary

时光怂恿深爱的人放手 submitted on 2019-12-08 10:32:38
Question: I am trying to use the convolutional part of the ResNet50() model, like this:

#generate batches
def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=4,
                class_mode='categorical', target_size=(224,224)):
    return gen.flow_from_directory(dirname, target_size=target_size,
                                   class_mode=class_mode, shuffle=shuffle,
                                   batch_size=batch_size)

trn_batches = get_batches("path_to_dirctory", shuffle=False, batch_size=4)

#create model
rn_mean = np.array([123.68, 116.779, 103.939], dtype=np
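
For context, a minimal sketch of isolating the convolutional part of ResNet50 and checking that predict() agrees with model.summary(); the dummy batch and variable names below are illustrative assumptions, not the asker's code.

import numpy as np
from keras.applications.resnet50 import ResNet50
from keras.models import Model

# Convolutional part only: include_top=False drops the classifier head.
base = ResNet50(include_top=False, weights='imagenet', input_shape=(224, 224, 3))
conv_part = Model(inputs=base.input, outputs=base.output)
conv_part.summary()                       # the last row shows the symbolic output shape

dummy_batch = np.zeros((4, 224, 224, 3), dtype='float32')
features = conv_part.predict(dummy_batch)
print(features.shape)   # should agree with the summary, e.g. (4, 7, 7, 2048) in recent Keras versions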

Error while calling eval() on Tensor variable in keras

*爱你&永不变心* submitted on 2019-12-08 09:33:21
Question: I am using Keras and am using a layer output for some modifications. Before using the output (a tensor variable) I convert it to a numpy array by calling eval() on it, as below:

def convert_output(orig_output):
    conv_output = invoke_modifications(orig_output.eval(), 8)

The code fails with the following error:

File "<ipython-input-11-df86946997d5>", line 1, in <module>
    orig_output.eval()
File "C:\ENV\p34\lib\site-packages\theano-0.9.0.dev4-py3.4.egg\theano\gof\graph.py", line 516, in eval
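
A symbolic layer output depends on the model's input placeholder, so eval() cannot produce a value until real data is fed in. Below is a minimal sketch of the usual workaround, a keras.backend function that maps concrete input arrays to the layer's output; the toy model and array shapes are assumptions for illustration only.

import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras import backend as K

# Hypothetical small model standing in for the asker's network.
model = Sequential([Dense(4, activation='relu', input_shape=(10,))])

# Maps real input data to the layer's output instead of eval() on a bare symbolic tensor.
get_layer_output = K.function([model.input], [model.layers[0].output])

x = np.random.rand(2, 10).astype('float32')
layer_output_np = get_layer_output([x])[0]   # plain numpy array
print(layer_output_np.shape)                 # (2, 4)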

How can I implement KL-divergence regularization for Keras?

China☆狼群 submitted on 2019-12-08 09:08:49
This is a follow-up to the question "Keras backend mean function: 'float' object has no attribute 'dtype'?". I am trying to make a new regularizer for Keras. Here is my code:

import keras
from keras import initializers
from keras.models import Model, Sequential
from keras.layers import Input, Dense, Activation
from keras import regularizers
from keras import optimizers
from keras import backend as K

kullback_leibler_divergence = keras.losses.kullback_leibler_divergence

def kl_divergence_regularizer(inputs):
    means = K.mean((inputs))
    rho = 0.05
    down = 0.05 * K.ones_like(means)
    up = (1 -
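
For comparison, a minimal sketch of a common way to write the sparse-autoencoder KL penalty as an activity regularizer, keeping the per-unit mean activation as a Keras tensor (K.mean over the batch axis only) so no plain Python float enters the expression. The 784/64 layer sizes and the 0.01 weighting factor are assumptions, not taken from the question.

from keras import backend as K
from keras.layers import Input, Dense
from keras.models import Model

def kl_divergence_regularizer(activations):
    rho = 0.05                                   # target sparsity, as in the question
    # Mean activation per hidden unit, kept as a tensor rather than a Python float.
    rho_hat = K.mean(activations, axis=0)
    rho_hat = K.clip(rho_hat, K.epsilon(), 1 - K.epsilon())
    kl = rho * K.log(rho / rho_hat) + (1 - rho) * K.log((1 - rho) / (1 - rho_hat))
    return 0.01 * K.sum(kl)                      # 0.01 is an assumed weighting factor

inputs = Input(shape=(784,))
# Sigmoid keeps activations in (0, 1), which the KL formula above requires.
hidden = Dense(64, activation='sigmoid',
               activity_regularizer=kl_divergence_regularizer)(inputs)
outputs = Dense(784, activation='sigmoid')(hidden)
model = Model(inputs, outputs)
model.compile(optimizer='adam', loss='binary_crossentropy')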

how do I implement Gaussian blurring layer in Keras?

让人想犯罪 __ submitted on 2019-12-08 07:51:18
Question: I have an autoencoder and I need to add a Gaussian noise layer after my output. I need a custom layer to do this, but I really do not know how to produce it; I need to produce it using tensors. What should I do if I want to implement the above equation in the call part of the following code?

class SaltAndPepper(Layer):

    def __init__(self, ratio, **kwargs):
        super(SaltAndPepper, self).__init__(**kwargs)
        self.supports_masking = True
        self.ratio = ratio

    # the definition of the call method of custom
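
One plausible direction (an assumption about the intent, not the asker's accepted solution) is a custom layer that blurs each channel with a fixed Gaussian kernel via K.depthwise_conv2d, mirroring the structure of the SaltAndPepper layer above.

import numpy as np
from keras import backend as K
from keras.layers import Layer

def gaussian_kernel(size=3, sigma=1.0):
    # Build a normalized 2-D Gaussian kernel as a numpy array.
    ax = np.arange(size) - (size - 1) / 2.0
    xx, yy = np.meshgrid(ax, ax)
    kernel = np.exp(-(xx ** 2 + yy ** 2) / (2.0 * sigma ** 2))
    return (kernel / kernel.sum()).astype('float32')

class GaussianBlur(Layer):
    def __init__(self, kernel_size=3, sigma=1.0, **kwargs):
        super(GaussianBlur, self).__init__(**kwargs)
        self.kernel_size = kernel_size
        self.sigma = sigma

    def build(self, input_shape):
        channels = int(input_shape[-1])
        k = gaussian_kernel(self.kernel_size, self.sigma)
        # Depthwise kernel shape: (height, width, in_channels, depth_multiplier).
        k = np.repeat(k[:, :, None, None], channels, axis=2)
        self.blur_kernel = K.constant(k)
        super(GaussianBlur, self).build(input_shape)

    def call(self, inputs):
        # Each channel is convolved with the same fixed Gaussian kernel.
        return K.depthwise_conv2d(inputs, self.blur_kernel, padding='same')

    def compute_output_shape(self, input_shape):
        return input_shape

Because the kernel is created as a constant rather than with add_weight, the layer has no trainable parameters and simply applies the blur in the forward pass.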

Change input tensor shape for VGG16 application

大城市里の小女人 submitted on 2019-12-08 06:08:50
Question: I want to feed images with the shape (160, 320, 3) to

VGG16(input_tensor=input_tensor, include_top=False)

How can I include a layer that reshapes the images to the shape expected by the VGG16 model, which is (224, 224, 3)?

Answer 1: The VGG16 model in itself is just a set of weights for a fixed sequence of layers and fixed convolution kernel sizes. That doesn't mean those convolution kernels cannot be applied to images of other sizes. For example, in your case:

from keras.models import Model
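
In the spirit of that answer, a minimal sketch showing that VGG16 without its dense top can be instantiated directly on a (160, 320, 3) input; the answer's original code is truncated above, so this is a reconstruction under that assumption rather than its exact text.

from keras.applications.vgg16 import VGG16
from keras.layers import Input

# The convolutional weights are size-agnostic, so the model can be built
# directly on the (160, 320, 3) input without any reshaping layer.
input_tensor = Input(shape=(160, 320, 3))
vgg = VGG16(input_tensor=input_tensor, include_top=False, weights='imagenet')
vgg.summary()   # final feature map is (None, 5, 10, 512) for this input size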

Restricting the output values of layers in Keras

久未见 submitted on 2019-12-08 05:18:58
Question: I have defined my MLP in the code below. I want to extract the values of layer_2.

def gater(self):
    dim_inputs_data = Input(shape=(self.train_dim[1],))
    dim_svm_yhat = Input(shape=(3,))
    layer_1 = Dense(20, activation='sigmoid')(dim_inputs_data)
    layer_2 = Dense(3, name='layer_op_2', activation='sigmoid',
                    use_bias=False)(layer_1)
    layer_3 = Dot(1)([layer_2, dim_svm_yhat])
    out_layer = Dense(1, activation='tanh')(layer_3)
    model = Model(input=[dim_inputs_data, dim_svm_yhat], output=out_layer)
    adam =
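
A minimal sketch of one standard way to get those activations: a second Model that shares the trained layers and exposes layer_op_2 directly. The feature dimension 10 stands in for self.train_dim[1], and the random arrays are placeholders for real data.

import numpy as np
from keras.models import Model

# Assuming `model` is the network returned by gater(); layer_2 depends only on
# the first input, so the intermediate model needs only that input.
intermediate_model = Model(inputs=model.inputs[0],
                           outputs=model.get_layer('layer_op_2').output)

x_data = np.random.rand(5, 10).astype('float32')   # 10 stands in for self.train_dim[1]
layer_2_values = intermediate_model.predict(x_data)
print(layer_2_values.shape)                         # (5, 3)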

Keras - Variational Autoencoder Incompatible shape

廉价感情. submitted on 2019-12-08 05:13:20
Question: I am trying to adapt the code to perform 1-D convolution on 1-D input. The model compiles, so you can see the layers and shapes in .summary(), but it throws the error when I .fit() the model; it seems to occur in the loss computation. Below is my code:

import numpy as np
from scipy.stats import norm
from keras.layers import Input, Dense, Lambda, Flatten, Reshape
from keras.layers import Conv1D, UpSampling1D
from keras.models import Model
from keras import backend as K
from keras import
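
Incompatible-shape errors in a VAE usually surface in the reconstruction term, when the input and the decoder output no longer have the same layout after Conv1D/Reshape layers. Below is a hedged, generic sketch of a VAE loss that flattens both sides first so the element counts always line up; it is not the asker's code, and the helper names are assumptions.

from keras import backend as K
from keras.losses import binary_crossentropy

def vae_loss(x, x_decoded, z_mean, z_log_var, original_dim):
    # Flatten both tensors so the reconstruction term compares matching shapes.
    x_flat = K.batch_flatten(x)
    x_dec_flat = K.batch_flatten(x_decoded)
    reconstruction = original_dim * binary_crossentropy(x_flat, x_dec_flat)
    # Standard closed-form KL term for a diagonal Gaussian posterior.
    kl = -0.5 * K.sum(1 + z_log_var - K.square(z_mean) - K.exp(z_log_var), axis=-1)
    return K.mean(reconstruction + kl)

With the functional API, one common pattern is to attach such a loss via model.add_loss(...) and then compile the model with loss=None.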

How to split a Keras model, with a non-sequential architecture like ResNet, into sub-models?

杀马特。学长 韩版系。学妹 submitted on 2019-12-08 05:09:49
Question: My model is a ResNet-152 that I want to cut into two sub-models. The problem is with the second one: I can't figure out how to build a model from an intermediate layer to the output. I tried the code from this response and it doesn't work for me. Here is my code:

def getLayerIndexByName(model, layername):
    for idx, layer in enumerate(model.layers):
        if layer.name == layername:
            return idx

idx = getLayerIndexByName(resnet, 'res3a_branch2a')

input_shape = resnet.layers[idx].get_input_shape_at(0) #
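
The bottom half is easy to express with the functional API; for the top half, one trick with the TensorFlow backend is to let K.function take the intermediate tensor itself as an input, so the non-sequential remainder of the graph never has to be rebuilt. The sketch below is illustrative only: it substitutes ResNet50 (ResNet-152 is not bundled with keras.applications) and reuses the cut layer name from the question.

import numpy as np
from keras.applications.resnet50 import ResNet50   # stand-in for ResNet-152
from keras.models import Model
from keras import backend as K

resnet = ResNet50(weights='imagenet')
cut = resnet.get_layer('res3a_branch2a')            # layer name as in the question

# Sub-model 1: original input -> intermediate activations.
bottom = Model(inputs=resnet.input, outputs=cut.output)

# "Sub-model 2": feed precomputed activations straight into the rest of the graph
# (TensorFlow backend only; 0 = inference phase for BatchNorm layers).
top_fn = K.function([cut.output, K.learning_phase()], [resnet.output])

img = np.random.rand(1, 224, 224, 3).astype('float32')
features = bottom.predict(img)
preds = top_fn([features, 0])[0]
print(preds.shape)                                   # (1, 1000)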

Constructing a keras model

删除回忆录丶 submitted on 2019-12-08 02:02:30
Question: I don't understand what's happening in this code:

def construct_model(use_imagenet=True):
    # line 1: how do we keep all layers of this model ?
    model = keras.applications.InceptionV3(include_top=False,
                                           input_shape=(IMG_SIZE, IMG_SIZE, 3),
                                           weights='imagenet' if use_imagenet else None)
    new_output = keras.layers.GlobalAveragePooling2D()(model.output)
    new_output = keras.layers.Dense(N_CLASSES, activation='softmax')(new_output)
    model = keras.engine
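
The truncated last line presumably wraps the pretrained graph plus the new head in a new Model (keras.engine.training.Model is the same class that keras.models.Model exposes). Below is a hedged sketch of the full pattern the snippet appears to follow; IMG_SIZE and N_CLASSES are assumed values, since the snippet does not define them.

import keras

IMG_SIZE = 299        # assumed value; not given in the snippet
N_CLASSES = 10        # assumed value; not given in the snippet

def construct_model(use_imagenet=True):
    # Every InceptionV3 layer is kept: the new Model below is built on the same
    # graph, so base.inputs still flows through all pretrained layers.
    base = keras.applications.InceptionV3(
        include_top=False,
        input_shape=(IMG_SIZE, IMG_SIZE, 3),
        weights='imagenet' if use_imagenet else None)
    new_output = keras.layers.GlobalAveragePooling2D()(base.output)
    new_output = keras.layers.Dense(N_CLASSES, activation='softmax')(new_output)
    return keras.models.Model(inputs=base.inputs, outputs=new_output)

model = construct_model()
model.summary()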

Implementing a tensorflow graph into a Keras model

回眸只為那壹抹淺笑 submitted on 2019-12-07 17:05:54
Question: I am trying to implement roughly the following architecture in Keras (preferably) or TensorFlow.

[Diagram: Input0 -> Conv Layer 1 -> Max Pool 1 -> Dense Layer -> Sum Layer -> Out; Input1 (converted to trainable weights) -> Sum Layer]

In short, it is pretty much a model with two inputs, merged into one output using an Add([input0, input1])
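
A minimal sketch of that two-branch layout with the functional API. The layer sizes are arbitrary, and "converted to trainable weights" is approximated here by a trainable Dense projection of Input1, which is an assumption about the intent.

import numpy as np
from keras.layers import Input, Conv2D, MaxPooling2D, Flatten, Dense, Add
from keras.models import Model

# Branch 0: Input0 -> Conv -> MaxPool -> Dense.
input0 = Input(shape=(28, 28, 1))                  # assumed image size
x = Conv2D(16, (3, 3), activation='relu')(input0)
x = MaxPooling2D((2, 2))(x)
x = Flatten()(x)
x = Dense(32, activation='relu')(x)

# Branch 1: Input1 passed through a trainable Dense projection
# (standing in for "converted to trainable weights").
input1 = Input(shape=(10,))
y = Dense(32, activation='linear')(input1)

# Merge the two branches with Add, then produce the single output.
merged = Add()([x, y])
out = Dense(1, activation='sigmoid')(merged)

model = Model(inputs=[input0, input1], outputs=out)
model.compile(optimizer='adam', loss='binary_crossentropy')
model.summary()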