keras-layer

ImportError: cannot import name '_obtain_input_shape' from keras

六眼飞鱼酱① submitted on 2019-11-29 03:08:21
In Keras, I'm trying to import _obtain_input_shape as follows:

from keras.applications.imagenet_utils import _obtain_input_shape

However, I get the following error:

ImportError: cannot import name '_obtain_input_shape'

I'm importing _obtain_input_shape so that I can determine the correct input shape of the input tensor (so as to load VGG-Face), as follows:

input_shape = _obtain_input_shape(input_shape, default_size=224, min_size=48, data_format=K.image_data_format(), require_flatten=include_top)

Please assist? Thanks in advance.
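In Keras 2.2 and later this private helper moved into the separate keras_applications package, and along the way its include_top argument was renamed require_flatten. A minimal sketch of one common workaround, assuming that package is installed (the default shape arguments below are hypothetical):

try:
    from keras.applications.imagenet_utils import _obtain_input_shape   # older Keras
except ImportError:
    from keras_applications.imagenet_utils import _obtain_input_shape   # Keras >= 2.2
from keras import backend as K

input_shape = _obtain_input_shape(None,                  # let the helper fall back to the default size
                                  default_size=224,
                                  min_size=48,
                                  data_format=K.image_data_format(),
                                  require_flatten=True)  # replaces the older include_top argument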

How to set the input of a Keras layer with a Tensorflow tensor?

不羁的心 submitted on 2019-11-29 01:33:29
In my previous question, I used Keras' Layer.set_input() to connect my TensorFlow pre-processing output tensor to my Keras model's input. However, this method was removed after Keras version 1.1.1. How can I achieve this in newer Keras versions? Example:

# Tensorflow pre-processing
raw_input = tf.placeholder(tf.string)
### some TF operations on raw_input ###
tf_embedding_input = ...  # pre-processing output tensor

# Keras model
model = Sequential()
e = Embedding(max_features, 128, input_length=maxlen)

### THIS DOESN'T WORK ANYMORE ###
e.set_input(tf_embedding_input)
#####################
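A minimal sketch of the usual replacement in Keras 2.x, assuming tf_embedding_input is an int32 tensor of shape (batch, maxlen) holding token ids: pass the external tensor to Input(tensor=...) and build the model with the functional API (the pooling and output layers below are hypothetical).

from keras.layers import Input, Embedding, GlobalAveragePooling1D, Dense
from keras.models import Model

ids = Input(tensor=tf_embedding_input)                     # wraps the external TF tensor
x = Embedding(max_features, 128, input_length=maxlen)(ids)
x = GlobalAveragePooling1D()(x)
preds = Dense(1, activation='sigmoid')(x)                  # hypothetical output head
model = Model(ids, preds)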

Tensorflow Allocation Memory: Allocation of 38535168 exceeds 10% of system memory

被刻印的时光 ゝ submitted on 2019-11-29 01:20:58
Using ResNet50 pre-trained weights, I am trying to build a classifier. The code base is fully implemented in Keras' high-level TensorFlow API. The complete code is posted at the GitHub link below. Source Code: Classification Using ResNet50 Architecture. The file size of the pre-trained model is 94.7 MB. I loaded the pre-trained file:

new_model = Sequential()
new_model.add(ResNet50(include_top=False, pooling='avg', weights=resnet_weight_paths))

and fit the model:

train_generator = data_generator.flow_from_directory(
    'path_to_the_training_set',
    target_size = (IMG_SIZE,IMG_SIZE),
    batch_size = 12,
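The warning usually means a single tensor allocation is larger than 10% of available system RAM; a common first step is to shrink the batch size (and, if needed, the image size) so each training step allocates less. A minimal sketch under that assumption, reusing the names from the question with hypothetical step counts:

train_generator = data_generator.flow_from_directory(
    'path_to_the_training_set',
    target_size=(IMG_SIZE, IMG_SIZE),
    batch_size=4)                     # smaller batches -> smaller per-step allocations

new_model.fit_generator(train_generator, steps_per_epoch=100, epochs=10)  # hypothetical values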

How to use lambda layer in keras?

半世苍凉 submitted on 2019-11-29 00:12:33
I want to define a Lambda layer to combine features with a cross product, then merge those models, just like the figure. What should I do? Test model_1: get 128 dimensions from the Dense layer, use pywt to get two 64-dimension features (cA, cD), then return cA*cD. (Of course I want to combine the two models, but I'm trying model_1 first.)

from keras.models import Sequential, Model
from keras.layers import Input, Convolution2D, MaxPooling2D
from keras.layers.core import Dense, Dropout, Activation, Flatten, Lambda
import pywt

def myFunc(x):
    (cA, cD) = pywt.dwt(x, 'db1')
    # x = x*x
    return cA*cD

batch_size = 32
nb_classes = 3
nb_epoch = 20
img
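Note that pywt operates on NumPy arrays, while a Lambda layer has to be built from symbolic tensor operations. A minimal sketch of a Lambda layer that stands in for the wavelet step by splitting the 128-dimensional Dense output into two 64-dimensional halves and multiplying them element-wise (this is not a real 'db1' transform, only an illustration of the Lambda mechanics):

from keras.layers import Lambda

def cross_halves(x):
    cA, cD = x[:, :64], x[:, 64:]     # two 64-dim "features"
    return cA * cD                    # element-wise product

cross = Lambda(cross_halves, output_shape=(64,))
# usage: y = cross(dense_128_output)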

Error when checking model input: expected convolution2d_input_1 to have 4 dimensions, but got array with shape (32, 32, 3)

徘徊边缘 submitted on 2019-11-28 22:04:28
Question: I want to train a deep network starting with the following layer:

model = Sequential()
model.add(Conv2D(32, 3, 3, input_shape=(32, 32, 3)))

using

history = model.fit_generator(get_training_data(), samples_per_epoch=1, nb_epoch=1, nb_val_samples=5, verbose=1, validation_data=get_validation_data())

with the following generator:

def get_training_data(self):
    while 1:
        for i in range(1, 5):
            image = self.X_train[i]
            label = self.Y_train[i]
            yield (image, label)

(The validation generator looks similar.) During
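The error in the title typically means fit_generator received single 3D images where it expects 4D batches. A minimal sketch of the usual fix, assuming X_train[i] has shape (32, 32, 3): add a leading batch axis to each yielded array.

import numpy as np

def get_training_data(self):
    while 1:
        for i in range(1, 5):
            image = np.expand_dims(self.X_train[i], axis=0)   # shape (1, 32, 32, 3)
            label = np.expand_dims(self.Y_train[i], axis=0)   # shape (1, num_classes)
            yield (image, label)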

Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution'

血红的双手。 submitted on 2019-11-28 20:48:09
I got this error message when declaring the input layer in Keras:

ValueError: Negative dimension size caused by subtracting 3 from 1 for 'conv2d_2/convolution' (op: 'Conv2D') with input shapes: [?,1,28,28], [3,3,28,32].

My code is like this:

model.add(Convolution2D(32, 3, 3, activation='relu', input_shape=(1,28,28)))

Sample application: https://github.com/IntellijSys/tensorflow/blob/master/Keras.ipynb

By default, Convolution2D (https://keras.io/layers/convolutional/) expects the input to be in the format (samples, rows, cols, channels), which is "channels-last". Your data seems to be in the channels-first format (samples, channels, rows, cols).
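A minimal sketch of the two usual fixes, assuming 28x28 single-channel images: either tell the layer explicitly that the data is channels-first, or reshape the data to channels-last.

from keras.layers import Conv2D

# Option 1: keep channels-first data and say so explicitly
model.add(Conv2D(32, (3, 3), activation='relu',
                 input_shape=(1, 28, 28), data_format='channels_first'))

# Option 2: reshape the data to channels-last and keep the default data_format
# X = X.reshape(-1, 28, 28, 1)
# model.add(Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)))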

How do you make TensorFlow + Keras fast with a TFRecord dataset?

ぐ巨炮叔叔 submitted on 2019-11-28 17:09:26
Question: What is an example of how to use a TensorFlow TFRecord with a Keras Model and tf.session.run() while keeping the dataset in tensors with queue runners? Below is a snippet that works, but it needs the following improvements: use the Model API, specify an Input(), load a dataset from a TFRecord, and run through the dataset in parallel (such as with a QueueRunner). Here is the snippet; there are several TODO lines indicating what is needed:

from keras.models import Model
import tensorflow as tf
from keras
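A minimal sketch of one way to keep the data in tensors, using tf.data instead of the queue runners mentioned in the question (TF 1.x style, with a hypothetical TFRecord file whose examples hold a flattened float 'image' and an int 'label'):

import tensorflow as tf
from keras.layers import Input, Dense
from keras.models import Model

def parse(example_proto):
    features = tf.parse_single_example(example_proto, {
        'image': tf.FixedLenFeature([784], tf.float32),
        'label': tf.FixedLenFeature([], tf.int64)})
    return features['image'], features['label']

dataset = tf.data.TFRecordDataset('train.tfrecords').map(parse).batch(32).repeat()
image_batch, label_batch = dataset.make_one_shot_iterator().get_next()

inputs = Input(tensor=image_batch)                       # the Keras input is the TF tensor
outputs = Dense(10, activation='softmax')(inputs)
model = Model(inputs, outputs)
model.compile('adam', 'sparse_categorical_crossentropy', target_tensors=[label_batch])
model.fit(steps_per_epoch=100, epochs=1)                 # no feed_dict: data streams from the TFRecord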

How to add and remove new layers in keras after loading weights?

妖精的绣舞 submitted on 2019-11-28 16:27:59
Question: I am trying to do transfer learning; for that purpose I want to remove the last two layers of the neural network and add another two layers. This is an example code which also outputs the same error.

from keras.models import Sequential
from keras.layers import Input, Flatten
from keras.layers.convolutional import Convolution2D, MaxPooling2D
from keras.layers.core import Dropout, Activation
from keras.layers.pooling import GlobalAveragePooling2D
from keras.models import Model

in_img = Input
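A minimal sketch of the usual pattern once a model is loaded with its weights, assuming a model named base whose last two layers should be replaced (the new layer sizes and names are hypothetical): take an earlier layer's output and build a new Model from it.

from keras.models import Model
from keras.layers import Dense

x = base.layers[-3].output                                 # output just before the two layers to drop
x = Dense(256, activation='relu', name='new_fc')(x)
predictions = Dense(10, activation='softmax', name='new_out')(x)
new_model = Model(inputs=base.input, outputs=predictions)

for layer in base.layers[:-2]:                             # optionally freeze the kept layers
    layer.trainable = False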

Using Tensorflow Layers in Keras

孤街浪徒 submitted on 2019-11-28 08:36:50
I've been trying to build a sequential model in Keras using the pooling layer tf.nn.fractional_max_pool. I know I could try making my own custom layer in Keras, but I'm trying to see if I can use the layer already in TensorFlow. For the following code snippet:

p_ratio = [1.0, 1.44, 1.44, 1.0]
model = Sequential()
model.add(ZeroPadding2D((2,2), input_shape=(1, 48, 48)))
model.add(Conv2D(320, (3, 3), activation=PReLU()))
model.add(ZeroPadding2D((1,1)))
model.add(Conv2D(320, (3, 3), activation=PReLU()))
model.add(InputLayer(input_tensor=tf.nn.fractional_max_pool(model.layers[3].output, p_ratio)))
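A minimal sketch of wrapping the TensorFlow op in a Lambda layer instead of an InputLayer, under the same p_ratio: tf.nn.fractional_max_pool returns a tuple (output, row_pooling_sequence, col_pooling_sequence), so only the first element should flow on to the next layer.

import tensorflow as tf
from keras.layers import Lambda

model.add(Lambda(lambda x: tf.nn.fractional_max_pool(x, p_ratio)[0]))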

TimeDistributed(Dense) vs Dense in Keras - Same number of parameters

喜欢而已 submitted on 2019-11-27 20:09:47
I'm building a model that converts a string to another string using recurrent layers (GRUs). I have tried both a Dense and a TimeDistributed(Dense) layer as the last-but-one layer, but I don't understand the difference between the two when using return_sequences=True, especially as they seem to have the same number of parameters. My simplified model is the following:

InputSize = 15
MaxLen = 64
HiddenSize = 16

inputs = keras.layers.Input(shape=(MaxLen, InputSize))
x = keras.layers.recurrent.GRU(HiddenSize, return_sequences=True)(inputs)
x = keras.layers.TimeDistributed(keras.layers.Dense
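In Keras 2, a Dense layer applied to a 3D input acts independently on the last axis at every time step, so Dense(n) and TimeDistributed(Dense(n)) share the same (HiddenSize x n) kernel plus bias and therefore report the same parameter count. A minimal sketch comparing the two heads on the same GRU output, assuming an output size of InputSize:

import keras

InputSize, MaxLen, HiddenSize = 15, 64, 16

inputs = keras.layers.Input(shape=(MaxLen, InputSize))
h = keras.layers.GRU(HiddenSize, return_sequences=True)(inputs)

out_td = keras.layers.TimeDistributed(keras.layers.Dense(InputSize))(h)
out_dense = keras.layers.Dense(InputSize)(h)     # on 3D input, Dense acts per time step

keras.models.Model(inputs, out_td).summary()     # both summaries show the same head size:
keras.models.Model(inputs, out_dense).summary()  # HiddenSize*InputSize + InputSize parameters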