conv-neural-network

ValueError: When feeding symbolic tensors to a model, we expect the tensors to have a static batch size

半世苍凉 submitted on 2020-02-25 04:04:13
Question: I am new to Keras and I was trying to build a text-classification CNN model using Python 3.6 when I encountered this error: Traceback (most recent call last): File "model.py", line 94, in <module> model.fit([x1, x2], y_label, batch_size=batch_size, epochs=epochs, verbose=1, callbacks=[checkpoint], validation_split=0.2) # starts training File "/../../anaconda3/lib/python3.6/site-packages/keras/engine/training.py", line 955, in fit batch_size=batch_size) File "/../../anaconda3/lib/python3.6
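This error commonly appears when model.fit receives something other than plain NumPy arrays, for example the SciPy sparse matrices that text vectorizers produce. A minimal sketch of the usual fix, converting the inputs to dense NumPy arrays before fitting (the model and shapes here are hypothetical stand-ins, not taken from the asker's model.py):

```python
import numpy as np
from scipy.sparse import csr_matrix
from tensorflow import keras

# Hypothetical sparse text features, e.g. from a CountVectorizer
x1_sparse = csr_matrix(np.random.randint(0, 2, size=(100, 50)).astype("float32"))
y = np.random.randint(0, 2, size=(100, 1)).astype("float32")

# Converting to a dense ndarray avoids the "symbolic tensors" ValueError
x1 = np.asarray(x1_sparse.todense())

model = keras.Sequential([
    keras.layers.Dense(16, activation="relu", input_shape=(50,)),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy")
model.fit(x1, y, batch_size=32, epochs=1, verbose=0)
```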

Number of epochs to be used in a Keras sequential model

你说的曾经没有我的故事 submitted on 2020-02-24 14:48:54
Question: I'm building a Keras sequential model for binary image classification. When I use around 70 to 80 epochs, I start getting good validation accuracy (81%). But I was told that this is a very large number of epochs and could hurt the performance of the network. My question is: is there a limit on the number of epochs that I shouldn't exceed? Note that I have 2000 training images and 800 validation images. Answer 1: If the number of epochs is very high, your model may overfit and your
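There is no fixed upper limit on epochs; the usual practice is to request a generous number and let a validation-based callback stop training once the validation loss stops improving. A hedged sketch using Keras's EarlyStopping callback (the tiny model and random data here are placeholders, not the asker's):

```python
import numpy as np
from tensorflow import keras

# Toy stand-ins for the training / validation images
x = np.random.rand(200, 32, 32, 3).astype("float32")
y = np.random.randint(0, 2, size=(200, 1)).astype("float32")

model = keras.Sequential([
    keras.layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    keras.layers.GlobalAveragePooling2D(),
    keras.layers.Dense(1, activation="sigmoid"),
])
model.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# Ask for "too many" epochs; stop when val_loss hasn't improved for 5 epochs
stopper = keras.callbacks.EarlyStopping(monitor="val_loss", patience=5,
                                        restore_best_weights=True)
history = model.fit(x, y, epochs=200, validation_split=0.3,
                    callbacks=[stopper], verbose=0)
```

With restore_best_weights=True, the model keeps the weights from the epoch with the best validation loss, so overshooting the "right" number of epochs costs nothing but time.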

ValueError: Error when checking target: expected dense_44 to have shape (1,) but got array with shape (3,). They seem to match though

非 Y 不嫁゛ submitted on 2020-02-06 08:39:06
Question: I've searched several similar topics covering comparable problems, for example this, this, and this, among others. Despite this, I still haven't managed to solve my issue, which is why I'm now asking the community. What I'm ultimately trying to do is use a CNN to predict three parameters by regression. The inputs are matrices (which can now be plotted as RGB images after several pre-processing steps) with an initial size of (3724, 4073, 3). Due to the size of the data set I'm feeding the
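This error usually means the final Dense layer's output size does not match the label shape: a Dense(1) head expects targets of shape (1,), while labels carrying three parameters have shape (3,). A minimal sketch (a hypothetical architecture, not the asker's full CNN) with a three-unit linear output for regressing three values:

```python
import numpy as np
from tensorflow import keras

# Toy images and three regression targets per sample
x = np.random.rand(16, 64, 64, 3).astype("float32")
y = np.random.rand(16, 3).astype("float32")   # shape (n, 3) must match Dense(3)

model = keras.Sequential([
    keras.layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3)),
    keras.layers.GlobalAveragePooling2D(),
    # The output layer needs 3 units (and a linear activation, not softmax)
    # when predicting 3 continuous parameters
    keras.layers.Dense(3, activation="linear"),
])
model.compile(optimizer="adam", loss="mse")
model.fit(x, y, epochs=1, verbose=0)
```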

KerasLayer vs tf.keras.applications performances

て烟熏妆下的殇ゞ submitted on 2020-02-05 03:36:41
Question: I've trained some networks with ResNetV2 50 ( https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4 ) and they work very well on my datasets. Then I tried tf.keras.applications.ResNet50 and its accuracy is much lower than the other. Here are the two models: The first (with hub) base_model = hub.KerasLayer('https://tfhub.dev/google/imagenet/resnet_v2_50/feature_vector/4', input_shape=(IMAGE_H, IMAGE_W, 3)) base_model.trainable = False model = tf.keras.Sequential([ base_model , Dense(num_classes,
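One common cause of a gap like this (an assumption about this case, since the code is truncated): the TF-Hub resnet_v2_50 feature vector expects inputs scaled to [0, 1], while tf.keras.applications.ResNet50 is a ResNet v1 whose ImageNet weights expect the library's own preprocess_input (channel-wise mean subtraction). Feeding [0, 1] images to the latter degrades accuracy. A sketch of the preprocessing difference (dummy data, no weights downloaded):

```python
import numpy as np
from tensorflow.keras.applications.resnet50 import preprocess_input

# A fake batch of images with pixel values in [0, 255]
raw = np.random.randint(0, 256, size=(2, 224, 224, 3)).astype("float32")

# For the TF-Hub resnet_v2 feature vector: simply scale to [0, 1]
hub_input = raw / 255.0

# For tf.keras.applications.ResNet50: use its dedicated preprocess_input
# (mean subtraction in 'caffe' mode, not a simple rescale)
apps_input = preprocess_input(raw.copy())
```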

LSTM on top of a pre-trained CNN

别等时光非礼了梦想. submitted on 2020-02-03 08:59:28
Question: I have trained a CNN and now I want to load the model and put an LSTM on top, but I'm getting some errors. ''' Load the output of the CNN ''' cnn_model = load_model(os.path.join('weights', 'CNN_patch_epoch-20.hdf5')) ''' Freeze previous layers ''' for layer in cnn_model.layers: layer.trainable = False last_layer = cnn_model.get_layer('pool5').output x = TimeDistributed(Flatten())(last_layer) x = LSTM(neurons, dropout=dropout, name='lstm')(x) out = Dense(n_output, kernel_initializer=weight

Visualizing ConvNet filters using my own fine-tuned network resulting in a “NoneType” when running: K.gradients(loss, model.input)[0]

試著忘記壹切 submitted on 2020-02-02 06:55:25
Question: I have a fine-tuned network that I created which uses VGG16 as its base. I am following section 5.4.2, "Visualizing ConvNet Filters", in Deep Learning with Python (which is very similar to the guide on the Keras blog to visualize convnet filters). The guide simply uses the vgg16 network. My fine-tuned model uses the vgg16 model as the base, for example: model.summary() Layer (type) Output Shape Param # ======================================================================= vgg16 (Model)
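Two things typically cause the NoneType here: under TF 2.x's eager execution, K.gradients on a model's symbolic input returns None, and in a fine-tuned model the convolutional layers are nested inside an inner "vgg16" sub-model, so they must be reached through that sub-model. A sketch of the gradient step using tf.GradientTape with a small stand-in model (names and shapes are illustrative, not the asker's):

```python
import tensorflow as tf
from tensorflow.keras import layers

# Stand-in for a fine-tuned network with a nested convolutional base
base = tf.keras.Sequential([
    layers.Conv2D(8, 3, activation="relu", input_shape=(64, 64, 3),
                  name="block1_conv1"),
], name="vgg16")
model = tf.keras.Sequential([base,
                             layers.GlobalAveragePooling2D(),
                             layers.Dense(2)])

# Reach the conv layer through the nested sub-model, not the outer model
inner_layer = model.get_layer("vgg16").get_layer("block1_conv1")
feature_extractor = tf.keras.Model(base.inputs, inner_layer.output)

img = tf.Variable(tf.random.uniform((1, 64, 64, 3)))
with tf.GradientTape() as tape:
    activation = feature_extractor(img)
    loss = tf.reduce_mean(activation[..., 0])  # maximize filter 0's response
grads = tape.gradient(loss, img)               # not None under eager execution
```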

Adding an additional value to a Convolutional Neural Network Input? [closed]

喜夏-厌秋 submitted on 2020-02-01 05:20:06
Question: I have a dataset of images I want to input to a Convolutional Neural Network model; however, each image has an associated range, or distance, from the object in the image. I want to input this range as an additional piece of context for the CNN model. Does
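One standard approach (a sketch, not the only answer): use the Keras functional API with two inputs, run the image through the convolutional layers, then concatenate the flattened image features with the scalar range before the dense head. All names and sizes here are illustrative:

```python
from tensorflow import keras
from tensorflow.keras import layers

image_in = keras.Input(shape=(64, 64, 3), name="image")
range_in = keras.Input(shape=(1,), name="range")  # distance to the object

x = layers.Conv2D(16, 3, activation="relu")(image_in)
x = layers.MaxPooling2D()(x)
x = layers.Flatten()(x)

# Inject the scalar range alongside the learned image features
x = layers.Concatenate()([x, range_in])
x = layers.Dense(32, activation="relu")(x)
out = layers.Dense(1, activation="sigmoid")(x)

model = keras.Model(inputs=[image_in, range_in], outputs=out)
```

Training then passes both inputs, e.g. model.fit([images, ranges], labels, ...), with ranges ideally normalized to a scale comparable to the image features.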