conv-neural-network

Error while running a convolutional network using my own data in Tensorflow

Submitted by 柔情痞子 on 2019-12-20 03:28:08
Question: I'm a complete beginner with TensorFlow and with machine learning in general, so there are many concepts I still don't understand well; sorry if my error is obvious. I'm trying to train my own convolutional network using my own images (optical microscopy photos) resized to 60x60, and I have only 2 labels to classify them (whether the sample is positive or not). Here is my code: from __future__ import absolute_import from __future__ import division from __future__ import print_function
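A minimal sketch of the kind of model this question describes, assuming tf.keras and single-channel 60x60 inputs; all layer sizes and names here are illustrative choices, not taken from the question's code:

```python
import tensorflow as tf

# A small binary-classification CNN for 60x60 single-channel images.
# Layer widths are placeholder choices, not from the original question.
def build_model():
    model = tf.keras.Sequential([
        tf.keras.Input(shape=(60, 60, 1)),
        tf.keras.layers.Conv2D(32, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Conv2D(64, 3, activation='relu'),
        tf.keras.layers.MaxPooling2D(2),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(64, activation='relu'),
        # one sigmoid unit: "positive sample" vs. "not positive"
        tf.keras.layers.Dense(1, activation='sigmoid'),
    ])
    model.compile(optimizer='adam',
                  loss='binary_crossentropy',
                  metrics=['accuracy'])
    return model
```

With two labels, a single sigmoid output plus binary cross-entropy is the simplest setup; two softmax units with categorical cross-entropy is equivalent.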

Why does my model predict the same label?

Submitted by 我怕爱的太早我们不能终老 on 2019-12-20 03:07:02
Question: I am training a small network and the training seems to go fine: the validation loss decreases, I reach a validation accuracy of around 80%, and training actually stops once there is no more improvement (patience=10). It trained for 40 epochs. However, it keeps predicting only one class for every test image! I tried initializing the conv layers randomly, I added regularizers, I switched from Adam to SGD, I added clipvalue, and I added dropouts. I also switched to softmax (I have only two labels but I saw
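Before changing the architecture further, it is worth confirming whether the collapse comes from class imbalance. A framework-free sketch (the label and prediction arrays are made-up placeholders for a real validation set) that checks the predicted-class distribution and derives inverse-frequency class weights, usable as `fit(..., class_weight=weights)` in Keras:

```python
import numpy as np

# Hypothetical ground truth and predictions for a small validation set.
y_true = np.array([0, 0, 0, 0, 0, 0, 1, 1])   # imbalanced: mostly class 0
y_pred = np.array([0, 0, 0, 0, 0, 0, 0, 0])   # model collapsed onto class 0

# A collapsed model puts (almost) all predictions on one class.
pred_counts = np.bincount(y_pred, minlength=2)

# Inverse-frequency class weights to penalize errors on the rare class more.
freq = np.bincount(y_true, minlength=2) / len(y_true)
weights = {c: float(1.0 / f) for c, f in enumerate(freq)}
```

If `pred_counts` is all mass on one class while `y_true` is skewed the same way, rebalancing (class weights, oversampling) is usually a better first fix than swapping optimizers.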

Why do we have to specify output shape during deconvolution in tensorflow?

Submitted by 懵懂的女人 on 2019-12-20 02:29:30
Question: The TF documentation has an output_shape parameter in tf.conv2d_transpose. Why is this needed? Don't the strides, filter size, and padding parameters of the layer decide the output shape of that layer, similar to how it is decided during convolution? Answer 1: This question was already asked on the TF GitHub and received an answer: output_shape is needed because the shape of the output can't necessarily be computed from the shape of the input, specifically if the output is smaller than the filter and
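The ambiguity the answer refers to is easy to verify with the forward-convolution shape formula for VALID padding, out = floor((in - k) / s) + 1: two different input sizes can convolve to the same output size, so the transposed op cannot recover the input size uniquely and must be told it:

```python
# Forward output size of a VALID convolution with kernel k and stride s.
def conv_out(in_size, kernel, stride):
    return (in_size - kernel) // stride + 1

# With kernel=3 and stride=2, inputs of size 7 and of size 8 both
# convolve to size 3. A conv2d_transpose "undoing" a size-3 output
# therefore needs output_shape to choose between producing 7 or 8.
sizes = (conv_out(7, 3, 2), conv_out(8, 3, 2))
```

The same collision happens whenever the floor in the formula discards a remainder, which is exactly the strided case the GitHub answer describes.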

How to share convolution kernels between layers in keras?

Submitted by 若如初见. on 2019-12-20 01:17:07
Question: Suppose I want to compare two images with a deep convolutional NN. How can I implement two different pathways with the same kernels in Keras? Like this: I need convolutional layers 1, 2 and 3 to use and train the same kernels. Is it possible? I was also thinking of concatenating the images as below, but the question is about how to implement the topology in the first picture. Answer 1: You can use the same layer twice in the model, creating nodes: from keras.models import Model from keras.layers import * #create the
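A sketch of the shared-kernel ("Siamese") pattern the answer describes, using the functional API of tf.keras; the input size, layer widths, and sigmoid head are illustrative assumptions, not from the question. The key point is that each `Conv2D` object is created once and then called on both inputs, so both pathways use and train the same kernels:

```python
import tensorflow as tf
from tensorflow.keras import layers, Model

inp_a = layers.Input(shape=(64, 64, 1))
inp_b = layers.Input(shape=(64, 64, 1))

# Created once, so their weights are shared between both pathways.
conv1 = layers.Conv2D(16, 3, activation='relu')
conv2 = layers.Conv2D(32, 3, activation='relu')

def pathway(x):
    # Calling the same layer objects again reuses the same kernels,
    # creating a second node rather than new weights.
    return layers.Flatten()(conv2(conv1(x)))

merged = layers.concatenate([pathway(inp_a), pathway(inp_b)])
out = layers.Dense(1, activation='sigmoid')(merged)  # e.g. "same / different"
model = Model([inp_a, inp_b], out)
```

Because the conv layers are shared, gradients from both pathways accumulate into one set of kernels during training.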

One dimensional data with CNN

Submitted by 浪子不回头ぞ on 2019-12-19 05:53:08
Question: Just wondering whether anybody has done this? I have a dataset that is one-dimensional (not sure whether that's the right word choice, though). Unlike the usual CNN inputs, which are images (so 2D), my data has only one dimension. An example would be: instance1 - feature1, feature2, ... featureN instance2 - feature1, feature2, ... featureN ... instanceM - feature1, feature2, ... featureN How do I use my dataset with CNNs? The ones I have looked at accept images (like AlexNet and GoogLeNet) in the form
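Architectures like AlexNet expect 2D images, but frameworks such as Keras also provide `Conv1D` for exactly this layout; the only preparation the (M, N) feature matrix needs is a trailing channel axis, since `Conv1D` expects (batch, steps, channels). A minimal sketch (M and N are made-up values):

```python
import numpy as np

# M instances with N features each, as described in the question.
M, N = 5, 12
data = np.random.rand(M, N).astype("float32")   # shape (M, N)

# Conv1D expects (batch, steps, channels): add a single channel axis.
# This array can then feed a layer like Conv1D(..., input_shape=(N, 1)).
data_1d = data[..., np.newaxis]                 # shape (M, N, 1)
```

Whether 1D convolution is appropriate depends on the features having some local ordering (as in signals or sequences); for unordered tabular features a dense network is usually the better fit.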

Obtaining a prediction in Keras

Submitted by 感情迁移 on 2019-12-18 13:24:57
Question: I have successfully trained a simple model in Keras to classify images: model = Sequential() model.add(Convolution2D(32, 3, 3, border_mode='valid', input_shape=(img_channels, img_rows, img_cols), activation='relu', name='conv1_1')) model.add(Convolution2D(32, 3, 3, activation='relu', name='conv1_2')) model.add(MaxPooling2D(pool_size=(2, 2))) model.add(Dropout(0.25)) model.add(Convolution2D(64, 3, 3, border_mode='valid', activation='relu', name='conv2_1')) model.add(Convolution2D(64, 3, 3,
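Once such a model is trained, predictions come from `model.predict`, which for a softmax head returns one probability row per input image; `argmax` along the class axis gives the predicted label. A framework-free sketch in which `probs` is a made-up stand-in for the output of `model.predict(x_test)` on a 3-class model:

```python
import numpy as np

# Placeholder for model.predict(x_test): one probability row per image.
probs = np.array([[0.1, 0.7, 0.2],
                  [0.8, 0.1, 0.1]])

# The predicted class index for each image is the argmax of its row.
classes = probs.argmax(axis=1)
```

Keras also offers `predict_classes` on `Sequential` models in this era of the API, which performs the same argmax internally.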

Merge 2 sequential models in Keras

Submitted by 我只是一个虾纸丫 on 2019-12-18 11:10:00
Question: I am trying to merge 2 sequential models in Keras. Here is the code: model1 = Sequential(layers=[ # input layers and convolutional layers Conv1D(128, kernel_size=12, strides=4, padding='valid', activation='relu', input_shape=input_shape), MaxPooling1D(pool_size=6), Conv1D(256, kernel_size=12, strides=4, padding='valid', activation='relu'), MaxPooling1D(pool_size=6), Dropout(.5), ]) model2 = Sequential(layers=[ # input layers and convolutional layers Conv1D(128, kernel_size=20, strides=5,
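The usual way to merge two `Sequential` branches is to treat each as a callable in the functional API and concatenate their outputs. A simplified sketch with small `Dense` branches standing in for the question's `Conv1D` stacks (input size and layer widths are placeholders):

```python
import tensorflow as tf
from tensorflow.keras import layers, models

# Two small branch models standing in for the question's model1 / model2.
model1 = models.Sequential([tf.keras.Input(shape=(32,)), layers.Dense(8)])
model2 = models.Sequential([tf.keras.Input(shape=(32,)), layers.Dense(4)])

# Apply both branches to one input and concatenate their features.
inp = layers.Input(shape=(32,))
merged = layers.concatenate([model1(inp), model2(inp)])
out = layers.Dense(1)(merged)
combined = models.Model(inp, out)
```

The same pattern works with the Conv1D branches from the question, as long as both branches are given compatible inputs before the concatenation.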

Ordering of batch normalization and dropout?

Submitted by 情到浓时终转凉″ on 2019-12-18 09:53:56
Question: The original question was about TensorFlow implementations specifically. However, the answers apply to implementations in general, and the general answer is also the correct one for TensorFlow. When using batch normalization and dropout in TensorFlow (specifically using contrib.layers), do I need to be worried about the ordering? It seems possible that if I use dropout followed immediately by batch normalization there might be trouble. For example, if the shift in the batch
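One commonly used ordering, sketched here in tf.keras rather than contrib.layers (which the question uses), places batch normalization before the activation, per the original batch-norm paper, and dropout last so its scaling does not feed into the batch statistics; the filter count and dropout rate are illustrative:

```python
import tensorflow as tf
from tensorflow.keras import layers

# One common block ordering: Conv -> BatchNorm -> Activation -> Dropout.
# Putting dropout after batch norm avoids the concern raised in the
# question: dropout's train/test scaling never distorts the statistics
# that batch norm estimates.
def conv_block(filters):
    return tf.keras.Sequential([
        # bias is redundant before batch norm's learned shift
        layers.Conv2D(filters, 3, padding='same', use_bias=False),
        layers.BatchNormalization(),
        layers.Activation('relu'),
        layers.Dropout(0.25),
    ])
```

This is a convention rather than a hard rule; some practitioners place batch norm after the activation, but dropout-before-batch-norm is the ordering most answers warn against.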

TensorFlow Object Detection API using image crops as training dataset

Submitted by 倖福魔咒の on 2019-12-18 09:47:31
Question: I want to train an ssd-inception-v2 model from the TensorFlow Object Detection API. The training dataset I want to use is a bunch of cropped images of different sizes without bounding boxes, as each crop is itself the bounding box. I followed the create_pascal_tf_record.py example, replacing the bounding-box and classification portions accordingly to generate the TFRecords, as follows: def dict_to_tf_example(imagepath, label): image = Image.open(imagepath) if image.format != 'JPEG': print(
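When the crop is the object, the natural encoding is a single box spanning the whole image in the API's normalized [0, 1] coordinates. A sketch of just the box-related features of such a `tf.train.Example` (the full record would also carry the encoded image bytes and the class text/label fields that `dict_to_tf_example` in the question already handles):

```python
import tensorflow as tf

# One box covering the entire crop, in the Object Detection API's
# normalized coordinates: xmin = ymin = 0.0, xmax = ymax = 1.0.
def full_image_box_features():
    def floats(values):
        return tf.train.Feature(float_list=tf.train.FloatList(value=values))
    return {
        'image/object/bbox/xmin': floats([0.0]),
        'image/object/bbox/ymin': floats([0.0]),
        'image/object/bbox/xmax': floats([1.0]),
        'image/object/bbox/ymax': floats([1.0]),
    }
```

Note that SSD-style detectors learn localization partly from negative context around objects, so tight crops with full-image boxes give the model little to learn about where objects are not; that limitation is worth keeping in mind with this dataset.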

My CNN classifier gives wrong prediction on random images

Submitted by ♀尐吖头ヾ on 2019-12-18 09:07:06
Question: I trained my CNN classifier (using TensorFlow) with 3 data categories (ID card, passport, bills). When I test it with images that belong to one of the 3 categories, it gives the right prediction. However, when I test it with an unrelated image (a car image, for example) it keeps giving me a prediction (i.e. it predicts that the car belongs to the ID card category). Is there a way to make it display an error message instead of giving a wrong prediction? Answer 1: This should be tackled differently. This is
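A softmax classifier always assigns some class, so one pragmatic mitigation is to reject predictions whose top probability falls below a threshold. A framework-free sketch in which the probability arrays are made-up stand-ins for the model's softmax output (the 0.8 threshold is an arbitrary illustrative choice):

```python
import numpy as np

# Reject instead of forcing a class when the model is not confident enough.
def predict_or_reject(probs, threshold=0.8):
    top = int(probs.argmax())
    if probs[top] < threshold:
        return None          # caller can surface this as "unknown" / an error
    return top

confident = np.array([0.05, 0.90, 0.05])   # e.g. clearly a passport
uncertain = np.array([0.40, 0.35, 0.25])   # e.g. a car image
```

A confidence threshold only mitigates the problem, since out-of-distribution images can still score high; a more robust fix is the one such answers usually suggest, adding an explicit "other" class trained on unrelated images.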