conv-neural-network

Combining the outputs of multiple models into one model

梦想的初衷 Posted on 2019-12-31 22:57:13
Question: I am currently looking for a way to combine the outputs of multiple models into one model; I need to create a CNN that does classification. The image is separated into sections (as shown by the colors), and each section is given as input to a separate model (1, 2, 3, 4). The structure of each model is the same, but each section goes to its own model so that the same weights are not applied to the whole image. This is my attempt to avoid full weight sharing while keeping the weight sharing
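A minimal sketch of one common way to do this with the Keras functional API: build one independent convolutional branch per image section (so no weights are shared between sections) and concatenate the branch outputs before a single classification head. The section size of 64x64x3, the four sections, and the ten classes are assumptions for illustration, not values from the question.

from tensorflow.keras import layers, Input, Model

def section_branch(inp):
    # Each branch gets its own Conv2D layers, so weights are not shared between sections.
    x = layers.Conv2D(16, 3, activation="relu")(inp)
    x = layers.MaxPooling2D()(x)
    x = layers.Conv2D(32, 3, activation="relu")(x)
    return layers.GlobalAveragePooling2D()(x)

inputs = [Input(shape=(64, 64, 3)) for _ in range(4)]    # one input per image section
branches = [section_branch(inp) for inp in inputs]       # four independent sub-models
merged = layers.concatenate(branches)                    # combine their outputs
output = layers.Dense(10, activation="softmax")(merged)  # single classification head

model = Model(inputs=inputs, outputs=output)
model.compile(optimizer="adam", loss="categorical_crossentropy")

Training then takes a list of four arrays, one per section, as the model input.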

Keras layers tutorial and samples

断了今生、忘了曾经 Posted on 2019-12-31 07:18:09
Question: I am trying to code and learn different neural network models, and I am having a lot of complications with input dimensionality. I am looking for a tutorial that shows the differences between layers and how to set the inputs and outputs for each layer. Answer 1: The Keras documentation shows the input_shape expected by each layer. In Keras, you will see input shapes in these forms: the input_shape defined by the user in layers, the shapes shown in summaries and elsewhere, array shapes, and tensor shapes. The input shape defined by
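A quick, hedged illustration of those shape conventions: input_shape excludes the batch dimension, while model.summary() prints shapes with the batch dimension shown as None. The layer sizes below are arbitrary.

from tensorflow.keras import layers, Sequential

# Conv2D expects input_shape=(height, width, channels);
# Dense expects input_shape=(features,); LSTM expects (timesteps, features).
model = Sequential([
    layers.Conv2D(8, 3, activation="relu", input_shape=(32, 32, 3)),
    layers.Flatten(),
    layers.Dense(10, activation="softmax"),
])
model.summary()  # shapes print as (None, ...), with None standing for the batch size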

How to extract weights “from input layer to hidden layer” and “from hidden layer to output layer” with Keras API?

会有一股神秘感。 Posted on 2019-12-31 05:49:06
Question: I am new to Keras and I am trying to get the weights in Keras. I know how to do it in TensorFlow in Python. Code:

data = np.array(attributes, 'int64')
target = np.array(labels, 'int64')
feature_columns = [tf.contrib.layers.real_valued_column("", dimension=2, dtype=tf.float32)]
learningRate = 0.1
epoch = 10000
# https://www.tensorflow.org/api_docs/python/tf/metrics
validation_metrics = {"accuracy": tf.contrib.learn.MetricSpec(metric_fn=tf.contrib.metrics.streaming_accuracy, prediction_key
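For the Keras side of the question, a minimal sketch (the layer sizes and names are made up for illustration): each Dense layer stores its kernel and bias, and get_weights() returns them as NumPy arrays.

from tensorflow.keras import layers, Sequential

model = Sequential([
    layers.Dense(3, activation="sigmoid", input_shape=(2,), name="hidden"),
    layers.Dense(1, activation="sigmoid", name="output"),
])

w_in_to_hidden, b_hidden = model.get_layer("hidden").get_weights()
w_hidden_to_out, b_out = model.get_layer("output").get_weights()
print(w_in_to_hidden.shape)   # (2, 3): input-to-hidden weights
print(w_hidden_to_out.shape)  # (3, 1): hidden-to-output weights

Calling model.get_weights() returns the same arrays for all layers at once.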

CNN Keras: How many weights will be trained?

匆匆过客 Posted on 2019-12-31 02:15:09
Question: I have a small comprehension problem with CNNs, and I'm not quite sure how many filters, and thus weights, are trained. Example: I have an input layer of 32x32 pixels with 3 channels (i.e. shape (32, 32, 3)). Now I use a 2D convolution layer with 10 filters of shape (4, 4), so I end up with 10 channels, each of shape (28, 28). But do I now train a separate filter for each input channel, or are they shared? Do I train 3x10x4x4 weights, or 10x4x4 weights? Answer 1: You can find out
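Each filter is shared across spatial positions but spans all input channels, so this layer trains 3x10x4x4 weights plus one bias per filter. A quick Keras check of that count:

from tensorflow.keras import layers, Sequential

model = Sequential([
    layers.Conv2D(10, (4, 4), input_shape=(32, 32, 3)),
])
model.summary()  # Param # for the conv layer: 4*4*3*10 weights + 10 biases = 490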

Why can't my ConvLSTM model predict?

China☆狼群 Posted on 2019-12-30 07:43:09
Question: I have built a convolutional LSTM model using TensorFlow's ConvLSTMCell(), tf.nn.dynamic_rnn(), and tf.contrib.legacy_seq2seq.rnn_decoder(). I have 3 encoder layers and 3 decoder layers; the initial states of the decoders come from the final states of the encoders. I have 128, 64, and 64 filters for layers 1, 2, and 3 respectively. Finally, I concatenate the outputs of the decoders and pass them through a convolution layer to decrease the number of channels to one, and then I apply the
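As a point of comparison, here is a hedged sketch of the same overall idea written with the Keras ConvLSTM2D layer instead of the deprecated tf.contrib ConvLSTMCell and legacy_seq2seq APIs; the 64x64 single-channel frame size and the layer choices are assumptions, not the questioner's setup.

from tensorflow.keras import layers, Sequential

model = Sequential([
    layers.ConvLSTM2D(128, (3, 3), padding="same", return_sequences=True,
                      input_shape=(None, 64, 64, 1)),   # (time, rows, cols, channels)
    layers.ConvLSTM2D(64, (3, 3), padding="same", return_sequences=True),
    layers.ConvLSTM2D(64, (3, 3), padding="same", return_sequences=True),
    # Final convolution collapses the channel dimension to one, as described above.
    layers.Conv3D(1, (3, 3, 3), padding="same", activation="sigmoid"),
])
model.compile(optimizer="adam", loss="mse")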

How to create CaffeDB training data for siamese networks out of an image directory

牧云@^-^@ Posted on 2019-12-28 04:04:28
Question: I need some help creating a CaffeDB for a siamese CNN out of a plain directory of images and a label text file. A Python way to do it would be best. The problem is not walking through the directory and making pairs of images; my problem is making a CaffeDB out of those pairs. So far I have only used convert_imageset to create a CaffeDB from an image directory. Thanks for the help! Answer 1: Why don't you simply make two datasets using good old convert_imageset? layer { name: "data_a" top: "data_a"
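A hypothetical helper along the lines of that answer: write one list file per siamese branch (pairing shuffled images here) and then run convert_imageset once per list. The images/<class>/<file>.jpg layout and the pairing scheme are assumptions, not part of the question.

import os
import random

image_dir = "images"
classes = sorted(os.listdir(image_dir))
class_to_idx = {c: i for i, c in enumerate(classes)}

samples = []
for c in classes:
    for fname in os.listdir(os.path.join(image_dir, c)):
        samples.append((os.path.join(c, fname), class_to_idx[c]))

random.shuffle(samples)
half = len(samples) // 2

with open("list_a.txt", "w") as fa, open("list_b.txt", "w") as fb:
    for (path_a, lab_a), (path_b, lab_b) in zip(samples[:half], samples[half:]):
        fa.write(f"{path_a} {lab_a}\n")  # branch A image and its class label
        fb.write(f"{path_b} {lab_b}\n")  # branch B image of the same pair

# convert_imageset is then run once per list file, producing the data_a and
# data_b databases that the two Data layers in the answer read from.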

TensorFlow CNN: Why is the validation loss significantly different from the start, and why has it been increasing?

一个人想着一个人 Posted on 2019-12-25 17:16:13
Question: This is a classification model for ten categories of pictures. My code has three files: the CNN model convNet.py, read_TFRecord.py to read the data, and train.py to train and evaluate the model. The training set has 80,000 samples and the validation set has 20,000. Question: in the first epoch, training loss = 2.11, training accuracy = 25.61%, validation loss = 3.05, validation accuracy = 8.29%. Why is the validation loss significantly different right from the start? And why does the validation

Training a CNN with images in an sklearn neural net

▼魔方 西西 Posted on 2019-12-25 09:17:11
Question: I am trying to train a CNN (sklearn neural network). I have 4 images of 128 x 128 pixels, shape (4, 128, 128). I am reading the images like this:

in1 = misc.imread('../data/Train_Data/train-1.jpg', mode='L', flatten=True)/255.
in2 = misc.imread('../data/Train_Data/train-2.jpg', mode='L', flatten=True)/255.
in3 = misc.imread('../data/Train_Data/train-3.jpg', mode='L', flatten=True)/255.
in4 = misc.imread('../data/Train_Data/train-4.jpg', mode='L', flatten=True)/255.

Then a numpy array is created
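A hedged sketch of the usual workaround: scikit-learn's MLPClassifier has no convolutional layers, so it is not really a CNN, and the images must be flattened to one feature vector per sample before fitting. The labels below are placeholders, since the question does not give them.

import numpy as np
from sklearn.neural_network import MLPClassifier

images = np.random.rand(4, 128, 128)   # stands in for np.stack([in1, in2, in3, in4])
labels = [0, 1, 0, 1]                  # placeholder labels for illustration

X = images.reshape(len(images), -1)    # (4, 16384): one flattened row per image
clf = MLPClassifier(hidden_layer_sizes=(64,), max_iter=500)
clf.fit(X, labels)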

caffe: Confused about regression

。_饼干妹妹 Posted on 2019-12-25 04:33:27
Question: I have a really weird problem I want to explain to you. I am not sure if this is a topic for SO, but I hope it will be in the end. My general task is depth estimation, i.e. I have an image as input and its corresponding ground truth (a depth image). Then I have my net (which should be considered a black box) and my last layers. First of all, depth estimation is a regression task rather than a classification task. Therefore I decided to use a EuclideanLoss layer, where the num_output of my
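For context, a minimal pycaffe NetSpec sketch of a regression head ending in EuclideanLoss rather than SoftmaxWithLoss; the layer names, the stand-in data layer, and num_output=1 are assumptions, since the actual net is treated as a black box here.

from caffe import layers as L
from caffe import NetSpec

n = NetSpec()
n.data, n.label = L.Data(ntop=2)                 # stand-in for the real data source
n.feat = L.InnerProduct(n.data, num_output=512)  # the "black box" net would sit here
n.pred = L.InnerProduct(n.feat, num_output=1)    # continuous output; for dense depth,
                                                 # num_output would match the label size
n.loss = L.EuclideanLoss(n.pred, n.label)        # regression loss on pred vs. ground truth

with open("depth_regression.prototxt", "w") as f:
    f.write(str(n.to_proto()))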

How to output an image with a CNN?

别来无恙 Posted on 2019-12-25 01:37:34
Question: I'm trying to do depth estimation with CNNs (this is my ultimate goal), but a problem I found is that I have only done image classification with CNNs, using for example CIFAR-10, MNIST, Cats vs Dogs, etc. To do depth estimation I need to output a new image (the NYUv2 dataset has the labeled images). So I'll input an image of, say, 256x256x3 and need to output another image of, for example, 228x228x3. What do I need to do? Can I just do the convolutions for a while and after that decrease the
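One common approach is a fully convolutional encoder-decoder: strided convolutions shrink the feature maps, transposed convolutions grow them back, and the last layer is a convolution with a linear output instead of a softmax classifier. A hedged Keras sketch, using 256x256 in and out for simplicity (an NYUv2-style 228x228 target would need cropping or resizing):

from tensorflow.keras import layers, Input, Model

inp = Input(shape=(256, 256, 3))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(inp)         # 128x128
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)           # 64x64
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)  # 128x128
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)  # 256x256
out = layers.Conv2D(1, 3, padding="same")(x)   # one-channel depth map, no softmax

model = Model(inp, out)
model.compile(optimizer="adam", loss="mae")    # per-pixel regression loss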