conv-neural-network

Augmentations in Keras ImageDataGenerator

血红的双手。 Submitted on 2019-12-11 03:49:23
Question: I have two questions concerning the ImageDataGenerator, please: 1) Are the same augmentations used on the whole batch, or does each image get its own random transformation? E.g. for rotation, does the module rotate all the images in the batch by the same angle, or does each image get a random rotation angle? 2) The data in ImageDataGenerator.flow is looped over (in batches) indefinitely. Is there a way to stop this infinite loop, i.e. do the augmentation only n times? Because I need to
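A minimal sketch addressing both sub-questions, with illustrative shapes and parameters not taken from the question: in Keras, random transformation parameters are drawn per image (each image in a batch gets its own rotation angle), and although flow() is an infinite generator, you can simply take a fixed number of batches and break out.

```python
# Hedged sketch: dummy data and parameter values are illustrative, not from the question.
import numpy as np
from keras.preprocessing.image import ImageDataGenerator

datagen = ImageDataGenerator(rotation_range=30)  # each image gets its own random angle

x = np.random.rand(16, 64, 64, 3)  # dummy batch of 16 RGB images
y = np.zeros(16)

n_batches = 5  # do the augmentation only for a fixed number of batches
augmented = []
for i, (x_batch, y_batch) in enumerate(datagen.flow(x, y, batch_size=8)):
    augmented.append(x_batch)
    if i + 1 >= n_batches:
        break  # flow() loops forever, so we stop it manually
```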

Unexpected layers were generated in the mnist example in Tensorboard

≯℡__Kan透↙ Submitted on 2019-12-11 02:43:07
Question: In order to learn TensorFlow, I executed the official TensorFlow MNIST script (cnn_mnist.py) and displayed the graph with TensorBoard. The following is part of the code. This network contains two conv layers and two dense layers. conv1 = tf.layers.conv2d(inputs=input_layer,filters=32,kernel_size=[5, 5], padding="same",activation=tf.nn.relu) pool1 = tf.layers.max_pooling2d(inputs=conv1, pool_size=[2, 2], strides=2) conv2 = tf.layers.conv2d(inputs=pool1,filters=64,kernel_size=[5, 5], padding=
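A minimal sketch in the same TF1-style tf.layers API as the question; the scope names are illustrative. The "unexpected" nodes in TensorBoard usually come from gradient and optimizer ops added when the loss is minimized, not from extra model layers; wrapping the model layers in name scopes keeps the graph view grouped and easier to read.

```python
# Hedged sketch (TF1 API): group the conv blocks so they show up as single
# collapsible nodes in the TensorBoard graph view.
import tensorflow as tf

def cnn(input_layer):
    with tf.name_scope("conv_block_1"):
        conv1 = tf.layers.conv2d(input_layer, filters=32, kernel_size=[5, 5],
                                 padding="same", activation=tf.nn.relu)
        pool1 = tf.layers.max_pooling2d(conv1, pool_size=[2, 2], strides=2)
    with tf.name_scope("conv_block_2"):
        conv2 = tf.layers.conv2d(pool1, filters=64, kernel_size=[5, 5],
                                 padding="same", activation=tf.nn.relu)
        pool2 = tf.layers.max_pooling2d(conv2, pool_size=[2, 2], strides=2)
    return pool2
```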

Error : H5LTfind_dataset(file_id, dataset_name_) Failed to find HDF5 dataset label

你说的曾经没有我的故事 Submitted on 2019-12-11 02:25:18
Question: I want to use an HDF5 file to feed my data and labels into my CNN. I created the HDF5 file with MATLAB. Here is my code: h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train/image',[522 775 3 numFrames]); h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/train/anno',[522 775 3 numFrames]); h5create(['uNetDataSet.h5'],'/home/alexandra/Documents/my-u-net/warwick_dataset/Warwick_Dataset/label',[1 numFrames
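A minimal sketch using h5py rather than MATLAB, with illustrative shapes. The point is that the consumer (e.g. Caffe's HDF5Data layer, which raises this H5LTfind_dataset error) looks datasets up by the exact names it expects, commonly /data and /label at the file root, not under a long filesystem-like path inside the file.

```python
# Hedged sketch: shapes and the sample count are placeholders.
import h5py
import numpy as np

num_frames = 10                      # hypothetical number of training samples
images = np.random.rand(num_frames, 3, 522, 775).astype(np.float32)
labels = np.random.randint(0, 2, size=(num_frames,)).astype(np.float32)

with h5py.File("uNetDataSet.h5", "w") as f:
    f.create_dataset("data", data=images)    # dataset name must match what the reader expects
    f.create_dataset("label", data=labels)
```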

How to reshape a 3D numpy array?

早过忘川 Submitted on 2019-12-11 00:38:22
Question: I have a list of numpy arrays which are actually the input images to my CNN. However, the size of each image is not consistent, and my CNN only takes images of dimension 224x224. How do I reshape each of my images to the given dimensions? print(train_images[key].reshape(224, 224,3)) gives me the output ValueError: total size of new array must be unchanged. I would be very grateful if anybody could help me with this. Answer 1: The new array should have the same number of values when you are
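A minimal sketch of the distinction the error points at: reshape() only rearranges existing values, so it fails when the pixel count changes; to bring an image to 224x224 it has to be resampled, e.g. with cv2.resize (skimage.transform.resize would work similarly). The input size below is illustrative.

```python
# Hedged sketch: resample to 224x224 instead of reshaping.
import cv2
import numpy as np

img = np.random.randint(0, 255, size=(300, 180, 3), dtype=np.uint8)  # arbitrary input size
resized = cv2.resize(img, (224, 224))   # note: cv2.resize takes dsize as (width, height)
print(resized.shape)                    # (224, 224, 3)
```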

Performing 1d convolution using a 2d kernel in Keras

跟風遠走 Submitted on 2019-12-10 21:47:51
Question: I am currently working on a CNN in which I want to apply a 2d kernel to an image, but it only has to perform 1d convolution, meaning that it only has to move along one axis (the x-axis in this case). The shape of the kernel is the same as the y-axis of the image. The number of filters applied is not a concern at the moment. An example: given an image of size (6,3,3) = (rows, cols, color_channel), how should I perform a 1d convolution given a 2d filter? Tried what was suggested by @Marcin
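A minimal sketch of one way to get this behaviour in Keras: a Conv2D whose kernel height equals the image height can only slide along the x-axis, which effectively gives a 1-D convolution with a 2-D kernel. Filter count and kernel width are illustrative.

```python
# Hedged sketch: kernel spans the full image height, so the output height collapses to 1.
from keras.models import Sequential
from keras.layers import Conv2D

rows, cols, channels = 6, 3, 3
model = Sequential([
    Conv2D(filters=1,
           kernel_size=(rows, 1),      # full-height kernel, width 1
           padding="valid",
           input_shape=(rows, cols, channels)),
])
model.summary()  # output shape should be (None, 1, 3, 1)
```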

Load checkpoint and evaluate single image with tensorflow DNN

ぐ巨炮叔叔 Submitted on 2019-12-10 20:01:22
Question: For research at university I am examining the Oxford 17 flowers AlexNet example. The example uses the tflearn API, which is built on TensorFlow. Training works very well on my GPU, reaching an accuracy of ~97% after a while. Unfortunately, evaluating single images isn't working yet in tflearn; I would have to use model.predict(...) to predict all my data per batch, loop over my whole test set, and calculate the accuracy myself. My training code so far: ... import image_loader X, Y = image_loader
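A minimal sketch of evaluating one image from a restored checkpoint: tflearn models can be reloaded with model.load() and queried with model.predict() on a batch of size one. The network builder, image loader, checkpoint name, and image size below are placeholders, not from the question.

```python
# Hedged sketch: build_alexnet, load_single_image and the checkpoint path are hypothetical.
import numpy as np
import tflearn

net = build_alexnet()                      # hypothetical function returning the tflearn graph
model = tflearn.DNN(net)
model.load("model_checkpoint-12345")       # hypothetical checkpoint file

img = load_single_image("flower.jpg")      # hypothetical loader returning e.g. (227, 227, 3)
probs = model.predict(np.expand_dims(img, axis=0))  # wrap the image into a batch of one
print(np.argmax(probs[0]))                 # predicted class index
```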

Why is there no trace of auxiliary classifiers of the Inception v3 model in Keras?

我是研究僧i Submitted on 2019-12-10 18:38:48
Question: When using the Inception v3 model in Keras, neither the graph of the network nor model.summary() indicates the presence of the auxiliary classifiers (as in Szegedy et al.). Why is that? Is Keras still using the right architecture but hiding this detail from the user? If so, how can we then customize the upper layers of the network? Indeed, we may want the 3 classifiers to have distinct architectures. Thank you! Source: https://stackoverflow.com/questions/44906653/why-is-there-no-trace-of
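A minimal sketch under the assumption that Keras' applications InceptionV3 ships only the main branch, so any auxiliary head has to be attached manually from an intermediate layer. The tapped layer name 'mixed7' and the head architecture are illustrative choices, not the paper's exact auxiliary classifier.

```python
# Hedged sketch: add a custom auxiliary head on top of an intermediate InceptionV3 layer.
from keras.applications.inception_v3 import InceptionV3
from keras.layers import GlobalAveragePooling2D, Dense
from keras.models import Model

base = InceptionV3(weights=None, include_top=True)

aux_in = base.get_layer("mixed7").output          # tap an intermediate activation
aux = GlobalAveragePooling2D()(aux_in)
aux_out = Dense(1000, activation="softmax", name="aux_classifier")(aux)

model = Model(inputs=base.input, outputs=[base.output, aux_out])
model.summary()
```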

Multi-Dimensional Batch-Image Convolution using Numpy

喜你入骨 Submitted on 2019-12-10 18:22:55
Question: In image processing and classification networks, a common task is the convolution or cross-correlation of input images with some fixed filters. For example, in convolutional neural nets (CNNs), this is an extremely common operation. I have reduced the general version of the task to this: Given: a batch of N images [N,H,W,D,...] and a set of K filters [K,H,W,D,...]. Return: an ndarray that represents the m-dimensional cross-correlation (xcorr) of image N_i with filter K_j for every N_i in N and K_j
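A minimal, non-vectorized sketch of the stated task: loop over images and filters and let scipy.signal.correlate handle the m-dimensional cross-correlation for each pair. The shapes below are illustrative.

```python
# Hedged sketch: pairwise N x K cross-correlation maps via scipy.
import numpy as np
from scipy.signal import correlate

N, K, H, W, D = 4, 2, 8, 8, 3
images = np.random.rand(N, H, W, D)
filters = np.random.rand(K, H, W, D)

out = np.array([[correlate(images[i], filters[j], mode="valid")
                 for j in range(K)] for i in range(N)])
print(out.shape)   # (N, K, 1, 1, 1) since image and filter sizes match in this example
```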

Store images of different dimensions in a numpy array

我与影子孤独终老i Submitted on 2019-12-10 18:15:40
Question: I have two images, image1 of dimension (32,43,3) and image2 of dimension (67,86,3). How can I store these in a numpy array? Whenever I try to append to the array: image=cv2.imread(image1,0) image=cv2.resize(image,(32,43)) x_train=np.array(image.flatten()) x_train=x_train.reshape(-1,3,32,43) X_train =np.append(X_train,x_train) #X_train is my array image=cv2.imread(image2,0) image=cv2.resize(image,(67,86)) x_train=np.array(image.flatten()) x_train=x_train.reshape(-1,3,67,86) X_train =np.append(X
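A minimal sketch of the two usual options: arrays of different shapes cannot be stacked into one regular ndarray, so either keep them in a plain Python list (or an object-dtype array), or resize everything to a common shape first and then stack. The file names below are placeholders.

```python
# Hedged sketch: store differently sized images either as a list or resized and stacked.
import cv2
import numpy as np

paths = ["image1.png", "image2.png"]          # hypothetical files of different sizes

# Option 1: keep the original sizes in a plain Python list
images = [cv2.imread(p) for p in paths]

# Option 2: resize to one common shape, then stack into a single (N, H, W, 3) array
common = np.stack([cv2.resize(img, (224, 224)) for img in images])
print(common.shape)   # (2, 224, 224, 3)
```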

Caffe: Softmax with temperature

谁都会走 Submitted on 2019-12-10 17:28:59
Question: I am working on implementing Hinton's knowledge distillation paper. The first step is to store the soft targets of a "cumbersome model" with a higher temperature (i.e. I don't need to train the network, I just need to do a forward pass per image and store the soft targets with a temperature T). Is there a way I can get the soft-target outputs of AlexNet or GoogLeNet but with a different temperature? I need to modify the softmax to p_i = exp(z_i/T) / sum_j exp(z_j/T). Need to divide the outputs of
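A minimal sketch of the equivalence being relied on: Caffe's stock Softmax layer has no temperature parameter, but scaling the logits by 1/T before the softmax (for instance with a Power or Scale layer in the prototxt) gives exactly p_i = exp(z_i/T) / sum_j exp(z_j/T). A quick numpy check of that identity:

```python
# Hedged sketch: tempered softmax as "divide logits by T, then ordinary softmax".
import numpy as np

def softmax_with_temperature(z, T):
    z = np.asarray(z, dtype=np.float64) / T   # divide logits by the temperature
    z -= z.max()                              # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

logits = np.array([2.0, 1.0, 0.1])
print(softmax_with_temperature(logits, T=1.0))   # ordinary softmax
print(softmax_with_temperature(logits, T=5.0))   # softer, higher-entropy distribution
```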