convolution

Keras dimensionality mismatch in a convolutional layer

℡╲_俬逩灬. Submitted on 2019-12-24 02:22:27

Question: I'm trying to play around with Keras to build my first neural network. I have zero experience and can't seem to figure out why my dimensionality isn't right. I can't tell from their docs what this error is complaining about, or even which layer is causing it. My model takes in a 32-byte array of numbers and is supposed to give a boolean value on the other side. I want a 1D convolution over the input byte array. arr1 is the 32-byte array, arr2 is an array of booleans. inputData = np
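A common cause of this kind of mismatch is that Keras's Conv1D expects 3-D input of shape (batch, steps, channels), while a flat length-32 array is only 1-D. The sketch below (plain NumPy, not Keras itself, since the question's actual model code is truncated) illustrates the shapes involved; `arr1` here is a hypothetical stand-in for the question's data.

```python
import numpy as np

# Hypothetical stand-in for the question's data: one 32-value sample (arr1).
# Conv1D expects (batch, steps, channels), so a flat length-32 array must
# gain batch and channel axes before it can enter the layer.
arr1 = np.arange(32, dtype=np.float32)
x = arr1.reshape(1, 32, 1)          # (batch=1, steps=32, channels=1)

# A length-3 filter applied "valid" (no padding) shortens the steps axis:
kernel = np.array([0.25, 0.5, 0.25], dtype=np.float32)
out = np.array([np.dot(x[0, i:i + 3, 0], kernel) for i in range(32 - 3 + 1)])

print(x.shape)    # (1, 32, 1)  -- what Conv1D wants to see
print(out.shape)  # (30,)       -- 32 - kernel_size + 1 output steps
```

In Keras terms, the equivalent fix is usually `input_shape=(32, 1)` on the first layer plus reshaping the training data to `(n_samples, 32, 1)`.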

PCL Gaussian Kernel example

限于喜欢 Submitted on 2019-12-24 02:04:19

Question: I need help applying a Gaussian kernel to my point cloud to smooth it. I could not figure out how to write the code, and I could not find any plain examples. Update: I am using the Point Cloud Library (pcl): pcl::io::loadPCDFile ("/home/..../2240.pcd", *raw_cloud); Eigen::VectorXf horizontal; //Set up the Gaussian Kernel pcl::GaussianKernel<pcl::PointXYZRGB> gaussianKernel; gaussianKernel.compute(5,horizontal,40); pcl::filters::Convolution<pcl::PointXYZRGB> conv; conv
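The idea behind PCL's GaussianKernel convolution can be shown without the library: build a normalized Gaussian and slide it over the data. The sketch below is plain NumPy on a 1-D signal, not the PCL API, and the sigma/radius values are illustrative choices.

```python
import numpy as np

# Sketch (NumPy, not PCL) of Gaussian-kernel smoothing: build a normalized
# 1-D Gaussian and convolve it over a noisy signal.
def gaussian_kernel(sigma, radius):
    t = np.arange(-radius, radius + 1, dtype=np.float64)
    k = np.exp(-t**2 / (2.0 * sigma**2))
    return k / k.sum()                      # weights sum to 1

kernel = gaussian_kernel(sigma=2.0, radius=5)
rng = np.random.default_rng(42)
signal = np.sin(np.linspace(0, 4, 200)) + 0.3 * rng.standard_normal(200)
smoothed = np.convolve(signal, kernel, mode="same")

# Smoothing shrinks high-frequency jitter: sample-to-sample differences
# of the smoothed signal are smaller on average.
print(np.abs(np.diff(signal)).mean() > np.abs(np.diff(smoothed)).mean())  # True
```

In PCL the same normalization and window-size choices are handled by `GaussianKernel::compute`, and `filters::Convolution` applies the kernel along the cloud's organized rows/columns.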

TensorFlow MNIST example prediction using an external image does not work

隐身守侯 Submitted on 2019-12-24 01:10:15

Question: I am new to neural networks. I have gone through the TensorFlow MNIST ML Beginners tutorial and am trying to get a prediction using an external image. I have updated the mnist example provided by TensorFlow, and on top of that I have added a few things: 1. Saving trained models locally. 2. Loading the saved models. 3. Preprocessing the image into 28 * 28. I have attached the image for reference. 1. While training the models, save them locally, so I can reuse them at any point in time.
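External images usually fail on this tutorial model because of preprocessing, not the model: MNIST digits are white-on-black floats in [0, 1], shaped (n, 784). The sketch below shows just that normalization step with NumPy; the loading/resizing to 28×28 is assumed done elsewhere (e.g. with PIL), and the stand-in "image" here is synthetic.

```python
import numpy as np

# Stand-in for an already-resized 28x28 scan: dark digit on white paper.
img = np.full((28, 28), 255, dtype=np.uint8)     # white background
img[10:18, 12:16] = 0                            # dark "stroke"

# MNIST convention is the opposite polarity (bright digit, dark background),
# scaled to [0, 1] and flattened to a (n, 784) batch.
inverted = 255 - img.astype(np.float32)          # digit becomes bright
scaled = inverted / 255.0                        # [0, 1] range
batch = scaled.reshape(1, 784)                   # what the model's input expects

print(batch.shape, batch.min(), batch.max())     # (1, 784) 0.0 1.0
```

Skipping the inversion or the /255 scaling is the most common reason the restored model predicts nonsense on external images.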

3D Convolution using Intel MKL

老子叫甜甜 Submitted on 2019-12-23 19:01:46

Question: I am trying to compute the 3D convolution of a 3D array using Intel MKL. Could someone kindly give me some hints on how I can do that? Is it achievable using MKL? Thanks in advance. Answer 1: Intel has an example on their page of a 3D FFT, which should be helpful for performing convolution by multiplication in frequency space. Sorry I don't have a full solution: Three-Dimensional REAL FFT (C Interface) #include "mkl_dfti.h" float x[32][100][19]; float _Complex y[32][100][10]; /* 10 = 19/2 + 1 */ DFTI
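The frequency-space approach the answer hints at works the same way in any FFT library: zero-pad both arrays to the linear-convolution size, multiply their transforms, and invert. Here is a small NumPy sketch of the method (not MKL code), checked against a directly computed output element.

```python
import numpy as np

# Convolution by multiplication in frequency space: zero-pad both 3-D
# arrays to the linear-convolution size, multiply FFTs, invert.
def fft_conv3d(a, b):
    shape = tuple(sa + sb - 1 for sa, sb in zip(a.shape, b.shape))
    return np.real(np.fft.ifftn(np.fft.fftn(a, shape) * np.fft.fftn(b, shape)))

rng = np.random.default_rng(0)
a = rng.standard_normal((4, 5, 3))
b = rng.standard_normal((2, 2, 2))
out = fft_conv3d(a, b)               # shape (5, 6, 4)

# Direct (naive) 3-D convolution for one output element, as a sanity check:
i, j, k = 2, 3, 1
direct = sum(a[i - p, j - q, k - r] * b[p, q, r]
             for p in range(2) for q in range(2) for r in range(2))
print(np.isclose(out[i, j, k], direct))  # True
```

With MKL the `DftiComputeForward`/`DftiComputeBackward` calls from the linked FFT example replace `np.fft.fftn`/`ifftn`, with the same padding rule.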

When to use what type of padding for convolution layers?

五迷三道 Submitted on 2019-12-23 12:33:03

Question: I know that when we use convolution layers in a neural net we usually use padding, mainly constant padding (e.g. zero padding). There are also different kinds of padding (e.g. symmetric, reflective, constant), but I am not sure what the advantages and disadvantages of the different padding methods are, or when to use which one. Answer 1: It really depends on the situation and what the neural network is intended for. I would not state flat pros and cons; this time the world cannot be put into a binary
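The three padding families the question names differ only in what values they invent at the border, which is easy to see on a tiny 1-D signal with `np.pad` (framework conv layers apply the same ideas per spatial axis):

```python
import numpy as np

x = np.array([1, 2, 3, 4])

const = np.pad(x, 2, mode="constant")   # zero padding:        [0 0 1 2 3 4 0 0]
sym   = np.pad(x, 2, mode="symmetric")  # mirror incl. edge:   [2 1 1 2 3 4 4 3]
refl  = np.pad(x, 2, mode="reflect")    # mirror excl. edge:   [3 2 1 2 3 4 3 2]

print(const, sym, refl, sep="\n")
```

As a rough rule of thumb: zero padding is the cheapest and by far the most common default in CNNs, while reflective/symmetric padding avoids the artificial dark border that zeros introduce and tends to behave better for image-filtering tasks where edge artifacts matter.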

Labels in Caffe as Images

萝らか妹 Submitted on 2019-12-23 12:31:44

Question: I'm new to Caffe. I am trying to implement a Fully Convolutional Network (FCN-8s) for semantic segmentation. I have image data and label data, which are both images, since this is for pixel-wise prediction. I tried using ImageData as the data type, but it asks for an integer label, which is not applicable to this scenario. Kindly advise how I can give Caffe a 2D label. Should I prefer LMDB instead of ImageData? If so, how do I proceed? I could not find any good tutorial/documentation
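The usual FCN workaround is to bypass ImageData's scalar label entirely: store each label image as a single-channel integer map of class indices, shape (1, H, W), fed through an LMDB or HDF5 data layer alongside the image blob. The sketch below shows that conversion with NumPy; the RGB-to-class palette is a made-up example, not a Caffe convention.

```python
import numpy as np

# Hypothetical palette mapping label-image colors to class indices.
palette = {(0, 0, 0): 0, (255, 0, 0): 1, (0, 255, 0): 2}

label_rgb = np.zeros((4, 4, 3), dtype=np.uint8)   # stand-in label image
label_rgb[1:3, 1:3] = (255, 0, 0)                 # a "class 1" region

# Collapse the RGB label image into a class-index map.
label = np.zeros(label_rgb.shape[:2], dtype=np.uint8)
for rgb, cls in palette.items():
    label[np.all(label_rgb == rgb, axis=-1)] = cls

label = label[np.newaxis, ...]                    # (1, H, W) for the loss layer
print(label.shape, label[0, 1, 1], label[0, 0, 0])  # (1, 4, 4) 1 0
```

A (1, H, W) integer blob in this form is what Caffe's SoftmaxWithLoss layer can consume per pixel, which is why LMDB/HDF5 is preferred over ImageData for segmentation.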

How to update the weights of a Deconvolutional Layer?

时光总嘲笑我的痴心妄想 Submitted on 2019-12-23 07:57:58

Question: I'm trying to develop a deconvolutional layer (or, to be precise, a transposed convolutional layer). In the forward pass, I do a full convolution (convolution with zero padding). In the backward pass, I do a valid convolution (convolution without padding) to pass the errors to the previous layer. The gradients of the biases are easy to compute: simply average over the superfluous dimensions. The problem is that I don't know how to update the weights of the convolutional filters. What are
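For a layer whose forward pass is a full convolution, the weight gradient works out to a valid correlation of the upstream gradient with the layer input: if y = full_conv(x, w) then y_m = Σ_j w_j x_{m-j}, so dL/dw_j = Σ_m g_m x_{m-j} where g = dL/dy. The 1-D sketch below (NumPy, illustrative shapes) checks this identity against a finite difference.

```python
import numpy as np

rng = np.random.default_rng(1)
x = rng.standard_normal(5)            # layer input
w = rng.standard_normal(3)            # filter weights
y = np.convolve(x, w, mode="full")    # forward: length 5 + 3 - 1 = 7
g = rng.standard_normal(7)            # upstream gradient dL/dy

# dL/dw_j = sum_m g_m * x_{m-j}  ==  valid correlation of g with x
grad_w = np.correlate(g, x, mode="valid")   # length 3

# Check against a finite difference of L = sum(y * g):
eps, num = 1e-6, np.zeros(3)
for j in range(3):
    wp, wm = w.copy(), w.copy()
    wp[j] += eps
    wm[j] -= eps
    num[j] = (np.sum(np.convolve(x, wp, "full") * g)
              - np.sum(np.convolve(x, wm, "full") * g)) / (2 * eps)
print(np.allclose(grad_w, num, atol=1e-4))  # True
```

In 2-D the same rule applies per spatial axis, with the gradients summed over the batch dimension before the weight update.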

Homomorphic Filter output

左心房为你撑大大i Submitted on 2019-12-22 17:44:06

Question: I have written the following code to develop a homomorphic filter. I think (though I am not sure) that color images are being filtered well. In the case of grayscale images, why is the kernel always green? Also, the filter was supposed to sharpen the image, but it's not doing so. What could have possibly gone wrong? Source Code: Here is the Github repository. public class HomomorphicFilter { public HomoMorphicKernel Kernel = null; public bool IsPadded { get; set; } public int Width { get
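For reference, the standard grayscale homomorphic-filter pipeline is log → FFT → high-frequency-emphasis filter → IFFT → exp; the sharpening comes from choosing gamma_high > gamma_low, which attenuates slow illumination changes and boosts detail. The sketch below is plain NumPy (not the C# code from the question's repository), with illustrative parameter values.

```python
import numpy as np

def homomorphic(img, gamma_low=0.5, gamma_high=2.0, d0=10.0):
    """Gaussian high-frequency-emphasis homomorphic filter (grayscale)."""
    h, w = img.shape
    u = np.fft.fftfreq(h)[:, None] * h
    v = np.fft.fftfreq(w)[None, :] * w
    d2 = u**2 + v**2
    # gamma_low at DC (illumination), rising to gamma_high at high freq (detail)
    filt = gamma_low + (gamma_high - gamma_low) * (1 - np.exp(-d2 / (2 * d0**2)))

    log_img = np.log1p(img.astype(np.float64))        # multiplicative -> additive
    out = np.real(np.fft.ifft2(np.fft.fft2(log_img) * filt))
    return np.expm1(out).clip(0, None)                # undo the log

img = np.outer(np.linspace(10, 200, 32), np.ones(32))  # stand-in gradient image
out = homomorphic(img)
print(out.shape)  # (32, 32)
```

If a port of this pipeline fails to sharpen, the usual suspects are applying the filter without the log/exp wrapping or using gamma_high ≤ gamma_low; a green-tinted grayscale kernel display typically points to writing a one-channel result into only one channel of an RGB bitmap.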

Memory usage of tensorflow conv2d with large filters

走远了吗. Submitted on 2019-12-22 10:59:53

Question: I have a TensorFlow model with some relatively large 135 x 135 x 1 x 3 convolution filters. I find that tf.nn.conv2d becomes unusable for such large filters: it attempts to use well over 60 GB of memory, at which point I need to kill it. Here is the minimal script to reproduce my error: import tensorflow as tf import numpy as np frames, height, width, channels = 200, 321, 481, 1 filter_h, filter_w, filter_out = 5, 5, 3 # With this, output has shape (200, 317, 477, 3) # filter_h, filter_w,
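A back-of-the-envelope calculation shows why a 135×135 filter can blow up memory if the convolution is lowered im2col-style (materializing a full patch per output position). The numbers below are the question's shapes; the exact strategy TensorFlow picks may differ, but the order of magnitude is consistent with the >60 GB usage observed.

```python
# im2col memory estimate for the question's shapes (VALID padding, float32).
frames, height, width, channels = 200, 321, 481, 1
filter_h, filter_w = 135, 135

out_h = height - filter_h + 1          # 187
out_w = width - filter_w + 1           # 347
patch_elems = filter_h * filter_w * channels
total_bytes = frames * out_h * out_w * patch_elems * 4   # float32

print(out_h, out_w, total_bytes / 2**30)   # patch matrix of roughly 880 GiB
```

For filters this large relative to the input, an FFT-based convolution (multiply in frequency space) needs memory only on the order of the padded input and is usually the practical alternative.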