convolution

what is the mistake in this program? [closed]

核能气质少年 submitted on 2019-12-10 12:28:11
Question: [Closed 6 years ago: the question was judged too narrow to help future visitors.] This is my code for finding the convolution of two signals, but my output always comes out as zero. Can anyone explain the mistake in my code? I tried
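The asker's code is not shown above, so the following is a reference sketch only: a direct discrete convolution y[n] = Σ_k x[k]·h[n−k]. A common cause of an all-zero output is an accumulator reset inside the wrong loop, or index checks that skip every term.

```python
# Reference sketch only (the asker's code is truncated above):
# direct discrete convolution y[n] = sum_k x[k] * h[n - k].
def convolve(x, h):
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(x)):
            if 0 <= n - k < len(h):      # skip terms that fall outside h
                y[n] += x[k] * h[n - k]
    return y

print(convolve([1, 2, 3], [0, 1, 0.5]))  # -> [0.0, 1.0, 2.5, 4.0, 1.5]
```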

Python Fast Implementation of Convolution/Cross-correlation of 3D arrays

岁酱吖の submitted on 2019-12-10 11:55:20
Question: I'm working on calculating convolutions (cross-correlations) of 3D images. Due to the nature of the problem, FFT-based approaches to convolution (e.g. scipy fftconvolve) are not desired, and the "direct sum" is the way to go. The images are ~(150, 150, 150) in size, and the largest kernels are ~(40, 40, 40) in size. The images are periodic (they have a periodic boundary condition, or need to be padded with the same image). Since ~100 such convolutions have to be done for one analysis, the speed of the
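A minimal sketch of one way to get a direct (non-FFT) cross-correlation with periodic boundaries, assuming scipy is acceptable: scipy.ndimage.correlate computes the direct sum, and mode='wrap' gives the periodic boundary condition. The array sizes below are placeholders, and this only demonstrates correctness, not the speed optimisation the asker is after.

```python
# Sketch: direct-sum cross-correlation with periodic boundaries via
# scipy.ndimage.correlate (no FFT involved). Sizes are placeholders.
import numpy as np
from scipy import ndimage

image = np.random.rand(150, 150, 150)
kernel = np.random.rand(40, 40, 40)

# mode='wrap' treats the image as periodic, i.e. padded with itself
result = ndimage.correlate(image, kernel, mode='wrap')
print(result.shape)   # (150, 150, 150)
```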

How did they calculate the output volume for this convnet example in Caffe?

大憨熊 submitted on 2019-12-10 11:22:05
Question: In this tutorial, the output volumes are stated in output [25], and the receptive fields are specified in output [26]. Okay, the input volume [3, 227, 227] gets convolved with a region of size [3, 11, 11]. Using the formula (W − F + 2P)/S + 1, where W = the input volume size, F = the receptive field size, P = padding and S = stride, this results in (227 − 11)/4 + 1 = 55, i.e. [55*55*96]. So far so good :) For 'pool1' they used F=3 and S=2, I think? The calculation checks out: (55 − 3)/2 + 1 = 27. From this
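A small helper that applies the same formula, (W − F + 2P)/S + 1, to double-check the numbers quoted above:

```python
# Output spatial size of a conv/pool layer: (W - F + 2P) / S + 1
def output_size(W, F, P, S):
    return (W - F + 2 * P) // S + 1

print(output_size(227, 11, 0, 4))  # conv1: (227 - 11)/4 + 1 = 55
print(output_size(55, 3, 0, 2))    # pool1: (55 - 3)/2 + 1 = 27
```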

Python keras how to transform a dense layer into a convolutional layer

寵の児 submitted on 2019-12-10 03:03:56
Question: I have a problem finding the correct mapping of the weights in order to transform a dense layer into a convolutional layer. This is an excerpt of the ConvNet that I'm working on:

    model.add(Convolution2D(512, 3, 3, activation='relu'))
    model.add(MaxPooling2D((2,2), strides=(2,2)))
    model.add(Flatten())
    model.add(Dense(4096, activation='relu'))

After the MaxPooling, the input is of shape (512, 7, 7). I would like to transform the dense layer into a convolutional layer to make it look like this: model
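A hedged sketch of the usual mapping (not necessarily the asker's final solution): a Dense(4096) acting on the flattened (512, 7, 7) tensor is equivalent to a Convolution2D(4096, 7, 7) applied to that tensor, so the dense weight matrix of shape (512*7*7, 4096) can be transposed and reshaped into the convolution kernel. This assumes Theano-style channels-first ordering, matching the old Keras 1.x API used in the excerpt; the dense weights below are stand-ins.

```python
# Sketch: reshape Dense weights into an equivalent 7x7 convolution kernel.
# Assumes channels-first ('th') ordering, so Flatten emits (channel, row, col).
import numpy as np

dense_W = np.random.rand(512 * 7 * 7, 4096)   # stand-in for dense_layer.get_weights()[0]
dense_b = np.random.rand(4096)                # stand-in for dense_layer.get_weights()[1]

conv_W = dense_W.T.reshape(4096, 512, 7, 7)   # (nb_filter, channels, rows, cols)
conv_b = dense_b                              # biases carry over unchanged
print(conv_W.shape)                           # (4096, 512, 7, 7)
```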

Image convolution in spatial domain

丶灬走出姿态 submitted on 2019-12-10 01:06:49
Question: I am trying to replicate the outcome of this link using linear convolution in the spatial domain. Images are first converted to 2D double arrays and then convolved. Image and kernel are of the same size. The image is padded before the convolution and cropped accordingly afterwards. Compared to the FFT-based convolution, the output is weird and incorrect. How can I solve the issue? Note that I obtained the following image output from Matlab, which matches my C# FFT output (image not reproduced here). Update-1:
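As a reference (in NumPy rather than the asker's C#), a correct spatial-domain linear convolution and an FFT-based one should agree when both produce the full zero-padded output of size image + kernel − 1; a sketch of that cross-check:

```python
# Cross-check: direct spatial convolution vs. FFT convolution must agree
# on the full, zero-padded output (size = image + kernel - 1).
import numpy as np
from scipy.signal import convolve2d, fftconvolve

image = np.random.rand(64, 64)
kernel = np.random.rand(64, 64)    # same size as the image, as in the question

spatial = convolve2d(image, kernel, mode='full')
spectral = fftconvolve(image, kernel, mode='full')

print(spatial.shape)                   # (127, 127)
print(np.allclose(spatial, spectral))  # True when both are implemented correctly
```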

How correctly calculate tf.nn.weighted_cross_entropy_with_logits pos_weight variable

瘦欲@ submitted on 2019-12-09 23:47:19
Question: I am using a convolutional neural network. My data is quite imbalanced; I have two classes. My first class contains 551,462 image files and my second class contains 52,377 image files. I want to use weighted_cross_entropy_with_logits, but I'm not sure I'm calculating the pos_weight variable correctly. Right now I'm using

    classes_weights = tf.constant([0.0949784, 1.0])
    cross_entropy = tf.reduce_mean(tf.nn.weighted_cross_entropy_with_logits(logits=logits, targets=y_, pos_weight=classes_weights))

train
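A hedged sketch of the arithmetic behind such a weight: the quoted 0.0949784 is simply the class-count ratio 52,377 / 551,462, and its reciprocal, roughly 10.5, is the other common convention (up-weighting the rare class instead of down-weighting the frequent one). Which convention is appropriate depends on how y_ is encoded and which class pos_weight is applied to.

```python
# Sketch: the two common ratio-based conventions for class weighting.
num_majority = 551462   # first class
num_minority = 52377    # second class

down_weight = num_minority / num_majority   # ~0.0949784, the value used above
up_weight = num_majority / num_minority     # ~10.53, the reciprocal convention
print(down_weight, up_weight)
```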

Efficiently implementing erode/dilate

人走茶凉 submitted on 2019-12-09 12:04:54
Question: So normally, and very inefficiently, a min/max filter is implemented using four for loops:

    for( index1 < dy ) {                                  // y loop
        for( index2 < dx ) {                              // x loop
            for( index3 < StructuringElement.dy() ) {     // kernel y
                for( index4 < StructuringElement.dx() ) { // kernel x
                    pixel = src(index3+index4);
                    val = (pixel > val) ? pixel : val;    // max
                }
            }
            dst(index2, index1) = val;
        }
    }

However, this approach is terribly inefficient, since it re-checks previously checked values. So I am wondering what methods are there
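One standard remedy (a sketch, not necessarily the accepted answer): a sliding-window maximum based on a monotonic deque runs in O(n) per row regardless of the structuring-element size, and for a rectangular structuring element it can be applied separably, once along each axis.

```python
# Sketch: O(n) sliding-window maximum (1D dilation) using a monotonic deque.
# Applied along rows and then columns, it gives dilation by a rectangle.
from collections import deque

def sliding_max_1d(values, window):
    out = []
    dq = deque()                              # indices; their values stay decreasing
    for i, v in enumerate(values):
        while dq and values[dq[-1]] <= v:
            dq.pop()                          # drop values dominated by the new one
        dq.append(i)
        if dq[0] <= i - window:
            dq.popleft()                      # drop indices that left the window
        if i >= window - 1:
            out.append(values[dq[0]])         # front of the deque is the window max
    return out

print(sliding_max_1d([2, 7, 1, 8, 2, 8, 1], 3))   # -> [7, 8, 8, 8, 8]
```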

2d convolution in python with missing data

我是研究僧i submitted on 2019-12-08 13:55:37
I know there is a scipy.signal.convolve2d function to handle 2-dimensional convolution of a 2D numpy array, and there is a numpy.ma module to handle missing data, but these two don't seem to be compatible with each other (meaning that even if you mask a 2D array in numpy, the processing in convolve2d is not affected). Is there any way to handle missing values in convolution using only the numpy and scipy packages? For example:

            1 - 3 4 5
            1 2 - 4 5
    Array = 1 2 3 - 5
            - 2 3 4 5
            1 2 3 4 -

    Kernel = 1  0
             0 -1

Desired result for convolution(Array, Kernel, boundary='wrap'):

             -1 -  -1 -1  4
             -1 -1 -  -1  4
    Result = -1
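A sketch of one way to get this behaviour with plain numpy/scipy, assuming the desired rule is that any output pixel whose kernel footprint touches a missing input is itself marked missing, and that missing values are stored as NaN (whether the kernel must be flipped, i.e. convolution vs. correlation, is left aside):

```python
# Sketch: convolution where outputs contaminated by missing inputs become NaN.
import numpy as np
from scipy.signal import convolve2d

def convolve_with_missing(arr, kernel, boundary='wrap'):
    missing = np.isnan(arr)
    filled = np.where(missing, 0.0, arr)          # convolve zero-filled data
    out = convolve2d(filled, kernel, mode='same', boundary=boundary)
    # flag every output whose kernel footprint touched a missing input
    touched = convolve2d(missing.astype(float), np.ones_like(kernel, dtype=float),
                         mode='same', boundary=boundary) > 0
    out[touched] = np.nan
    return out
```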

What is the practical meaning of impulse response?

守給你的承諾、 submitted on 2019-12-08 11:28:03
Question: Impulse response is usually used in filters and in convolution, but I always find it difficult to explain to myself what it is and how it helps. My question: what is the practical meaning of the impulse response? Is it an equation, or a characteristic of a system's response to an input? Answer 1: Definition: In signal processing, the impulse response, or impulse response function, of a dynamic system is its output when presented with a brief input signal, called an impulse. Explanation: Think of it from
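A small numerical illustration of that definition, using a 3-tap moving average as the LTI system: feeding it a unit impulse returns the impulse response itself, and the response to any other input is the convolution of that input with the impulse response.

```python
import numpy as np

h = np.array([1/3, 1/3, 1/3])           # impulse response of a 3-tap moving average

impulse = np.array([1.0, 0, 0, 0, 0])
print(np.convolve(impulse, h))          # the impulse "reads out" h itself (then zeros)

x = np.array([1.0, 2.0, 3.0, 4.0])
print(np.convolve(x, h))                # output for any input = x convolved with h
```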

Keras CNN model output shape doesn't match model summary

时光怂恿深爱的人放手 submitted on 2019-12-08 10:32:38
Question: I am trying to use the convolutional part of the ResNet50() model, like this:

    # generate batches
    def get_batches(dirname, gen=image.ImageDataGenerator(), shuffle=True, batch_size=4,
                    class_mode='categorical', target_size=(224,224)):
        return gen.flow_from_directory(dirname, target_size=target_size,
                                       class_mode=class_mode, shuffle=shuffle,
                                       batch_size=batch_size)

    trn_batches = get_batches("path_to_dirctory", shuffle=False, batch_size=4)

    # create model
    rn_mean = np.array([123.68, 116.779, 103.939], dtype=np
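A minimal sketch (placeholder sizes and weights, not the asker's full code) of one way to sanity-check the convolutional base: build ResNet50 with include_top=False and compare the shape of a predicted batch against what model.summary() reports; the exact final shape depends on the Keras version.

```python
# Sketch: compare ResNet50 convolutional-base output shape with model.summary().
import numpy as np
from keras.applications.resnet50 import ResNet50, preprocess_input

conv_base = ResNet50(weights='imagenet', include_top=False,
                     input_shape=(224, 224, 3))
conv_base.summary()                  # last layer shape (version dependent, e.g. (None, 7, 7, 2048))

batch = preprocess_input(np.random.rand(4, 224, 224, 3) * 255.0)
features = conv_base.predict(batch)
print(features.shape)                # should match the summary, e.g. (4, 7, 7, 2048)
```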