convolution

Convert sympy symbolic variable to numpy array

Submitted by 非 Y 不嫁゛ on 2019-12-25 00:18:36
Question: I want to perform a convolution that contains a sympy symbolic variable, then convert it to a numpy array. My MWE is:

from numpy import pi, float64, linspace
from scipy.signal import fftconvolve
import matplotlib.pyplot as plt
from sympy import symbols
from sympy.utilities.lambdify import lambdify

a = 0.657
b = 0.745
c = 0.642
d = 0.343

x = symbols('x')
f = 2*b / ((x-a)**2 + b**2)
g = 2*d / ((x-c)**2 + d**2)

fog = fftconvolve(f, g, mode='same')
fog_fun = lambdify(x, fog, 'numpy')  # returns a
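The underlying problem in the MWE is that fftconvolve operates on numeric arrays, not sympy expressions, so the usual fix is to lambdify f and g first, sample them on a grid, and convolve the samples. A minimal sketch of that approach (the grid bounds are my own choice, and I use numpy.convolve in place of scipy's fftconvolve to keep it self-contained):

```python
import numpy as np
from sympy import symbols, lambdify

a, b, c, d = 0.657, 0.745, 0.642, 0.343
x = symbols('x')
f = 2*b / ((x - a)**2 + b**2)
g = 2*d / ((x - c)**2 + d**2)

# Turn the symbolic expressions into numeric functions first ...
f_num = lambdify(x, f, 'numpy')
g_num = lambdify(x, g, 'numpy')

# ... then sample and convolve plain numpy arrays; dx scales the
# discrete sum into an approximation of the continuous integral.
xs = np.linspace(-10, 10, 2001)
dx = xs[1] - xs[0]
fog = np.convolve(f_num(xs), g_num(xs), mode='same') * dx
```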

GPU library that implements Image Convolution using cuFFT?

Submitted by 孤人 on 2019-12-24 15:25:45
Question: I've been using the image convolution function from Nvidia Performance Primitives (NPP). However, my kernel is fairly large relative to the image size, and I've heard rumors that NPP's convolution is a direct convolution rather than an FFT-based one. (I don't think the NPP source code is available, so I'm not sure how it's implemented.) I'd like to see how fast a cuFFT-based convolution function could run in the image processing application I'm working on. You might say "hey,
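For reference, the speed argument rests on the convolution theorem: pointwise multiplication in the frequency domain equals circular convolution in the spatial domain, turning an O(N²)-per-pixel operation into O(N log N) overall. A CPU-side numpy sketch of the idea that a cuFFT-based routine would implement on the GPU (the 8x8 sizes are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.standard_normal((8, 8))
kernel = rng.standard_normal((8, 8))

def circ_conv2d(img, ker):
    """Direct 2D circular convolution: quadratic per output pixel."""
    n, m = img.shape
    out = np.zeros_like(img)
    for i in range(n):
        for j in range(m):
            for u in range(n):
                for v in range(m):
                    out[i, j] += img[u, v] * ker[(i - u) % n, (j - v) % m]
    return out

direct = circ_conv2d(image, kernel)

# Convolution theorem: multiply spectra, inverse-transform.
fft_result = np.real(np.fft.ifft2(np.fft.fft2(image) * np.fft.fft2(kernel)))
```

For linear (non-circular) convolution with a large kernel, the same trick applies after zero-padding both arrays to the combined size.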

Optimizing TensorFlow for CPU use

Submitted by 假装没事ソ on 2019-12-24 15:07:11
Question: I have a model that needs to be optimized for CPU. Currently the model takes 1024 x 1024 bytes of data:

images = img[y:y+1024, x:x+1024, :]

As per this document, they want to change the default tensorflow data format from NHWC to NCHW. How can I transform from NHWC to NCHW format? https://software.intel.com/en-us/articles/tensorflow-optimizations-on-modern-intel-architecture

Answer 1: As per this document, they want to change the default tensorflow data format from NHWC to NCHW. Actually
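The format conversion itself is just an axis permutation: NHWC is (batch, height, width, channels) and NCHW is (batch, channels, height, width). In numpy (and equivalently via tf.transpose with the same perm argument) it looks like:

```python
import numpy as np

# A dummy batch in NHWC layout: (batch, height, width, channels).
nhwc = np.zeros((1, 1024, 1024, 3), dtype=np.uint8)

# Move the channel axis in front of the spatial axes -> NCHW.
nchw = nhwc.transpose(0, 3, 1, 2)

# In TensorFlow the same permutation is:
#   nchw = tf.transpose(nhwc, perm=[0, 3, 1, 2])
```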

Scaling factor for convolution in matlab to get proper area

Submitted by 纵饮孤独 on 2019-12-24 14:05:49
Question: The question is pretty basic: I am trying to reproduce the result of the continuous convolution of two boxcar functions with the conv function in matlab. According to http://en.wikipedia.org/wiki/Convolution it should give the area of overlap between the two functions. The result of the discrete conv has to be scaled to get the proper value for the area. Some suggest scaling by the sampling frequency, but that does not give the correct area. It was suggested to use sum(f) in Scale Factor in Matlabs
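The missing factor is the grid spacing dx: the discrete convolution is a sum, and it only approximates the continuous integral after multiplying by dx (a Riemann sum). A numpy sketch of the same idea as the MATLAB code, using two unit boxcars whose continuous convolution is a triangle peaking at the maximum overlap area, 1:

```python
import numpy as np

dx = 0.001
t = np.arange(0.0, 3.0, dx)

# Two boxcars of height 1 supported on [0, 1].
f = np.where(t <= 1.0, 1.0, 0.0)
g = np.where(t <= 1.0, 1.0, 0.0)

# Scale the discrete convolution by dx to approximate the integral;
# the peak should then be close to the exact overlap area of 1.
area = np.convolve(f, g) * dx
peak = area.max()
```

In MATLAB the equivalent scaling is simply conv(f, g) * dx.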

Why am I getting only one channeled-output through the tf.nn.conv2d?

Submitted by 可紊 on 2019-12-24 11:54:29
Question:

import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from scipy.misc import imread

img = imread('dog2.jpg')  # img has shape (360, 480, 3)
w = img.shape[0]
h = img.shape[1]
c = img.shape[2]
k = 3  # for my convenience

plt.subplot(1,2,1)
plt.imshow(img)

img = tf.cast(img, tf.float32)
img4d = tf.reshape(img, [1, w, h, c])
diag = np.array([[1,1,1],[0,0,0],[1,1,1]]*k, np.float32)
# diag = np.diag(diag)
diag4d = tf.reshape(diag, [k, k, c, 1])
convolved = tf.nn.conv2d(img4d, diag4d,
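The reason only one output channel comes back is the filter shape: tf.nn.conv2d expects filters of shape [height, width, in_channels, out_channels], and the last dimension in the snippet above is 1. A numpy sketch of that shape rule, with a hypothetical minimal 'VALID' convolution (no TensorFlow needed):

```python
import numpy as np

def conv2d_valid(img4d, filt):
    """Minimal NHWC 'VALID' conv; filt is [kh, kw, c_in, c_out]."""
    n, h, w, c_in = img4d.shape
    kh, kw, c_in2, c_out = filt.shape
    assert c_in == c_in2, "filter must span all input channels"
    out = np.zeros((n, h - kh + 1, w - kw + 1, c_out))
    for i in range(out.shape[1]):
        for j in range(out.shape[2]):
            patch = img4d[:, i:i+kh, j:j+kw, :]  # (n, kh, kw, c_in)
            # Each output channel is a full 3D dot product over the patch.
            out[:, i, j, :] = np.tensordot(patch, filt,
                                           axes=([1, 2, 3], [0, 1, 2]))
    return out

img = np.random.rand(1, 6, 6, 3)
one_out = conv2d_valid(img, np.random.rand(3, 3, 3, 1))    # c_out = 1
three_out = conv2d_valid(img, np.random.rand(3, 3, 3, 3))  # c_out = 3
```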

What have I done wrong Converting my MMX Intrinsics to x64 (SSE)?

Submitted by 随声附和 on 2019-12-24 11:52:16
Question: I understand that converting 32-bit MMX intrinsics to x64 no longer allows __m64, so I was having great trouble upgrading this piece of code to SSE. I was told in another Stack Overflow post to post my code; perhaps this exercise will help others as well. I commented out _mm_empty, thinking that was the right thing to do. I found equivalent functions in emmintrin.h for all the other __m128i operations, but something is still wrong. Original 32-bit function code:

DWORD CSumInsideHorizontalTask:

col2im implementation in ConvNet

Submitted by 痞子三分冷 on 2019-12-24 10:59:05
Question: I'm trying to implement a CNN using only numpy. While doing the backpropagation, I found that I had to use col2im in order to reshape dx, so I checked the implementation from https://github.com/huyouare/CS231n/blob/master/assignment2/cs231n/im2col.py:

import numpy as np

def get_im2col_indices(x_shape, field_height, field_width, padding=1, stride=1):
    # First figure out what the size of the output should be
    N, C, H, W = x_shape
    assert (H + 2 * padding - field_height) % stride == 0
    assert
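The core idea behind that file: im2col flattens each receptive-field patch into a column so convolution becomes a matrix multiply, and col2im scatters the columns back into image shape, summing wherever patches overlap, which is exactly what the gradient dx needs. A loop-based miniature of the pair (not the vectorized CS231n version, and without padding):

```python
import numpy as np

def im2col(x, fh, fw, stride=1):
    """x: (C, H, W) -> columns of shape (C*fh*fw, n_patches)."""
    C, H, W = x.shape
    cols = []
    for i in range(0, H - fh + 1, stride):
        for j in range(0, W - fw + 1, stride):
            cols.append(x[:, i:i+fh, j:j+fw].ravel())
    return np.array(cols).T

def col2im(cols, x_shape, fh, fw, stride=1):
    """Inverse scatter: entries of overlapping patches are summed."""
    C, H, W = x_shape
    x = np.zeros(x_shape)
    idx = 0
    for i in range(0, H - fh + 1, stride):
        for j in range(0, W - fw + 1, stride):
            x[:, i:i+fh, j:j+fw] += cols[:, idx].reshape(C, fh, fw)
            idx += 1
    return x

x = np.arange(16, dtype=float).reshape(1, 4, 4)
cols = im2col(x, 2, 2, stride=2)  # stride 2 -> non-overlapping patches
```

With non-overlapping patches col2im is an exact inverse of im2col; with overlap it sums contributions, which is the correct behavior for accumulating gradients.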

How a Convolutional Neural Net handles channels

Submitted by 空扰寡人 on 2019-12-24 10:37:09
Question: I've looked through a lot of explanations of the way a CNN conventionally handles multiple channels (such as the 3 in an RGB image) and am still at a loss. When a 5x5x3 filter (say) is applied to a patch of an RGB image, what exactly happens? Is it in fact 3 different 2D convolutions (with independent weights), applied separately to each channel, whose results are then simply added together to produce the final output passed to the next layer? Or a truly 3D convolution?

Answer 1: This image is
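The standard answer is that the two views coincide: the filter has independent weights per channel, each channel is correlated in 2D, and the per-channel results are summed into a single scalar per spatial position. A numpy check of that equivalence at one output position (toy 5x5x3 patch):

```python
import numpy as np

rng = np.random.default_rng(1)
patch = rng.standard_normal((5, 5, 3))  # one RGB image patch
filt = rng.standard_normal((5, 5, 3))   # 5x5x3 filter, independent weights per channel

# View 1: one dot product over the whole 5x5x3 volume.
out_3d = np.sum(patch * filt)

# View 2: three independent 2D correlations, then summed.
out_per_channel = sum(np.sum(patch[:, :, c] * filt[:, :, c])
                      for c in range(3))
```

Note the filter does not slide along the depth axis, which is why this is usually not called a "truly 3D" convolution even though the dot product spans all three channels.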

How to create a CNN with deterministic operations in TensorFlow on a GPU?

Submitted by 强颜欢笑 on 2019-12-24 08:46:24
Question: So after realizing that there are operations in TensorFlow which are non-deterministic (see this question: How to get the same loss value, every time training a CNN (MNIST data set), with TensorFlow?), I want to know: how can I build a convolutional neural net with

TensorFlow version 1.1.0
CUDA release 8.0, V8.0.61
cuDNN 5.1.10

run on a GPU, using only deterministic operations?

Answer 1: You can't, as long as some cuDNN operations remain non-deterministic. Moreover, even moving every operation on
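For readers hitting this today: much later TensorFlow releases grew explicit knobs for GPU determinism, none of which exist in the 1.1.0 stack the question targets. A sketch of the usual modern settings (I'm assuming TF >= 2.1 for the TF_DETERMINISTIC_OPS variable and TF >= 2.8 for enable_op_determinism; the TensorFlow-specific lines are left as comments so the sketch runs without TF installed):

```python
import os
import random

import numpy as np

# Ask TF to select deterministic cuDNN kernels (honored by TF >= 2.1).
os.environ['TF_DETERMINISTIC_OPS'] = '1'

# Seed every RNG a training run touches.
random.seed(0)
np.random.seed(0)

# With TensorFlow available, additionally:
#   import tensorflow as tf
#   tf.random.set_seed(0)
#   tf.config.experimental.enable_op_determinism()  # TF >= 2.8
```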

Understanding the Caffe Convolutional Layer

Submitted by 僤鯓⒐⒋嵵緔 on 2019-12-24 06:18:15
Question: I successfully compiled Caffe under Ubuntu and have started to study how to define and train my own networks. However, I'm having trouble understanding how the convolutional layer produces its output. For example, the second convolutional layer (conv2) of the LeNet MNIST tutorial (tutorial, lenet.prototxt) has 20 input images and 50 output images:

layer {
  name: "conv2"
  type: "Convolution"
  bottom: "pool1"
  top: "conv2"
  param { lr_mult: 1 }
  param { lr_mult: 2 }
  convolution_param {
    num_output: 50
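What usually trips people up here: each of the 50 conv2 filters spans all 20 input channels, so one output map is the sum of 20 per-channel convolutions plus a bias, not a pairing of inputs to outputs. Assuming the LeNet tutorial's kernel_size of 5 (the prototxt is truncated above), the parameter bookkeeping is:

```python
# conv2 in LeNet: 20 input maps -> 50 output maps.
num_output = 50    # filters; one per output map
channels_in = 20   # each filter spans ALL input maps
kernel = 5         # assumed from the LeNet tutorial prototxt

weights = num_output * channels_in * kernel * kernel  # each filter is 20x5x5
biases = num_output

params = weights + biases
```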