scipy

Reshaping a Numpy Array into lexicographical list of cubes of shape (n, n, n)

Submitted by 本小妞迷上赌 on 2021-01-27 07:54:05
Question: To understand what I'm trying to achieve, imagine an ndarray a with shape (8, 8, 8) from which I lexicographically take blocks of shape (4, 4, 4). While iterating through such blocks, the indexes would look as follows:

    0: a[0:4, 0:4, 0:4]
    1: a[0:4, 0:4, 4:8]
    2: a[0:4, 4:8, 0:4]
    3: a[0:4, 4:8, 4:8]
    4: a[4:8, 0:4, 0:4]
    5: a[4:8, 0:4, 4:8]
    6: a[4:8, 4:8, 0:4]
    7: a[4:8, 4:8, 4:8]

It is these blocks of data that I'm trying to access. Obviously, this can be described by using an…
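The block pattern above can be produced with a reshape/transpose/reshape sequence: split each axis into a (block-index, within-block) pair, bring the three block axes to the front, and flatten them. A minimal sketch, assuming the (8, 8, 8) array and n = 4 from the question:

```python
import numpy as np

a = np.arange(8 * 8 * 8).reshape(8, 8, 8)
n = 4  # cube edge length; must divide each axis of a

# Split each axis into (block index, offset within block), then move the
# three block axes to the front. Flattening the block axes yields the
# cubes in exactly the lexicographic order listed above.
blocks = (a.reshape(2, n, 2, n, 2, n)
           .transpose(0, 2, 4, 1, 3, 5)
           .reshape(-1, n, n, n))

# e.g. block 1 is a[0:4, 0:4, 4:8], block 7 is a[4:8, 4:8, 4:8]
```

Because the transpose reorders strides rather than data, only the final reshape may copy; the intermediate steps are views.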

Numpy n-th odd root including negative values

Submitted by 六眼飞鱼酱① on 2021-01-27 07:07:01
Question: I want to calculate the n-th odd root of some numbers in Python. NumPy has a cube root function, so I can compute x^(1/3):

    x = np.linspace(-100, 100, 100)
    np.cbrt(x)
    >>> array([-4.64158883, -4.26859722, -3.81571414, -3.21829795, -2.23144317,
                2.23144317,  3.21829795,  3.81571414,  4.26859722,  4.64158883])

However, if I want to compute the same thing for other k-th odd roots in a straightforward manner, I'm somewhat stuck. I cannot use np.power directly, not even to compute the cube…
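The usual workaround is the sign trick: np.power returns nan for a negative base raised to a fractional exponent, but for odd k the root of -x is just the negated root of x, so one can take the root of the absolute value and restore the sign. A sketch, with odd_root as a hypothetical helper name:

```python
import numpy as np

def odd_root(x, k):
    """k-th root for odd integer k, defined for negative inputs too.

    np.power(x, 1/k) yields nan for negative x, so compute the root of
    |x| and reapply the sign; this is valid precisely because k is odd.
    """
    if k % 2 == 0:
        raise ValueError("k must be odd")
    x = np.asarray(x, dtype=float)
    return np.sign(x) * np.abs(x) ** (1.0 / k)
```

For k = 3 this agrees with np.cbrt, e.g. odd_root(np.linspace(-100, 100, 100), 3).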

why does my convolution routine differ from numpy & scipy's?

Submitted by 淺唱寂寞╮ on 2021-01-27 05:58:43
Question: I wanted to code a 1D convolution by hand because I was playing around with kernels for time-series classification, and I decided to recreate the famous Wikipedia convolution image, as seen here. Here's my script; I'm using the standard formula for the convolution of a digital signal:

    import numpy as np
    import matplotlib.pyplot as plt
    import scipy.ndimage

    plt.style.use('ggplot')

    def convolve1d(signal, ir):
        """ we use the 'same' / 'constant' method for zero padding. """
        n = len(signal)
        m = len(ir)…
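Discrepancies with the library routines usually come from the output slicing rather than the sum itself: a hand-rolled 'same' mode must take the centered slice of the full-length convolution, and an off-by-one there shifts the whole curve. A minimal sketch of a direct implementation that matches np.convolve(..., mode='same'), using a hypothetical box signal and moving-average kernel rather than the question's data:

```python
import numpy as np

def convolve1d(signal, ir):
    # Direct discrete convolution: y[k] = sum_i signal[i] * ir[k - i].
    # Accumulating signal[i] * ir[j] into position i + j already realizes
    # the kernel flip, so no explicit reversal is needed.
    n, m = len(signal), len(ir)
    full = np.zeros(n + m - 1)
    for i in range(n):
        for j in range(m):
            full[i + j] += signal[i] * ir[j]
    # 'same' mode: centered slice of length n (unambiguous for odd m).
    start = (m - 1) // 2
    return full[start:start + n]

sig = np.repeat([0.0, 1.0, 0.0], 100)   # box signal, as in the Wikipedia figure
kernel = np.ones(51) / 51               # moving-average kernel, odd length
out = convolve1d(sig, kernel)
```

Note that scipy.ndimage applies boundary handling via its mode parameter ('constant' with cval=0 corresponds to zero padding), so its output near the edges can still differ from np.convolve depending on the mode chosen.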

Sparse Efficiency Warning while changing the column

Submitted by 狂风中的少年 on 2021-01-27 02:57:47
Question:

    def tdm_modify(feature_names, tdm):
        non_useful_words = ['kill', 'stampede', 'trigger', 'cause', 'death', 'hospital',
                            'minister', 'said', 'told', 'say', 'injury', 'victim', 'report']
        indexes = [feature_names.index(word) for word in non_useful_words]
        for index in indexes:
            tdm[:, index] = 0
        return tdm

I want to manually set zero weights for some terms in the tdm matrix. With the code above I get the warning below, and I don't understand why. Is there a better way to do this?

    C:\Anaconda\lib\site-packages\scipy\sparse…
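The SparseEfficiencyWarning is raised because assigning into a column of a CSR matrix changes its sparsity structure, which CSR is not designed for. The standard remedy is to convert to LIL (which supports cheap structural edits), assign, and convert back. A sketch under the assumption that tdm is a CSR term-document matrix; the feature list and data here are made up for illustration:

```python
import numpy as np
from scipy import sparse

def tdm_modify(feature_names, tdm, non_useful_words):
    """Zero out the columns for the given terms without the warning.

    tdm[:, index] = 0 on a CSR matrix rewrites the sparsity structure
    in place, which triggers SparseEfficiencyWarning. LIL format is
    built for incremental changes, so convert, assign, convert back.
    """
    indexes = [feature_names.index(w) for w in non_useful_words
               if w in feature_names]
    tdm = tdm.tolil()
    for index in indexes:
        tdm[:, index] = 0
    return tdm.tocsr()

# Hypothetical 3-document, 4-term matrix.
features = ['kill', 'news', 'report', 'city']
tdm = sparse.csr_matrix(np.arange(12).reshape(3, 4))
result = tdm_modify(features, tdm, ['kill', 'report'])
```

Calling result.eliminate_zeros() afterwards would also drop the explicitly stored zeros from the structure.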
