linear-algebra

How do I determine whether a 3-dimensional vector is contained within the acute region formed by three other vectors?

孤街醉人 submitted on 2020-01-24 22:07:54
Question: I'm working on a project in C# where I have three vectors in R3, and I need to determine whether a fourth vector is contained within the region formed by those three vectors. The three basis vectors have a maximum angle of 90 degrees between any two of them, and they are all normalized on the unit sphere. They can be negative. So far, I've tried matrix-vector multiplication to find the transformed coordinates of the vector, and from there I check whether all three components are positive. This method works
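A minimal numpy sketch of the approach described above (the original project is in C#, but the linear algebra is identical): put the three basis vectors into the columns of a matrix, solve for the coordinates of the test vector in that basis, and check that all of them are non-negative. The function name and tolerance are illustrative, not from the question.

```python
import numpy as np

def in_cone(basis, v, tol=1e-9):
    """True if v lies in the region spanned by non-negative combinations
    of the three column vectors of `basis` (a 3x3 matrix)."""
    c = np.linalg.solve(basis, v)    # coordinates of v in the given basis
    return bool(np.all(c >= -tol))   # small tolerance for rounding error

# Example with the standard axes as the basis
basis = np.eye(3)
print(in_cone(basis, np.array([0.2, 0.5, 0.1])))   # True
print(in_cone(basis, np.array([0.2, -0.5, 0.1])))  # False
```

Note that this only works when the three vectors are linearly independent; if they can be coplanar, np.linalg.solve raises LinAlgError and that case needs separate handling.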

numpy.array_equal returns False, even though arrays have the same shape and values

£可爱£侵袭症+ submitted on 2020-01-24 15:17:06
Question: I have a very simple function, as shown below: def new_price(A, B, x): return np.linalg.inv(A @ B) @ x. These are the inputs I give it: A = np.array([ [2, 0, 1, 0], [1, 1, 1, 1], [0, 0, 0, 10] ]), B = np.array([ [3, 3, 3], [2, 0, 8], [0, 5, 3], [0, 0, 10] ]), x = np.array([ 84, 149, 500]). This returns the array [ 1. 3. 5.]. But when I make the following equality check, it returns False: v1 = new_price(A, B, x) v2 = np.array([1.0, 3.0, 5.0]) np.array_equal(new_price(A, B, [ 84, 149, 500]), np
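The usual explanation for this behaviour is floating-point round-off: np.linalg.inv introduces errors on the order of machine epsilon, so the computed entries need not be bit-for-bit equal to 1.0, 3.0 and 5.0 even though they print that way, and np.array_equal demands exact equality. A sketch of the standard remedy, comparing within a tolerance via np.allclose, reusing the question's inputs:

```python
import numpy as np

def new_price(A, B, x):
    return np.linalg.inv(A @ B) @ x

A = np.array([[2, 0, 1, 0],
              [1, 1, 1, 1],
              [0, 0, 0, 10]])
B = np.array([[3, 3, 3],
              [2, 0, 8],
              [0, 5, 3],
              [0, 0, 10]])
x = np.array([84, 149, 500])

v1 = new_price(A, B, x)
v2 = np.array([1.0, 3.0, 5.0])

print(np.array_equal(v1, v2))  # likely False: v1 carries tiny round-off error
print(v1 - v2)                 # typically differences around 1e-15 or smaller
print(np.allclose(v1, v2))     # True: elementwise comparison with a tolerance
```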

Numpy - Dot Product of a Vector of Matrices with a Vector of Scalars

[亡魂溺海] submitted on 2020-01-24 04:45:09
Question: I have a 3-dimensional data set that I am trying to manipulate in the following way: data.shape = (643, 2890, 10) and vector.shape = (643,). I would like numpy to see data as a 643-length 1-D array of 2890x10 matrices and calculate a dot product (sum-product?) between data and vector. I can do this with a loop, but would really like to find a way to do this using a primitive (this will be run many times across parallel nodes). The equivalent loop (I believe): a = numpy.zeros((2890, 10)) for i in
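Assuming the intended result is the weighted sum a = sum_i vector[i] * data[i] with shape (2890, 10), a single contraction over the first axis does it with no Python loop. A sketch with illustrative random inputs of the stated shapes:

```python
import numpy as np

# Illustrative shapes taken from the question
data = np.random.rand(643, 2890, 10)
vector = np.random.rand(643)

# Loop version described in the question
a = np.zeros((2890, 10))
for i in range(vector.shape[0]):
    a += vector[i] * data[i]

# Single-primitive equivalents: contract over the first axis
b = np.tensordot(vector, data, axes=1)    # shape (2890, 10)
c = np.einsum('i,ijk->jk', vector, data)  # same result

print(np.allclose(a, b), np.allclose(a, c))  # True True
```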

Faster projected-norm (quadratic-form, metric-matrix…) style computations

最后都变了- submitted on 2020-01-23 01:42:08
Question: I need to perform lots of evaluations of the form X(:,i)' * A * X(:,i), i = 1...n, where X(:,i) is a vector and A is a symmetric matrix. Ostensibly, I can either do this in a loop: for i=1:n z(i) = X(:,i)' * A * X(:,i) end, which is slow, or vectorise it as z = diag(X' * A * X), which wastes RAM unacceptably when X has a lot of columns. Currently I am compromising on: Y = A * X for i=1:n z(i) = Y(:,i)' * X(:,i) end, which is a little faster/lighter but still seems unsatisfactory. I was hoping there
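The question is MATLAB-flavoured, but the usual fix is the same in any array language: form Y = A*X once, multiply elementwise with X, and sum down each column, which yields every quadratic form in O(d^2 n) work and O(dn) memory without materialising the full X' * A * X matrix. In MATLAB that is something like z = sum(X .* (A*X), 1). A numpy sketch of the same idea, with illustrative sizes:

```python
import numpy as np

# Illustrative sizes; X has one column per evaluation point
d, n = 50, 10000
A = np.random.rand(d, d)
A = (A + A.T) / 2            # symmetric, as in the question
X = np.random.rand(d, n)

# Column-wise quadratic forms z[i] = X[:, i].T @ A @ X[:, i],
# computed without building the n-by-n matrix X.T @ A @ X:
Y = A @ X
z = np.sum(X * Y, axis=0)             # elementwise product, then column sums

# Equivalent single-call formulation
z2 = np.einsum('ij,ij->j', X, A @ X)

print(np.allclose(z, z2))             # True
```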

Matrix exponentiation in Python

你说的曾经没有我的故事 submitted on 2020-01-22 09:58:10
Question: I'm trying to exponentiate a complex matrix in Python and am running into some trouble. I'm using the scipy.linalg.expm function, and I get a rather strange error message when I try the following code: import numpy as np from scipy import linalg hamiltonian = np.mat('[1,0,0,0;0,-1,0,0;0,0,-1,0;0,0,0,1]') # This works t_list = np.linspace(0,1,10) unitary = [linalg.expm(-(1j)*t*hamiltonian) for t in t_list] # This doesn't t_list = np.linspace(0,10,100) unitary = [linalg.expm(-(1j)*t
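The excerpt is cut off before the error message, so the exact cause cannot be confirmed here; one frequently suggested change with this pattern is to build the Hamiltonian as a plain complex ndarray rather than with np.mat, since np.matrix interacts poorly with some scipy.linalg routines. A sketch of that variant, with the shapes and values taken from the question:

```python
import numpy as np
from scipy.linalg import expm

# Same Hamiltonian as in the question, but as a plain ndarray instead of np.mat
hamiltonian = np.array([[1,  0,  0, 0],
                        [0, -1,  0, 0],
                        [0,  0, -1, 0],
                        [0,  0,  0, 1]], dtype=complex)

t_list = np.linspace(0, 10, 100)
unitary = [expm(-1j * t * hamiltonian) for t in t_list]

# Sanity check: since the Hamiltonian is Hermitian, each result should be
# unitary, i.e. U @ U^H = I
U = unitary[-1]
print(np.allclose(U @ U.conj().T, np.eye(4)))  # True
```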

Haskell linear algebra?

人盡茶涼 submitted on 2020-01-20 20:06:12
Question: I am starting to test Haskell for linear algebra. Does anyone have any recommendations for the best package for this purpose? Any other good resources for doing basic matrix manipulation with Haskell? The Haskell wiki lists several resources for this. My current focus is on hmatrix and bindings-gsl, both of which look promising. Answer 1: The hmatrix and hmatrix-static libraries are excellent. Hunt around on Hackage some more: http://hackage.haskell.org/package/vect Source: https://stackoverflow.com
