linear-algebra

Imprecision with rotation matrix to align a vector to an axis

Submitted by 限于喜欢 on 2019-12-24 10:57:49
Question: I've been banging my head against the wall with this for several hours and I can't seem to figure out what I'm doing wrong. I'm trying to generate a rotation matrix which will align a vector with a particular axis (I'll ultimately be transforming more data, so having the rotation matrix is important). I feel like my method is right, and if I test it on a variety of vectors, it works pretty well, but the transformed vectors are always a little off. Here's a full code sample I'm using to test
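A common construction for such a matrix (a sketch of the usual approach, not the asker's code) is the rotation-between-two-vectors form of Rodrigues' formula; the "always a little off" behaviour is often ordinary floating-point error, which is why the check below rounds rather than comparing exactly:

```python
import numpy as np

def rotation_to_axis(v, axis):
    """Rotation matrix R such that R @ (v / |v|) equals the unit axis."""
    v = np.asarray(v, dtype=float) / np.linalg.norm(v)
    axis = np.asarray(axis, dtype=float) / np.linalg.norm(axis)
    c = np.dot(v, axis)               # cosine of the angle between them
    k = np.cross(v, axis)             # rotation axis (unnormalized), |k| = sin(angle)
    K = np.array([[0.0, -k[2], k[1]],
                  [k[2], 0.0, -k[0]],
                  [-k[1], k[0], 0.0]])  # skew-symmetric cross-product matrix
    # Rodrigues' formula, rearranged to avoid dividing by sin(angle):
    return np.eye(3) + K + K @ K / (1.0 + c)

v = np.array([1.0, 2.0, 3.0])
R = rotation_to_axis(v, [0.0, 0.0, 1.0])
aligned = R @ (v / np.linalg.norm(v))
print(np.round(aligned, 12))   # ~ [0, 0, 1]
```

The `K @ K / (1 + c)` rearrangement avoids dividing by sin of the angle, but it still fails when the vector and the axis are exactly opposite (c = -1); that case needs a 180° rotation about any perpendicular axis.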

Determining Cofactor Matrix in Java

Submitted by ぐ巨炮叔叔 on 2019-12-24 04:00:55
Question: I'm trying to determine a cofactor matrix. My code correctly generates all the cofactors; however, in some cases the resulting matrix is rotated by 90 degrees (more precisely, the rows and columns are switched). For example, the matrix {{8, 5, 1}, {3, 6, 7}, {5, 6, 6}} produces the correct result:

output > a
8 3 5
5 6 6
1 7 6
a
-6 17 -12
-24 43 -23
29 -53 33

However, the matrix {{1, 0, 5}, {9, 3, 0}, {0, 9, 3}} switches rows and columns:

output > b
1 0 5
9 3 0
0 9 3
b
9 45 -15
-27 3 45
81 -9 3

the
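For reference, the "switched" result for b is exactly the transpose of the true cofactor matrix, i.e. the adjugate, which suggests the row/column indices get swapped when the cofactors are stored. A small Python sketch (a hypothetical reference implementation, not the asker's Java) of the expected values:

```python
import numpy as np

def cofactor_matrix(a):
    """C[i][j] = (-1)**(i+j) times the determinant of the minor of a
    obtained by deleting row i and column j."""
    a = np.asarray(a, dtype=float)
    n = a.shape[0]
    c = np.empty((n, n))
    for i in range(n):
        for j in range(n):
            minor = np.delete(np.delete(a, i, axis=0), j, axis=1)
            c[i, j] = (-1) ** (i + j) * np.linalg.det(minor)
    return c

a = [[8, 5, 1], [3, 6, 7], [5, 6, 6]]
b = [[1, 0, 5], [9, 3, 0], [0, 9, 3]]
Cb = cofactor_matrix(b)
print(np.round(Cb).astype(int))
# The asker's printed result for b matches Cb.T (the adjugate), not Cb.
```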

how to solve a very large overdetermined system of linear equations?

Submitted by 时光毁灭记忆、已成空白 on 2019-12-24 03:19:57
Question: I am doing a project about image processing, and I need to solve the following set of equations:

Nx + Nz*( z(x+1,y) - z(x,y) ) = 0
Ny + Nz*( z(x,y+1) - z(x,y) ) = 0

and equations on the boundary (bottom and right side of the image):

Nx + Nz*( z(x,y) - z(x-1,y) ) = 0
Ny + Nz*( z(x,y) - z(x,y-1) ) = 0

where Nx, Ny, Nz are the components of the surface normal vectors at the corresponding coordinates and are already determined. Now the problem is that since (x,y) are the coordinates on an image, which typically has a size of say x=300 and
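With one pair of equations per pixel, a 300-wide image gives a sparse overdetermined system with tens of thousands of unknowns z(x,y), which should never be formed as a dense matrix; an iterative least-squares solver such as scipy.sparse.linalg.lsqr handles it. A toy sketch on a made-up sparse system (not the asker's normal-vector data):

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import lsqr

rng = np.random.default_rng(0)
m, n = 500, 200                      # more equations than unknowns

# Random sparse coefficient matrix; stacking an identity below it
# guarantees full column rank, so the least-squares solution is unique.
A = sp.vstack([sp.random(m, n, density=0.05, random_state=0),
               sp.eye(n)]).tocsr()
x_true = rng.standard_normal(n)
b = A @ x_true                       # consistent right-hand side

x, istop, itn = lsqr(A, b, atol=1e-12, btol=1e-12)[:3]
print(np.allclose(x, x_true, atol=1e-6))
```

lsqr never forms A^T A explicitly and only needs matrix-vector products, so it scales to the ~10^5-unknown systems described here.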

Pytorch most efficient Jacobian/Hessian calculation

Submitted by 天涯浪子 on 2019-12-23 19:43:21
Question: I am looking for the most efficient way to get the Jacobian of a function through Pytorch and have so far come up with the following solutions:

def func(X):
    return torch.stack(( X.pow(2).sum(1), X.pow(3).sum(1), X.pow(4).sum(1) ), 1)

X = Variable(torch.ones(1, int(1e5)) * 2.00094, requires_grad=True).cuda()

# Solution 1:
t = time()
Y = func(X)
J = torch.zeros(3, int(1e5))
for i in range(3):
    J[i] = grad(Y[0][i], X, create_graph=True, retain_graph=True, allow_unused=True)[0]
print(time() - t)

Output:
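For this particular func the Jacobian is known in closed form (the gradient of sum(x**p) is p * x**(p-1)), which gives a cheap correctness check for whichever autograd strategy ends up fastest; in recent PyTorch versions, torch.autograd.functional.jacobian(func, X) also computes the full Jacobian in one call. A numpy sketch of the closed-form check (outside PyTorch):

```python
import numpy as np

def f(v):
    # Same map as the asker's func: (sum v^2, sum v^3, sum v^4).
    return np.array([(v**2).sum(), (v**3).sum(), (v**4).sum()])

def jacobian_analytic(v):
    # d(sum v^p)/dv_j = p * v_j**(p-1), so the rows are 2v, 3v^2, 4v^3.
    return np.stack([2 * v, 3 * v**2, 4 * v**3])

v = np.array([2.00094, -1.5, 0.3])
J = jacobian_analytic(v)

# Central-difference spot check, column by column.
eps = 1e-6
J_fd = np.empty((3, v.size))
for j in range(v.size):
    e = np.zeros_like(v)
    e[j] = eps
    J_fd[:, j] = (f(v + e) - f(v - e)) / (2 * eps)

print(np.max(np.abs(J - J_fd)))   # small
```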

FMA instruction showing up as three packed double operations?

Submitted by China☆狼群 on 2019-12-23 19:03:34
Question: I'm analyzing a piece of linear algebra code which calls intrinsics directly, e.g.

v_dot0 = _mm256_fmadd_pd( v_x0, v_y0, v_dot0 );

My test script computes the dot product of two double-precision vectors of length 4 (so only one call to _mm256_fmadd_pd is needed), repeated 1 billion times. When I count the number of operations with perf I get something as follows:

Performance counter stats for './main':
0 r5380c7 (skl::FP_ARITH:512B_PACKED_SINGLE) (49.99%)
0 r5340c7 (skl::FP_ARITH:512B

Finding a Quaternion from Gyroscope Data?

Submitted by £可爱£侵袭症+ on 2019-12-23 17:15:08
Question: I've been trying to build a filter that can successfully combine compass, geomagnetic, and gyroscopic data to produce a smooth augmented reality experience. After reading this post along with lots of discussions, I finally found a good algorithm to correct my sensor data. Most examples I've read show how to correct accelerometer data with a gyroscope, but not how to correct compass + accelerometer data with a gyroscope. This is the algorithm I've settled upon, which works great except that I run into
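For the gyroscope side specifically, the usual step is to convert each angular-rate sample into an incremental axis-angle quaternion and compose it with the current orientation. A minimal numpy sketch (assuming a (w, x, y, z) Hamilton convention and body-frame rates; not tied to any particular sensor API):

```python
import numpy as np

def integrate_gyro(q, omega, dt):
    """Advance unit quaternion q = (w, x, y, z) by body angular rate
    omega (rad/s, 3-vector) over dt seconds via an axis-angle step."""
    theta = np.linalg.norm(omega) * dt            # rotation angle this step
    if theta < 1e-12:
        return q
    axis = omega / np.linalg.norm(omega)
    dq = np.concatenate([[np.cos(theta / 2)], np.sin(theta / 2) * axis])
    # Hamilton product q * dq (apply the incremental rotation in body frame).
    w1, x1, y1, z1 = q
    w2, x2, y2, z2 = dq
    out = np.array([
        w1*w2 - x1*x2 - y1*y2 - z1*z2,
        w1*x2 + x1*w2 + y1*z2 - z1*y2,
        w1*y2 - x1*z2 + y1*w2 + z1*x2,
        w1*z2 + x1*y2 - y1*x2 + z1*w2,
    ])
    return out / np.linalg.norm(out)              # re-normalize to fight drift

# Spin about z at 90 deg/s for 1 s in 100 steps -> quaternion for 90 deg about z.
q = np.array([1.0, 0.0, 0.0, 0.0])
omega = np.array([0.0, 0.0, np.deg2rad(90)])
for _ in range(100):
    q = integrate_gyro(q, omega, 0.01)
print(np.round(q, 6))   # ~ [cos 45°, 0, 0, sin 45°]
```

Renormalizing each step keeps the quaternion unit-length despite floating-point drift; the gyro-only estimate still drifts over time, which is exactly what the compass/accelerometer correction is for.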

Perfect (or near) multicollinearity in julia

Submitted by 白昼怎懂夜的黑 on 2019-12-23 13:31:51
Question: Running a simple regression model in Julia in the presence of perfect multicollinearity produces an error. In R, we can run the same model, producing NAs in the estimates of the corresponding covariates, which R reports as "not defined because of singularities"; we can identify those variables using the alias() function. Is there any way I can check for perfect multicollinearity in Julia prior to modeling, in order to drop the collinear variables?

Answer 1: Detecting Perfect Collinearity
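The underlying check is language-agnostic: a design matrix has perfect multicollinearity exactly when its rank is smaller than its number of columns, and the diagonal of an (unpivoted) QR factorization points at the dependent columns. Sketched here in numpy; Julia's rank(X) and qr(X) behave the same way:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 100
x1 = rng.standard_normal(n)
x2 = rng.standard_normal(n)
x3 = 2 * x1 - 3 * x2                 # perfectly collinear with x1 and x2
X = np.column_stack([np.ones(n), x1, x2, x3])

rank = np.linalg.matrix_rank(X)
print(rank, X.shape[1])              # rank 3 < 4 columns -> perfect collinearity

# A column whose R diagonal entry is (near) zero adds no new direction
# beyond the columns before it, so it can be dropped.
_, R = np.linalg.qr(X)
dependent = np.where(np.abs(np.diag(R)) < 1e-8)[0]
print(dependent)                     # the redundant column index, here column 3
```

Note that this flags the *later* column of each dependent group; with a pivoted QR (or by testing columns in a different order) one can choose which member of the group to drop.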

Solve sparse upper triangular system

Submitted by 泪湿孤枕 on 2019-12-23 09:57:59
Question: I'm trying to figure out how to efficiently solve a sparse triangular system, Au*x = b, in scipy sparse. For example, we can construct a sparse upper triangular matrix, Au, and a right-hand side b with:

import scipy.sparse as sp
import scipy.sparse.linalg as sla
import numpy as np

n = 2000
A = sp.rand(n, n, density=0.4) + sp.eye(n)
Au = sp.triu(A).tocsr()
b = np.random.normal(size=(n))

We can get a solution to the problem using spsolve; however, it is clear that the triangular structure is not
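The dedicated routine for this is scipy.sparse.linalg.spsolve_triangular, which performs plain forward/back substitution instead of a general factorization. A sketch on a deliberately well-conditioned upper-triangular matrix (large random triangular systems like the sp.rand construction above can be extremely ill-conditioned, so a known solution with small off-diagonals is used here for checking):

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as sla

n = 2000
rng = np.random.default_rng(0)

# Unit diagonal plus small sparse strictly-upper part: upper triangular
# and numerically tame, so both solvers should recover x_true.
Au = (sp.eye(n)
      + 0.1 * sp.triu(sp.random(n, n, density=0.001, random_state=0), k=1)).tocsr()
x_true = rng.standard_normal(n)
b = Au @ x_true

x_general = sla.spsolve(Au, b)                       # general-purpose solve
x_tri = sla.spsolve_triangular(Au, b, lower=False)   # back-substitution only

print(np.allclose(x_general, x_true), np.allclose(x_tri, x_true))
```

spsolve_triangular expects CSR input and skips the symbolic/numeric factorization entirely, which is where the savings over spsolve come from.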

Fast linear system solver for D? [closed]

Submitted by 流过昼夜 on 2019-12-23 09:18:23
Question: Closed. This question is off-topic. It is not currently accepting answers. Want to improve this question? Update the question so it's on-topic for Stack Overflow. Closed 5 years ago. Where can I get a fast linear system solver written in D? It should be able to take a square matrix A and a vector b and solve the equation Ax = b for x and, ideally, also perform explicit inversion on A. I have one I wrote myself, but it's pretty slow, probably because it's completely cache-naive. However, for

Why is adding two std::vectors slower than raw arrays from new[]?

Submitted by 夙愿已清 on 2019-12-23 08:06:41
Question: I'm looking into OpenMP, partly because my program needs to perform additions of very large vectors (millions of elements). However, I see a quite large difference between using std::vector and a raw array, which I cannot explain. I insist that the difference is only in the loop, not in the initialisation, of course. The difference in time I refer to comes from timing only the addition, specifically without taking into account any initialization difference between vectors, arrays, etc. I'm really talking only about