linear-algebra

Deprojection without Intrinsic Camera Matrix

Submitted by 本小妞迷上赌 on 2020-01-16 08:47:12
Question: I am trying to verify a solution for deprojecting a pixel point (u,v) into a 3D world location (x,y,z) using only the camera's extrinsic rotation and translation in addition to (u,v). The proposed solution: I have modeled the problem in Unreal, where I have a virtual camera at world position (1077,1133,450) with rotation yaw=90, pitch=345, roll=0 degrees. I have an object at known 3D position (923,2500,0), seen by the 1280x720 camera at pixel location (771,426), or frame-center position (131,-66
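One subtlety in the question above: extrinsics alone cannot turn a pixel into a ray; some intrinsic information (a focal length or field of view) is still required. The sketch below is a minimal illustration, not the asker's setup: the function name, the 90° horizontal FOV default (Unreal's default), the camera frame convention (+z forward, +x right, +y down), and the extrinsic convention X_world = R·X_cam + t are all assumptions, and Unreal's own axes (x-forward, z-up, left-handed) would need an extra remap. It back-projects a pixel to a ray and intersects it with the z = 0 ground plane.

```python
import numpy as np

def deproject_to_ground(u, v, width, height, R, t, hfov_deg=90.0):
    """Back-project pixel (u, v) onto the z = 0 ground plane.

    Assumed conventions: camera frame has +z forward, +x right, +y down,
    and the extrinsics map camera to world via X_world = R @ X_cam + t.
    """
    # The horizontal FOV encodes the intrinsic focal length in pixels;
    # extrinsics alone cannot form a viewing ray.
    f = (width / 2.0) / np.tan(np.radians(hfov_deg) / 2.0)
    d_cam = np.array([u - width / 2.0, v - height / 2.0, f])
    d_world = R @ d_cam                    # ray direction in world frame
    o_world = np.asarray(t, dtype=float)   # ray origin = camera center
    # Solve o_z + s * d_z = 0 for the ground-plane intersection.
    s = -o_world[2] / d_world[2]
    return o_world + s * d_world

# Example: a camera 10 units up, looking straight down (R rotates pi
# about x, so camera +z maps to world -z).
R = np.array([[1.0, 0.0, 0.0],
              [0.0, -1.0, 0.0],
              [0.0, 0.0, -1.0]])
t = np.array([0.0, 0.0, 10.0])
ground = deproject_to_ground(640, 360, 1280, 720, R, t)
# The image-center ray lands directly below the camera, at the origin.
```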

Bug in Scikit-Learn PCA or in Numpy Eigen Decomposition?

Submitted by 梦想的初衷 on 2020-01-13 20:39:06
Question: I have a dataset with 400 features. What I did: # approach 1 d_cov = np.cov(d_train.transpose()) eigens, mypca = LA.eig(d_cov) # assume sorted by eigenvalue as well / LA = numpy linear algebra # approach 2 pca = PCA(n_components=300) d_fit = pca.fit_transform(d_train) pc = pca.components_ Now, these two should be the same, right? PCA is just the eigendecomposition of the covariance matrix. But in my case they are very different. How can that be? Am I making a mistake above? Comparing
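The two approaches do agree once three common pitfalls are handled: `np.linalg.eig` does not sort its output by eigenvalue, the symmetric solver `eigh` should be used on a covariance matrix, and eigenvectors are only defined up to sign. A small sketch (toy data, not the asker's 400-feature set) showing the match:

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
d_train = rng.normal(size=(200, 8))

# Approach 1: eigendecompose the covariance matrix. Use eigh, not eig:
# eig ignores symmetry, and its output is NOT sorted by eigenvalue.
d_cov = np.cov(d_train, rowvar=False)        # same as np.cov(d_train.T)
eigvals, eigvecs = np.linalg.eigh(d_cov)     # returned in ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

# Approach 2: scikit-learn PCA (it centers the data internally,
# just as np.cov does).
pca = PCA(n_components=8).fit(d_train)

# Eigenvalues agree; eigenvectors agree up to a per-vector sign flip,
# which is the usual reason the two outputs "look very different".
assert np.allclose(eigvals, pca.explained_variance_)
for v1, v2 in zip(eigvecs.T, pca.components_):
    assert np.isclose(abs(v1 @ v2), 1.0)
```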

3d point closest to multiple lines in 3D space

Submitted by 只愿长相守 on 2020-01-13 13:50:13
Question: I am searching for a non-iterative, closed-form algorithm that finds the least-squares solution for the point closest to a set of 3D lines. It is similar to 3D point triangulation (minimizing re-projections) but seems to be simpler and faster. Lines can be described in any form: two points, a point and a unit direction, or similar. Answer 1: Let the i-th line be given by point a_i and unit direction vector d_i. We need to find the single point that minimizes the sum of squared point-to-line distances. This is where
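The answer's setup leads to a closed form: the squared distance from p to line i is ||(I − d_i d_iᵀ)(p − a_i)||², and setting the gradient of the sum to zero gives the 3×3 linear system Σ(I − d_i d_iᵀ) p = Σ(I − d_i d_iᵀ) a_i. A sketch of that derivation in code (the function name is illustrative):

```python
import numpy as np

def closest_point_to_lines(points, dirs):
    """Least-squares point closest to a set of 3D lines.

    Line i passes through points[i] with direction dirs[i].  Minimizing
    sum_i ||(I - d_i d_i^T)(p - a_i)||^2 yields the normal equations
        sum_i (I - d_i d_i^T) p = sum_i (I - d_i d_i^T) a_i,
    a single 3x3 solve -- no iteration needed.
    """
    A = np.zeros((3, 3))
    b = np.zeros(3)
    for a, d in zip(np.asarray(points, float), np.asarray(dirs, float)):
        d = d / np.linalg.norm(d)          # accept non-unit directions
        M = np.eye(3) - np.outer(d, d)     # projector onto d's orthogonal
        A += M
        b += M @ a
    return np.linalg.solve(A, b)

# Three axis-aligned lines all passing through (1, 2, 3):
pts = np.array([[1.0, 2.0, 3.0]] * 3)
ds = np.eye(3)
# Their common point is the exact minimizer.
```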

scipy.linalg.eig return complex eigenvalues for covariance matrix?

Submitted by て烟熏妆下的殇ゞ on 2020-01-13 07:51:31
Question: The eigenvalues of a covariance matrix should be real and non-negative, because covariance matrices are symmetric and positive semi-definite. However, take a look at the following experiment with scipy: >>> a=np.random.random(5) >>> b=np.random.random(5) >>> ab = np.vstack((a,b)).T >>> C=np.cov(ab) >>> eig(C) 7.90174997e-01 +0.00000000e+00j, 2.38344473e-17 +6.15983679e-17j, 2.38344473e-17 -6.15983679e-17j, -1.76100435e-17 +0.00000000e+00j, 5.42658040e-33 +0.00000000e+00j However, reproducing
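The tiny complex and negative values in the output are floating-point noise on eigenvalues that are exactly zero in theory: this covariance matrix is built from only two observations, so it has rank 1. The general-purpose `eig` does not exploit symmetry; the symmetric solver `eigh` returns real eigenvalues by construction. A reproduction sketch:

```python
import numpy as np
from scipy.linalg import eig, eigh

rng = np.random.default_rng(0)
a = rng.random(5)
b = rng.random(5)
C = np.cov(np.vstack((a, b)).T)   # 5x5 covariance from only 2 samples

# C has rank 1, so four eigenvalues are exactly zero in theory.
# eig treats C as a general matrix; rounding noise turns the near-zero
# eigenvalues into tiny complex (or slightly negative) values.
w_general = eig(C, right=False)
print(w_general.dtype)            # complex128

# eigh is the symmetric solver: its eigenvalues are real by construction.
w_sym = eigh(C, eigvals_only=True)
print(w_sym.dtype)                # float64
```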

Quaternion is flipping sign for very similar rotations?

Submitted by 南楼画角 on 2020-01-12 10:16:34
Question: Consider the following minimal working example: #include <iostream> #include <math.h> #include <eigen3/Eigen/Dense> int main() { // Set the rotation matrices that give an example of the problem Eigen::Matrix3d rotation_matrix_1, rotation_matrix_2; rotation_matrix_1 << 0.15240781108708346, -0.98618841818279246, -0.064840288106743013, -0.98826031445019891, -0.1527775600229907, 0.00075368177315370682, -0.0106494132438156, 0.063964216524108775, -0.99789536976680049; rotation_matrix_2 << -0
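The behavior in the question is expected: q and −q encode the same rotation, so a matrix-to-quaternion converter is free to return either sign, and two nearly identical matrices can land on opposite signs. The standard fix is to sign-align each new quaternion against the previous one. A sketch using scipy rather than the question's Eigen (the helper name is illustrative):

```python
import numpy as np
from scipy.spatial.transform import Rotation

def align_quat(q_prev, q_new):
    """q and -q encode the same rotation; keep q_new in the same
    hemisphere as q_prev so interpolation/filtering stays continuous."""
    return -q_new if np.dot(q_prev, q_new) < 0 else q_new

# q and -q produce identical rotation matrices:
q = Rotation.from_euler("z", 90, degrees=True).as_quat()   # [x, y, z, w]
assert np.allclose(Rotation.from_quat(q).as_matrix(),
                   Rotation.from_quat(-q).as_matrix())

# So if a converter hands back the flipped sign, aligning against the
# previous sample removes the apparent jump:
q_fixed = align_quat(q, -q)
assert np.allclose(q_fixed, q)
```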

Fastest way to solve least square for overdetermined system

Submitted by ◇◆丶佛笑我妖孽 on 2020-01-12 07:49:48
Question: I have a matrix A of size m*n (m of order ~100K and n ~500) and a vector b. My matrix is also ill-conditioned and rank-deficient. Now I want to find the least-squares solution to Ax = b, and to this end I have compared some of the methods: scipy.linalg.lstsq (time/residual): 14s, 626.982 scipy.sparse.linalg.lsmr (time/residual): 4.5s, 626.982 (same accuracy) Now I have observed that when I don't have the rank-deficient case, forming the normal equation and solving it using Cholesky
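The trade-off behind the timings: the normal equations with Cholesky are the fastest route but square the condition number and break down exactly when A is rank-deficient, whereas SVD-based `lstsq` and the iterative `lsmr` both stay well-behaved. A scaled-down sketch (the sizes and the duplicated-column trick are illustrative, not the asker's data):

```python
import numpy as np
from scipy.linalg import lstsq
from scipy.sparse.linalg import lsmr

rng = np.random.default_rng(0)
m, n = 2000, 50                   # scaled-down stand-in for ~100K x 500
A = rng.normal(size=(m, n))
A[:, -1] = A[:, 0]                # duplicate a column: rank-deficient
b = rng.normal(size=m)

x_svd, *_ = lstsq(A, b)           # SVD-based: robust, O(m n^2)
x_it = lsmr(A, b, atol=1e-10, btol=1e-10, maxiter=500)[0]  # iterative

r_svd = np.linalg.norm(A @ x_svd - b)
r_it = np.linalg.norm(A @ x_it - b)
# With rank deficiency the minimizer x is not unique, but the minimal
# residual is -- and both methods reach it.
assert abs(r_svd - r_it) < 1e-3 * r_svd
```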

Constrained linear least-squares for xA=b in matlab

Submitted by 本秂侑毒 on 2020-01-11 13:39:09
Question: I want to solve xA=b with the constraint 0<=x for x. I found functions like lsqnonneg and lsqlin, which solve Ax=b. However, I couldn't find a good way to solve xA=b. How can I solve xA=b with a non-negative constraint on x? Answer 1: As David commented, it is straightforward to show that xA = b is equivalent to A'x' = b', so you can use standard methods to solve the problem with A' and b' and then transpose the answer. Source: https://stackoverflow.com/questions/26600699/constrained-linear-least-squares-for-xa-b-in-matlab
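The same transpose trick in Python rather than the question's MATLAB: transposing both sides of xA = b gives Aᵀxᵀ = bᵀ, which a standard non-negative least-squares routine handles directly (the data here is made up for illustration):

```python
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
A = rng.random((3, 5))                 # x is 1x3, b is 1x5
x_true = np.array([0.5, 0.0, 2.0])     # a known non-negative solution
b = x_true @ A

# x A = b  is equivalent to  A.T @ x.T = b.T, so solve the transposed
# system with non-negative least squares and read x off directly.
x, residual = nnls(A.T, b)
assert np.allclose(x, x_true, atol=1e-8)
assert residual < 1e-8
```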

Clustering: Cluster validation

Submitted by 江枫思渺然 on 2020-01-11 12:57:11
Question: I want to use a clustering method on a large social-network dataset. The problem is how to evaluate the clustering method. Yes, I can use external, internal, and relative cluster-validation methods. I used normalized mutual information (NMI) as an external validation method for cluster validation based on synthetic data. I produced a synthetic dataset with 5 clusters of equal node count, with some strongly connected links inside each cluster and weak links between clusters
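NMI compares a predicted partition against the planted ground truth while ignoring how the cluster labels are numbered, which is exactly what external validation on synthetic data needs. A minimal sketch (six toy nodes, not the asker's network):

```python
from sklearn.metrics import normalized_mutual_info_score

# Ground-truth community labels for six nodes of a synthetic network,
# compared against two candidate clusterings.
truth = [0, 0, 0, 1, 1, 1]
relabeled = [1, 1, 1, 0, 0, 0]    # same partition, labels permuted
shuffled = [0, 1, 0, 1, 0, 1]     # unrelated partition

nmi_good = normalized_mutual_info_score(truth, relabeled)
nmi_bad = normalized_mutual_info_score(truth, shuffled)
print(nmi_good)   # 1.0 -- NMI is invariant to label permutation
print(nmi_bad)    # near 0 for an unrelated partition
```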

Finite Field Linear Algebra Library for Haskell

Submitted by 旧城冷巷雨未停 on 2020-01-11 04:55:11
Question: I'm searching for a finite-field linear-algebra library for Haskell. Something like FFLAS-FFPACK for Haskell would be great :-). Of course, I checked hmatrix; there seems to be some support for arbitrary matrix element types, but I couldn't find any finite-field library that works with hmatrix. And surely I'd appreciate a performant solution :-) In particular, I want to be able to multiply 𝔽_p n×1 and 𝔽_p 1×m matrices (vectors) into 𝔽_p n×m matrices. Answer 1: Your best bet would be a binding to
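The operation the question asks for is small enough to make concrete. The sketch below uses Python/numpy purely to illustrate the algebra (the question's target is Haskell): an n×1 by 1×m product over 𝔽_p is an outer product followed by reduction mod p.

```python
import numpy as np

p = 7                                  # the prime modulus of F_p

u = np.array([[1], [3], [5]]) % p      # n x 1 column over F_p
v = np.array([[2, 4, 6, 1]]) % p       # 1 x m row over F_p

# An n x 1 times 1 x m product is an outer product; reducing mod p
# afterwards is safe here because the intermediate entries stay small
# (a serious library reduces during accumulation to avoid overflow).
M = (u @ v) % p
print(M.shape)        # (3, 4)
print(M[2, 2])        # (5 * 6) % 7 = 2
```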