svd

R - svd() function - infinite or missing values in 'x'

Submitted by 我们两清 on 2019-11-28 00:25:52
I am constantly getting this error. I am sure the matrix does not have any non-numeric entries. I also tried imputing the matrix, but that did not work. Does anyone know what might be causing the error?

```r
fileUrl <- "https://dl.dropboxusercontent.com/u/76668273/kdd.csv"
download.file(fileUrl, destfile = "./kdd.csv", method = "curl")
kddtrain <- read.csv("kdd.csv")
kddnumeric <- kddtrain[, sapply(kddtrain, is.numeric)]
kddmatrix <- as.matrix(kddnumeric)
svd1 <- svd(scale(kddmatrix))
```

Answer: You have columns composed entirely of zeroes. Calling scale() on an all-zero column divides by a standard deviation of zero and returns a column of NaN, which is what triggers the "infinite or missing values in 'x'" error. To solve this, remove the all-zero columns before scaling.
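The failure mode is easy to reproduce outside R. A minimal NumPy sketch (a stand-in for the R code above, with a made-up 3x3 matrix) showing that a zero-variance column breaks standardization, and that dropping such columns first fixes it:

```python
import numpy as np

X = np.array([[1.0, 0.0, 3.0],
              [2.0, 0.0, 1.0],
              [4.0, 0.0, 2.0]])

# Standardizing a constant (all-zero) column divides by a zero
# standard deviation and yields NaN -- the analogue of R's
# "infinite or missing values in 'x'" failure inside svd(scale(x)).
std = X.std(axis=0)

# Keep only columns with nonzero variance before scaling.
keep = std > 0
X_clean = X[:, keep]
X_scaled = (X_clean - X_clean.mean(axis=0)) / X_clean.std(axis=0)

U, s, Vt = np.linalg.svd(X_scaled, full_matrices=False)
print(np.isfinite(X_scaled).all())  # the SVD now succeeds
```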

A classified summary of digital watermarking techniques that resist geometric attacks

Submitted by 浪子不回头ぞ on 2019-11-27 23:24:33
1. Exploiting transform-invariant properties of the matrix. The classic example is SVD (singular value decomposition), whose factors can be transformed back and forth; watermarks embedded this way can withstand geometric attacks. Early SVD watermarking algorithms, however, suffered from a false-alarm (false-positive) problem.
2. Spread-spectrum methods. This is the classic implementation approach: it models watermark embedding as transmission over a communication channel and requires DSP techniques, which I lack, so I will not go into detail.
3. Synchronization detection. Canny edge detection is the classic method here; for rotation attacks in particular, detecting corner points lets you compute the rotation angle and correct for it.
4. Feature-point extraction. The classic algorithm is SIFT, but its time complexity is high; SURF inherits the idea and improves the running time. Combined with improvements from other mathematical tools, more feature points can be extracted from an image. I am currently researching this direction and hope to publish a good paper; wish me luck.

Source: CSDN. Author: 杨宝涛. Link: https://blog.csdn.net/zhongriqianqian2076/article/details/51996664
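The SVD-based embedding idea in item 1 can be illustrated with a tiny sketch. This is not the author's algorithm, just a generic, non-blind singular-value embedding scheme on made-up data, with an assumed strength parameter `alpha`:

```python
import numpy as np

rng = np.random.default_rng(0)
cover = rng.random((8, 8))      # stand-in for an image block
watermark = rng.random((8, 8))  # stand-in watermark
alpha = 0.05                    # embedding strength (assumed value)

# Embed: perturb the cover's singular values with the watermark's.
U, s, Vt = np.linalg.svd(cover)
sw = np.linalg.svd(watermark, compute_uv=False)
marked = U @ np.diag(s + alpha * sw) @ Vt

# Extract (non-blind: requires the original singular values s).
s_rec = np.linalg.svd(marked, compute_uv=False)
sw_rec = (s_rec - s) / alpha
print(np.allclose(sw_rec, sw))
```

Because the singular values change only slightly, the marked block stays visually close to the cover, which is what gives the scheme its robustness to geometric distortions.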

SVD for sparse matrix in R

Submitted by 社会主义新天地 on 2019-11-27 22:22:29
I've got a sparse Matrix in R that's apparently too big for me to run as.matrix() on (though it's not super-huge either). The as.matrix() call in question is inside the svd() function, so I'm wondering if anyone knows of a different implementation of SVD that doesn't require first converting to a dense matrix.

Answer: The irlba package has a very fast SVD implementation for sparse matrices. You can also do a very impressive bit of sparse SVD in R using random projection, as described in http://arxiv.org/abs/0909.4061. Here is some sample code:

```r
# computes first k singular values of A with corresponding singular
```

Parallel implementation for multiple SVDs using CUDA

Submitted by 那年仲夏 on 2019-11-27 21:38:54
I'm new to parallel programming on the GPU, so I apologize if the question is broad or vague. I'm aware there are some parallel SVD functions in the CULA library, but what should the strategy be if I have a large number of relatively small matrices to factorize? For example, I have n matrices of dimension d, where n is large and d is small. How can I parallelize this process? Could anyone give me a hint?

Answer: My previous answer is now out of date. As of February 2015, CUDA 7 (currently in release candidate version) offers full SVD capabilities in its cuSOLVER library. Below, I'm providing an example of
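To show the batched shape of the problem (not the cuSOLVER API itself), here is a CPU sketch: NumPy's SVD broadcasts over a leading batch axis, which mirrors what a batched GPU kernel does for many small matrices at once:

```python
import numpy as np

n, d = 10000, 4                      # n matrices, each d x d (n large, d small)
rng = np.random.default_rng(2)
batch = rng.standard_normal((n, d, d))

# One call factorizes all n matrices; on the GPU this role is played
# by a batched routine (e.g. a cuSOLVER batched SVD kernel).
U, S, Vt = np.linalg.svd(batch)
print(U.shape, S.shape, Vt.shape)    # (n, d, d) (n, d) (n, d, d)

# Verify one reconstruction.
A0 = U[0] @ np.diag(S[0]) @ Vt[0]
print(np.allclose(A0, batch[0]))
```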

SVD computing different result in Matlab and OpenCV

Submitted by 家住魔仙堡 on 2019-11-27 16:03:18
Question: I wonder why there is a sign difference in the result of SVD computation between Matlab and OpenCV. I input the same matrix:

```
      3.65E+06  -2.09E+06  0
YY = -2.09E+06   2.45E+06  0
      0          0         0
```

Matlab:

```matlab
[U,S,V] = svd(YY);

    -0.798728902689475  0.601691066917623  0
V =  0.601691066917623  0.798728902689475  0
     0                  0                  1
```

OpenCV:

```cpp
cv::SVD::compute(YY, S, U, V);

     0.798839  -0.601544  0
V =  0.601544   0.798839  0
     0          0         1
```

I know that they use the same algorithm, so why is there a sign difference? Thanks.

Answer 1: Which version of OpenCV are you using? From http:/
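The sign discrepancy is not a bug in either library: singular vectors are only determined up to sign, since flipping matched columns of U and rows of V' leaves U S V' unchanged. A NumPy sketch using the question's matrix:

```python
import numpy as np

A = np.array([[ 3.65e6, -2.09e6, 0.0],
              [-2.09e6,  2.45e6, 0.0],
              [ 0.0,     0.0,    0.0]])

U, s, Vt = np.linalg.svd(A)

# Flip the sign of one column of U together with the matching row of Vt:
# the factorization is equally valid, which is why two libraries can
# legitimately disagree on signs.
U2, Vt2 = U.copy(), Vt.copy()
U2[:, 0] *= -1
Vt2[0, :] *= -1

print(np.allclose(U @ np.diag(s) @ Vt, A))    # original factorization
print(np.allclose(U2 @ np.diag(s) @ Vt2, A))  # sign-flipped, still exact
```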

How do we decide the number of dimensions for Latent semantic analysis ?

Submitted by 牧云@^-^@ on 2019-11-27 14:06:20
Question: I have been working on latent semantic analysis lately. I have implemented it in Java using the Jama package. Here is the code:

```java
Matrix vtranspose;
a = new Matrix(termdoc);
termdoc = a.getArray();
a = a.transpose();
SingularValueDecomposition sv = new SingularValueDecomposition(a);
u = sv.getU();
v = sv.getV();
s = sv.getS();
vtranspose = v.transpose(); // we obtain this as a result of svd
uarray = u.getArray();
sarray = s.getArray();
varray = vtranspose.getArray();
if(semantics
```
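The same LSA flow, sketched in NumPy rather than Jama (the tiny term-document matrix is made up): decompose, then keep only the k strongest "concepts" to form the rank-k approximation:

```python
import numpy as np

# Toy term-document matrix (rows = terms, columns = documents).
termdoc = np.array([[1., 0., 1., 0.],
                    [1., 1., 0., 0.],
                    [0., 1., 0., 1.],
                    [0., 0., 1., 1.]])

U, s, Vt = np.linalg.svd(termdoc, full_matrices=False)

# Rank-k LSA approximation: truncate U, s, and Vt to k dimensions.
k = 2
approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Document vectors in the k-dimensional latent space.
docs_k = np.diag(s[:k]) @ Vt[:k, :]
print(approx.shape, docs_k.shape)
```

Larger k reproduces the original matrix more faithfully; the choice of k is exactly the question the title asks, and a common heuristic is discussed in the "How many principal components to take?" entry below.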

Singular Value Decomposition (SVD) in PHP

Submitted by 人盡茶涼 on 2019-11-27 14:04:38
Question: I would like to implement singular value decomposition (SVD) in PHP. I know there are several external libraries that could do this for me, but I have two questions about doing it in PHP itself:

1) Do you think it's possible and/or reasonable to code the SVD in PHP?
2) If (1) is yes: can you help me code it in PHP?

I've already coded some parts of the SVD myself. Here's the code, in which I've added comments describing the course of action. Some parts of this code aren't completely correct. It would be
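Hand-rolling an SVD is feasible in any language with loops and arrays. One approach that ports well to PHP is the one-sided Jacobi method, since it needs only dot products and 2x2 column rotations. A sketch in Python (not the asker's code; the structure, not the library calls, is what would carry over):

```python
import numpy as np

def jacobi_svd(A, sweeps=30, eps=1e-12):
    """One-sided Jacobi SVD: repeatedly rotate pairs of columns of A
    until all pairs are orthogonal; column norms are then the singular
    values. Only loops and dot products -- easy to port to PHP."""
    A = np.array(A, dtype=float)
    m, n = A.shape
    V = np.eye(n)
    for _ in range(sweeps):
        off = 0.0
        for i in range(n - 1):
            for j in range(i + 1, n):
                alpha = A[:, i] @ A[:, i]
                beta = A[:, j] @ A[:, j]
                gamma = A[:, i] @ A[:, j]
                off = max(off, abs(gamma))
                if abs(gamma) < eps:
                    continue
                # 2x2 rotation that orthogonalizes columns i and j.
                zeta = (beta - alpha) / (2.0 * gamma)
                sgn = 1.0 if zeta >= 0 else -1.0
                t = sgn / (abs(zeta) + np.hypot(1.0, zeta))
                c = 1.0 / np.hypot(1.0, t)
                s = c * t
                R = np.array([[c, s], [-s, c]])
                A[:, [i, j]] = A[:, [i, j]] @ R
                V[:, [i, j]] = V[:, [i, j]] @ R
        if off < eps:
            break
    sigma = np.linalg.norm(A, axis=0)
    U = A / np.where(sigma > 0, sigma, 1.0)
    return U, sigma, V

rng = np.random.default_rng(6)
M = rng.standard_normal((5, 3))
U, sigma, V = jacobi_svd(M)
print(np.allclose((U * sigma) @ V.T, M))
```

The singular values come out unsorted; a final sort of `sigma` with matching column permutations of `U` and `V` gives the conventional descending order.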

How many principal components to take?

Submitted by …衆ロ難τιáo~ on 2019-11-27 10:18:21
Question: I know that principal component analysis performs an SVD on a matrix and then generates an eigenvalue matrix. To select the principal components, we take only the first few eigenvalues. Now, how do we decide how many eigenvalues we should take from the eigenvalue matrix?

Answer 1: To decide how many eigenvalues/eigenvectors to keep, you should consider your reason for doing PCA in the first place. Are you doing it to reduce storage requirements, to reduce dimensionality for a
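One common concrete rule (a heuristic, not the only answer): keep the smallest k whose cumulative explained variance passes a threshold such as 95%. The squared singular values of the centered data are proportional to the covariance eigenvalues, so the rule can be sketched as:

```python
import numpy as np

rng = np.random.default_rng(3)
X = rng.standard_normal((200, 10)) @ rng.standard_normal((10, 10))
Xc = X - X.mean(axis=0)              # center before PCA

s = np.linalg.svd(Xc, compute_uv=False)
var = s**2 / (len(Xc) - 1)           # eigenvalues of the covariance matrix
explained = np.cumsum(var) / var.sum()

# Smallest k whose cumulative explained variance reaches 95%.
k = int(np.searchsorted(explained, 0.95)) + 1
print(k, explained[k - 1] >= 0.95)
```

The 0.95 threshold is an assumed convention; storage-driven or visualization-driven uses of PCA would pick k differently, as the answer above notes.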

MATLAB eig returns inverted signs sometimes

Submitted by 杀马特。学长 韩版系。学妹 on 2019-11-27 04:55:26
I'm trying to write a program that takes a matrix A of any size and computes its SVD:

A = U * S * V'

where A is the matrix the user enters, U is an orthogonal matrix composed of the eigenvectors of A * A', S is a diagonal matrix of the singular values, and V is an orthogonal matrix of the eigenvectors of A' * A. The problem is: the MATLAB function eig sometimes returns the wrong eigenvectors. This is my code:

```matlab
function [U,S,V] = badsvd(A)
W = A*A';
[U,S] = eig(W);
max = 0;
for i = 1:size(W,1)  %% sort
    for j = i:size(W,1)
        if (S(j,j) > max)
            max = S(j,j);
            temp_index = j;
        end
    end
    max = 0;
    temp = S(temp_index,temp_index);
    S
```
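The eigenvectors are not "wrong": eig fixes each eigenvector only up to sign, so computing U and V from two independent eigendecompositions can leave their signs inconsistent. One standard fix is to take V from the eigendecomposition of A'A and then derive U as A V / sigma, which forces consistent signs. A NumPy sketch of that fix (not the asker's MATLAB code), valid when all singular values are nonzero:

```python
import numpy as np

rng = np.random.default_rng(4)
A = rng.standard_normal((4, 3))

# V from eig(A'A); each eigenvector is fixed only up to sign.
w, V = np.linalg.eigh(A.T @ A)
order = np.argsort(w)[::-1]          # eigh returns ascending order
w, V = w[order], V[:, order]
sigma = np.sqrt(np.clip(w, 0.0, None))

# Derive U from A and V instead of a second eig(A A') call, so the
# signs of U and V stay consistent and U S V' reconstructs A.
U = A @ V / sigma                    # requires all sigma > 0

print(np.allclose(U @ np.diag(sigma) @ V.T, A))
```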

Compute projection / hat matrix via QR factorization, SVD (and Cholesky factorization?)

Submitted by 若如初见. on 2019-11-27 02:48:49
Question: I'm trying to calculate in R the projection matrix P of an arbitrary N x J matrix S:

P = S (S'S)^-1 S'

I've been trying to perform this with the following function:

```r
P <- function(S) {
  output <- S %*% solve(t(S) %*% S) %*% t(S)
  return(output)
}
```

But when I use it I get errors that look like this:

```r
# Error in solve.default(t(S) %*% S, t(S), tol = 1e-07) :
#   system is computationally singular: reciprocal condition number = 2.26005e-28
```

I think that this is a result of numerical underflow and/or
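The QR route the title mentions avoids forming S'S (whose condition number is the square of S's): with a thin QR factorization S = QR, the hat matrix is simply P = Q Q'. A NumPy sketch of this (a stand-in for the R version, on made-up data):

```python
import numpy as np

def hat_qr(S):
    # Thin QR: S = Q R with Q having orthonormal columns, so
    # P = S (S'S)^{-1} S' = Q Q' -- no explicit, ill-conditioned inverse.
    Q, _ = np.linalg.qr(S)
    return Q @ Q.T

rng = np.random.default_rng(5)
S = rng.standard_normal((20, 3))
P = hat_qr(S)

print(np.allclose(P @ P, P))      # idempotent
print(np.allclose(P, P.T))        # symmetric
print(np.allclose(P @ S, S))      # projects the column space onto itself
```

An SVD gives the same projector via P = U U' with U the left singular vectors of S, at somewhat higher cost but with graceful handling of rank deficiency.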