SVD

SVD speed on CPU and GPU

試著忘記壹切 submitted on 2019-11-30 15:09:38
Question: I'm testing svd in MATLAB R2014a and it seems that there is no CPU vs GPU speedup. I'm using a GTX 460 card and a Core 2 Duo E8500. Here is my code:

%test SVD
n=10000;
%host
Mh= rand(n,1000);
tic
%[Uh,Sh,Vh]= svd(Mh);
svd(Mh);
toc
%device
Md = gpuArray.rand(n,1000);
tic
%[Ud,Sd,Vd]= svd(Md);
svd(Md);
toc

Also, the run times differ from run to run, but the CPU and GPU versions take about the same time. Why is there no speedup? Here are some tests:

for i=1:10
clear;
m= 10000; n= 100;
%host
Mh=

Notes on SVD (Singular Value Decomposition)

对着背影说爱祢 submitted on 2019-11-30 11:59:57
Reposted from https://www.cnblogs.com/endlesscoding/p/10033527.html

Singular value decomposition is widely used in dimensionality reduction. This post briefly summarizes its theory, works through an image-compression example, and closes with a short analysis, which I hope will be helpful.

1. Eigenvalue decomposition (EVD)

Real symmetric matrices. Before looking at singular value decomposition, let's first recall eigenvalue decomposition. If A is an m×m real symmetric matrix (i.e. A = A^T), it can be factored as

A = QΣQ^T

where Q is an orthonormal matrix (Q^T Q = I) and Σ is a diagonal matrix; all of the matrices above are m×m. The diagonal entries λi are called eigenvalues, and the columns qi of Q (the eigenvector matrix) are called eigenvectors.

General matrices. The eigenvalue decomposition above places strong demands on the matrix: A must be real and symmetric. In practice, the matrices we encounter are generally not real symmetric. So given a general m×n matrix A, can it still be factored into a form like the one above? It can, and that is what we discuss next.

2. Singular value decomposition (SVD)

2.1 Definition. Given an m×n real matrix A, we want to factor it as

A = UΣV^T

where U and V are both orthonormal matrices, i.e. U^T U = I and V^T V = I. U is called the left singular matrix and V the right singular matrix. Σ has nonzero entries only on its main diagonal (these are called the singular values); every other entry is 0. The dimensions of the three factors are m×m, m×n, and n×n respectively. In general, Σ is the m×n matrix whose leading diagonal entries are σ1 ≥ σ2 ≥ … ≥ 0 and whose remaining entries are zero. The SVD can be pictured with a block diagram (the figure is not included in this excerpt), in which the color of each block indicates the magnitude of the values
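To make the definition concrete, here is a minimal NumPy sketch (my addition, not part of the original post) that computes the three factors and checks the reconstruction:

import numpy as np

A = np.random.rand(5, 3)                 # a general (non-symmetric) real matrix

# full_matrices=True returns U (5x5), s (3,), Vt (3x3)
U, s, Vt = np.linalg.svd(A, full_matrices=True)

# Rebuild the m x n Sigma with the singular values on the main diagonal
Sigma = np.zeros(A.shape)
np.fill_diagonal(Sigma, s)

print(np.allclose(A, U @ Sigma @ Vt))    # True, up to floating-point error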

Using SVD to compress an image in MATLAB

匆匆过客 submitted on 2019-11-30 10:48:50
Question: I am brand new to MATLAB but am trying to write some image-compression code for grayscale images.

Questions: How can I use SVD to trim off small singular values and reconstruct a compressed image?

Work/attempts so far: My code so far is:

B=imread('images1.jpeg');
B=rgb2gray(B);
doubleB=double(B);
%read the image and store it as matrix B, convert the image to grayscale, and convert the matrix to class 'double' for values 0-255
[U,S,V]=svd(doubleB);

This allows me to successfully decompose
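The truncation step being asked about looks like this in NumPy (a sketch of the standard rank-k approximation, not code from the thread, which uses MATLAB's svd instead):

import numpy as np

def compress(doubleB, k):
    # Keep only the k largest singular values and their vectors;
    # storage drops from m*n values to k*(m + n + 1).
    U, s, Vt = np.linalg.svd(doubleB, full_matrices=False)
    return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]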

LAPACK SVD (Singular Value Decomposition)

廉价感情. submitted on 2019-11-30 09:48:04
Do you know any example of using LAPACK to calculate an SVD?

The routine dgesdd computes the SVD of a double-precision matrix. Do you just need an example of how to use it? Have you tried reading the documentation?

An example using the C LAPACK bindings (note that I wrote this just now and haven't actually tested it; also note that the exact argument types for clapack vary somewhat between platforms, so you may need to change int to something else):

#include <clapack.h>

void SingularValueDecomposition(int m,   // number of rows in matrix
                                int n,   // number of columns in matrix
                                int lda, // leading
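Since most of the other snippets on this page are Python, here is a sketch of reaching the same dgesdd routine through SciPy's low-level LAPACK bindings (my addition; the C example above stands on its own):

import numpy as np
from scipy.linalg import lapack

a = np.random.rand(4, 3)
# dgesdd is the divide-and-conquer SVD driver mentioned in the answer
u, s, vt, info = lapack.dgesdd(a)
assert info == 0   # nonzero info signals a bad argument or non-convergence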

Fit points to a plane algorithms, how to interpret results?

心不动则不痛 submitted on 2019-11-30 09:06:48
Update: I have modified the Optimize, Eigen, and Solve methods to reflect changes. All now return the "same" vector, up to machine precision. I am still stumped by the Eigen method, specifically how/why I select a slice of the eigenvector; it does not make sense. It was just trial and error until the normal matched the other solutions. If anyone can correct/explain what I really should do, or why what I have done works, I would appreciate it. Thanks to Alexander Kramer for explaining why I take a slice; I am only allowed to select one correct answer.

I have a depth image. I want to calculate a crude
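For context, the standard SVD recipe for this problem looks as follows (a sketch, not the asker's code, assuming points is an N×3 array of 3-D samples); it also explains the "slice" in question:

import numpy as np

def fit_plane(points):
    centroid = points.mean(axis=0)
    # SVD of the centered points: the right singular vector with the
    # smallest singular value (the last row of Vt) is the direction of
    # least variance, i.e. the plane normal. Selecting that row is the
    # eigenvector "slice" the question is puzzling over.
    _, _, Vt = np.linalg.svd(points - centroid)
    normal = Vt[-1]
    return centroid, normal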

Sparse matrix SVD in Python

谁说我不能喝 submitted on 2019-11-30 05:42:41
Does anyone know how to perform an SVD operation on a sparse matrix in Python? It seems that there is no such functionality provided in scipy.sparse.linalg.

You can use the Divisi library to accomplish this; from the home page: it is a library written in Python, using a C library (SVDLIBC) to perform the sparse SVD operation using the Lanczos algorithm. Other mathematical computations are performed by NumPy.

Sounds like sparsesvd is what you're looking for! SVDLIBC efficiently wrapped in Python (no extra data copies made in RAM). Simply run "easy_install sparsesvd" to install.

You can try scipy
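The cut-off answer is presumably pointing at scipy.sparse.linalg.svds, which current SciPy versions do provide (a minimal sketch, my addition):

import numpy as np
from scipy.sparse import random as sparse_random
from scipy.sparse.linalg import svds

A = sparse_random(1000, 500, density=0.01, format='csr')
# Compute only the k largest singular triplets; k must be < min(A.shape)
u, s, vt = svds(A, k=6)
print(s)   # the six largest singular values, in ascending order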

Performing PCA on a large sparse matrix using sklearn

微笑、不失礼 submitted on 2019-11-30 03:40:49
I am trying to apply PCA to a huge sparse matrix. The following link says that RandomizedPCA in sklearn can handle a sparse matrix in SciPy's sparse format: Apply PCA on very large sparse matrix. However, I always get an error. Can someone point out what I am doing wrong?

The input matrix X_train contains numbers in float64:

>>> type(X_train)
<class 'scipy.sparse.csr.csr_matrix'>
>>> X_train.shape
(2365436, 1617899)
>>> X_train.ndim
2
>>> X_train[0]
<1x1617899 sparse matrix of type '<type 'numpy.float64'>'
    with 81 stored elements in Compressed Sparse Row format>

I am trying to do:

>>> from sklearn
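For what it's worth, the usual route here (an assumption on my part, since the excerpt is cut off) is sklearn's TruncatedSVD, which accepts scipy sparse input directly and never densifies the matrix:

from scipy.sparse import random as sparse_random
from sklearn.decomposition import TruncatedSVD

X = sparse_random(10000, 5000, density=0.001, format='csr')
svd = TruncatedSVD(n_components=100)   # PCA-like reduction without centering
X_reduced = svd.fit_transform(X)       # dense (10000, 100) result
print(svd.explained_variance_ratio_.sum())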

Importance of PCA or SVD in machine learning

耗尽温柔 submitted on 2019-11-29 19:33:41
All this time (especially in the Netflix contest), I keep coming across blogs (or leaderboard forums) mentioning how applying a simple SVD step to the data helped them reduce sparsity, or in general improved the performance of the algorithm at hand. I have been trying to work it out for a long time, but I cannot guess why this is so. In general, the data I get is very noisy (which is also the fun part of big data), and I do know some basic feature-scaling techniques like log transformation and mean normalization. But how does something like SVD help? So let's say I have

Calculating the null space of a matrix

Deadly submitted on 2019-11-29 13:26:59
Question: I'm attempting to solve a set of equations of the form Ax = 0. A is a known 6×6 matrix and I've written the code below, which uses SVD to get the vector x; it works to a certain extent. The answer is approximately correct but not good enough to be useful to me. How can I improve the precision of the calculation? Lowering eps below 1.e-4 causes the function to fail.

from numpy.linalg import *
from numpy import *

A = matrix([[0.624010149127497 ,0.020915658603923 ,0.838082638087629 ,62.0778180312547 ,
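A common SVD-based recipe (a sketch, my addition, not the asker's code) is to compare singular values against a tolerance relative to the largest one, rather than a fixed eps, and take the corresponding right singular vectors:

import numpy as np

def null_space(A, rtol=1e-12):
    U, s, Vt = np.linalg.svd(A)
    # Rank = number of singular values significant relative to the
    # largest one; the remaining rows of Vt span the null space.
    rank = int((s > rtol * s[0]).sum())
    return Vt[rank:].T   # columns form an orthonormal null-space basis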

LinAlgError: SVD did not converge

有些话、适合烂在心里 submitted on 2019-11-29 13:24:31
Problem description: while searching for the minimum BIC with

pmax: 8
qmax: 8

the p and q values with the smallest BIC came out as 27 and 0, even larger than pmax, and the code raised "LinAlgError: SVD did not converge".

Solution:
1. Online sources suggested the usual cause is missing/empty values, which I ruled out as a generic explanation, so I started looking for a bug in my own code.
2. Stepping through and printing bic_matrix, I found it was not an 8×8 matrix but an 81×9 one. A careful review of the code showed that "bic_matrix.append(temp)" was missing one level of indentation.

Source: https://blog.csdn.net/u011208984/article/details/100822241
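The post does not show the loop itself, but the bug is a classic one. A hypothetical reconstruction (the ARIMA call and variable names are my assumption, following the common statsmodels grid-search pattern; data stands for the time series being modeled):

from statsmodels.tsa.arima.model import ARIMA

pmax, qmax = 8, 8
bic_matrix = []                       # should end up (pmax+1) x (qmax+1)
for p in range(pmax + 1):
    temp = []
    for q in range(qmax + 1):
        try:
            temp.append(ARIMA(data, order=(p, 1, q)).fit().bic)
        except Exception:             # non-convergent fits recorded as None
            temp.append(None)
    bic_matrix.append(temp)           # the fix: this append belongs to the
                                      # p loop; indented one level deeper it
                                      # ran inside the q loop and produced
                                      # 81 rows instead of 9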