svd

Using Principal Components Analysis (PCA) on binary data

Submitted by 懵懂的女人 on 2019-12-06 17:05:47
Question: I am using PCA on binary attributes to reduce the dimensions (attributes) of my problem. The initial dimensionality was 592, and after PCA it is 497. I used PCA before on numeric attributes in another problem, and there it managed to reduce the dimensionality to a much greater extent (about half of the initial dimensions). I believe that binary attributes decrease the power of PCA, but I do not know why. Could you please explain why PCA does not work as well on binary data as it does on numeric data? Thank you. Answer 1:
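One way to see the effect (a minimal sketch of my own, not from the thread; the data, shapes, and the 95% threshold are arbitrary illustrations): binarizing correlated continuous data spreads its variance across many more principal components, so PCA needs more of them to reach the same explained-variance target.

    import numpy as np
    from sklearn.decomposition import PCA

    rng = np.random.default_rng(0)
    n, d = 1000, 100

    # Correlated continuous data: five latent factors explain almost everything.
    latent = rng.standard_normal((n, 5))
    continuous = latent @ rng.standard_normal((5, d)) + 0.1 * rng.standard_normal((n, d))

    # Thresholding the same data to 0/1 discards information and flattens the spectrum.
    binary = (continuous > 0).astype(float)

    for name, X in (("continuous", continuous), ("binary", binary)):
        ratios = PCA().fit(X).explained_variance_ratio_
        k = int(np.searchsorted(np.cumsum(ratios), 0.95)) + 1
        print(f"{name}: {k} components capture 95% of the variance")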

SVD very slow when using cuSolver as compared to MATLAB

Submitted by ⅰ亾dé卋堺 on 2019-12-06 06:59:54
Question: I'm trying to use the gesvd function from cuSOLVER, which I found to be much slower than the svd function in MATLAB, in both cases using a double array or a gpuArray. C++ code [using cuSolver]:

    #include <stdio.h>
    #include <stdlib.h>
    #include <assert.h>
    #include <cuda_runtime.h>
    #include <cusolverDn.h>

    // Macro for timing kernel runs
    #define START_METER {\
        cudaEvent_t start, stop;\
        float elapsedTime;\
        cudaEventCreate(&start);\
        cudaEventRecord(start, 0);
    #define STOP_METER cudaEventCreate(&stop)
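As a rough CPU-side reference point for such comparisons (my own addition, not from the question; the 1000x1000 size is an assumption), numpy's LAPACK-backed SVD can be timed like this:

    import time
    import numpy as np

    # Dense double-precision test matrix; the size is an arbitrary assumption.
    A = np.random.default_rng(0).standard_normal((1000, 1000))

    t0 = time.perf_counter()
    U, s, Vt = np.linalg.svd(A)
    print(f"numpy.linalg.svd on {A.shape}: {time.perf_counter() - t0:.3f} s")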

How to reconstruct original matrix from svd components with Spark

Submitted by 蹲街弑〆低调 on 2019-12-06 03:27:24
I want to reconstruct (an approximation of) the original matrix decomposed by SVD. Is there a way to do this without having to convert the V factor (a local Matrix) into a DenseMatrix? Here is the decomposition, based on the documentation (note that the comments are from the doc example):

    import org.apache.spark.mllib.linalg.Matrix
    import org.apache.spark.mllib.linalg.SingularValueDecomposition
    import org.apache.spark.mllib.linalg.Vector
    import org.apache.spark.mllib.linalg.Vectors
    import org.apache.spark.mllib.linalg.distributed.RowMatrix

    val data = Array(
      Vectors.dense(1.0, 0.0, 7.0, 0.0, 0.0),
      Vectors.dense(2.0, 0.0, 3.0, 4.0, 5.0),
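Whatever the distributed plumbing looks like, the quantity being computed is just U·diag(s)·Vᵀ. A numpy sketch of that reconstruction on the two rows shown above (my own illustration, not Spark code; k is the number of singular components kept):

    import numpy as np

    A = np.array([[1.0, 0.0, 7.0, 0.0, 0.0],
                  [2.0, 0.0, 3.0, 4.0, 5.0]])

    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    k = 1  # keep only the strongest singular component
    A_approx = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]
    print(np.round(A_approx, 2))

    # With k equal to the rank (here 2), the product reproduces A exactly.
    assert np.allclose(U @ np.diag(s) @ Vt, A)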

Using an alternative LAPACK driver in numpy's svd method?

Submitted by 一世执手 on 2019-12-06 03:20:33
I'm using numpy.svd to compute singular value decompositions of badly conditioned matrices. For some special cases the SVD won't converge and raises a LinAlgError. I've done some research and found that numpy uses the DGESDD routine from LAPACK. The standard implementation has a hardcoded limit of 35 or so iterations. If I try to decompose the same matrix in Matlab, everything works fine, and I think there are two reasons for that: 1. Matlab uses DGESVD instead of DGESDD, which in general seems to be more robust. 2. Matlab uses an iteration limit of 75 in the routine. (They
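One practical workaround (my own suggestion, not from the question): recent SciPy versions expose the choice of LAPACK driver through the lapack_driver argument of scipy.linalg.svd, so you can fall back to the generally more robust gesvd when gesdd fails to converge:

    import numpy as np
    import scipy.linalg

    def robust_svd(A):
        """Try the fast gesdd driver first; fall back to the slower, more robust gesvd."""
        try:
            return np.linalg.svd(A)  # numpy's svd is backed by LAPACK *gesdd
        except np.linalg.LinAlgError:
            return scipy.linalg.svd(A, lapack_driver='gesvd')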

Linear Algebra: The SVD

Submitted by 橙三吉。 on 2019-12-05 12:23:36
The SVD is one of the highlights of linear algebra.

1. The SVD

Let \(A\) be an arbitrary \(m×n\) matrix of rank \(r\). We want to diagonalize it, but not via \(S^{-1}A S\). The eigenvectors in \(S\) have three big problems: they are usually not orthogonal; there are not always enough of them; and \(Ax=\lambda x\) requires \(A\) to be square. The singular vectors of \(A\) solve all of these problems nicely. The price is that we need two sets of singular vectors, the \(\boldsymbol{u}\)'s and the \(\boldsymbol{v}\)'s. The \(\boldsymbol{u}\)'s are eigenvectors of \(AA^T\) and the \(\boldsymbol{v}\)'s are eigenvectors of \(A^TA\); since both matrices are symmetric, we can choose orthonormal sets of eigenvectors. That is:

\[AA^T=U\Sigma^2U^T \quad A^TA =V\Sigma^2V^T\]

Proof: Start from \(A^TAv_i=\sigma_i^2v_i\) and multiply both sides by \(v_i^T\):

\[v_i^TA^TAv_i=\sigma_i^2v_i^Tv_i=\sigma_i^2 \to ||Av_i||=\sigma_i\]

This says the vector \(Av_i\) has length \(\sigma_i\). Then multiply both sides by \(A\):

\[AA^T(Av_i)=\sigma_i^2(Av_i)\]

so \(Av_i\) is an eigenvector of \(AA^T\) with eigenvalue \(\sigma_i^2\).
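A minimal numpy check of this derivation (my own illustration, not from the post):

    import numpy as np

    rng = np.random.default_rng(0)
    A = rng.standard_normal((5, 3))  # arbitrary rectangular matrix

    U, s, Vt = np.linalg.svd(A, full_matrices=False)

    for i, sigma in enumerate(s):
        v = Vt[i]
        # v_i is an eigenvector of A^T A with eigenvalue sigma_i^2 ...
        assert np.allclose(A.T @ A @ v, sigma**2 * v)
        # ... A v_i has length sigma_i ...
        assert np.isclose(np.linalg.norm(A @ v), sigma)
        # ... and A v_i / sigma_i is the matching left singular vector u_i.
        assert np.allclose(A @ v / sigma, U[:, i])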

PCA of RGB Image

Submitted by 南笙酒味 on 2019-12-05 11:36:56
I'm trying to figure out how to use PCA to decorrelate an RGB image in Python. I'm using the code found in the O'Reilly computer vision book:

    from PIL import Image
    from numpy import *

    def pca(X):
        # Principal Component Analysis
        # input: X, matrix with training data as flattened arrays in rows
        # return: projection matrix (with important dimensions first),
        #         variance and mean

        # get dimensions
        num_data, dim = X.shape

        # center data
        mean_X = X.mean(axis=0)
        for i in range(num_data):
            X[i] -= mean_X

        if dim > 100:
            print('PCA - compact trick used')
            M = dot(X, X.T)  # covariance matrix
            e, EV = linalg.eigh(M)
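If the goal is specifically to decorrelate the three color channels of a single image (rather than run PCA over a training set of images), a self-contained sketch looks like this; the shapes and names are my own assumptions, not the book's:

    import numpy as np

    def decorrelate_rgb(img):
        """Rotate RGB pixel values onto the principal axes of the 3x3 channel covariance."""
        h, w, _ = img.shape
        pixels = img.reshape(-1, 3).astype(np.float64)  # one row per pixel
        mean = pixels.mean(axis=0)
        centered = pixels - mean
        cov = np.cov(centered, rowvar=False)            # 3x3 covariance of the channels
        eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
        order = np.argsort(eigvals)[::-1]               # strongest component first
        rotated = centered @ eigvecs[:, order]          # channels are now uncorrelated
        return rotated.reshape(h, w, 3), mean, eigvecs[:, order]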

Is it good to normalize/standardize data that has a large number of features with zeros?

Submitted by 有些话、适合烂在心里 on 2019-12-05 07:33:19
Question: I have data with around 60 features, and most of them are zero most of the time; in my training data only 2-3 columns may have values (to be precise, it is perf log data). However, my test data will have values in some other columns. I've done normalization/standardization (tried both separately) and fed the result to PCA/SVD (tried both separately). I used these features to fit my model, but it is giving very inaccurate results. Whereas, if I skip the normalization/standardization step and directly feed
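One way the problem can show up (a sketch of my own, not from the question; the shapes and values are arbitrary): standardizing a column that is almost always zero divides by a tiny standard deviation, so a single stray value ends up carrying as much weight as a genuinely informative column:

    import numpy as np
    from sklearn.preprocessing import StandardScaler
    from sklearn.decomposition import TruncatedSVD

    rng = np.random.default_rng(0)
    X = np.zeros((200, 60))
    X[:, 0] = rng.normal(100.0, 15.0, size=200)  # the few informative columns
    X[:, 1] = rng.normal(50.0, 5.0, size=200)
    X[5, 30] = 1.0                               # one stray value in a mostly-zero column

    # Without scaling, the SVD is dominated by the informative columns.
    print(TruncatedSVD(n_components=2).fit(X).explained_variance_ratio_)

    # After standardization every column has unit variance, so the
    # noise column competes with the informative ones.
    Xs = StandardScaler().fit_transform(X)
    print(TruncatedSVD(n_components=2).fit(Xs).explained_variance_ratio_)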

SciPy SVD vs. Numpy SVD

Submitted by 给你一囗甜甜゛ on 2019-12-05 02:39:38
Both SciPy and NumPy have built-in functions for singular value decomposition (SVD). The commands are basically scipy.linalg.svd and numpy.linalg.svd. What is the difference between these two? Is either of them better than the other? Apart from the error checking, the actual work seems to be done within LAPACK for both numpy and scipy. Without having done any benchmarking, I guess the performance should be identical. From the FAQ page, it says the scipy.linalg submodule provides a more complete wrapper for the Fortran LAPACK library, whereas numpy.linalg tries to be able to build independent
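A quick way to convince yourself that the two agree numerically (my own sketch, not from the thread):

    import numpy as np
    import scipy.linalg

    A = np.random.default_rng(0).standard_normal((50, 30))

    U1, s1, Vt1 = np.linalg.svd(A, full_matrices=False)
    U2, s2, Vt2 = scipy.linalg.svd(A, full_matrices=False)

    # Singular values match to machine precision; singular vectors can
    # differ by a sign flip, so compare magnitudes.
    print(np.allclose(s1, s2))
    print(np.allclose(np.abs(U1), np.abs(U2)))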

Image Enhancement using combination between SVD and Wavelet Transform

Submitted by 那年仲夏 on 2019-12-04 20:55:55
My objective is to handle illumination and expression variations on an image. So I tried to implement MATLAB code in order to work with only the important information within the image. In other words, to work with only the "USEFUL" information in an image. To do that, it is necessary to delete all unimportant information from the image. Reference: this paper. Let's see my steps: 1) Apply histogram equalization, histo_equalized_image = histeq(MyGrayImage), so that large intensity variations can be handled to some extent. 2) Apply SVD approximations on the histo_equalized
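A sketch of those two steps in Python/numpy (my own reading of the procedure, not the paper's code; equalize is a crude stand-in for MATLAB's histeq, and the rank k is a free parameter):

    import numpy as np

    def equalize(img):
        """Crude histogram equalization for a uint8 grayscale image (stand-in for histeq)."""
        hist = np.bincount(img.ravel(), minlength=256)
        cdf = np.cumsum(hist) / img.size
        return (cdf[img] * 255).astype(np.uint8)

    def svd_approx(img, k):
        """Rank-k SVD approximation: keep only the k strongest singular components."""
        U, s, Vt = np.linalg.svd(img.astype(np.float64), full_matrices=False)
        return U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

    # 1) equalize the grayscale image, 2) keep its dominant SVD components:
    # enhanced = svd_approx(equalize(my_gray_image), k=50)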