linear-algebra

Matlab- Given matrix X with xi samples, y binary column vector, and a vector w plot all these into 3d graph

Submitted by 孤街醉人 on 2020-01-06 05:22:08
Question: I have started to learn machine learning and programming in Matlab. I want to plot a matrix of size m×d, where d = 3 and m is the number of points. Using the binary vector y, I'd like to color each point blue or red, and plot the plane described by the vector w perpendicular to it. The problem I am trying to solve is to give some kind of visual representation of the data and the linear predictor. All I know is how to plot single points with plot3, not an arbitrary number of points. Thanks. Answer 1: Plot the
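The same idea can be sketched outside Matlab; here is a minimal Python/matplotlib version (the data X, labels y, and normal vector w are made up for illustration), scattering all m points at once and drawing the plane whose normal is w:

```python
import numpy as np
import matplotlib
matplotlib.use("Agg")  # headless backend; drop this line for interactive use
import matplotlib.pyplot as plt

rng = np.random.default_rng(0)
X = rng.standard_normal((50, 3))       # m = 50 sample points, d = 3
w = np.array([1.0, -2.0, 0.5])         # normal vector of the separating plane
y = (X @ w > 0).astype(int)            # hypothetical binary labels

fig = plt.figure()
ax = fig.add_subplot(projection="3d")
ax.scatter(*X[y == 0].T, color="blue")  # all class-0 points in one call
ax.scatter(*X[y == 1].T, color="red")   # all class-1 points in one call

# plane through the origin with normal w: w[0]*x + w[1]*y + w[2]*z = 0
gx, gy = np.meshgrid(np.linspace(-3, 3, 10), np.linspace(-3, 3, 10))
gz = -(w[0] * gx + w[1] * gy) / w[2]
ax.plot_surface(gx, gy, gz, alpha=0.3)
fig.savefig("plane.png")
```

In Matlab the analogous trick is passing whole columns to plot3/scatter3 instead of looping over points, and using surf for the plane.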

Is it possible to solve a non-square under/over constrained matrix using Accelerate/LAPACK?

Submitted by 眉间皱痕 on 2020-01-05 19:22:59
Question: Is it possible to solve a non-square, under- or over-constrained matrix using Accelerate/LAPACK, such as the following two systems? If any variables are under-constrained they should equal 0 instead of being infinite. So in the under-constrained case A, D, and E would equal 0, while B, C, and F equal -1. In the over-constrained case all variables would equal -1.

Under Constrained:

      (A)  (B)  (C)  (D)  (E)  (F)
    | -1    0    0    1    0    0 | 0 |
    |  1    0    0    0   -1    0 | 0 |
    |  0   -1    1    0    0    0 | 0 |
    |  0    1    0    0    0   -1
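LAPACK's least-squares drivers (xGELSD and friends, which Accelerate exposes) handle exactly this case: for an over-constrained system they return the least-squares solution, and for an under-constrained one the minimum-norm solution, which keeps unconstrained directions finite instead of infinite. A tiny Python/NumPy sketch with a made-up 2x3 system (not the matrices from the question, whose last row is truncated above):

```python
import numpy as np

# Under-constrained: 2 equations, 3 unknowns (hypothetical example system)
A = np.array([[1.0, 0.0, 0.0],
              [0.0, 1.0, 1.0]])
b = np.array([2.0, 3.0])

# lstsq calls LAPACK's *gelsd: for rank-deficient or non-square systems
# it returns the minimum-norm least-squares solution
x, residuals, rank, sv = np.linalg.lstsq(A, b, rcond=None)
# x == [2.0, 1.5, 1.5]: the freedom between x[1] and x[2] is resolved
# by minimizing ||x||, not left unbounded
```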

Linear convolution using fft for system output

Submitted by 怎甘沉沦 on 2020-01-05 04:36:12
Question: Here is a mass-spring-damper system with an impulse response h and an arbitrary forcing function f (cos(t) in this case). I am trying to use Matlab's FFT function in order to perform convolution in the frequency domain. I am expecting the output (ifft(conv)) to be the solution of the mass-spring-damper system with the specified forcing, however my plot looks completely wrong! So I must be implementing something incorrectly. Please help me find the errors in my code below! Thanks clear
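A common bug in this setup is forgetting that multiplying FFTs gives circular convolution: without zero-padding both signals to at least len(h)+len(f)-1 samples, the tail wraps around and the response looks wrong. A Python sketch of the fix (the system parameters wn and zeta are made up; in Matlab the same idea is fft(h, N).*fft(f, N)):

```python
import numpy as np

t = np.linspace(0, 20, 1024)
dt = t[1] - t[0]

# impulse response of a hypothetical underdamped mass-spring-damper:
# h(t) = exp(-zeta*wn*t) * sin(wd*t) / wd
wn, zeta = 2.0, 0.125
wd = wn * np.sqrt(1 - zeta**2)
h = np.exp(-zeta * wn * t) * np.sin(wd * t) / wd
f = np.cos(t)                                  # forcing function

# zero-pad to N >= len(h) + len(f) - 1 so circular == linear convolution,
# and scale by dt to approximate the continuous convolution integral
N = len(h) + len(f) - 1
y_fft = np.fft.ifft(np.fft.fft(h, N) * np.fft.fft(f, N)).real[:len(t)] * dt
y_direct = np.convolve(h, f)[:len(t)] * dt     # time-domain reference
```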

Java optimized Cramer's rule function

Submitted by 流过昼夜 on 2020-01-04 14:22:11
Question: I recently learned about Cramer's rule in precalculus and decided to write an algorithm in Java to help me understand it better. The following code works 100% correctly; however, it does not use any sort of for loop to do what it does in a much simpler fashion. Question: Is there a more elegant implementation of Cramer's rule in Java? I'm thinking of making a basic determinant method and then doing some column swapping for when I need to take the determinant of Dx, Dy, and Dz. (For Dx, swap
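The column-swapping idea works well. As a language-neutral sketch (Python here, but the structure carries straight over to Java: one determinant routine plus a loop that substitutes b into column i):

```python
import numpy as np

def cramer_solve(A, b):
    """Solve A x = b by Cramer's rule: x[i] = det(A_i) / det(A),
    where A_i is A with column i replaced by b."""
    A = np.asarray(A, dtype=float)
    b = np.asarray(b, dtype=float)
    d = np.linalg.det(A)
    x = np.empty(len(b))
    for i in range(len(b)):
        Ai = A.copy()
        Ai[:, i] = b              # swap the right-hand side into column i
        x[i] = np.linalg.det(Ai) / d
    return x

A = [[2, 1, 1], [1, 3, 2], [1, 0, 0]]
b = [4, 5, 6]
x = cramer_solve(A, b)            # one loop replaces the Dx/Dy/Dz copies
```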

Implement Gauss-Jordan elimination in Haskell

Submitted by …衆ロ難τιáo~ on 2020-01-04 07:54:32
Question: We want to program Gauss-Jordan elimination to calculate a basis (linear algebra), as an exercise for ourselves. It is not homework. I first thought of [[Int]] as the structure for our matrix. I then thought that we could sort the lists lexicographically. But then we must do arithmetic with the matrix, and there is the problem. Can someone give us some hints? Answer 1: Consider using matrices from the hmatrix package. Among its modules you can find both a fast implementation of a matrix and a lot of linear algebra
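Whatever the representation, the algorithm itself is short. Here is a Python sketch using exact rational arithmetic (the analogue of computing over Rational in Haskell rather than [[Int]], since dividing by pivots leaves the integers):

```python
from fractions import Fraction

def gauss_jordan(rows):
    """Reduce a matrix (list of row lists) to reduced row echelon form
    with exact rational arithmetic; the nonzero rows form a basis of
    the row space."""
    m = [[Fraction(x) for x in row] for row in rows]
    pivot = 0
    for col in range(len(m[0])):
        # find a row at or below `pivot` with a nonzero entry in this column
        pr = next((r for r in range(pivot, len(m)) if m[r][col] != 0), None)
        if pr is None:
            continue
        m[pivot], m[pr] = m[pr], m[pivot]
        p = m[pivot][col]
        m[pivot] = [x / p for x in m[pivot]]       # normalize the pivot row
        for r in range(len(m)):
            if r != pivot and m[r][col] != 0:      # eliminate above and below
                f = m[r][col]
                m[r] = [a - f * b for a, b in zip(m[r], m[pivot])]
        pivot += 1
        if pivot == len(m):
            break
    return m
```

The same recursion translates naturally to Haskell over [[Rational]], with the row search and elimination expressed as list operations.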

Performing many small matrix operations in parallel in OpenCL

Submitted by 旧城冷巷雨未停 on 2020-01-04 06:52:55
Question: I have a problem that requires me to do eigendecomposition and matrix multiplication of many (~4k) small (~3x3) square Hermitian matrices. In particular, I need each work item to perform eigendecomposition of one such matrix, and then perform two matrix multiplications. Thus, the work that each thread has to do is rather minimal, and the full job should be highly parallelizable. Unfortunately, it seems all the available OpenCL LAPACKs are for delegating operations on large matrices to the GPU
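For comparison, on the CPU side NumPy already batches this exact pattern: np.linalg.eigh and the @ operator broadcast over leading axes, doing one small eigendecomposition and the follow-up multiplications per matrix. A sketch with random Hermitian input (sizes from the question, data made up):

```python
import numpy as np

rng = np.random.default_rng(0)
# ~4k random 3x3 Hermitian matrices stacked along the first axis
M = rng.standard_normal((4000, 3, 3)) + 1j * rng.standard_normal((4000, 3, 3))
H = (M + np.conj(np.swapaxes(M, -1, -2))) / 2   # make each slice Hermitian

# eigh broadcasts over the leading axis: one 3x3 eigendecomposition per slice
w, V = np.linalg.eigh(H)            # w: (4000, 3), V: (4000, 3, 3)

# the follow-up matrix products batch the same way; as a check,
# reconstruct H = V @ diag(w) @ V^H slice by slice
Vh = np.conj(np.swapaxes(V, -1, -2))
H_rec = V @ (w[..., None] * Vh)
```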

Special tensor contraction in Python

Submitted by 爷,独闯天下 on 2020-01-04 06:22:49
Question: I need to perform a special type of tensor contraction. I want something of this kind: A_{bg} = Sum_{a,a',a''} ( B_{a} C_{a'b} D_{a''g} ), where all the indices can take the values 0 and 1, and the sum over a, a', and a'' is carried out only over the cases where a+a'+a'' = 1 or a+a'+a'' = 2. So it is like the reverse of the Einstein summation convention: I want to sum only when one of the three indices is different from the others. Moreover, I want some flexibility with the number of indices that are not being
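One way to express the constrained sum is to contract against a 0/1 mask over (a, a', a''), so einsum only counts the index triples whose sum is 1 or 2. A NumPy sketch with made-up B, C, D:

```python
import numpy as np

rng = np.random.default_rng(0)
B = rng.standard_normal(2)          # index a
C = rng.standard_normal((2, 4))     # indices a', b
D = rng.standard_normal((2, 3))     # indices a'', g

# mask[a, a1, a2] = 1 exactly when a + a1 + a2 is 1 or 2,
# i.e. when the three binary indices are not all equal
s = np.indices((2, 2, 2)).sum(axis=0)
mask = ((s == 1) | (s == 2)).astype(float)

# contract B_a C_{a'b} D_{a''g} only over the allowed triples
A = np.einsum('a,xb,yg,axy->bg', B, C, D, mask)
```

Generalizing to more summed indices just means building the mask over more axes and extending the einsum subscripts accordingly.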

scipy LU factorization permutation matrix

Submitted by 纵然是瞬间 on 2020-01-04 02:57:26
Question: As I understand LU factorization, it means that a matrix A can be written as A = LU for a lower-triangular matrix L and an upper-triangular matrix U. However, the functions in scipy relating to LU factorization (lu, lu_factor, lu_solve) seem to involve a third matrix P, such that A = PLU and P is a permutation matrix (and L, U are as before). What is the point of this permutation matrix? If a "true" LU factorization is always possible, why ever have P be something other than the identity
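The short answer is that a plain A = LU does not always exist (and can be numerically disastrous even when it does), so LAPACK pivots rows for stability and records the row swaps in P. A minimal demonstration: the matrix below has a zero in the (1,1) position, so elimination cannot even start without a row swap:

```python
import numpy as np
from scipy.linalg import lu

# No LU factorization exists for this matrix without row exchanges:
# the (1,1) pivot is zero
A = np.array([[0.0, 1.0],
              [1.0, 0.0]])

P, L, U = lu(A)      # scipy factors A = P @ L @ U, with P a permutation
```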

Unexpected eigenvectors in NumPy

Submitted by 狂风中的少年 on 2020-01-03 15:33:23
Question: I have seen this question, and it is relevant to my attempt to compute the dominant eigenvector in Python with NumPy. I am trying to compute the dominant eigenvector of an n x n matrix without having to get into too much heavy linear algebra. I did cursory research on determinants, eigenvalues, eigenvectors, and characteristic polynomials, but I would prefer to rely on the NumPy implementation for finding eigenvalues, as I believe it is more efficient than my own would be. The problem I
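Relying on NumPy is reasonable: np.linalg.eig returns all eigenpairs, and the dominant eigenvector is just the column matching the eigenvalue of largest magnitude. A small sketch (the 2x2 matrix is a made-up example; note eigenvectors are only defined up to sign/scale):

```python
import numpy as np

def dominant_eigenvector(A):
    """Eigenvector for the eigenvalue of largest absolute value."""
    w, V = np.linalg.eig(A)
    return V[:, np.argmax(np.abs(w))]

A = np.array([[2.0, 0.0],
              [0.0, 1.0]])
v = dominant_eigenvector(A)   # +/- [1, 0], the eigenvector for eigenvalue 2
```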

How to efficiently compute the inner product of two dictionaries

Submitted by 假如想象 on 2020-01-03 08:55:07
Question: Suppose I represent a feature vector using a dictionary (why? because I know the features are sparse; more on that later). How should I implement the inner product of two such dictionaries (denoted A, B)? I tried the naive approach: for k in A: if k in B: sum += A[k] * B[k] — but it turns out to be slow. Some more details: I'm using a dictionary to represent features because the feature keys are strings, there are ~20K possible keys, and each vector is sparse (say, about 1000 non-zero elements).
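The usual first fix for the naive loop is to iterate over the smaller dictionary, since each key it probes costs one hash lookup in the larger one (shadowing the built-in name sum is also worth avoiding). A sketch:

```python
def sparse_dot(A, B):
    """Inner product of two sparse vectors stored as dicts.
    Iterating over the smaller dict does O(min(len(A), len(B)))
    lookups instead of always O(len(A))."""
    if len(B) < len(A):
        A, B = B, A
    return sum(v * B[k] for k, v in A.items() if k in B)

a = {"x": 1.0, "y": 2.0, "z": 3.0}
b = {"y": 4.0, "w": 5.0}
dot = sparse_dot(a, b)   # only "y" overlaps: 2.0 * 4.0 = 8.0
```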