Similarity matrix -> feature vectors algorithm?

Submitted by 烂漫一生 on 2019-12-05 07:42:16

You could compute a truncated singular value decomposition (SVD) to find the best rank-k approximation to the matrix. The idea is to decompose the matrix M into three matrices U, sigma, and V such that M = U * sigma * V^T, where U and V have orthonormal columns and sigma is diagonal (its entries are the singular values).

By truncating away the small, unimportant singular values and keeping only the top k, you can store the approximation in O(k*m) space instead of the O(m^2) needed for the full m x m similarity matrix.
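A minimal sketch of this idea with NumPy, using a small made-up symmetric similarity matrix (the matrix values, size, and choice of k here are all illustrative assumptions, not from the original answer):

```python
import numpy as np

# Hypothetical 6x6 similarity matrix: symmetrize a random matrix.
rng = np.random.default_rng(0)
A = rng.random((6, 6))
S = (A + A.T) / 2

# Full SVD: S = U @ diag(sigma) @ Vt, with sigma sorted descending.
U, sigma, Vt = np.linalg.svd(S)

# Keep only the top k singular values for a rank-k approximation.
k = 2
S_k = U[:, :k] @ np.diag(sigma[:k]) @ Vt[:k, :]

# The rows of U[:, :k] scaled by sigma[:k] can serve as
# k-dimensional feature vectors: one per row of S, O(k*m) storage.
features = U[:, :k] * sigma[:k]

print(features.shape)           # (6, 2)
print(np.linalg.norm(S - S_k))  # Frobenius error of the rank-k approximation
```

By the Eckart-Young theorem, S_k is the best rank-k approximation to S in the Frobenius norm, so the truncation error is exactly the norm of the discarded singular values.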

If you are only interested in the first eigenvector and eigenvalue, power iteration will probably be useful. I once used it to extract keywords from text documents (based on inter-word distance within sentences, but a similarity matrix will probably work, too).
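Power iteration itself is only a few lines: repeatedly multiply a vector by the matrix and renormalize, and it converges to the dominant eigenvector. A sketch, with a small hypothetical symmetric similarity matrix as input (the matrix and tolerances are assumptions for illustration):

```python
import numpy as np

def power_iteration(M, num_iters=500, tol=1e-10):
    """Return the dominant eigenvalue and unit eigenvector of square matrix M."""
    v = np.ones(M.shape[0]) / np.sqrt(M.shape[0])  # arbitrary starting vector
    eigenvalue = 0.0
    for _ in range(num_iters):
        w = M @ v
        v = w / np.linalg.norm(w)      # renormalize each step
        new_eigenvalue = v @ M @ v     # Rayleigh quotient estimate
        if abs(new_eigenvalue - eigenvalue) < tol:
            eigenvalue = new_eigenvalue
            break
        eigenvalue = new_eigenvalue
    return eigenvalue, v

# Hypothetical 3x3 symmetric similarity matrix.
S = np.array([[2.0, 1.0, 0.0],
              [1.0, 3.0, 1.0],
              [0.0, 1.0, 2.0]])
lam, vec = power_iteration(S)
print(lam)  # dominant eigenvalue (4.0 for this matrix)
```

Convergence speed depends on the gap between the largest and second-largest eigenvalue magnitudes; for a similarity matrix with a clear dominant direction it settles in a handful of iterations.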
