nearest-neighbor

Benefits of nearest neighbor search with Morton-order?

时光总嘲笑我的痴心妄想 submitted on 2020-01-12 18:35:56
Question: While working on a simulation of particle interactions, I stumbled across grid indexing in Morton order (Z-order) (Wikipedia link), which is said to provide an efficient nearest-neighbor cell search. The main reason I've read is the almost sequential ordering of spatially close cells in memory. Being in the middle of a first implementation, I cannot wrap my head around how to efficiently implement the algorithm for the nearest neighbors, especially in comparison to a basic uniform …
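The index itself is computed by interleaving the bits of the cell coordinates, which is why spatially close cells tend to get nearby indices. A minimal 2D sketch (function names are my own, not from the question):

```python
def part1by1(n):
    # Spread the lower 16 bits of n so a zero sits between consecutive bits.
    n &= 0x0000FFFF
    n = (n | (n << 8)) & 0x00FF00FF
    n = (n | (n << 4)) & 0x0F0F0F0F
    n = (n | (n << 2)) & 0x33333333
    n = (n | (n << 1)) & 0x55555555
    return n

def morton2d(x, y):
    # Interleave the bits of x and y: cell (x, y) -> Z-order index.
    return part1by1(x) | (part1by1(y) << 1)
```

The four cells of a 2x2 block map to the consecutive indices 0..3, which is the locality property the question refers to; a neighbor search over Morton-indexed cells still has to visit a handful of index ranges, because Z-order locality is good but not perfect across block boundaries.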

Iterative Closest Point (ICP) implementation on python

耗尽温柔 submitted on 2020-01-10 07:07:26
Question: I have been searching for an implementation of the ICP algorithm in Python lately, with no result. According to the Wikipedia article http://en.wikipedia.org/wiki/Iterative_closest_point, the algorithm steps are: (1) associate points by the nearest-neighbor criterion (for each point in one point cloud, find the closest point in the second point cloud); (2) estimate transformation parameters (rotation and translation) using a mean-square cost function (the transform would best align each point to its match …
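The two steps above can be sketched with numpy: brute-force nearest-neighbor association, then a closed-form rigid transform via SVD (the Kabsch algorithm). This is a minimal single-iteration sketch under those assumptions, not a complete ICP with convergence checks or outlier rejection:

```python
import numpy as np

def best_rigid_transform(A, B):
    # Least-squares rotation R and translation t mapping points A onto B
    # (Kabsch algorithm via SVD). A, B: (N, d) arrays of matched points.
    ca, cb = A.mean(axis=0), B.mean(axis=0)
    H = (A - ca).T @ (B - cb)
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:          # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = cb - R @ ca
    return R, t

def icp_step(src, dst):
    # One ICP iteration: associate each src point with its nearest dst
    # point, then estimate the rigid transform from those matches.
    d = np.linalg.norm(src[:, None, :] - dst[None, :, :], axis=2)
    matches = dst[d.argmin(axis=1)]
    R, t = best_rigid_transform(src, matches)
    return src @ R.T + t, R, t
```

A full ICP would repeat `icp_step` until the mean-square error stops decreasing.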

How to compare every element in the RDD with every other element in the RDD ?

谁说我不能喝 submitted on 2020-01-09 11:19:04
Question: I'm trying to perform a k-nearest-neighbor search using Spark. I have an RDD[Seq[Double]] and I'm planning to return an RDD[(Seq[Double], Seq[Seq[Double]])] with each row and a list of its neighbors:

    val out = data.map(row => {
      val neighbours = data.top(num = 3)(new Ordering[Seq[Double]] {
        override def compare(a: Seq[Double], b: Seq[Double]) =
          euclideanDistance(a, row).compare(euclideanDistance(b, row)) * (-1)
      })
      (row, neighbours.toSeq)
    })

And it gives the following error on spark-submit: 15/04/29 21 …
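The error here typically stems from referencing `data` inside its own `map`: Spark does not allow RDD operations to be nested inside a transformation. A common workaround is `data.cartesian(data)` followed by per-row grouping. A pure-Python sketch of that all-pairs logic (no Spark; function names are my own):

```python
import math
from itertools import product

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def knn_all_pairs(rows, k=3):
    # Mirrors data.cartesian(data): compare every row with every other row,
    # then keep the k closest neighbours per row.
    out = {}
    for row, other in product(rows, rows):
        if row != other:
            out.setdefault(row, []).append((euclidean(row, other), other))
    return {row: [p for _, p in sorted(ds)[:k]] for row, ds in out.items()}
```

The cartesian product is O(n^2) in both computation and shuffle size, so for large RDDs an approximate approach (e.g. LSH-based bucketing) is usually preferred.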

Numpy: Single loop vectorized code slow compared to two loop iteration

▼魔方 西西 submitted on 2020-01-06 21:07:21
Question: The following code iterates over each element of two arrays to compute the pairwise Euclidean distance:

    def compute_distances_two_loops(X, Y):
        num_test = X.shape[0]
        num_train = Y.shape[0]
        dists = np.zeros((num_test, num_train))
        for i in range(num_test):
            for j in range(num_train):
                dists[i][j] = np.sqrt(np.sum((X[i] - Y[j])**2))
        return dists

The following code serves the same purpose but with a single loop:

    def compute_distances_one_loop(X, Y):
        num_test = X.shape[0]
        num_train = Y.shape[0]
        dists = np …
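For reference, the same distances can be computed with no explicit loop at all, using the expansion ||x − y||² = ||x||² + ||y||² − 2 x·y and numpy broadcasting; in practice this is usually the fastest of the three variants because all the work happens in one matrix product:

```python
import numpy as np

def compute_distances_no_loops(X, Y):
    # Fully vectorised pairwise Euclidean distances via the identity
    # ||x - y||^2 = ||x||^2 + ||y||^2 - 2 x.y, evaluated with broadcasting.
    sq_x = np.sum(X ** 2, axis=1)[:, np.newaxis]   # shape (num_test, 1)
    sq_y = np.sum(Y ** 2, axis=1)[np.newaxis, :]   # shape (1, num_train)
    cross = X @ Y.T                                # shape (num_test, num_train)
    # Clamp at zero: rounding can make the sum slightly negative.
    return np.sqrt(np.maximum(sq_x + sq_y - 2 * cross, 0.0))
```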

Scikit-learn - user-defined weights function for KNeighborsClassifier

浪子不回头ぞ submitted on 2020-01-05 08:55:26
Question: I have a KNeighborsClassifier which classifies data based on 4 attributes. I'd like to weight those 4 attributes manually, but I always run into "operands could not be broadcast together with shapes (1,5) (4)". There is very little documentation on weights: "[callable] : a user-defined function which accepts an array of distances, and returns an array of the same shape containing the weights." (from here) This is what I have so far:

    for v in result:
        params = [v['a_one'], v['a_two'], v['a_three' …
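The likely cause of the broadcast error: the `weights` callable receives an array of neighbor *distances* (one entry per neighbor, here apparently 5), not per-feature weights, so a length-4 array cannot broadcast against it. To weight the four attributes themselves, one common approach is to scale each feature column before fitting, since scaling column j by w makes it count w² times as much in the squared Euclidean distance. A minimal numpy sketch (function name is my own):

```python
import numpy as np

def weight_features(X, feature_weights):
    # Scale each feature column by its weight; apply the same scaling to
    # the training data before fit() and to queries before predict().
    return np.asarray(X, dtype=float) * np.asarray(feature_weights, dtype=float)
```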

Knn regression in Matlab

穿精又带淫゛_ submitted on 2020-01-03 16:57:43
Question: What is the k-nearest-neighbor regression function in Matlab? Is only the knn classification function available? Does anybody know any useful literature on this? Regards, Farideh

Answer 1: I don't believe the k-NN regression algorithm is directly implemented in Matlab, but if you do some googling you can find some valid implementations. The algorithm is fairly simple, though: find the k nearest elements using whatever distance metric is suitable, then convert the inverse distance weight of each …
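The recipe in the answer (find the k nearest training points, then average their targets with inverse-distance weights) can be sketched in a few lines. This is an illustrative numpy implementation rather than Matlab code:

```python
import numpy as np

def knn_regress(X_train, y_train, x, k=3, eps=1e-12):
    # Predict y at query x as the inverse-distance-weighted mean of the
    # targets of the k nearest training points.
    d = np.linalg.norm(X_train - x, axis=1)
    idx = np.argsort(d)[:k]
    w = 1.0 / (d[idx] + eps)       # eps guards against division by zero
    return np.sum(w * y_train[idx]) / np.sum(w)
```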

Finding the single nearest neighbor using a Prefix tree in O(1)?

与世无争的帅哥 submitted on 2020-01-02 18:05:55
Question: I'm reading a paper in which the authors mention that they were able to find the single nearest neighbor in O(1) using a prefix tree. I will describe the general problem, then the classical solution, and finally the proposed solution in the paper. Problem: given a list of bit vectors L (all vectors have the same length) and a query bit vector q, we would like to find the nearest neighbor of q. The distance metric is the Hamming distance (how many bits differ). The naive approach would be to go …
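For concreteness, the Hamming distance and the naive O(|L|) scan the question describes can be written as follows, with bit vectors stored as Python ints:

```python
def hamming(a, b):
    # Hamming distance of two equal-length bit vectors stored as ints:
    # XOR marks the differing bits, then count the ones.
    return bin(a ^ b).count("1")

def nearest_naive(vectors, q):
    # The O(|L|) baseline: scan the whole list for the closest vector.
    return min(vectors, key=lambda v: hamming(v, q))
```

The prefix-tree claim in the paper should be read against this baseline: the tree prunes whole subtrees of vectors sharing a prefix, rather than touching every vector.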

Locality Sensitive Hashing in OpenCV for image processing

一世执手 submitted on 2020-01-02 10:48:14
Question: This is my first image-processing application, so please be kind with this filthy peasant. THE APPLICATION: I want to implement a fast application (performance is crucial, even over accuracy) where, given a photo (taken by a mobile phone) containing a movie poster, it finds the most similar photo in a given dataset and returns a similarity score. The dataset is composed of similar pictures (taken by mobile phone, containing a movie poster). The images can be of different sizes and resolutions and can be …
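One classic LSH family for this kind of task is random-hyperplane hashing: project each image's feature vector onto a set of random hyperplanes and keep only the sign bits, so that vectors with high cosine similarity tend to agree on most bits and candidates can be shortlisted by comparing short binary signatures instead of raw descriptors. A minimal numpy sketch (this is not OpenCV's API; the function name is my own):

```python
import numpy as np

def lsh_signatures(features, n_bits=16, seed=0):
    # Random-hyperplane LSH: each bit records which side of a random
    # hyperplane a feature vector falls on. features: (n, d) array.
    rng = np.random.default_rng(seed)
    planes = rng.standard_normal((features.shape[1], n_bits))
    return (features @ planes > 0).astype(np.uint8)
```

In a real pipeline the feature vectors would come from a descriptor (e.g. ORB or a global image embedding), and signatures would be bucketed so only colliding candidates are compared with the full similarity measure.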
