nearest neighbour search 4D space python fast - vectorization

Submitted by 六眼飞鱼酱① on 2020-08-26 07:10:06

Question


For each observation in X (there are 20 in this example) I want to find the k (here: 3) nearest neighbours. How can I make this fast enough to handle 3 to 4 million rows? Is it possible to speed up the loop that iterates over the elements, e.g. via numpy, numba, or some kind of vectorization?

A naive loop in Python:

import numpy as np
from sklearn.neighbors import KDTree

n_points = 20
d_dimensions = 4
k_neighbours = 3

rng = np.random.RandomState(0)
X = rng.random_sample((n_points, d_dimensions))
print(X)
tree = KDTree(X, leaf_size=2, metric='euclidean')

# query the tree once per element; note that the element itself comes back
# as its own nearest neighbour with distance 0
for element in X:
    print('********')
    print(element)

    # when simply using the first row:
    # element = X[:1]
    # print(element)

    # potential optimization: query_radius https://scikit-learn.org/stable/modules/generated/sklearn.neighbors.KDTree.html#sklearn.neighbors.KDTree.query_radius
    dist, ind = tree.query([element], k=k_neighbours, return_distance=True,
                           dualtree=False, breadth_first=False, sort_results=True)

    # indices of the 3 closest neighbours
    print(ind)
    # [[0 9 1]] !! includes self (the element that was searched for)

    # distances to the 3 closest neighbours
    print(dist)
    # [[0.         0.38559188 0.40997835]] !! includes self

    # the actual coordinates for those indices:
    print(X[ind])

    # after removing self:
    print(X[ind][0][1:])

Ideally, the output would be a pandas.DataFrame with the following structure:

lat_1,long_1,lat_2,long_2,neighbours_list
0.5488135,0.71518937,0.60276338,0.54488318, [[0.61209572 0.616934   0.94374808 0.6818203 ][0.4236548  0.64589411 0.43758721 0.891773  ]]

edit

For now, I have a pandas-based implementation:

df = df.dropna()  # sometimes only part of the tuple (either left or right) is defined
X = df[['lat1', 'long1', 'lat2', 'long2']]
tree = KDTree(X, leaf_size=4, metric='euclidean')

k_neighbours = 3

def neighbors_as_list(row, index, complete_list):
    # one tree query per row; the first hit is the row itself
    dist, ind = index.query([[row['lat1'], row['long1'], row['lat2'], row['long2']]],
                            k=k_neighbours, return_distance=True,
                            dualtree=False, breadth_first=False, sort_results=True)
    # drop the self-match before returning the neighbour coordinates
    return complete_list.values[ind][0][1:]

df['neighbors'] = df.apply(neighbors_as_list, index=tree, complete_list=X, axis=1)
df.head()

But this is very slow.
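
For reference, the same lookup can also be done as one batched query over the whole coordinate array instead of one tree.query call per row inside df.apply. A minimal sketch using the sklearn KDTree from above; the small random frame here is only a hypothetical stand-in for my real df:

import numpy as np
import pandas as pd
from sklearn.neighbors import KDTree

# hypothetical stand-in for the real df used above
rng = np.random.RandomState(0)
df = pd.DataFrame(rng.random_sample((20, 4)),
                  columns=['lat1', 'long1', 'lat2', 'long2']).dropna()

k_neighbours = 3
X = df[['lat1', 'long1', 'lat2', 'long2']].to_numpy()
tree = KDTree(X, leaf_size=4, metric='euclidean')

# one query call for all rows instead of one call per row via df.apply
dist, ind = tree.query(X, k=k_neighbours, return_distance=True, sort_results=True)

# as before, the first hit of every row is the point itself, so drop it
neighbour_coords = X[ind[:, 1:]]   # shape: (n_rows, k_neighbours - 1, 4)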

edit 2

Sure, here is a pandas version:

import numpy as np
import pandas as pd

from sklearn.neighbors import KDTree
from scipy.spatial import cKDTree

rng = np.random.RandomState(0)
#n_points = 4_000_000
n_points = 20
d_dimensions = 4
k_neighbours = 3

X = rng.random_sample((n_points, d_dimensions))
X


df = pd.DataFrame(X)
df = df.reset_index(drop=False)
df.columns = ['id_str', 'lat_1', 'long_1', 'lat_2', 'long_2']
df.id_str = df.id_str.astype(object)
display(df.head())

tree = cKDTree(df[['lat_1', 'long_1', 'lat_2', 'long_2']])
dist, ind = tree.query(X, k=k_neighbours, n_jobs=-1)

# drop the first index column, which is the point itself
ind_out = ind[:, 1:]

display(dist)
print(df[['lat_1', 'long_1', 'lat_2', 'long_2']].shape)
print(X[ind_out].shape)
X[ind_out]

# fails with
# AssertionError: Shape of new values must be compatible with manager shape
df['neighbors'] = X[ind_out]
df

But this fails, as I cannot assign the result back to a DataFrame column.
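
The direct assignment fails because X[ind_out] is 3-dimensional, shape (n_points, k_neighbours - 1, d_dimensions), and a DataFrame column needs exactly one value per row. A minimal sketch of a workaround, reusing df, X and ind_out from the snippet above: hand pandas a list with one 2-D block of neighbour coordinates per row.

# X[ind_out] has shape (n_points, 2, 4): one (2, 4) block of neighbour
# coordinates per row; store it as an object column with one entry per row
df['neighbors'] = list(X[ind_out])

# alternatively, as plain nested Python lists instead of arrays:
# df['neighbors'] = X[ind_out].tolist()

df.head()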


Answer 1:


You could use SciPy's cKDTree.

Example

import numpy as np
from scipy.spatial import cKDTree

rng = np.random.RandomState(0)
n_points = 4_000_000
d_dimensions = 4
k_neighbours = 3

X = rng.random_sample((n_points, d_dimensions))

tree = cKDTree(X)
# %timeit tree = cKDTree(X)
# 3.74 s ± 29.6 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

# %%timeit
_, ind = tree.query(X, k=k_neighbours, n_jobs=-1)

# drop the self-match in the first column; shape = (4000000, 2)
ind_out = ind[:, 1:]

# neighbour coordinates; shape = (4000000, 2, 4)
coords_out = X[ind_out]
# 7.13 s ± 87.1 ms per loop (mean ± std. dev. of 7 runs, 1 loop each)

About 11 s in total (roughly 4 s to build the tree plus 7 s for the query) for a problem of this size is quite good.
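
One note on the query call: in recent SciPy releases the n_jobs argument of cKDTree.query has been renamed to workers (the old name may no longer be accepted), so on a current SciPy the parallel query would be written as:

# parallel query across all available cores (newer SciPy: workers instead of n_jobs)
_, ind = tree.query(X, k=k_neighbours, workers=-1)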



Source: https://stackoverflow.com/questions/60078287/nearest-neighbour-search-4d-space-python-fast-vectorization
