Optimized CUDA matrix hamming distance

Submitted by 旧巷老猫 on 2019-12-06 06:04:43

Question


Is anyone aware of an optimized CUDA kernel for computing a GEMM-style Hamming distance between two matrices of dimensions A x N and N x B? The problem is nearly identical to GEMM, except that each output element is sum( a_n != b_n ) over the shared dimension {1 ... N}, instead of the multiply-and-sum of the vector elements.
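To be concrete, here is a minimal sketch of the naive version of what I mean (row-major int matrices, one thread per output element; the names and types are just my illustration, not code I'm committed to):

```cuda
// Naive Hamming-distance "GEMM": C[row][col] = sum over k of (A[row][k] != B[k][col]).
// A is rowsA x N, B is N x colsB, C is rowsA x colsB, all row-major.
__global__ void hammingNaive(const int *A, const int *B, int *C,
                             int rowsA, int colsB, int N)
{
    int row = blockIdx.y * blockDim.y + threadIdx.y;
    int col = blockIdx.x * blockDim.x + threadIdx.x;
    if (row < rowsA && col < colsB) {
        int sum = 0;
        for (int k = 0; k < N; ++k)
            sum += (A[row * N + k] != B[k * colsB + col]);  // compare instead of multiply
        C[row * colsB + col] = sum;
    }
}
```

What I'm after is something with GEMM-level tiling and blocking rather than this one-thread-per-element version.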

I wanted to verify before writing my own, since this problem is relatively common, but I haven't had success in finding code for it yet. Suggestions for code to modify would be excellent as well.

EDIT:

In addition to kangshiyin's suggestions below, I found this walk-through of an optimized SGEMM implementation to be extraordinarily helpful in understanding steps beyond the basic shared memory matrix multiplication example in the CUDA C Programming Guide.


Answer 1:


You are right that you could write your kernel by modifying gemm() code. The CUDA samples include a simple implementation of gemm(), but it is too simple: its performance is bound by shared memory access, giving only ~250 GFLOPS on Kepler devices. For higher performance, you may want to check the gemm() code in MAGMA.

http://icl.cs.utk.edu/magma/index.html
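As a concrete starting point, here is a sketch of the simple shared-memory gemm() from the CUDA samples with the inner loop changed to compare-and-add. The tile size and the assumption that all dimensions divide evenly by it are mine, chosen for brevity:

```cuda
#define TILE 16

// Tiled Hamming-distance kernel, structured like the CUDA samples' matrixMul.
// Assumes rowsA, colsB, and N are all multiples of TILE.
// Launch with dim3 block(TILE, TILE), dim3 grid(colsB / TILE, rowsA / TILE).
__global__ void hammingTiled(const int *A, const int *B, int *C,
                             int rowsA, int colsB, int N)
{
    __shared__ int As[TILE][TILE];
    __shared__ int Bs[TILE][TILE];

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    int sum = 0;

    for (int t = 0; t < N / TILE; ++t) {
        // Stage one tile of A and one tile of B in shared memory.
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * colsB + col];
        __syncthreads();

        // gemm() would do: sum += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        for (int k = 0; k < TILE; ++k)
            sum += (As[threadIdx.y][k] != Bs[k][threadIdx.x]);
        __syncthreads();
    }
    C[row * colsB + col] = sum;
}
```

This still has the shared-memory bottleneck of the simple sample; the register-blocking and prefetching tricks described in MAGMA and the papers below are what push it further.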

These two papers also tell you how to implement and tune gemm().

http://staff.kfupm.edu.sa/ics/ahkhan/Resources/Papers/Autotuning/Autotuning%20GEMM%20Kernels%20for%20the%20Fermi%20GPU.pdf

http://www.netlib.org/lapack/lawnspdf/lawn267.pdf

Unlike gemm(), which has hardware support via the FMA instruction for fast multiply-and-add, your desired compare-and-add operation may need more instructions, so the performance should be lower. Given that the peak performance of gemm() is ~3 TFLOPS on Kepler, you may be able to get 0.5~2 TFLOPS for the Hamming distance matrix calculation.
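Roughly speaking, the difference is in the inner-loop body. The instruction counts in the comments below are an approximation of what the compiler emits, not verified SASS output:

```cuda
// gemm() inner step: maps to a single fused multiply-add (FFMA) instruction.
__device__ float gemm_step(float sum, float a, float b)
{
    return sum + a * b;
}

// Hamming inner step: a compare (set-predicate) followed by an add,
// i.e. at least two instructions where gemm() needs one.
__device__ int hamming_step(int sum, int a, int b)
{
    return sum + (a != b);
}
```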



Source: https://stackoverflow.com/questions/38277218/optimized-cuda-matrix-hamming-distance
