I'm trying to construct a parallel algorithm with CUDA that takes an array of integers and removes all of the 0's, with or without keeping the order.
What you are asking for is a classic parallel algorithm called stream compaction (1).
If Thrust is an option, you may simply use thrust::copy_if. This is a stable algorithm: it preserves the relative order of all elements.
Rough sketch:
#include <thrust/copy.h>

template<typename T>
struct is_non_zero {
    __host__ __device__
    auto operator()(T x) const -> bool {
        return x != 0;  // note: compare the argument x, not the type parameter T
    }
};

// ... your input and output vectors here
thrust::copy_if(input.begin(), input.end(), output.begin(), is_non_zero<int>());
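For completeness, a minimal sketch of how this might be wired up (the vector names and contents are made up here). Note that copy_if does not allocate: the output must already be large enough, and the returned iterator marks the end of the compacted range:

#include <thrust/copy.h>
#include <thrust/device_vector.h>
#include <vector>

int main() {
    std::vector<int> h = {6, 0, 3, 0, 0, 11, 0, 7};
    thrust::device_vector<int> input(h.begin(), h.end());
    thrust::device_vector<int> output(input.size());  // worst case: no zeros removed

    // copy_if returns an iterator one past the last element written,
    // using the is_non_zero functor defined above
    auto out_end = thrust::copy_if(input.begin(), input.end(),
                                   output.begin(), is_non_zero<int>());
    output.resize(out_end - output.begin());          // shrink to the compacted size
    return 0;
}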
If Thrust is not an option, you may implement stream compaction yourself (there is plenty of literature on the topic). It's a fun and reasonably simple exercise, while also being a basic building block for more complex parallel primitives.
(1) Strictly speaking, it's not exactly stream compaction in the traditional sense, as stream compaction is traditionally stable, while your requirements do not include stability. This relaxed requirement could perhaps lead to a more efficient implementation?
Stream compaction is a well-known problem for which a lot of code has been written (Thrust and Chagg, to cite two libraries that implement stream compaction on CUDA).
If you have a relatively new CUDA-capable device which supports intrinsic functions such as __ballot (compute capability >= 3.0), it is worth trying a small CUDA procedure that performs stream compaction much faster than Thrust.
The code and minimal documentation can be found at https://github.com/knotman90/cuStreamComp. It uses the ballot function in a single-kernel fashion to perform the compaction.
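To give a feel for what such a single-kernel approach builds on, here is a sketch of the warp ballot idiom (this is not the cuStreamComp code itself; the kernel name, the non-zero predicate, and the global atomic counter are assumptions for illustration, and __ballot_sync is the modern spelling of __ballot):

__global__ void warpCompactSketch(const int* d_in, int* d_out,
                                  int* d_counter, int N) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int pred = (tid < N) && (d_in[tid] != 0);

    // Each warp votes: bit i of the ballot is set iff lane i passes.
    unsigned ballot = __ballot_sync(0xFFFFFFFFu, pred);
    int lane  = threadIdx.x & 31;
    int rank  = __popc(ballot & ((1u << lane) - 1));  // passing lanes below this one
    int total = __popc(ballot);                       // passing lanes in the warp

    // Lane 0 reserves a contiguous chunk of the output for the whole warp;
    // the order in which warps reach the atomic determines the output order.
    int base = 0;
    if (lane == 0) base = atomicAdd(d_counter, total);
    base = __shfl_sync(0xFFFFFFFFu, base, 0);         // broadcast to all lanes

    if (pred) d_out[base + rank] = d_in[tid];
}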
With this answer, I'm only trying to provide more details on Davide Spataro's approach.
As you mentioned, stream compaction consists of removing undesired elements from a collection depending on a predicate. For example, considering an array of integers and the predicate p(x) = x > 5, the array A = {6,3,2,11,4,5,3,7,5,77,94,0} is compacted to B = {6,11,7,77,94}.
The general idea of stream compaction approaches is that each computational thread is assigned a different element of the array to be compacted. Each such thread must decide whether to write its element to the output array, depending on whether it satisfies the relevant predicate. The main problem of stream compaction is thus letting each thread know the position in the output array at which its element must be written.
The approach in [1,2] is an alternative to Thrust's copy_if mentioned above and consists of three steps:
Step #1. Let P be the number of launched threads and N, with N > P, the size of the vector to be compacted. The input vector is divided into sub-vectors of size S equal to the block size. The __syncthreads_count(pred) block intrinsic is exploited, which counts the number of threads in a block satisfying the predicate pred. As a result of the first step, each element of the array d_BlockCounts, which has size N/P, contains the number of elements meeting the predicate pred in the corresponding block.
Step #2. An exclusive scan operation is performed on the array d_BlockCounts. As a result of the second step, each thread knows how many elements in the previous blocks write an element. Accordingly, it knows the position at which to write its corresponding element, up to an offset related to its own block.
Step #3. Each thread computes the mentioned offset using warp intrinsic functions and eventually writes to the output array. It should be noted that the execution of step #3 is related to warp scheduling. As a consequence, the order of the elements in the output array does not necessarily reflect their order in the input array.
Of the three steps above, the second is performed by CUDA Thrust's exclusive_scan primitive and is computationally significantly less demanding than the other two.
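As a rough sketch of how the three steps fit together (this is not the exact code linked below: the kernel names, the example predicate p(x) = x > 5, and a one-thread-per-element launch, i.e. P = N, are simplifying assumptions made here):

#include <thrust/execution_policy.h>
#include <thrust/scan.h>

// Step #1: each block counts its elements satisfying the predicate
// using the __syncthreads_count block intrinsic.
__global__ void blockCounts(const int* d_in, int* d_BlockCounts, int N) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int pred = (tid < N) && (d_in[tid] > 5);  // example predicate p(x) = x > 5
    int count = __syncthreads_count(pred);    // passing threads in this block
    if (threadIdx.x == 0) d_BlockCounts[blockIdx.x] = count;
}

// Step #3: warp intrinsics give each thread its offset inside the block;
// the per-block offset comes from the scanned d_BlockCounts (Step #2).
__global__ void compactKernel(const int* d_in, int* d_out,
                              const int* d_BlockOffsets, int N) {
    int tid = blockIdx.x * blockDim.x + threadIdx.x;
    int pred = (tid < N) && (d_in[tid] > 5);

    unsigned ballot = __ballot_sync(0xFFFFFFFFu, pred);
    int lane      = threadIdx.x & 31;
    int posInWarp = __popc(ballot & ((1u << lane) - 1)); // rank among passing lanes
    int warpCount = __popc(ballot);                      // passing lanes in this warp

    // Each warp reserves a chunk of the block's output range; the order in
    // which warps reach the atomic depends on warp scheduling, which is why
    // the output order need not match the input order.
    __shared__ int blockCounter;
    if (threadIdx.x == 0) blockCounter = 0;
    __syncthreads();
    int warpBase = 0;
    if (lane == 0) warpBase = atomicAdd(&blockCounter, warpCount);
    warpBase = __shfl_sync(0xFFFFFFFFu, warpBase, 0);    // broadcast to the warp

    if (pred)
        d_out[d_BlockOffsets[blockIdx.x] + warpBase + posInWarp] = d_in[tid];
}

// Host side: Step #2 is a single Thrust exclusive_scan between the two kernels.
void streamCompact(const int* d_in, int* d_out, int* d_BlockCounts,
                   int N, int blockSize) {
    int numBlocks = (N + blockSize - 1) / blockSize;
    blockCounts<<<numBlocks, blockSize>>>(d_in, d_BlockCounts, N);
    thrust::exclusive_scan(thrust::device, d_BlockCounts,
                           d_BlockCounts + numBlocks, d_BlockCounts);  // in-place scan
    compactKernel<<<numBlocks, blockSize>>>(d_in, d_out, d_BlockCounts, N);
}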
For an array of 2097152 elements, the mentioned approach executed in 0.38ms on an NVIDIA GTX 960 card, in contrast to 1.0ms for CUDA Thrust's copy_if. The mentioned approach appears to be faster for two reasons:
1) It is specifically tailored to cards supporting warp intrinsic functions;
2) The approach does not guarantee the output ordering.
It should be noticed that we have also tested the approach against the code available at inkc.sourceforge.net. Although the latter code is arranged in a single kernel call (it does not employ any CUDA Thrust primitive), it does not have better performance as compared to the three-kernel version.
The full code is available here and is slightly optimized as compared to Davide Spataro's original routine.
[1] M. Billeter, O. Olsson, U. Assarsson, “Efficient stream compaction on wide SIMD many-core architectures,” Proc. of the Conf. on High Performance Graphics, New Orleans, LA, Aug. 1-3, 2009, pp. 159-166.
[2] D. M. Hughes, I. S. Lim, M. W. Jones, A. Knoll, B. Spencer, “InK-Compact: in-kernel stream compaction and its application to multi-kernel data visualization on General-Purpose GPUs,” Computer Graphics Forum, vol. 32, no. 6, pp. 178-188, 2013.