boost-compute

Performance: boost.compute v.s. opencl c++ wrapper

ⅰ亾dé卋堺 submitted on 2019-12-21 04:13:41
Question: The following code adds two vectors using Boost.Compute and the OpenCL C++ wrapper respectively. The result shows Boost.Compute is almost 20 times slower than the OpenCL C++ wrapper. I wonder whether I misuse Boost.Compute or it is indeed slow. Platform: Win7, VS2013, Boost 1.55, Boost.Compute 0.2, ATI Radeon HD 4600. Code using the C++ wrapper: #define __CL_ENABLE_EXCEPTIONS #include <CL/cl.hpp> #include <boost/timer/timer.hpp> #include <boost/smart_ptr/scoped_array.hpp> #include <fstream> #include
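For reference, the operation both versions benchmark is a plain element-wise vector add. A minimal host-side C++ sketch of that computation (an assumption-free CPU reference, handy for validating the GPU results; it is not either of the questioner's OpenCL code paths) could look like this:

```cpp
#include <algorithm>
#include <functional>
#include <vector>

// CPU reference for the benchmarked kernel: c[i] = a[i] + b[i].
// Both the Boost.Compute and the cl.hpp versions compute exactly this,
// just on the OpenCL device instead of the host.
std::vector<float> add_vectors(const std::vector<float>& a,
                               const std::vector<float>& b) {
    std::vector<float> c(a.size());
    std::transform(a.begin(), a.end(), b.begin(), c.begin(),
                   std::plus<float>());
    return c;
}
```

Comparing the device output against such a reference separates correctness questions from the timing question the post is actually about.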

boost::compute stream compaction

泪湿孤枕 submitted on 2019-12-07 18:54:52
Question: How do you do stream compaction with boost::compute? E.g., suppose you want to perform a heavy operation only on certain elements of an array. First you generate a mask array with ones at the elements for which you want to perform the operation: mask = [0 0 0 1 1 0 1 0 1] Then perform an exclusive scan (prefix sum) of the mask array to get: scan = [0 0 0 0 1 2 2 3 3] Then compact with: if (mask[i]) inds[scan[i]] = i; to get the final array of compacted indices (inds): [3 4 6 8] Size of the final
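The three steps the question describes (mask, exclusive scan, scatter of selected indices) can be sketched on the host in plain C++. This is only an illustration of the algorithm with the question's own example data, not a Boost.Compute implementation; on the device the scan step would map to boost::compute::exclusive_scan and the scatter to a small kernel or scatter_if:

```cpp
#include <vector>

// Compact the indices of elements whose mask entry is 1.
std::vector<int> compact_indices(const std::vector<int>& mask) {
    // Exclusive scan (prefix sum) of the mask: scan[i] holds the
    // running total of ones strictly before position i.
    std::vector<int> scan(mask.size());
    int sum = 0;
    for (std::size_t i = 0; i < mask.size(); ++i) {
        scan[i] = sum;
        sum += mask[i];
    }
    // sum is now the compacted size; scatter each selected index i
    // to its output slot scan[i], exactly as in the question.
    std::vector<int> inds(sum);
    for (std::size_t i = 0; i < mask.size(); ++i)
        if (mask[i]) inds[scan[i]] = static_cast<int>(i);
    return inds;
}
```

With the question's mask [0 0 0 1 1 0 1 0 1], the scan comes out as [0 0 0 0 1 2 2 3 3] and the compacted indices as [3 4 6 8].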

Differences between VexCL, Thrust, and Boost.Compute

感情迁移 submitted on 2019-11-29 19:52:00
With just a cursory understanding of these libraries, they look very similar. I know that VexCL and Boost.Compute use OpenCL as a backend (although since the v1.0 release VexCL also supports CUDA as a backend) and Thrust uses CUDA. Aside from the different backends, what's the difference between them? Specifically, what problem space do they address, and why would I want to use one over the other? Also, the Thrust FAQ states that "The primary barrier to OpenCL support is the lack of an OpenCL compiler and runtime with support for C++ templates". If this is the case, how is it possible
