CUDA: Wrapping device memory allocation in C++

Asked by 长情又很酷 · 2020-12-28 17:51 · 4 answers · 985 views

I'm starting to use CUDA at the moment and have to admit that I'm a bit disappointed with the C API. I understand the reasons for choosing C but had the language been base…

4 Answers
  •  感情败类 · 2020-12-28 18:24

    Does anybody have information about future CUDA developments that go in this general direction (let's face it: C interfaces in C++ s*ck)?

    Yes, I've done something like that:

    https://github.com/eyalroz/cuda-api-wrappers/

    nVIDIA's Runtime API for CUDA is intended for use in both C and C++ code. As such, it uses a C-style API, the lowest common denominator (with a few notable exceptions of templated function overloads).

    This library of wrappers around the Runtime API is intended to let us embrace many of the features of C++ (including some C++11) when using the Runtime API - but without reducing expressivity or raising the level of abstraction (as in, e.g., the Thrust library). Using cuda-api-wrappers, you still have your devices, streams, events and so on - but they become more convenient to work with, in more C++-idiomatic ways.
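    As a rough sketch of the kind of C++-idiomatic wrapping being discussed (this uses only the plain Runtime API, not cuda-api-wrappers itself; the class name device_buffer is just an illustration):

        #include <cuda_runtime.h>
        #include <cstddef>
        #include <stdexcept>

        // Illustrative RAII wrapper around cudaMalloc/cudaFree -- a sketch,
        // not the cuda-api-wrappers API.
        template <typename T>
        class device_buffer {
        public:
            explicit device_buffer(std::size_t count) : count_(count) {
                if (cudaMalloc(reinterpret_cast<void**>(&ptr_), count * sizeof(T)) != cudaSuccess)
                    throw std::runtime_error("cudaMalloc failed");
            }
            ~device_buffer() { cudaFree(ptr_); }           // freed automatically on scope exit

            device_buffer(const device_buffer&) = delete;  // prevent accidental double-free
            device_buffer& operator=(const device_buffer&) = delete;

            T* data() { return ptr_; }
            std::size_t size() const { return count_; }

            // Convenience copies between host and device
            void copy_from_host(const T* host) {
                cudaMemcpy(ptr_, host, count_ * sizeof(T), cudaMemcpyHostToDevice);
            }
            void copy_to_host(T* host) const {
                cudaMemcpy(host, ptr_, count_ * sizeof(T), cudaMemcpyDeviceToHost);
            }

        private:
            T* ptr_ = nullptr;
            std::size_t count_;
        };

    With a wrapper like this, device memory is released when the object goes out of scope, so the manual pairing of cudaMalloc and cudaFree disappears; cuda-api-wrappers applies the same idea to devices, streams and events as well.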
