TensorFlow Custom Allocator and Accessing Data from Tensor

Submitted by 江枫思渺然 on 2020-01-23 12:10:14

Question


In TensorFlow, you can create custom allocators for various reasons (I am doing it for new hardware). Due to the structure of the device, I need to use a struct of a few elements as my data pointer which the allocator returns as a void*.

In the kernels that I am writing, I am given access to Tensors, but I need to get back the pointer struct that I wrote. Examining the classes, it seems that I could retrieve this struct via tensor_t.buf_->data():

Tensor::buf_

TensorBuffer::data()

The problem is that I can't find existing code that does this, and I am worried that it is unsafe (highly likely!) or that there is a more standard way to do it.

Can someone confirm if this is a good/bad idea? And provide an alternative if such exists?


Answer 1:


You may also be able to use Tensor::tensor_data().data() to get access to the raw pointer, without using the weird indirection through DMAHelper.




Answer 2:


Four days later ...

void* GetBase(const Tensor* src) {
  return const_cast<void*>(DMAHelper::base(src));
}

from GPUUtils

DMAHelper is declared a friend class of Tensor, so its static base() method is able to call the private Tensor::base() and get at the data pointer.

The implementation shows that this is just a wrapper around what I wanted to do, behind one more layer of abstraction. I am guessing it is the safer approach to getting the pointer and should be used instead.



Source: https://stackoverflow.com/questions/39797095/tensorflow-custom-allocator-and-accessing-data-from-tensor
