How to write a scikit-learn estimator in PyTorch

Submitted by 我是研究僧i on 2021-02-11 15:38:59

Question


I have developed an estimator in scikit-learn, but because of performance issues (both speed and memory usage) I am thinking of making the estimator run on a GPU.

One way I can think of to do this is to rewrite the estimator in PyTorch (so it can use GPU processing) and then use Google Colab to leverage their cloud GPUs and memory capacity.

What would be the best way to write, in PyTorch, an estimator that remains scikit-learn compatible?
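To make the question more concrete, here is a rough, untested sketch of the kind of wrapper I have in mind: a class that subclasses sklearn.base.BaseEstimator and RegressorMixin and trains a small PyTorch model inside fit. The class name, hyperparameters, and the plain linear model are placeholders for my real estimator, not working code I already have:

```python
import numpy as np
import torch
from sklearn.base import BaseEstimator, RegressorMixin


class TorchLinearRegressor(BaseEstimator, RegressorMixin):
    """Placeholder estimator: linear regression fitted by SGD on CPU or GPU."""

    def __init__(self, lr=0.01, n_epochs=100, device=None):
        # scikit-learn convention: __init__ only stores hyperparameters verbatim
        self.lr = lr
        self.n_epochs = n_epochs
        self.device = device

    def fit(self, X, y):
        # Pick the GPU automatically when one is available (e.g. on Colab)
        device = self.device or ("cuda" if torch.cuda.is_available() else "cpu")
        X_t = torch.as_tensor(np.asarray(X), dtype=torch.float32, device=device)
        y_t = torch.as_tensor(np.asarray(y), dtype=torch.float32,
                              device=device).reshape(-1, 1)

        self.model_ = torch.nn.Linear(X_t.shape[1], 1).to(device)
        optimizer = torch.optim.SGD(self.model_.parameters(), lr=self.lr)
        loss_fn = torch.nn.MSELoss()

        for _ in range(self.n_epochs):
            optimizer.zero_grad()
            loss = loss_fn(self.model_(X_t), y_t)
            loss.backward()
            optimizer.step()

        self.device_ = device
        return self  # fit must return self for scikit-learn compatibility

    def predict(self, X):
        X_t = torch.as_tensor(np.asarray(X), dtype=torch.float32,
                              device=self.device_)
        with torch.no_grad():
            # Move predictions back to the CPU as a NumPy array, as sklearn expects
            return self.model_(X_t).cpu().numpy().ravel()
```

If I follow the fit/predict contract like this, my understanding is that the estimator should drop into cross_val_score, Pipeline, and GridSearchCV like any other. I have also seen that the skorch library wraps PyTorch modules behind exactly this scikit-learn interface, in case writing the wrapper by hand is the wrong approach.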

Any pointers or hints in the right direction would be much appreciated. Many thanks in advance.

Source: https://stackoverflow.com/questions/61556043/how-to-write-a-scikit-learn-estimator-in-pytorch
