Question
I have developed an estimator in scikit-learn, but because of performance issues (both speed and memory usage) I am thinking of making the estimator run on a GPU.
One way I can think of to do this is to write the estimator in PyTorch (so I can use GPU processing) and then use Google Colab to take advantage of its cloud GPUs and memory capacity.
What would be the best way to write an estimator in PyTorch while keeping it scikit-learn compatible?
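For example, something along the lines of the rough sketch below is what I have in mind: a minimal regressor that follows scikit-learn's `fit`/`predict` convention but runs its training loop in PyTorch on the GPU when one is available. The class name, the linear model, and the hyperparameters are only placeholders, not my actual estimator.

```python
import torch
from sklearn.base import BaseEstimator, RegressorMixin
from sklearn.utils.validation import check_X_y, check_array, check_is_fitted


class TorchLinearRegressor(BaseEstimator, RegressorMixin):
    """Placeholder sklearn-style estimator with a PyTorch training loop."""

    def __init__(self, lr=0.01, n_epochs=200, device=None):
        # scikit-learn convention: __init__ only stores hyperparameters.
        self.lr = lr
        self.n_epochs = n_epochs
        self.device = device

    def fit(self, X, y):
        X, y = check_X_y(X, y)
        device = torch.device(
            self.device or ("cuda" if torch.cuda.is_available() else "cpu")
        )
        X_t = torch.as_tensor(X, dtype=torch.float32, device=device)
        y_t = torch.as_tensor(y, dtype=torch.float32, device=device).view(-1, 1)

        model = torch.nn.Linear(X_t.shape[1], 1).to(device)
        optimizer = torch.optim.SGD(model.parameters(), lr=self.lr)
        loss_fn = torch.nn.MSELoss()

        for _ in range(self.n_epochs):
            optimizer.zero_grad()
            loss = loss_fn(model(X_t), y_t)
            loss.backward()
            optimizer.step()

        # Attributes learned from data end with an underscore (sklearn convention).
        self.model_ = model
        self.device_ = device
        return self

    def predict(self, X):
        check_is_fitted(self, "model_")
        X = check_array(X)
        X_t = torch.as_tensor(X, dtype=torch.float32, device=self.device_)
        with torch.no_grad():
            return self.model_(X_t).cpu().numpy().ravel()
```

Since it subclasses `BaseEstimator` and keeps all hyperparameters in `__init__`, a wrapper like this should still work with `Pipeline`, `GridSearchCV`, `cross_val_score`, and so on. Is this the right general approach, or is there a better-established pattern?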
Any pointers or hints in the right direction would be much appreciated. Many thanks in advance.
Source: https://stackoverflow.com/questions/61556043/how-to-write-a-scikit-learn-estimator-in-pytorch