Question
I'm using https://github.com/tensorflow/models/blob/master/research/object_detection for object detection, and I'm finding that running the vanilla Faster R-CNN model on my computer for inference is far too slow (~15 s to process one image).
I don't have much experience with building real-world applications using GPUs, and am not sure whether I should go with a cloud instance or buy a lower-end GPU (in the $200-$500 range). I anticipate a very small amount of traffic to my application, so even the smaller instances will be quite pricey for what I'm trying to accomplish.
Question: What is the best way for me to determine whether a GPU would be fast enough without actually going out and buying one first? All of the cloud GPUs (Amazon, Google) use hardware that is way out of my budget, so running my code on those instances wouldn't give me a good comparison.
Answer 1:
With the rapid rise of cloud computing and services, I don't see many benefits to buying your own GPU and setting up your own servers, other than restrictions on sending data to the cloud. Not only is it time consuming, it also doesn't scale well. Check out services like AWS SageMaker (they offer free trials), where you can upload your Faster R-CNN code and then hit the service through an endpoint. Services like SageMaker are relatively cheap (~$4/hour for a very fast Tesla V100 GPU) if you want to run some experimentation or have a relatively small load.
The best way to determine which GPU is best suited for you is to boot up an EC2 instance with the GPU of your choice, upload the Faster R-CNN repository along with a small test harness that measures the time to process your images. EC2 instances come with a variety of GPUs you can test your code on. From there, you should be able to see which GPU is good enough for your use case.
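Such a test harness can be a few lines of plain Python. A minimal sketch (the `fake_infer` stand-in is hypothetical; in practice you would pass the actual detection call, e.g. your TF session run):

```python
import statistics
import time

def benchmark(infer_fn, image, warmup=3, runs=10):
    """Return the median per-image latency in seconds for an inference callable.

    Warm-up runs are excluded because the first calls typically include
    GPU initialization and graph/kernel compilation overhead.
    """
    for _ in range(warmup):
        infer_fn(image)
    times = []
    for _ in range(runs):
        start = time.perf_counter()
        infer_fn(image)
        times.append(time.perf_counter() - start)
    return statistics.median(times)

# Hypothetical stand-in for a real detector call; replace with your
# Faster R-CNN inference function when running on the instance.
def fake_infer(image):
    time.sleep(0.01)  # pretend inference takes ~10 ms
    return image

latency = benchmark(fake_infer, None)
print(f"median latency: {latency * 1000:.1f} ms/image")
```

Running this against the same test images on each candidate instance type gives you a directly comparable per-image number for each GPU.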
But if you are looking for a $300-$500 GPU, then you would probably have to switch to a lighter model like SSD or YOLO, since Faster R-CNN is a relatively big model. In a previous project where I also had to use Faster R-CNN, I found that even a Tesla K80 (costs ~$5,000) was still not fast enough for my use case, so I ended up upgrading to a Tesla V100 GPU (costs ~$10,000), which finally was enough for me.
All of this was done on EC2, of course.
Source: https://stackoverflow.com/questions/59547549/how-do-i-know-which-gpu-would-be-sufficient-for-a-problem