Google Colab is very slow compared to my PC


As @Feng has already noted, reading files from Drive is very slow. This tutorial suggests using a memory-mapped file format like HDF5 or LMDB to overcome this issue. This way the I/O operations are much faster (for a complete explanation of the speed gain from the HDF5 format, see this).
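A minimal sketch of that idea using h5py (the file path, dataset name, and array shape below are placeholders, not taken from the tutorial):

    import h5py
    import numpy as np

    # One-time conversion: pack the dataset into a single HDF5 file on the
    # fast local disk instead of many small files on Drive.
    data = np.random.rand(1000, 64, 64, 3).astype('float32')  # stand-in data
    with h5py.File('/content/dataset.h5', 'w') as f:
        f.create_dataset('images', data=data)

    # Reads slice the file directly, so fetching a batch touches the disk
    # once rather than opening thousands of individual files.
    with h5py.File('/content/dataset.h5', 'r') as f:
        batch = f['images'][0:32]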

It's very slow to read files from Google Drive.

For example, I have one big file (39 GB).

It took more than 10 minutes to run '!cp drive/big.file /content/'.

After sharing my file and getting the URL from Google Drive, it took 5 minutes to run '!wget -c -O big.file http://share.url.from.drive'. The download speed can reach up to 130 MB/s.
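Put together, the two approaches look like this in a Colab cell (the share URL is the same placeholder as above; use the link generated by your file's sharing settings):

    # Slow: copy through the mounted Drive filesystem (over 10 minutes for 39 GB)
    !cp drive/big.file /content/

    # Faster: download the same file via its shared link (about 5 minutes)
    !wget -c -O big.file http://share.url.from.drive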

I have the same question as to why the GPU on Colab seems to take at least as long as my local PC, so I can't really be of help there. That said, if you are trying to use your data locally, I have found the following process to be significantly faster than just using the upload function provided in Colab.

1.) Mount Google Drive:

# Run this cell to mount your Google Drive.
from google.colab import drive
drive.mount('/content/drive')

2.) Create a folder outside of the Google Drive folder in which you want your data to be stored.

3.) Use the following command to link the contents of your desired folder in Google Drive to the folder you created (note that ln -s creates a symbolic link rather than a physical copy):

  !ln -s "/content/drive/My Drive/path_to_folder_desired" "/path/to/the_folder/you created"

(This is referenced from another Stack Overflow answer that I used to find a solution to a similar issue.)

4.) Now you have your data available to you at the path "/path/to/the_folder/you created", as shown in the sketch below.
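To sanity-check the link, you can list it and load a file through the new path (the .npy filename here is hypothetical; substitute a file that actually exists in your Drive folder):

    !ls "/path/to/the_folder/you created"

    import numpy as np
    # 'sample.npy' is a hypothetical file inside the linked Drive folder
    arr = np.load('/path/to/the_folder/you created/sample.npy')
    print(arr.shape)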

You can load your data as a NumPy array (.npy format) and use the flow method instead of flow_from_directory. Colab provides 25 GB of RAM, so even for big datasets you can load your entire dataset into memory. The speed-up was found to be around 2.5x with the same data-generation steps, even faster than reading the data from the Colab local disk (i.e. '/content') or from Google Drive.
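A minimal sketch of this approach, assuming the images and labels were saved beforehand as .npy files (the file names are placeholders, and the model itself is omitted):

    import numpy as np
    from tensorflow.keras.preprocessing.image import ImageDataGenerator

    # Load the entire dataset into RAM once (placeholder file names).
    x = np.load('/content/images.npy')   # e.g. shape (N, H, W, 3)
    y = np.load('/content/labels.npy')

    datagen = ImageDataGenerator(rescale=1. / 255, horizontal_flip=True)

    # flow() iterates over the in-memory arrays, so no per-batch disk
    # reads happen, unlike flow_from_directory().
    train_gen = datagen.flow(x, y, batch_size=32)
    # model.fit(train_gen, epochs=10)  # compiled model assumed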

Since Colab provides only a single-core CPU (2 threads per core), there seems to be a bottleneck in CPU-GPU data transfer (say on a K80 or T4 GPU), especially if you use a data generator for heavy preprocessing or data augmentation. You can also try setting different values for parameters like 'workers', 'use_multiprocessing', and 'max_queue_size' in the fit_generator method, as in the example below.
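For example (the values below are illustrative starting points, not tuned recommendations; 'train_gen' is the generator from the previous sketch and 'model' is assumed to be compiled already):

    model.fit_generator(
        train_gen,
        steps_per_epoch=len(x) // 32,
        epochs=10,
        workers=2,                 # parallel generator workers
        use_multiprocessing=True,  # run workers as separate processes
        max_queue_size=16,         # batches prefetched ahead of the GPU
    )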
