How to finish a broken data upload to the production Google App Engine server?


Question


I was uploading data to App Engine (not the dev server) through a loader class and the remote API, and I hit the quota in the middle of a CSV file. Based on the logs and the progress SQLite db, how can I select the remaining portion of the data to be uploaded?

Going through tens of records to determine which were and which were not transferred is not an appealing task, so I am looking for some way to limit the number of records I need to check.

Here's the relevant (IMO) portion of the log. How do I interpret the work item numbers?

[DEBUG    2010-03-30 03:22:51,757 bulkloader.py] [Thread-2] [1041-1050] Transferred 10 entities in 3.9 seconds
[DEBUG    2010-03-30 03:22:51,757 adaptive_thread_pool.py] [Thread-2] Got work item [1071-1080]
<cut>
[DEBUG    2010-03-30 03:23:09,194 bulkloader.py] [Thread-1] [1141-1150] Transferred 10 entities in 4.6 seconds
[DEBUG    2010-03-30 03:23:09,194 adaptive_thread_pool.py] [Thread-1] Got work item [1161-1170]
<cut>
[DEBUG    2010-03-30 03:23:09,226 bulkloader.py] [Thread-3] [1151-1160] Transferred 10 entities in 4.2 seconds
[DEBUG    2010-03-30 03:23:09,226 adaptive_thread_pool.py] [Thread-3] Got work item [1171-1180]
[ERROR    2010-03-30 03:23:10,174 bulkloader.py] Retrying on non-fatal HTTP error: 503 Service Unavailable
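
The bracketed ranges like [1041-1050] appear to be batches of ten input records each, given the matching "Transferred 10 entities" lines. Since the progress file is just a SQLite database, I assume it can be opened directly to see which batches completed; a minimal sketch of what I have in mind (the file name is hypothetical, and the table name would need to be confirmed from the .tables output):

sqlite3 bulkloader-progress-20100330.032251.sql3
.tables
-- assuming the work items live in a table named "progress":
SELECT * FROM progress ORDER BY id DESC LIMIT 20;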

Answer 1:


You can resume a broken upload:

If the transfer is interrupted, you can resume the transfer from where it left off using the --db_filename=... argument. The value is the name of the progress file created by the tool, which is either a name you provided with the --db_filename argument when you started the transfer, or a default name that includes a timestamp. This assumes you have sqlite3 installed, and did not disable the progress file with --db_filename=skip.
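
In other words, rerun the same upload command with --db_filename pointing at the existing progress file, and the bulkloader will skip the batches already marked as transferred. A minimal sketch of the resume invocation (the config file, kind, CSV name, app URL, progress file name, and app directory below are hypothetical placeholders for the values from your original run):

appcfg.py upload_data \
  --config_file=loader_config.py \
  --kind=MyKind \
  --filename=data.csv \
  --url=http://myapp.appspot.com/remote_api \
  --db_filename=bulkloader-progress-20100330.032251.sql3 \
  <app-directory>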



Source: https://stackoverflow.com/questions/2547248/how-to-finish-a-broken-data-upload-to-the-production-google-app-engine-server
