Only limited number of pages can be retrieved

Submitted by 萝らか妹 on 2019-12-24 00:25:14

Question


I wonder why I can't retrieve more pages of data after page 165. Here's the output I get:

page number is: 165
4
image/gif
page number is: 165
13
page number is: 165
3
page number is: 165
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning
/usr/local/lib/python2.7/dist-packages/requests/packages/urllib3/util/ssl_.py:90: InsecurePlatformWarning: A true SSLContext object is not available. This prevents urllib3 from configuring SSL appropriately and may cause certain SSL connections to fail. For more information, see https://urllib3.readthedocs.org/en/latest/security.html#insecureplatformwarning.
  InsecurePlatformWarning

Here's the code:

import socket
import time
import urllib2

from imgurpython import ImgurClient
from imgurpython.helpers.error import ImgurClientError

imgur_client = ImgurClient(client_id, client_secret)  # credentials elided
image_type = ['jpeg', 'png', 'gif']                   # accepted MIME subtypes
item_count = 0

p = 0
while True:
    p += 1
    time.sleep(2)  # throttle requests between pages
    try:
        search = imgur_client.gallery_search('cat', window='all', sort='time', page=p)
        for item in search:
            item_count += 1
            print(item.comment_count)
            if not item.is_album:
                print(item.type)
            print('{0}: {1}'.format("page number is", p))
            if item.comment_count > 10 and not item.is_album:
                if item.type[6:] in image_type:  # strip the 'image/' MIME prefix
                    count = 0
                    try:
                        image_file = urllib2.urlopen(item.link, timeout=5)
                        image_file_name = 'images/' + item.id + '.' + item.type[6:]
                        output_image = open(image_file_name, 'wb')
                        output_image.write(image_file.read())
                        for post in imgur_client.gallery_item_comments(item.id, sort='best'):
                            if count <= 10:
                                count += 1
                        output_image.close()
                    except (urllib2.URLError, socket.timeout, socket.error) as e:
                        print(e)
                        continue
    except ImgurClientError as e:
        print(e)
        continue
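Note that, whatever the server-side page limit turns out to be, the loop above has no exit condition: `while True` keeps incrementing `p` forever, even once the API stops returning results. One way to make it terminate is to break on the first empty page. A minimal sketch, where `fetch_page` is a stand-in for `imgur_client.gallery_search('cat', page=p)` so the pattern can be shown without network access:

```python
def iter_gallery_pages(fetch_page, start=1):
    """Yield (page_number, items) from fetch_page(page) until a page is empty.

    fetch_page is any callable taking a 1-based page number and returning a
    list of items; an empty list signals the end of the results.
    """
    p = start
    while True:
        items = fetch_page(p)
        if not items:  # the API returned nothing for this page: stop paging
            break
        yield p, items
        p += 1

# Fake fetcher standing in for the real API call: pretend the
# search runs out of results after page 3.
def fake_fetch(page):
    return ['item-a', 'item-b'] if page <= 3 else []

pages = list(iter_gallery_pages(fake_fetch))
print(len(pages))  # 3
```

With the real client you would pass `lambda p: imgur_client.gallery_search('cat', window='all', sort='time', page=p)` as `fetch_page`.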

Also, is there a better method than calling gallery_search() page by page? How can I browse the most popular items of all time?
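For the second part of the question: besides gallery_search(), imgurpython's client also exposes a gallery() method with section, sort, window, and page parameters, and section='top' combined with window='all' should correspond to the most popular items of all time. A sketch of that call, shown here against a stub client so it can be exercised without credentials (the real call would go through ImgurClient(client_id, client_secret) instead; treat the exact parameter values as an assumption to verify against the imgurpython docs):

```python
def most_popular_all_time(client, page=0):
    """Fetch one page of the all-time top gallery.

    `client` is expected to expose gallery() with imgurpython's signature:
    gallery(section='hot', sort='viral', page=0, window='day', show_viral=True).
    """
    return client.gallery(section='top', sort='top', window='all', page=page)

# Stub client that just records the keyword arguments it was called with,
# standing in for a real, authenticated ImgurClient.
class StubClient(object):
    def gallery(self, **kwargs):
        self.last_call = kwargs
        return []

client = StubClient()
most_popular_all_time(client, page=2)
print(client.last_call)
```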

来源:https://stackoverflow.com/questions/40116518/only-limited-number-of-pages-can-be-retrieved
