How to get all image urls with urllib.request.urlopen from multiple urls

Submitted by 对着背影说爱祢 on 2020-07-23 06:06:03

Question


from bs4 import BeautifulSoup
import urllib.request

urls = [
"https://archillect.com/1",
"https://archillect.com/2",
"https://archillect.com/3",
]

soup = BeautifulSoup(urllib.request.urlopen(urls))

for u in urls:
    for img in soup.find_all("img", src=True):
        print(img["src"])

AttributeError: 'list' object has no attribute 'timeout'


Answer 1:


@krishna has given you the answer. I'll give you another solution, for reference only.

from simplified_scrapy import Spider, SimplifiedDoc, SimplifiedMain, utils

class ImageSpider(Spider):
  name = 'archillect'
  start_urls = ["https://archillect.com/1", "https://archillect.com/2", "https://archillect.com/3"]

  def afterResponse(self, response, url, error=None, extra=None):
    try:
      # Derive a file name from the last path segment, dropping any query string
      end = url.find('?') if url.find('?') > 0 else len(url)
      name = 'data' + url[url.rindex('/', 0, end):end]
      # If the response is an image, save it and stop further processing
      if utils.saveResponseAsFile(response, name, 'image'):
        return None
      else:
        return Spider.afterResponse(self, response, url, error)
    except Exception as err:
      print(err)

  def extract(self, url, html, models, modelNames):
    # Collect all image URLs found in the page
    doc = SimplifiedDoc(html)
    urls = doc.listImg(url=url.url)
    return {'Urls': urls}

SimplifiedMain.startThread(ImageSpider())  # Start the crawl

Here are more examples: https://github.com/yiyedata/simplified-scrapy-demo/tree/master/spider_examples




Answer 2:


You cannot pass a list of URLs to urllib.request.urlopen — it expects a single URL string (or a Request object), which is what causes the AttributeError. Fetch and parse each page inside the loop instead:

for u in urls:
    soup = BeautifulSoup(urllib.request.urlopen(u))
    for img in soup.find_all("img", src=True):
        print(img["src"])
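If you would rather avoid the BeautifulSoup dependency, the same extraction can be sketched with the standard library's html.parser. This is an alternative to the answers above, not what either answer uses; resolving relative src values against the page URL with urljoin is also an addition here:

```python
from html.parser import HTMLParser
from urllib.parse import urljoin


class ImgSrcParser(HTMLParser):
    """Collects the src of every <img> tag, resolved against a base URL."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.srcs = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            src = dict(attrs).get("src")
            if src:  # skip <img> tags without a src attribute
                self.srcs.append(urljoin(self.base_url, src))


def image_urls(html, base_url):
    """Return all image URLs found in an HTML document."""
    parser = ImgSrcParser(base_url)
    parser.feed(html)
    return parser.srcs
```

To use it with the original loop, you would fetch each page (e.g. `urllib.request.urlopen(u).read().decode()`) and pass the HTML plus the page URL to `image_urls`.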


Source: https://stackoverflow.com/questions/60474932/how-to-get-all-image-urls-with-urllib-request-urlopen-from-multiple-urls
