Download files using requests and BeautifulSoup


Question


I'm trying to download a bunch of PDF files from here using requests and beautifulsoup4. This is my code:

import requests
from bs4 import BeautifulSoup as bs

_ANO = '2013/'
_MES = '01/'
_MATERIAS = 'matematica/'
_CONTEXT = 'wp-content/uploads/' + _ANO + _MES
_URL = 'http://www.desconversa.com.br/' + _MATERIAS + _CONTEXT

r = requests.get(_URL)
soup = bs(r.text)

for i, link in enumerate(soup.findAll('a')):
    _FULLURL = _URL + link.get('href')

    for x in range(i):
        output = open('file[%d].pdf' % x, 'wb')
        output.write(_FULLURL.read())
        output.close()

I'm getting AttributeError: 'str' object has no attribute 'read'.

OK, I know that, but how can I download the file from the URL that was generated?
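
(For context, the error happens because `_FULLURL` is only a string, so it has no `read()` method; the PDF bytes only exist after that URL is actually requested. A minimal sketch of the idea, using a hypothetical example URL in place of the one built from `_URL + link.get('href')`:)

import requests

# Hypothetical example URL; in the question it is built as _URL + link.get('href')
pdf_url = 'http://www.desconversa.com.br/matematica/wp-content/uploads/2013/01/example.pdf'

resp = requests.get(pdf_url)            # fetch the PDF bytes from the URL
with open('example.pdf', 'wb') as out:  # write resp.content, not the URL string
    out.write(resp.content)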


Answer 1:


This will write all the files from the page with their original filenames into a pdfs/ directory.

import os

import requests
from bs4 import BeautifulSoup as bs

_ANO = '2013/'
_MES = '01/'
_MATERIAS = 'matematica/'
_CONTEXT = 'wp-content/uploads/' + _ANO + _MES
_URL = 'http://www.desconversa.com.br/' + _MATERIAS + _CONTEXT

# Collect the links to the PDF files from the listing page.
r = requests.get(_URL)
soup = bs(r.text, 'html.parser')

urls = []
names = []
for link in soup.find_all('a'):
    href = link.get('href')
    if href and href.endswith('.pdf'):
        urls.append(_URL + href)  # full download URL
        names.append(href)        # original filename

# Download each PDF into the pdfs/ directory.
os.makedirs('pdfs', exist_ok=True)
for name, url in zip(names, urls):
    print(url)
    res = requests.get(url)
    with open(os.path.join('pdfs', name), 'wb') as pdf:
        pdf.write(res.content)



Answer 2:


It might be easier with wget, because then you get the full power of wget (setting the user agent, following links, ignoring robots.txt, ...) if necessary:

import os

# names and urls come from the scraping step in the first answer
names_urls = zip(names, urls)

for name, url in names_urls:
    print('Downloading %s' % url)
    os.system('wget %s' % url)
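
(If you want to avoid passing the URL through the shell, the same idea can be expressed with subprocess; a minimal sketch, assuming wget is installed and `urls` is the list built in the first answer:)

import subprocess

for url in urls:
    print('Downloading %s' % url)
    # -P saves the file into the pdfs/ directory; check=True raises if wget exits with an error
    subprocess.run(['wget', '-P', 'pdfs', url], check=True)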


Source: https://stackoverflow.com/questions/19056031/download-files-using-requests-and-beautifulsoup
