urllib2

How to fix Python ImportError: No module named 'requests'

心已入冬 submitted on 2019-12-21 11:34:43
Preface: I have recently been learning Python. After installing the Python 3.5 environment, I downloaded a Python script from the web, and running it raised ImportError: No module named 'requests' (the requests module could not be found).

About requests: requests is an HTTP client library for Python, similar to urllib and urllib2. Why use requests rather than urllib2? The official documentation puts it this way: Python's standard library urllib2 provides most of the HTTP functionality you need, but its API is clumsy; even a simple task takes a pile of code.

Solution: because I chose to install pip when I installed Python, I will only share the approach I have tried myself. My Python installation directory is D:/Python. ① open cmd ② cd D:/Python ③ pip install requests, then wait for the installation to finish.

Source: https://www.cnblogs.com/jamespan23/p/5526311.html
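To illustrate the API difference the post refers to, here is a minimal sketch of fetching a page with requests after the installation above; the URL is only a placeholder:

import requests

# Fetch a page with requests; the URL is a placeholder for illustration.
response = requests.get('http://example.com/')
print(response.status_code)     # HTTP status, e.g. 200
print(response.text[:200])      # first 200 characters of the body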

urllib2 basic authentication oddities

馋奶兔 submitted on 2019-12-21 10:51:58
Question: I'm slamming my head against the wall with this one. I've been trying every example, reading every last bit I can find online about basic HTTP authorization with urllib2, but I cannot figure out what is causing my specific error. Adding to the frustration is that the code works for one page, and yet not for another. Logging into www.mysite.com/adm goes absolutely smoothly. It authenticates with no problem. Yet if I change the address to 'http://mysite.com/adm/items.php?n=201105&c=200' I receive
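The excerpt is cut off before the error itself. For context, a minimal sketch of the standard urllib2 basic-auth setup is shown below; the host, path, and credentials are placeholders, not values taken from the question:

import urllib2

# Placeholder site and credentials for illustration only.
top_level_url = 'http://mysite.com/adm/'
password_mgr = urllib2.HTTPPasswordMgrWithDefaultRealm()
password_mgr.add_password(None, top_level_url, 'user', 'password')

handler = urllib2.HTTPBasicAuthHandler(password_mgr)
opener = urllib2.build_opener(handler)

# Note: HTTPBasicAuthHandler only sends credentials after the server answers
# 401 with a WWW-Authenticate header, so a page that skips that challenge is a
# common source of puzzling failures like the one described above.
page = opener.open('http://mysite.com/adm/items.php?n=201105&c=200')
print(page.read())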

How do I gracefully interrupt urllib2 downloads?

怎甘沉沦 submitted on 2019-12-21 06:56:04
Question: I am using urllib2's build_opener() to create an OpenerDirector. I am using the OpenerDirector to fetch a slow page, so it has a large timeout. So far, so good. However, in another thread, I have been told to abort the download - let's say the user has chosen to exit the program in the GUI. Is there a way to signal that an urllib2 download should quit? Answer 1: There is no clean answer. There are several ugly ones. Initially, I was putting rejected ideas in the question. As it has become clear
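The answer excerpt above is cut off. One common workaround for this problem (not necessarily the one the answer settles on) is to read the response in chunks and check a cancellation flag between reads; a minimal sketch, with a placeholder flag set from the GUI thread:

import threading
import urllib2

cancel_event = threading.Event()   # set this from the GUI thread to abort

def fetch(url, chunk_size=8192, timeout=300):
    # The initial urlopen() can still block for up to `timeout` seconds.
    response = urllib2.urlopen(url, timeout=timeout)
    chunks = []
    while not cancel_event.is_set():
        chunk = response.read(chunk_size)
        if not chunk:               # end of body reached
            return ''.join(chunks)
        chunks.append(chunk)
    response.close()                # abandon the download
    return None                     # None signals that the fetch was aborted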

Python 2.6 urllib2 timeout issue

你说的曾经没有我的故事 submitted on 2019-12-21 05:12:04
Question: It seems I cannot get the urllib2 timeout to be taken into account. I have read - I believe - every post related to this topic, and it seems I'm not doing anything wrong. Am I correct? Many thanks for your kind help. Scenario: I need to check for Internet connectivity before continuing with the rest of a script. I then wrote a function (Net_Access), which is provided below. When I execute this code with my LAN or Wi-Fi interface connected, and check an existing hostname, all is fine as
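The question's own Net_Access code is cut off above, so the following is only a sketch of how such a connectivity check is typically written with an explicit timeout; the hostname is a placeholder:

import socket
import urllib2

def net_access(url='http://www.example.com', timeout=5):
    # Return True if the URL answers within `timeout` seconds.
    try:
        urllib2.urlopen(url, timeout=timeout)
        return True
    except urllib2.URLError:
        return False
    except socket.timeout:
        # Some timeouts during the read are raised as socket.timeout
        # rather than being wrapped in URLError.
        return False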

Why do I get urllib2.HTTPError with urllib2 and no errors with urllib?

六眼飞鱼酱① submitted on 2019-12-21 04:49:15
Question: I have the following simple code: import urllib2 import sys sys.path.append('../BeautifulSoup/BeautifulSoup-3.1.0.1') from BeautifulSoup import * page='http://en.wikipedia.org/wiki/Main_Page' c=urllib2.urlopen(page) This code generates the following error messages: c=urllib2.urlopen(page) File "/usr/lib64/python2.4/urllib2.py", line 130, in urlopen return _opener.open(url, data) File "/usr/lib64/python2.4/urllib2.py", line 364, in open response = meth(req, response) File "/usr/lib64/python2.4
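The traceback excerpt is truncated, but a frequent cause of an HTTPError when fetching Wikipedia pages with urllib2 is a 403 response triggered by the default Python-urllib User-Agent. A minimal sketch of the usual workaround, assuming that is indeed the cause here:

import urllib2

page = 'http://en.wikipedia.org/wiki/Main_Page'
# Send a browser-like User-Agent; sites that reject the default
# "Python-urllib/x.y" agent often accept the request with this header set.
req = urllib2.Request(page, headers={'User-Agent': 'Mozilla/5.0'})
html = urllib2.urlopen(req).read()
print(len(html))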

gevent / requests hangs while making lots of head requests

孤人 submitted on 2019-12-21 02:51:30
Question: I need to make 100k head requests, and I'm using gevent on top of requests. My code runs for a while, but then eventually hangs. I'm not sure why it's hanging, or whether it's hanging inside requests or gevent. I'm using the timeout argument inside both requests and gevent. Please take a look at my code snippet below, and let me know what I should change. import gevent from gevent import monkey, pool monkey.patch_all() import requests def get_head(url, timeout=3): try: return requests.head
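The snippet is cut off above. A self-contained sketch of the same pattern with a bounded pool and a timeout at both levels (the URL list is a placeholder) might look like this; it illustrates the approach rather than reproducing the poster's actual code:

import gevent
from gevent import monkey, pool
monkey.patch_all()

import requests

def get_head(url, timeout=3):
    try:
        return requests.head(url, timeout=timeout)
    except requests.RequestException:
        return None

urls = ['http://example.com/%d' % i for i in range(100)]  # placeholder URLs
p = pool.Pool(50)                    # bound the number of concurrent greenlets
jobs = [p.spawn(get_head, u) for u in urls]
gevent.joinall(jobs, timeout=60)     # overall cap so the script cannot hang forever
results = [j.value for j in jobs if j.value is not None]
print(len(results))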

Batch downloading text and images from URL with Python / urllib / beautifulsoup?

杀马特。学长 韩版系。学妹 submitted on 2019-12-21 02:39:07
Question: I have been browsing through several posts here, but I just cannot get my head around batch-downloading images and text from a given URL with Python. import urllib,urllib2 import urlparse from BeautifulSoup import BeautifulSoup import os, sys def getAllImages(url): query = urllib2.Request(url) user_agent = "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; .NET CLR 1.1.4322; .NET CLR 1.0.3705)" query.add_header("User-Agent", user_agent) page = BeautifulSoup(urllib2.urlopen(query)) for div in
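The getAllImages function is cut off mid-loop. A hedged sketch of one way such a scraper is usually completed with the BeautifulSoup 3 API; the output directory and URL handling here are assumptions, not the poster's code:

import os
import urllib
import urllib2
import urlparse
from BeautifulSoup import BeautifulSoup

def get_all_images(url, out_dir='images'):
    req = urllib2.Request(url, headers={'User-Agent': 'Mozilla/5.0'})
    soup = BeautifulSoup(urllib2.urlopen(req))
    if not os.path.isdir(out_dir):
        os.makedirs(out_dir)
    for img in soup.findAll('img'):             # BeautifulSoup 3 uses findAll
        src = img.get('src')
        if not src:
            continue
        img_url = urlparse.urljoin(url, src)    # resolve relative paths
        filename = os.path.join(out_dir, os.path.basename(img_url))
        urllib.urlretrieve(img_url, filename)   # save the image to disk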

Why am I getting an AttributeError when trying to print out

别来无恙 submitted on 2019-12-20 18:42:34
Question: I am learning about urllib2 by following this tutorial http://docs.python.org/howto/urllib2.html#urlerror Running the code below yields a different outcome from the tutorial: import urllib2 req = urllib2.Request('http://www.pretend-o-server.org') try: urllib2.urlopen(req) except urllib2.URLError, e: print e.reason The Python interpreter spits this back: Traceback (most recent call last): File "urlerror.py", line 8, in <module> print e.reason AttributeError: 'HTTPError' object has no attribute
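The traceback is cut off, but what it shows is an HTTPError (a subclass of URLError) being caught by the URLError handler and lacking a .reason attribute on that Python version. The usual fix, sketched below with the tutorial's placeholder host, is to handle HTTPError first and fall back to URLError:

import urllib2

req = urllib2.Request('http://www.pretend-o-server.org')
try:
    urllib2.urlopen(req)
except urllib2.HTTPError as e:
    # HTTPError means the server answered, but with an error status.
    print('The server returned status code %d' % e.code)
except urllib2.URLError as e:
    # URLError covers failures to reach the server at all.
    print('Failed to reach the server: %s' % e.reason)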

Basic usage of Python urllib

旧时模样 submitted on 2019-12-20 18:08:05
Python's urllib and urllib2 modules both handle operations for requesting URLs. Their most notable differences are: urllib2 can accept a Request object and use it to set the headers of a URL request, whereas urllib only accepts a URL string; and urllib provides the urlencode method, used to build GET query strings, which urllib2 lacks. Python 2.7.x ships both urllib and urllib2, and given these similarities and differences the two libraries are usually used together.

urlopen

urllib2.urlopen(url[, data[, timeout]])

urlopen is the most commonly used method in the urllib2 module; it sends a request to the given URL. The url argument can be a URL string or a Request object. The optional timeout argument sets a timeout in seconds; if it is not given, the global default timeout is used. urlopen performs the request with the default opener and is blocking I/O. If the request succeeds, the function returns the response.

When data is None, GET is used by default:

import urllib2
response = urllib2.urlopen('http://python.org/')
html = response.read()
print(html)

Before sending parameters with POST, the parameters must be encoded first:

import urllib
import
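The post is cut off right after the POST remark; a minimal sketch of the usual pattern, with a placeholder endpoint and form fields, would be:

import urllib
import urllib2

# Placeholder endpoint and form fields for illustration.
values = {'name': 'somebody', 'language': 'Python'}
data = urllib.urlencode(values)      # e.g. 'name=somebody&language=Python'

# Passing data switches urlopen from GET to POST.
response = urllib2.urlopen('http://example.com/login', data)
print(response.read())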