As the title suggests, I'm working on a site written in Python and it makes several calls to the urllib2 module to read websites. I then parse them with BeautifulSoup.
Edit: Please take a look at Wai's post for a better version of this code. Note that there is nothing wrong with this code and it will work properly, despite the comments below.
The speed of reading web pages is probably bounded by your Internet connection, not Python.
You could use threads to load them all at once.
import thread, time, urllib

websites = {}

def read_url(url):
    # urllib.urlopen (not urllib.open) fetches the page; store its contents keyed by URL
    websites[url] = urllib.urlopen(url).read()

# urls_to_load is the list of URLs you want to fetch
for url in urls_to_load:
    thread.start_new_thread(read_url, (url,))

# Comparing lengths avoids relying on dict key ordering
while len(websites) < len(urls_to_load):
    time.sleep(0.1)
# Now websites will contain the contents of all the web pages in urls_to_load
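If it helps, here is a rough sketch of how the same idea could be wired into the urllib2/BeautifulSoup flow from the question, using threading.Thread with join() instead of the polling loop. This is only a minimal sketch: it assumes Python 2, BeautifulSoup 3 (the import differs for bs4), and the URLs listed are placeholders for your own.

import threading
import urllib2
from BeautifulSoup import BeautifulSoup  # for BeautifulSoup 4 use: from bs4 import BeautifulSoup

# Placeholder URLs; replace with the pages you actually need to read
urls_to_load = [
    'http://example.com/a',
    'http://example.com/b',
]

websites = {}

def read_url(url):
    # Store the raw HTML so the main thread can parse it afterwards
    # (plain dict writes are kept simple here for the sketch)
    websites[url] = urllib2.urlopen(url).read()

threads = [threading.Thread(target=read_url, args=(url,)) for url in urls_to_load]
for t in threads:
    t.start()
for t in threads:
    # join() blocks until each download finishes, so no sleep() polling is needed
    t.join()

# All pages are downloaded; parse them in the main thread
for url, html in websites.items():
    soup = BeautifulSoup(html)
    print url, soup.title

Since the work is I/O-bound, the threads spend their time waiting on the network rather than competing for the interpreter, which is why this approach speeds things up even with the GIL.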