Extracting information from AJAX based sites using Python


Question


I am trying to retrieve query results from AJAX-based sites like www.snapbird.org using Python. Since the results don't show up in the page source, I am not sure how to proceed. I am a Python newbie, so it would be great to get a pointer in the right direction. I am also open to other approaches to the task if they are easier.


Answer 1:


This is going to be complex, but as a start, open Firebug and find the URL that is called when the AJAX request is made. You can call that URL directly from your Python program and parse the output.
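A minimal sketch of that flow in modern Python 3 (the answer mentions urllib2, which is Python 2). The endpoint and parameter names here are hypothetical stand-ins for whatever Firebug's Net panel reveals, and the network call is shown only in a comment so the parsing step can run on its own:

```python
import json
from urllib.parse import urlencode

# Hypothetical AJAX endpoint you would discover in Firebug's Net panel.
BASE = "https://api.example.com/search"

def build_url(query, page=1):
    """Reproduce the query string the page's JavaScript sends."""
    return BASE + "?" + urlencode({"q": query, "page": page})

# The actual fetch would be:
#   import urllib.request
#   body = urllib.request.urlopen(build_url("snapbird")).read()
# A canned JSON payload stands in here for the response body.
body = '{"results": [{"user": "joyvalencia", "text": "hello"}]}'
data = json.loads(body)
print(build_url("snapbird"))
print(data["results"][0]["text"])
```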




Answer 2:


You could use Selenium's Python client driver to parse the page source. I usually use this in conjunction with PyQuery to make web scraping easier.

Here's the basic tutorial for Selenium's Python driver. Be sure to follow the instructions for Selenium version 2 instead of version 1 (unless you're using version 1 for some reason).
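A rough sketch of the parsing half of that approach: with Selenium you would obtain the HTML via `driver.page_source` (and PyQuery would give you jQuery-style selectors). Here a static snippet and the standard library's `html.parser` stand in for both, so the extraction step is self-contained and runnable:

```python
from html.parser import HTMLParser

# With Selenium you would get this string from driver.page_source
# (e.g. after webdriver.Firefox().get(url)); a static snippet stands
# in here. The "tweet" class name is a hypothetical example.
html = '<div class="tweet">first</div><div class="tweet">second</div>'

class TweetParser(HTMLParser):
    """Collect the text of every <div class="tweet"> element."""
    def __init__(self):
        super().__init__()
        self.in_tweet = False
        self.tweets = []

    def handle_starttag(self, tag, attrs):
        if tag == "div" and ("class", "tweet") in attrs:
            self.in_tweet = True

    def handle_data(self, data):
        if self.in_tweet:
            self.tweets.append(data)
            self.in_tweet = False

p = TweetParser()
p.feed(html)
print(p.tweets)
```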




Answer 3:


You could also configure Chrome/Firefox to use an HTTP proxy and then log/extract the necessary content with the proxy. I've tinkered with Python proxies to save/log the requests/content based on content-type or URI globs.
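A toy sketch of the logging idea, using only the standard library: this handler records each requested URI and serves a canned body instead of forwarding upstream (a real proxy would fetch `self.path` from the origin server and relay the response, and would filter on the Content-Type header as described above):

```python
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

seen = []  # URIs the "proxy" has observed

class LoggingHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        seen.append(self.path)          # log the URI, as described above
        body = b"stub response"         # a real proxy would relay upstream data
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):       # silence default stderr logging
        pass

server = ThreadingHTTPServer(("127.0.0.1", 0), LoggingHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]
resp = urllib.request.urlopen(f"http://127.0.0.1:{port}/inbox?page=2").read()
server.shutdown()
print(seen, resp)
```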

For other projects I've written site-specific JavaScript bookmarklets which poll for new data and then POST it to my server (by dynamically creating both a form and an iframe, and setting myform.target=myiframe).

Other javascript scripts/bookmarklets simulate a user interacting with sites, so instead of polling every few seconds the javascript automates clicking buttons and form submissions, etc. These scripts are always very site-specific of course but they've been hugely useful for me, especially when iterating over all the paginated results for a given search.

Here is a stripped-down version of walking over a list of "paginated" results and preparing to send the data off to my server (which then parses it further with BeautifulSoup). In particular, this was designed for YouTube's Sent/Inbox messages.

var tables = [];
function process_and_repeat(){
    if(!(inbox && inbox.message_pane_ && inbox.message_pane_.innerHTML)){
        alert("We've got no data!");
    }
    if(inbox.message_pane_.innerHTML.indexOf('<table') === 0)
    {
        tables.push(inbox.message_pane_.innerHTML);
        inbox.next_page();
        setTimeout("process_and_repeat()",3000);
    }
    else{
        alert("Finished, [" + tables.length + " processed]");
        document.write('<form action=http://curl.sente.cc method=POST><textarea name=sent.html>'+escape(tables.join('\n'))+'</textarea><input type=submit></form>');
    }
}

process_and_repeat();  // now we wait and watch as all the paginated pages are viewed :)

This is a stripped-down example without any fancy iframes or other non-essentials, which just add complexity.

Adding to what Liam said, Selenium is a great tool, too, which has aided in my various scraping needs. I'd be more than happy to help you out with this if you'd like.




Answer 4:


One easy solution might be to use a browser-emulation library like Mechanize. With it you can browse a site, follow links, submit searches, and do nearly everything you can do in a browser with a user interface.

But for a very specific job, you may not even need such a library: you can use the urllib and urllib2 Python libraries to make the connection and read the response... Use Firebug to inspect the data structure of a search request and its response body, then use urllib to make a request with the relevant parameters...

For example, I made a search for joyvalencia and checked the request URL with Firebug:

http://api.twitter.com/1/statuses/user_timeline.json?screen_name=joyvalencia&count=100&page=2&include_rts=true&callback=twitterlib1321017083330

So calling this URL with urllib2.urlopen() is the same as making the query on Snapbird. The response body is:

twitterlib1321017083330([{"id_str":"131548107799396357","place":null,"geo":null,"in_reply_to_user_id_str":null,"coordinates":.......

When you call urlopen() and read the response, the string above is what you get... You can then use Python's json library to parse the data into a Pythonic data structure...
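Because of the callback parameter in the URL, the body comes back as JSONP: a JavaScript function call wrapping the JSON. One way to handle it (a sketch; the sample below is a shortened, completed version of the truncated body shown above, since the original is cut off) is to strip the wrapper before parsing:

```python
import json
import re

# Shortened stand-in for the JSONP body returned by the timeline URL;
# the real response is much longer and truncated in the text above.
body = 'twitterlib1321017083330([{"id_str":"131548107799396357","place":null}])'

# Strip the "callbackName( ... )" wrapper, then parse the JSON inside.
match = re.match(r'^[\w$]+\((.*)\)\s*;?\s*$', body, re.S)
tweets = json.loads(match.group(1))
print(tweets[0]["id_str"])
```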



Source: https://stackoverflow.com/questions/8084707/extracting-information-from-ajax-based-sites-using-python
