How to check if the value on a website has changed

Posted by 回眸只為那壹抹淺笑 on 2019-12-03 13:23:05

Question


Basically I'm trying to run some code (Python 3.2) if a value on a website changes, otherwise wait for a bit and check it later.

First I thought I could just save the value in a variable and compare it to the new value fetched the next time the script ran. But that quickly ran into problems, as the value was overwritten each time the script ran again and re-initialized the variable.

So then I tried just saving the html of the webpage as a file and then comparing it to the html that would be called on the next time the script ran. No luck there either as it kept coming up False even when there were no changes.

Next up was pickling the webpage and then trying to compare it with the html. Interestingly that didn't work either within the script. BUT, if I type file = pickle.load( open( 'D:\Download\htmlString.p', 'rb')) after the script has run and then file == html, it shows True when there hasn't been any changes.

I'm a bit confused as to why it won't work when the script runs but if I do the above it shows the correct answer.

Edit: Thanks for the responses so far guys. The question I have wasn't really about other ways to go about this (although it's always good to learn more ways to accomplish a task!) but rather why the code below doesn't work when it's run as a script, but if I reload the pickle object at the prompt after the script has run and then test it against the html, it will return True if there hasn't been any changes.

try: 
    file = pickle.load( open( 'D:\\Download\\htmlString.p', 'rb'))
    if pickle.load( open( 'D:\\Download\\htmlString.p', 'rb')) == htmlString:
        print("Values haven't changed!")
        sys.exit(0)
    else:
        pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "wb" ) )  
        print('Saving')
except: 
    pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "wb" ) )
    print('ERROR')

Answer 1:


Edit: I hadn't realized you were just looking for the problem with your script. Here's what I think is the problem, followed by my original answer which addresses another approach to the bigger problem you're trying to solve.

Your script is a great example of the dangers of a blanket except statement: it catches everything, including, in this case, the SystemExit exception raised by your sys.exit(0).

I'm assuming your try block is there to catch the case where D:\Download\htmlString.p doesn't exist yet. That error is an IOError, and you can catch it specifically with except IOError:
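To see why the early exit never happens, note that sys.exit() works by raising SystemExit, which a bare except happily swallows. A minimal, self-contained demonstration (not part of the original script):

```python
import sys

def try_to_exit():
    try:
        sys.exit(0)      # raises SystemExit
    except:              # bare except catches SystemExit too
        return 'caught'

print(try_to_exit())     # prints 'caught' instead of exiting
```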

Here is your script plus a bit of code before to make it go, fixed for your except issue:

import sys
import pickle
import urllib2

request = urllib2.Request('http://www.iana.org/domains/example/')
response = urllib2.urlopen(request) # Make the request
htmlString = response.read()

try: 
    file = pickle.load( open( 'D:\\Download\\htmlString.p', 'rb'))
    if file == htmlString:
        print("Values haven't changed!")
        sys.exit(0)
    else:
        pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "wb" ) )  
        print('Saving')
except IOError: 
    pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "wb" ) )
    print('Created new file.')

As a side note, you might consider using os.path for your file paths -- it will help anyone later who wants to use your script on another platform, and it saves you the ugly double back-slashes.
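For example (the directory and file names here are only illustrative):

```python
import os.path

# Build the path from components instead of hard-coding separators;
# os.path.join picks the right separator for the current platform.
cache_path = os.path.join('Download', 'htmlString.p')
print(cache_path)
```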

Edit 2: Adapted for your specific URL.

There is a dynamically-generated number for the ads on that page which changes with each page load. It's right near the end after all the content, so we can just split the HTML string at that point and take the first half, discarding the part with the dynamic number.

import sys
import pickle
import urllib2

request = urllib2.Request('http://ecal.forexpros.com/e_cal.php?duration=weekly')
response = urllib2.urlopen(request) # Make the request
# Grab everything before the dynamic double-click link
htmlString = response.read().split('<iframe src="http://fls.doubleclick')[0]

try: 
    file = pickle.load( open( 'D:\\Download\\htmlString.p', 'r'))
    if file == htmlString:
        print("Values haven't changed!")
        sys.exit(0)
    else:
        pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "w" ) )  
        print('Saving')
except IOError: 
    pickle.dump( htmlString, open( 'D:\\Download\\htmlString.p', "w" ) )
    print('Created new file.')

The string is no longer a valid HTML document, if that matters. If it does, you might just strip that line from both sides before comparing. There is probably a more elegant way of doing this (perhaps deleting the number with a regex), but this at least answers your question.
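If you wanted to try the regex route instead, a sketch might look like this (the iframe snippet below is a made-up stand-in for the real ad markup, which may differ):

```python
import re

# Hypothetical HTML containing a dynamically numbered ad iframe
html = '<p>content</p><iframe src="http://fls.doubleclick.net/ad;ord=12345?"></iframe>'

# Drop the doubleclick iframe so the dynamic number can't break the comparison
cleaned = re.sub(r'<iframe src="http://fls\.doubleclick[^>]*>\s*</iframe>', '', html)
print(cleaned)  # '<p>content</p>'
```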

Original Answer -- an alternate approach to your problem.

What do the response headers look like from the web server? HTTP specifies a Last-Modified property that you could use to check whether the content has changed (assuming the server tells the truth). Use it with a HEAD request, as Uku showed in his answer, if you'd like to conserve bandwidth and be nice to the server you're polling.

And there is also an If-Modified-Since header which sounds like what you might be looking for.

If we combine them, you might come up with something like this:

import sys
import os.path
import urllib2

url = 'http://www.iana.org/domains/example/'
saved_time_file = 'last time check.txt'

request = urllib2.Request(url)
if os.path.exists(saved_time_file):
    """ If we've previously stored a time, get it and add it to the request"""
    last_time = open(saved_time_file, 'r').read()
    request.add_header("If-Modified-Since", last_time)

try:
    response = urllib2.urlopen(request) # Make the request
except urllib2.HTTPError, err:
    if err.code == 304:
        print "Nothing new."
        sys.exit(0)
    raise   # some other http error (like 404 not found etc); re-raise it.

last_modified = response.info().get('Last-Modified', False)
if last_modified:
    open(saved_time_file, 'w').write(last_modified)
else:
    print("Server did not provide a last-modified property. Continuing...")
    """
    Alternately, you could save the current time in HTTP-date format here:
    http://www.w3.org/Protocols/rfc2616/rfc2616-sec3.html#sec3.3
    This might work for some servers that don't provide Last-Modified, but do
    respect If-Modified-Since.
    """

"""
You should get here if the server won't confirm the content is old.
Hopefully, that means it's new.
HTML should be in response.read().
"""
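For reference, the standard library can produce the HTTP-date format mentioned in that comment, should you need to fabricate an If-Modified-Since value yourself:

```python
from email.utils import formatdate

# Current time as an RFC 1123 HTTP-date, e.g. 'Sat, 30 Jun 2012 12:00:00 GMT'
stamp = formatdate(usegmt=True)
print(stamp)
```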

Also check out this blog post by Stii which may provide some inspiration. I don't know enough about ETags to have put them in my example, but his code checks for them as well.




Answer 2:


It would be more efficient to do a HEAD request and check the Content-Length of the document.

import os.path
import urllib2

# 'old_length.txt' is a hypothetical file holding the previously seen length
length_file = 'old_length.txt'
old_length = None
if os.path.exists(length_file):
    old_length = open(length_file).read()

request = urllib2.Request('http://www.yahoo.com')
request.get_method = lambda : 'HEAD'

response = urllib2.urlopen(request)
new_length = response.info()["Content-Length"]
if old_length != new_length:
    print "something has changed"
open(length_file, 'w').write(new_length)

Note that it is possible, although unlikely, for the content to change while the Content-Length stays exactly the same; even so, this is the most efficient approach. Whether it is suitable depends on what kind of changes you expect.




Answer 3:


You can always detect ANY change in the data between the locally stored file and the remote one by hashing the contents of both. This is commonly employed to verify the integrity of downloaded data. For a continuous check, you will need a while loop.

import hashlib
import urllib

num_checks = 20
last_check = 1
while last_check != num_checks:
    remote_data = urllib.urlopen('http://remoteurl').read()
    remote_hash = hashlib.md5(remote_data).hexdigest()

    local_data = open('localfilepath').read()
    local_hash = hashlib.md5(local_data).hexdigest()
    if remote_hash == local_hash:
        print 'right now, we match!'
    else:
        print 'right now, we are different'
    last_check += 1  # without this the loop would never terminate

If the actual data need never be saved locally, I would only ever store the md5 hash and calculate it on the fly when checking.
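A sketch of that idea: keep only the hex digest on disk and compare digests rather than full pages (the payloads below are made-up placeholders):

```python
import hashlib

def digest(data):
    # md5 hex digest of a bytes payload
    return hashlib.md5(data).hexdigest()

stored_hash = digest(b'old page contents')   # normally read back from a small file
new_hash = digest(b'new page contents')
print(stored_hash != new_hash)               # True: the page changed
```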




Answer 4:


I wasn't entirely clear on whether or not you wanted to just see if the website has changed, or if you were going to do more with the website's data. If it is the former, definitely hash, as previously mentioned. Here is a working (python 2.6.1 on a mac) example that compares the complete old html with the new html; it should be easy to modify so it uses hashes or just a specific part of the website, as you need. Hopefully the comments and docstrings make everything clear.

import urllib2

def getFilename(url):
    '''
    Input: url
    Return: a (string) filename to be used later for storing the url's contents
    '''
    # Note: str.lstrip('http://') would strip *characters* from that set,
    # not the prefix, so remove the prefix explicitly instead.
    name = str(url)
    if name.startswith('http://'):
        name = name[len('http://'):]
    return name.replace("/", ":") + '.OLD'


def getOld(url):
    '''
    Input: url- a string containing a url
    Return: a string containing the old html, or None if there is no old file
    (checks if there already is a url.OLD file, and makes an empty one if there isn't, to handle the case that this is the first run)
    Note: the file created with the old html is the format url(with : for /).OLD
    '''
    oldFilename = getFilename(url)
    oldHTML = ""
    try:
        oldHTMLfile = open(oldFilename,'r')
    except IOError:
        # file doesn't exist! so make it
        with open(oldFilename,'w') as oldHTMLfile:
            oldHTMLfile.write("")
        return None
    else:
        oldHTML = oldHTMLfile.read()
        oldHTMLfile.close()

    return oldHTML

class ConnectionError(Exception):
    def __init__(self, value):
        if type(value) != type(''):
            self.value = str(value)
        else:
            self.value = value
    def __str__(self):
        return 'ConnectionError: ' + self.value       


def htmlHasChanged(url):
    '''
    Input: url- a string containing a url
    Return: a boolean stating whether the website at url has changed
    '''

    try:
        fileRecvd = urllib2.urlopen(url).read()
    except urllib2.URLError:
        print 'Could not connect to %s, sorry!' % url
        # handle bad connection error...
        raise ConnectionError("urlopen() failed to open " + str(url))
    else:
        oldHTML = getOld(url)
        if oldHTML == fileRecvd:
            hasChanged = False
        else:
            hasChanged = True

        # rewrite file
        with open(getFilename(url),'w') as f:
            f.write(fileRecvd)

        return hasChanged

if __name__ == '__main__':
    # test it out with whatismyip.com
    try:
        print htmlHasChanged("http://automation.whatismyip.com/n09230945.asp")
    except ConnectionError,e:
        print e


Source: https://stackoverflow.com/questions/11252576/how-to-check-if-the-value-on-a-website-has-changed
