Question
I want to parse a robots.txt file in Python. I have explored robotparser and robotexclusionrulesparser, but nothing really satisfies my criteria. I want to fetch all the disallowed and allowed URLs in a single shot rather than manually checking each URL for whether it is allowed. Is there any library to do this?
Answer 1:
You can use the curl command to read the robots.txt file into a single string, split it on newlines, and collect the Allow and Disallow rules.
import os

# shell out to curl to fetch the robots.txt file (assumes curl is installed)
result = os.popen("curl https://fortune.com/robots.txt").read()

result_data_set = {"Disallowed": [], "Allowed": []}

for line in result.split("\n"):
    if line.startswith('Allow:'):        # this is an allowed URL
        # take the value after the field name, then drop trailing comments or other junk
        result_data_set["Allowed"].append(line.split(':', 1)[1].strip().split(' ')[0])
    elif line.startswith('Disallow:'):   # this is a disallowed URL
        result_data_set["Disallowed"].append(line.split(':', 1)[1].strip().split(' ')[0])

print(result_data_set)
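Shelling out to curl works but assumes the curl binary is installed (and quietly yields an empty string when it is not). If you prefer to stay inside the standard library, the download step can be done with urllib.request instead; a minimal sketch, keeping the parsing loop above unchanged:

import urllib.request

# fetch robots.txt without an external curl dependency
with urllib.request.urlopen("https://fortune.com/robots.txt") as resp:
    result = resp.read().decode("utf-8")
# feed `result` to the same split/startswith loop shown above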
Answer 2:
Why do you have to check your URLs manually? You can use urllib.robotparser in Python 3 and do something like this:
import urllib.request
import urllib.robotparser as urobot
from bs4 import BeautifulSoup

url = "https://example.com"
rp = urobot.RobotFileParser()
rp.set_url(url + "/robots.txt")
rp.read()

if rp.can_fetch("*", url):
    site = urllib.request.urlopen(url)
    sauce = site.read()
    soup = BeautifulSoup(sauce, "html.parser")
    actual_url = site.geturl()[:site.geturl().rfind('/')]
    my_list = soup.find_all("a", href=True)
    for i in my_list:
        # rather than != "#" you can filter your list before looping over it
        if i["href"] != "#":
            newurl = str(actual_url + "/" + i["href"])
            try:
                if rp.can_fetch("*", newurl):
                    site = urllib.request.urlopen(newurl)
                    # do what you want with each authorized webpage
            except Exception:
                pass
else:
    print("cannot scrape")
Source: https://stackoverflow.com/questions/43085744/parsing-robots-txt-in-python