Question
I am trying to fetch data from the website below, but the variable soup does not contain any of the information for fields like Name, Nature of Business, Telephone, Email, etc. What should I add to the code below to get this data?
import requests
import pandas as pd
from bs4 import BeautifulSoup
page = "http://www.pmas.sg/page/members-directory"
pages = requests.get(page)
soup = BeautifulSoup(pages.content, 'html.parser')
print(soup)
The output I am getting from the above code is:
<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2 Final//EN">
<html>
<head>
<title>WebKnight Application Firewall Alert</title>
<meta content="NOINDEX" name="ROBOTS"/>
</head>
<body bgcolor="#ffffff" link="#FF3300" text="#000000" vlink="#FF3300">
<table cellpadding="3" cellspacing="5" width="410">
<tr>
<td align="left">
<font face="Verdana,Arial,Helvetica" size="2">
<font size="3"><b>WebKnight Application Firewall Alert</b></font><br/><br/><br/>
Your request triggered an alert! If you feel that you have received this page in error, please contact the administrator of this web site.
<br/>
<hr/>
<br/><b>What is WebKnight?</b><br/>
AQTRONIX WebKnight is an application firewall for web servers and is released under the GNU General Public License. It is an ISAPI filter for securing web servers by blocking certain requests. If an alert is triggered WebKnight will take over and protect the web server.<br/><br/>
<hr/>
<br/>For more information on WebKnight: <a href="http://www.aqtronix.com/webknight/">http://www.aqtronix.com/WebKnight/</a><br/><br/>
<b><font color="#FF3300">AQTRONIX</font> WebKnight</b></font>
</td>
</tr>
</table>
</body>
</html>
Answer 1:
A browser-like User-Agent header gets past the WebKnight firewall; the script below then pulls the fields out of each member block and writes them to data.csv:

import requests
from bs4 import BeautifulSoup
import csv
import regex  # third-party module (pip install regex); needed for \K and [[:blank:]]

# Sending a real browser's User-Agent avoids the firewall alert page
headers = {
    "User-Agent": "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:71.0) Gecko/20100101 Firefox/71.0"
}
r = requests.get('http://www.pmas.sg/page/members-directory', headers=headers)
soup = BeautifulSoup(r.text, 'html.parser')

data = []
for item in soup.findAll('div', {'class': 'col-md-4'}):  # one block per member
    l = []
    for p in item.findAll('p'):
        # Drop a leading "Label:  " prefix when present; \K resets the match start
        matches = regex.findall(
            r"^(?:.*?:[[:blank:]]+\K)?.*", p.text, regex.MULTILINE)
        b = next(iter(matches))  # keep only the first matched line
        l.append(b)
    if l:
        print(l)
        data.append(l)

with open('data.csv', 'w', newline='') as f:
    writer = csv.writer(f)
    writer.writerow(['Name', 'Nature of Business',
                     'Address', 'Contact', 'Phone#', 'Fax', 'Website', 'Email'])
    writer.writerows(data)
print("Done")
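The pattern above needs the third-party regex module because of \K, which resets the reported match start so the "Label:" prefix is dropped. A minimal sketch of the same step using only the standard-library re module, with a capture group instead of \K (strip_label is a hypothetical helper name, not part of the answer above):

import re

def strip_label(text):
    # Optionally skip a leading "Label:<blanks>" prefix on the first line
    # and return the remainder; lines without a label come back unchanged.
    m = re.match(r"(?:.*?:[ \t]+)?(.*)", text)
    return m.group(1)

print(strip_label('Telephone: 6123 4567'))  # -> 6123 4567
print(strip_label('ACME Pte Ltd'))          # -> ACME Pte Ltd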
Answer 2:
WebKnight is an ISAPI filter that secures a web server by blocking certain requests. The server admin sets out rules that are applied to incoming requests to decide whether to block them. In this case, the rules include expectations about allowable (and required) User-Agent headers. Having a play around (see the probe sketch below), I notice:
'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64)' and other bare 5.0 variants trigger the alert;
'Mozilla/4.0 (Windows NT 10.0; WOW64)', 'AppleWebKit/537.36 (KHTML, like Gecko)', 'Chrome/79.0.3945.79' and 'Safari/537.36' are all fine, so it looks like the list may need updating on the server.
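A minimal probe sketch of that experiment, assuming the blocked response still carries the title "WebKnight Application Firewall Alert" shown in the question's output:

import requests

URL = 'http://www.pmas.sg/page/members-directory'
candidates = [
    'Mozilla/5.0 (Windows NT 10.0; WOW64)',
    'Mozilla/4.0 (Windows NT 10.0; WOW64)',
    'AppleWebKit/537.36 (KHTML, like Gecko)',
    'Chrome/79.0.3945.79',
    'Safari/537.36',
]
for ua in candidates:
    r = requests.get(URL, headers={'User-Agent': ua})
    # The firewall's alert page has a distinctive <title> we can test for
    blocked = 'WebKnight Application Firewall Alert' in r.text
    print('BLOCKED' if blocked else 'ok', '-', ua)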
Note that the page asks not to be indexed via <META NAME="ROBOTS" CONTENT="NOINDEX">, but I couldn't find any T&Cs, nor a robots.txt file, governing scraping.
E.g.
import requests

# A full browser-style User-Agent string passes the firewall's checks
headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/79.0.3945.79 Safari/537.36',
}
r = requests.get('http://www.pmas.sg/page/members-directory', headers=headers)
print(r.text)
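On the robots.txt point above: a quick standard-library check, assuming the conventional /robots.txt location (urllib.robotparser treats a missing file as allowing everything):

from urllib.robotparser import RobotFileParser

rp = RobotFileParser('http://www.pmas.sg/robots.txt')
rp.read()  # fetches and parses robots.txt; a 404 is treated as "all allowed"
print(rp.can_fetch('*', 'http://www.pmas.sg/page/members-directory'))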
Source: https://stackoverflow.com/questions/59404674/beautifulsoup-not-fetching-the-data