Question
I'm currently working on a crawling script in Python where I want to map the following HTML response into a nested list or a dictionary (either is fine).
My current code is:
from bs4 import BeautifulSoup
from urllib.request import Request, urlopen
req = Request("https://my.site.com/crawl", headers={'User-Agent': 'Mozilla/5.0'})
webpage = urlopen(req)
soup = BeautifulSoup(webpage, 'html.parser')
ul = soup.find('ul', {'class': ''})
After running this I get the following result stored in ul:
<ul>
<li><a class="reference" href="#ref1">Data1</a></li>
<li><a class="reference" href="#ref2">Data2</a>
<ul>
<li><a class="reference" href="#ref3">Data3</a></li>
<li><a class="reference" href="#ref4">Data4</a>
<ul>
<li><a class="reference" href="#ref5"><span class="pre">Data5</span></a></li>
<li><a class="reference" href="#ref6"><span class="pre">Data6</span></a></li>
.
.
.
</ul>
</li>
</ul>
</li>
<li><a class="reference" href="#ref7">Data7</a>
<ul>
<li><a class="reference" href="#ref8"><span class="pre">Data8</span></a></li>
<li><a class="reference" href="#ref9"><span class="pre">Data9</span></a></li>
.
.
.
</ul>
</li>
<li><a class="reference" href="#ref10">Data10</a>
<ul>
<li><a class="reference" href="#ref11"><span class="pre">Data11</span></a></li>
<li><a class="reference" href="#ref12">Data12</a></li>
</ul>
</li>
</ul>
As this is an external site, I cannot control the id or class attributes of the elements in the list.
I can't quite get my head around this: is there a simple way to arrange the data into a list or dict like the following?
dict = {'Data1': {'href': 'ref1'},
'Data2': {'href': 'ref2', {
'Data3': {'href': 'ref3'},
'Data4': {'href': 'ref4', {
'Data5': {'href': 'ref5'},
'Data6': {'href': 'ref6'},
.
.
. }
}
}
}
}
This feels like a cumbersome process, but I do not see any other way of doing it.
Any help to get me going in the right direction is much appreciated!
Cheers!
Answer 1:
Just recurse into the ul element, pulling out the text of all the li elements that have text, recursing deeper if there is a nested <ul> element:
def parse_ul(elem):
    result = {}
    for sub in elem.find_all('li', recursive=False):
        if sub.a is None:
            continue
        data = {k: v for k, v in sub.a.attrs.items() if k != 'class'}
        if sub.ul is not None:
            # recurse down
            data['children'] = parse_ul(sub.ul)
        result[sub.a.get_text(strip=True)] = data
    return result
This takes all direct li elements; if there is an <a> element, the text of that anchor element is turned into a key and we store a copy of the tag attributes as the value (ignoring any class attributes). If there is also a <ul> element next to the a tag, it is parsed recursively and added as a children key to the attribute dictionary for the <a> tag.
For your sample input, this produces:
>>> from pprint import pprint
>>> pprint(parse_ul(soup.ul))
{'Data1': {'href': '#ref1'},
 'Data10': {'children': {'Data11': {'href': '#ref11'},
                         'Data12': {'href': '#ref12'}},
            'href': '#ref10'},
 'Data2': {'children': {'Data3': {'href': '#ref3'},
                        'Data4': {'children': {'Data5': {'href': '#ref5'},
                                               'Data6': {'href': '#ref6'}},
                                  'href': '#ref4'}},
           'href': '#ref2'},
 'Data7': {'children': {'Data8': {'href': '#ref8'}, 'Data9': {'href': '#ref9'}},
           'href': '#ref7'}}
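Putting the pieces together, here is a minimal self-contained run of parse_ul() against an inline, cut-down copy of the sample markup (using a local HTML string instead of the live request, so it can be tried without network access):

```python
from bs4 import BeautifulSoup

# Cut-down copy of the sample markup from the question.
html = """
<ul>
  <li><a class="reference" href="#ref1">Data1</a></li>
  <li><a class="reference" href="#ref2">Data2</a>
    <ul>
      <li><a class="reference" href="#ref3">Data3</a></li>
    </ul>
  </li>
</ul>
"""

def parse_ul(elem):
    result = {}
    for sub in elem.find_all('li', recursive=False):
        if sub.a is None:
            continue
        # Copy the anchor's attributes, dropping any class attribute.
        data = {k: v for k, v in sub.a.attrs.items() if k != 'class'}
        if sub.ul is not None:
            # A nested <ul> becomes a 'children' sub-dict.
            data['children'] = parse_ul(sub.ul)
        result[sub.a.get_text(strip=True)] = data
    return result

soup = BeautifulSoup(html, 'html.parser')
tree = parse_ul(soup.ul)
# tree == {'Data1': {'href': '#ref1'},
#          'Data2': {'href': '#ref2',
#                    'children': {'Data3': {'href': '#ref3'}}}}
```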
Answer 2:
There's no trivial way to do this, but it's not all that cumbersome.
For example, you can do it recursively, something like this:
def make_data(ul):
    d = {}
    for a in ul.find_all('a'):
        d[a.text] = {'href': a.attrs['href']}
    lis = ul.find_all('li', recursive=False)
    children = {}
    for li in lis:
        child = li.ul
        if child:
            children[li.a.attrs['href']] = make_data(child)
    if children:
        d['children'] = children
    return d
(I had to give each of those children dicts a key, because the structure you actually wanted isn't a valid dict.)
Of course you'll want to, e.g., add some error handling, but this should be enough to get you started.
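To make the shape this function produces concrete, here is a sketch of running make_data() on a cut-down copy of the sample markup. Note one difference from the parse_ul() in answer 1: because find_all('a') is recursive, nested anchors also show up flattened at every enclosing level, and the recursive results are keyed by href under 'children':

```python
from bs4 import BeautifulSoup

def make_data(ul):
    d = {}
    # Recursive find_all: collects ALL descendant anchors at this level.
    for a in ul.find_all('a'):
        d[a.text] = {'href': a.attrs['href']}
    lis = ul.find_all('li', recursive=False)
    children = {}
    for li in lis:
        child = li.ul
        if child:
            # Nested lists are keyed by the parent anchor's href.
            children[li.a.attrs['href']] = make_data(child)
    if children:
        d['children'] = children
    return d

html = """
<ul>
  <li><a class="reference" href="#ref1">Data1</a></li>
  <li><a class="reference" href="#ref2">Data2</a>
    <ul><li><a class="reference" href="#ref3">Data3</a></li></ul>
  </li>
</ul>
"""
soup = BeautifulSoup(html, 'html.parser')
tree = make_data(soup.ul)
# Data3 appears both at the top level (flattened) and under
# tree['children']['#ref2'] (nested).
```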
Answer 3:
I really like Martijn Pieters' parse_ul(), but I have some markup that does not follow the rules this parser expects: a double <ul></ul> inside a single <li> .. </li>, where the second section is prefixed by an <a ... > text </a> that has no <li> of its own.
E.g. <li><a ...></a> <ul> </ul> <a ...></a><ul> </ul> </li>
See below:
<ul>
<li><a class="ref" href="#ref1">Data1</a></li>
<li><a class="ref" href="#ref2">Data2</a>
<ul>
<li><a class="ref" href="#ref4">Data4</a>
<ul>
<li><a class="ref" href="#ref5"><span class="pre">Data5</span></a></li>
<li><a class="ref" href="#ref6"><span class="pre">Data6</span></a></li>
.
.
</ul>
<!-- a-tag without preceding <li> tag -->
<a class="ref" href="#ref4a">Data4a</a>
<ul>
<li><a class="ref" href="#ref5a"><span class="pre">Data5a</span></a></li>
<li><a class="ref" href="#ref6a"><span class="pre">Data6a</span></a></li>
.
.
</ul>
</li>
</ul>
</li>
.
.
</ul>
I cannot figure out how to change parse_ul() so that it accepts this deviation and produces this output:
{'Data1': {'href': '#ref1'},
 'Data2': {'children': {'Data4': {'children': {'Data5': {'href': '#ref5'},
                                               'Data6': {'href': '#ref6'}},
                                  'href': '#ref4'},
                        'Data4a': {'children': {'Data5a': {'href': '#ref5a'},
                                                'Data6a': {'href': '#ref6a'}},
                                   'href': '#ref4a'}},
           'href': '#ref2'}}
The following script:
from bs4 import BeautifulSoup
import pprint
pp = pprint.PrettyPrinter(indent=4)  # Init pretty print (pprint)
soup = BeautifulSoup(html_contents, 'lxml')
menu_dict = parse_ul(soup.ul)
pp.pprint(menu_dict)
will generate the following output, which is missing the second part contained in <a..></a><ul> </ul>:
{'Data1': {'href': '#ref1'},
'Data2': {'children': {'Data4': {'children': {'Data5': {'href': '#ref5'},
'Data6': {'href': '#ref6'}}},
'href': '#ref4'},
'href': '#ref2'}
}
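One possible adaptation (an editor's sketch, not from the original answers): instead of looking only at sub.a and sub.ul, walk the direct children of each <li> in document order, so every <a> opens a new entry and any <ul> that follows attaches to the most recent anchor. That way stray anchors without their own <li> are still picked up:

```python
from bs4 import BeautifulSoup, Tag

def parse_ul_loose(elem):
    """Like parse_ul(), but tolerates extra <a>/<ul> pairs inside one <li>."""
    result = {}
    for li in elem.find_all('li', recursive=False):
        current = None  # attrs dict of the most recent <a> in this <li>
        for child in li.children:
            if not isinstance(child, Tag):
                continue  # skip whitespace / text nodes
            if child.name == 'a':
                current = {k: v for k, v in child.attrs.items() if k != 'class'}
                result[child.get_text(strip=True)] = current
            elif child.name == 'ul' and current is not None:
                # Attach the nested list to the anchor that precedes it.
                current['children'] = parse_ul_loose(child)
    return result

# Cut-down copy of the deviating markup above (with the </li> typo fixed).
html = """
<ul>
  <li><a class="ref" href="#ref2">Data2</a>
    <ul>
      <li><a class="ref" href="#ref4">Data4</a>
        <ul><li><a class="ref" href="#ref5">Data5</a></li></ul>
        <a class="ref" href="#ref4a">Data4a</a>
        <ul><li><a class="ref" href="#ref5a">Data5a</a></li></ul>
      </li>
    </ul>
  </li>
</ul>
"""
soup = BeautifulSoup(html, 'html.parser')
tree = parse_ul_loose(soup.ul)
# Data4 and Data4a now both appear under Data2's children.
```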
Source: https://stackoverflow.com/questions/50338108/using-beautifulsoup-in-order-to-find-all-ul-and-li-elements