How to scrape data from different Wikipedia pages?


Question


I've scraped the Wikipedia table using Python and BeautifulSoup (https://en.wikipedia.org/wiki/Districts_of_Hong_Kong). But besides the data offered there (i.e. population, area, density and region), I would like to get the location coordinates for each district. That data has to come from each district's own page (the table links to them).

Take the first district, 'Central and Western District', for example: the DMS coordinates (22°17′12″N 114°09′18″E) can be found on its page. By further clicking the coordinates link, I can get the decimal coordinates (22.28666, 114.15497).

So, is it possible to create a table with Latitude and Longitude for each district?

New to the programming world, sorry if the question is stupid...

Reference:

DMS coordinates: https://en.wikipedia.org/wiki/Central_and_Western_District

Decimal coordinates: https://tools.wmflabs.org/geohack/geohack.php?pagename=Central_and_Western_District&params=22.28666_N_114.15497_E_type:adm2nd_region:HK
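As an aside, the DMS form shown above can be converted to decimal degrees directly: degrees + minutes/60 + seconds/3600, negated for the S and W hemispheres. A minimal sketch (the regex below assumes the exact °, ′ and ″ characters used on Wikipedia):

import re

def dms_to_decimal(dms):
    """Convert a DMS string such as 22°17′12″N to decimal degrees."""
    degrees, minutes, seconds, hemisphere = re.match(
        r"(\d+)°(\d+)′(\d+)″([NSEW])", dms).groups()
    value = int(degrees) + int(minutes) / 60 + int(seconds) / 3600
    # South and west hemispheres are negative by convention
    return -value if hemisphere in "SW" else value

print(dms_to_decimal("22°17′12″N"))   # ≈ 22.2867
print(dms_to_decimal("114°09′18″E"))  # ≈ 114.155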


Answer 1:


import requests
from bs4 import BeautifulSoup

res = requests.get('https://en.wikipedia.org/wiki/Districts_of_Hong_Kong')
result = {}
soup = BeautifulSoup(res.content, 'lxml')
tables = soup.find_all('table', {'class': 'wikitable'})
table = tables[0].find('tbody')
districtLinks = table.find_all('a', href=True)

for link in districtLinks:
    # Keep only links whose visible text matches their title attribute,
    # i.e. the actual district links rather than footnotes etc.
    if link.getText() in link.attrs.get('title', '') or link.attrs.get('title', '') in link.getText():
        district = link.attrs.get('title', '')
        if district:
            url = link.attrs.get('href', '')
        else:
            continue
    else:
        continue
    try:
        # The href already starts with '/', so no extra slash is needed
        res = requests.get("https://en.wikipedia.org{}".format(url))
    except requests.RequestException:
        continue
    soup = BeautifulSoup(res.content, 'lxml')
    # The coordinates sit in the infobox of each district page
    tables = soup.find_all('table', {'class': 'infobox geography vcard'})
    if not tables:
        continue
    table = tables[0].find('tbody')
    for row in table.find_all('tr', {'class': 'mergedbottomrow'}):
        # The hidden <span class="geo"> holds "lat; lon" in decimal degrees
        geoLink = row.find('span', {'class': 'geo'})
        if geoLink is None:
            continue
        locationSplit = geoLink.getText().split("; ")
        result.update({district: {"Latitude": locationSplit[0], "Longitude": locationSplit[1]}})

print(result)

Result:

{'Central and Western District': {'Latitude': '22.28666', 'Longitude': '114.15497'}, 'Eastern District, Hong Kong': {'Latitude': '22.28411', 'Longitude': '114.22414'}, 'Southern District, Hong Kong': {'Latitude': '22.24725', 'Longitude': '114.15884'}, 'Wan Chai District': {'Latitude': '22.27968', 'Longitude': '114.17168'}, 'Sham Shui Po District': {'Latitude': '22.33074', 'Longitude': '114.16220'}, 'Kowloon City District': {'Latitude': '22.32820', 'Longitude': '114.19155'}, 'Kwun Tong District': {'Latitude': '22.31326', 'Longitude': '114.22581'}, 'Wong Tai Sin District': {'Latitude': '22.33353', 'Longitude': '114.19686'}, 'Yau Tsim Mong District': {'Latitude': '22.32138', 'Longitude': '114.17260'}, 'Islands District, Hong Kong': {'Latitude': '22.26114', 'Longitude': '113.94608'}, 'Kwai Tsing District': {'Latitude': '22.35488', 'Longitude': '114.08401'}, 'North District, Hong Kong': {'Latitude': '22.49471', 'Longitude': '114.13812'}, 'Sai Kung District': {'Latitude': '22.38143', 'Longitude': '114.27052'}, 'Sha Tin District': {'Latitude': '22.38715', 'Longitude': '114.19534'}, 'Tai Po District': {'Latitude': '22.45085', 'Longitude': '114.16422'}, 'Tsuen Wan District': {'Latitude': '22.36281', 'Longitude': '114.12907'}, 'Tuen Mun District': {'Latitude': '22.39163', 'Longitude': '113.9770885'}, 'Yuen Long District': {'Latitude': '22.44559', 'Longitude': '114.02218'}}
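Since the question asks for a table, the result dict above can also be loaded into pandas (an assumption on my part; pandas is not used in the answer itself, and the output file name is made up):

import pandas as pd

# Districts become the index; the nested dicts supply the columns
df = pd.DataFrame.from_dict(result, orient='index')
df.index.name = 'District'
print(df)
df.to_csv('hk_district_coordinates.csv')  # hypothetical output file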



Answer 2:


Create a function that takes the link to a district's page, loads it, and uses BeautifulSoup to find the latitude and longitude, as well as the decimal coordinates embedded in the geohack link.

Then return them to the main function as a list and append them to the row with the other information.

import requests
from bs4 import BeautifulSoup as BS
import re

def parse_district(url):
    # Load the district's own page and find the geohack link,
    # which carries both coordinate formats.
    r = requests.get(url)
    soup = BS(r.text, 'html.parser')

    link = soup.find('a', {'href': re.compile('//tools.wmflabs.org/.*')})

    # The decimal coordinates are embedded in the URL between
    # 'params=' and 'type:', e.g. '22.28666_N_114.15497_E_'.
    item = link['href'].split('params=')[1].split('type:')[0].replace('_', ' ').strip()

    # The link text holds the DMS coordinates in separate spans.
    items = link.find_all('span', {'class': ('latitude', 'longitude')})

    return [item] + [i.text for i in items]

def main():
    url = 'https://en.wikipedia.org/wiki/Districts_of_Hong_Kong'

    r = requests.get(url)
    soup = BS(r.text, 'html.parser')

    table = soup.find_all('table', {'class': 'wikitable'})
    for row in table[0].find_all('tr'):
        items = row.find_all('td')
        if items:  # skip the header row, which has no <td> cells
            row = [i.text.strip() for i in items]

            # Follow the first cell's link to the district page
            # and append its coordinates to the row.
            link = 'https://en.wikipedia.org' + items[0].a['href']
            data = parse_district(link)

            row += data
            print(row)

main()

Result:

['Central and Western', '中西區', '244,600', '12.44', '19,983.92', 'Hong Kong Island', '22.28666 N 114.15497 E', '22°17′12″N', '114°09′18″E']
['Eastern', '東區', '574,500', '18.56', '31,217.67', 'Hong Kong Island', '22.28411 N 114.22414 E', '22°17′03″N', '114°13′27″E']
['Southern', '南區', '269,200', '38.85', '6,962.68', 'Hong Kong Island', '22.24725 N 114.15884 E', '22°14′50″N', '114°09′32″E']
['Wan Chai', '灣仔區', '150,900', '9.83', '15,300.10', 'Hong Kong Island', '22.27968 N 114.17168 E', '22°16′47″N', '114°10′18″E']
['Sham Shui Po', '深水埗區', '390,600', '9.35', '41,529.41', 'Kowloon', '22.33074 N 114.1622 E', '22°19′51″N', '114°09′44″E']
['Kowloon City', '九龍城區', '405,400', '10.02', '40,194.70', 'Kowloon', '22.3282 N 114.19155 E', '22°19′42″N', '114°11′30″E']
['Kwun Tong', '觀塘區', '641,100', '11.27', '56,779.05', 'Kowloon', '22.31326 N 114.22581 E', '22°18′48″N', '114°13′33″E']
['Wong Tai Sin', '黃大仙區', '426,200', '9.30', '45,645.16', 'Kowloon', '22.33353 N 114.19686 E', '22°20′01″N', '114°11′49″E']
['Yau Tsim Mong', '油尖旺區', '318,100', '6.99', '44,864.09', 'Kowloon', '22.32138 N 114.1726 E', '22°19′17″N', '114°10′21″E']
['Islands', '離島區', '146,900', '175.12', '825.14', 'New Territories', '22.26114 N 113.94608 E', '22°15′40″N', '113°56′46″E']
['Kwai Tsing', '葵青區', '507,100', '23.34', '21,503.86', 'New Territories', '22.35488 N 114.08401 E', '22°21′18″N', '114°05′02″E']
['North', '北區', '310,800', '136.61', '2,220.19', 'New Territories', '22.49471 N 114.13812 E', '22°29′41″N', '114°08′17″E']
['Sai Kung', '西貢區', '448,600', '129.65', '3,460.08', 'New Territories', '22.38143 N 114.27052 E', '22°22′53″N', '114°16′14″E']
['Sha Tin', '沙田區', '648,200', '68.71', '9,433.85', 'New Territories', '22.38715 N 114.19534 E', '22°23′14″N', '114°11′43″E']
['Tai Po', '大埔區', '307,100', '136.15', '2,220.35', 'New Territories', '22.45085 N 114.16422 E', '22°27′03″N', '114°09′51″E']
['Tsuen Wan', '荃灣區', '303,600', '61.71', '4,887.38', 'New Territories', '22.36281 N 114.12907 E', '22°21′46″N', '114°07′45″E']
['Tuen Mun', '屯門區', '495,900', '82.89', '5,889.38', 'New Territories', '22.39163 N 113.9770885 E', '22°23′30″N', '113°58′38″E']
['Yuen Long', '元朗區', '607,200', '138.46', '4,297.99', 'New Territories', '22.44559 N 114.02218 E', '22°26′44″N', '114°01′20″E']
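If you want these rows in a file rather than printed, the standard csv module will do. This sketch assumes main() is changed to collect each row list into a rows list instead of printing; the header names are made up here for illustration:

import csv

# `rows` is assumed to be the list of row lists collected in main()
header = ['District', 'Chinese name', 'Population', 'Area (km2)',
          'Density (/km2)', 'Region', 'Decimal coords',
          'DMS latitude', 'DMS longitude']

with open('districts.csv', 'w', newline='', encoding='utf-8') as f:
    writer = csv.writer(f)
    writer.writerow(header)
    writer.writerows(rows)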


Source: https://stackoverflow.com/questions/60408917/how-to-scrape-data-from-different-wikipedia-pages
