Unable to scrape the name from the inner page of each result using requests

Submitted by ぃ、小莉子 on 2020-05-15 08:10:42

Question


I've created a script in Python that uses POST HTTP requests to get the search results from a webpage. To populate the results, it is necessary to click on the fields sequentially, as shown here. A new page then opens, and this is how the results get populated.

There are ten results on the first page, and the following script can parse them flawlessly.

What I wish to do now is use those results to reach their inner pages in order to parse the Sole Proprietorship Name (English) from there.

website address

I've tried so far with:

import re
import requests
from bs4 import BeautifulSoup

url = "https://www.businessregistration.moc.gov.kh/cambodia-master/service/create.html?targetAppCode=cambodia-master&targetRegisterAppCode=cambodia-br-soleproprietorships&service=registerItemSearch"

payload = {
    'QueryString': '0',
    'SourceAppCode': 'cambodia-br-soleproprietorships',
    'OriginalVersionIdentifier': '',
    '_CBASYNCUPDATE_': 'true',
    '_CBHTMLFRAG_': 'true',
    '_CBNAME_': 'buttonPush'
}

with requests.Session() as s:
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0'
    res = s.get(url)
    # Turn the "view" url of the freshly created view instance into its "update" endpoint
    target_url = res.url.split("&")[0].replace("view.", "update.")
    # Scrape the dynamic tokens the search form expects from the page source
    node = re.findall(r"nodeW\d.+?-Advanced", res.text)[0].strip()
    payload['_VIKEY_'] = re.findall(r"viewInstanceKey:'(.*?)',", res.text)[0].strip()
    payload['_CBHTMLFRAGID_'] = re.findall(r"guid:(.*?),", res.text)[0].strip()
    payload[node] = 'N'
    payload['_CBNODE_'] = re.findall(r"Callback\('(.*?)','buttonPush", res.text)[2]
    payload['_CBHTMLFRAGNODEID_'] = re.findall(r"AsyncWrapper(W\d.+?)'", res.text)[0].strip()

    # Post the search and print the ten result entries from the first page
    res = s.post(target_url, data=payload)
    soup = BeautifulSoup(res.content, 'html.parser')
    for item in soup.find_all("span", class_="appReceiveFocus")[3:]:
        print(item.text)

How can I parse the Name (English) from each result's inner page using requests?


Answer 1:


This is one of the ways you can parse the name from the site's inner page and then the email address from the address tab. I added the get_email() function only to show you how you can parse content from different tabs.

import re
import requests
from bs4 import BeautifulSoup

url = "https://www.businessregistration.moc.gov.kh/cambodia-master/service/create.html?targetAppCode=cambodia-master&targetRegisterAppCode=cambodia-br-soleproprietorships&service=registerItemSearch"
result_url = "https://www.businessregistration.moc.gov.kh/cambodia-master/viewInstance/update.html?id={}"
base_url = "https://www.businessregistration.moc.gov.kh/cambodia-br-soleproprietorships/viewInstance/update.html?id={}"

def get_names(s):
    s.headers['User-Agent'] = 'Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:74.0) Gecko/20100101 Firefox/74.0'
    res = s.get(url)
    # The search has to be posted against the "update" endpoint of the view instance
    target_url = result_url.format(res.url.split("id=")[1])
    soup = BeautifulSoup(res.text, "lxml")
    # Start the payload from every input field already present in the form
    payload = {i['name']: i.get('value', '') for i in soup.select('input[name]')}

    payload['QueryString'] = 'a'
    payload['SourceAppCode'] = 'cambodia-br-soleproprietorships'
    payload['_CBNAME_'] = 'buttonPush'
    payload['_CBHTMLFRAG_'] = 'true'
    # Dynamic tokens scraped from the page's inline javascript
    payload['_VIKEY_'] = re.findall(r"viewInstanceKey:'(.*?)',", res.text)[0].strip()
    payload['_CBHTMLFRAGID_'] = re.findall(r"guid:(.*?),", res.text)[0].strip()
    payload['_CBNODE_'] = re.findall(r"Callback\('(.*?)','buttonPush", res.text)[-1]
    payload['_CBHTMLFRAGNODEID_'] = re.findall(r"AsyncWrapper(W\d.+?)'", res.text)[0].strip()

    # First post: run the search and collect the result links
    res = s.post(target_url, data=payload)
    soup = BeautifulSoup(res.text, "lxml")
    # These keys are only needed for the search request itself
    payload.pop('_CBHTMLFRAGNODEID_')
    payload.pop('_CBHTMLFRAG_')
    payload.pop('_CBHTMLFRAGID_')

    for item in soup.select("a[class*='ItemBox-resultLeft-viewMenu']"):
        # Each result link's id carries the node needed to open its inner page
        payload['_CBNAME_'] = 'invokeMenuCb'
        payload['_CBVALUE_'] = ''
        payload['_CBNODE_'] = item['id'].replace('node', '')

        # Second post: open this result's inner page
        res = s.post(target_url, data=payload)
        soup = BeautifulSoup(res.text, 'lxml')
        address_url = base_url.format(res.url.split("id=")[1])
        # Prepare the payload for selecting the "Addresses" tab
        node_id = re.findall(r"taba(.*)_", soup.select_one("a[aria-label='Addresses']")['id'])[0]
        payload['_CBNODE_'] = node_id
        payload['_CBHTMLFRAGID_'] = re.findall(r"guid:(.*?),", res.text)[0].strip()
        payload['_CBNAME_'] = 'tabSelect'
        payload['_CBVALUE_'] = '1'
        # The English name sits right next to the company-name label on the inner page
        eng_name = soup.select_one(".appCompanyName + .appAttrValue").get_text()
        yield from get_email(s, eng_name, address_url, payload)

def get_email(s, eng_name, url, payload):
    # Third post: switch to the Addresses tab and read the email address from it
    res = s.post(url, data=payload)
    soup = BeautifulSoup(res.text, 'lxml')
    email = soup.select_one(".EntityEmailAddresses:contains('Email') .appAttrValue").get_text()
    yield eng_name, email

if __name__ == '__main__':
    with requests.Session() as s:
        for item in get_names(s):
            print(item)

The output looks like:

('AMY GEMS', 'amy.n.company@gmail.com')
('AHARATHAN LIN LIANJIN FOOD FLAVOR', 'skykoko344@gmail.com')
('AMETHYST DIAMOND KTV', 'twobrotherktv@gmail.com')
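
If you want to persist the pairs instead of printing them, here is a minimal usage sketch of the get_names() generator above (the results.csv filename is just an illustrative choice, not something from the original answer):

import csv
import requests

if __name__ == '__main__':
    # Reuse one session for all requests and stream each (name, email) pair into a CSV file
    with requests.Session() as s, open('results.csv', 'w', newline='', encoding='utf-8') as f:
        writer = csv.writer(f)
        writer.writerow(['name_english', 'email'])  # header row
        for eng_name, email in get_names(s):
            writer.writerow([eng_name, email])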



Answer 2:


To get the Name (English), you can simply replace print(item.text) with print(item.text.split('/')[1].split('(')[0].strip()), which prints AMY GEMS.
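
Applied to the loop at the end of the question's script, that change is a one-liner. A minimal sketch (it assumes the listing text keeps the "KHMER NAME / ENGLISH NAME (number)" layout the split relies on):

    for item in soup.find_all("span", class_="appReceiveFocus")[3:]:
        # the span text looks like "KHMER NAME / ENGLISH NAME (1234567)":
        # take the part after "/" and drop the trailing "(number)"
        print(item.text.split('/')[1].split('(')[0].strip())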



Source: https://stackoverflow.com/questions/61280657/unable-to-scrape-the-name-from-the-inner-page-of-each-result-using-requests
