Web-scraping: Empty dataset after collecting information

Submitted by 末鹿安然 on 2020-04-11 11:43:45

Question


I would like to create a dataset that contains information scraped from a website. Below I explain what I have done and the expected output. I am getting empty lists for the rows and columns, and therefore for the whole dataset, and I do not understand why. I hope you can help me.

1) Create an empty dataframe with only one column: this column should contain the list of URLs to use.

data_to_use = pd.DataFrame([], columns=['URL'])

2) Select urls from a previous dataset.

select_urls=dataset.URL.tolist()

This set of urls looks like:

                             URL
0                     www.bbc.co.uk
1             www.stackoverflow.com           
2                       www.who.int
3                       www.cnn.com
4         www.cooptrasportiriolo.it
...                             ...

3) Populate the column with these urls:

data_to_use['URL']= select_urls
data_to_use['URLcleaned'] = data_to_use['URL'].str.replace(r'^(www\.)', '', regex=True)

4) Select a sample to test: the first 50 rows of the URL column

data_to_use = data_to_use.loc[1:50, 'URL']

5) Try to scrape information

import requests
import time
from bs4 import BeautifulSoup

urls= data_to_use['URLcleaned'].tolist()

ares = []

for u in urls: # in the selection there should be an error. I am not sure that I am selecting the right column
    print(u)
    url = 'https://www.urlvoid.com/scan/'+ u
    r = requests.get(url)
    ares.append(r)   

rows = []
cols = []

for ar in ares:
    soup = BeautifulSoup(ar.content, 'lxml')
    tab = soup.select("table.table.table-custom.table-striped")   
    try:
            dat = tab[0].select('tr')
            line= []
            header=[]
            for d in dat:
                row = d.select('td')
                line.append(row[1].text)
            new_header = row[0].text
            if not new_header in cols:
                cols.append(new_header)
            rows.append(line)
    except IndexError:
        continue

print(rows) # this works fine. It prints the rows. The issue comes from the next line

data_to_use = pd.DataFrame(rows,columns=cols)  

Unfortunately something is wrong in the steps above, as I am not getting any results, only [] or __.

Error from data_to_use = pd.DataFrame(rows,columns=cols):

ValueError: 1 columns passed, passed data had 12 columns

My expected output would be:

URL                Website Address    Last Analysis  Blacklist Status  Domain Registration        IP Address     Server Location     ...
bbc.co.uk          Bbc.co.uk          9 days ago     0/35              1996-08-01 | 24 years ago  151.101.64.81  (US) United States  ...
stackoverflow.com  Stackoverflow.com  7 days ago     0/35              2003-12-26 | 17 years ago  ...

At the end I would like to save the resulting dataset to a CSV file.


Answer 1:


You can do it using pandas only. Try the following code.

import pandas as pd

urllist = ['bbc.co.uk', 'stackoverflow.com', 'who.int', 'cnn.com']

dffinal = pd.DataFrame()
for url in urllist:
    # read_html returns a list of all tables on the page; [0] is the scan report table
    df = pd.read_html("https://www.urlvoid.com/scan/" + url + "/")[0]
    values = df.values.tolist()
    rows = []
    cols = []
    for li in values:
        rows.append(li[1])   # second column: field value
        cols.append(li[0])   # first column: field name
    df1 = pd.DataFrame([rows], columns=cols)
    dffinal = dffinal.append(df1, ignore_index=True)

print(dffinal)
dffinal.to_csv("domain.csv", index=False)
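For reference, pd.read_html returns a list of every <table> it finds on the page, which is why the code indexes with [0]. A minimal check of that, assuming the same urlvoid scan URL used above:

import pandas as pd

# Quick peek at what read_html returns for one scan page
tables = pd.read_html("https://www.urlvoid.com/scan/bbc.co.uk/")
print(len(tables))        # number of <table> elements found on the page
print(tables[0].head())   # first table: two columns (field name, field value)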

CSV snapshot: (screenshot of the resulting domain.csv and file link omitted from the original post)


Update with a try...except block, since some of the URLs do not return data.

urllist=['gov.ie','','who.int', 'comune.staranzano.go.it', 'cooptrasportiriolo.it', 'laprovinciadicomo.it', 'asufc.sanita.fvg.it', 'canale7.tv', 'gradenigo.it', 'leggo.it', 'urbanpost.it', 'monitorimmobiliare.it', 'comune.villachiara.bs.it', 'ilcittadinomb.it', 'europamulticlub.com']

dffinal = pd.DataFrame()
for url in urllist:
    try:
        df = pd.read_html("https://www.urlvoid.com/scan/" + url + "/")[0]
        values = df.values.tolist()
        rows = []
        cols = []
        for li in values:
            rows.append(li[1])
            cols.append(li[0])
        df1 = pd.DataFrame([rows], columns=cols)
        dffinal = dffinal.append(df1, ignore_index=True)
    except Exception:
        # skip URLs whose scan page has no report table
        continue

print(dffinal)
dffinal.to_csv("domain.csv",index=False)
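Note that DataFrame.append was later deprecated and removed in pandas 2.0. On a newer pandas, a sketch of the same loop that collects the one-row frames in a list and concatenates once at the end:

# Sketch for newer pandas (>= 2.0), where DataFrame.append no longer exists
frames = []
for url in urllist:
    try:
        df = pd.read_html("https://www.urlvoid.com/scan/" + url + "/")[0]
        values = df.values.tolist()
        rows = [li[1] for li in values]
        cols = [li[0] for li in values]
        frames.append(pd.DataFrame([rows], columns=cols))
    except Exception:
        continue

dffinal = pd.concat(frames, ignore_index=True)
dffinal.to_csv("domain.csv", index=False)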

Console:

            Website Address  ...         Region
0                     Gov.ie  ...         Dublin
1                    Who.int  ...         Geneva
2    Comune.staranzano.go.it  ...        Unknown
3      Cooptrasportiriolo.it  ...        Unknown
4       Laprovinciadicomo.it  ...        Unknown
5                 Canale7.tv  ...        Unknown
6                   Leggo.it  ...          Milan
7               Urbanpost.it  ...  Ile-de-France
8      Monitorimmobiliare.it  ...        Unknown
9   Comune.villachiara.bs.it  ...        Unknown
10          Ilcittadinomb.it  ...        Unknown

[11 rows x 12 columns]



Answer 2:


Just adding to @KunduK's solution. You can condense part of that code using pandas' .T (transpose function).

So you can turn this part:

df = pd.read_html("https://www.urlvoid.com/scan/" + url + "/")[0]
values = df.values.tolist()
rows = []
cols = []
for li in values:
    rows.append(li[1])
    cols.append(li[0])
df1 = pd.DataFrame([rows], columns=cols)
dffinal = dffinal.append(df1, ignore_index=True)

Into simply:

df = pd.read_html("https://www.urlvoid.com/scan/" + url + "/")[0].set_index(0).T
dffinal = dffinal.append(df, ignore_index=True)
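To see why this works: the scan report is a two-column table (field name, value), so setting the first column as the index and transposing yields a single row whose columns are the field names. A toy illustration with made-up values:

import pandas as pd

# Two-column table shaped like the urlvoid report (hypothetical values)
report = pd.DataFrame([['Website Address', 'Bbc.co.uk'],
                       ['Last Analysis', '9 days ago'],
                       ['Blacklist Status', '0/35']])
print(report.set_index(0).T)
# 0 Website Address Last Analysis Blacklist Status
# 1       Bbc.co.uk    9 days ago             0/35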



Answer 3:


Putting aside the conversion to csv, let's try it this way:

import requests
import pandas as pd
from bs4 import BeautifulSoup as bs

urls = ['gov.ie', 'who.int', 'comune.staranzano.go.it', 'cooptrasportiriolo.it', 'laprovinciadicomo.it', 'asufc.sanita.fvg.it', 'canale7.tv', 'gradenigo.it', 'leggo.it', 'urbanpost.it', 'monitorimmobiliare.it', 'comune.villachiara.bs.it', 'ilcittadinomb.it', 'europamulticlub.com']
ares = []
for u in urls:
    url = 'https://www.urlvoid.com/scan/' + u
    r = requests.get(url)
    ares.append(r)

Note that 3 of the URLs return no data, so you should end up with only 11 rows in the dataframe. Next:

rows = []
cols = []
for ar in ares:
    soup = bs(ar.content, 'lxml')
    tab = soup.select("table.table.table-custom.table-striped")
    if len(tab) > 0:
        dat = tab[0].select('tr')
        line = []
        for d in dat:
            row = d.select('td')
            line.append(row[1].text)      # cell value
            new_header = row[0].text      # field name, collected inside the loop
            if new_header not in cols:
                cols.append(new_header)
        rows.append(line)

my_df = pd.DataFrame(rows, columns=cols)
my_df.info()

Output:

<class 'pandas.core.frame.DataFrame'>
RangeIndex: 11 entries, 0 to 10
Data columns (total 12 columns):
Website Address        11 non-null object
Last Analysis          11 non-null object
Blacklist Status       11 non-null object
Domain Registration    11 non-null object
Domain Information     11 non-null object
IP Address             11 non-null object
Reverse DNS            11 non-null object
ASN                    11 non-null object
Server Location        11 non-null object
Latitude\Longitude     11 non-null object
City                   11 non-null object
Region                 11 non-null object
dtypes: object(12)
memory usage: 1.2+ KB
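To complete the original request (saving to CSV), the resulting frame can then be written out, for example:

my_df.to_csv("domain.csv", index=False)   # same file name used in the other answers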


Source: https://stackoverflow.com/questions/61108005/web-scraping-empty-dataset-after-collecting-information
