Multiple goroutines access/modify a list/map

Submitted by 前提是你 on 2021-01-28 20:10:52

Question


I am trying to implement a multithreaded crawler in Go as a sample task to learn the language.

It is supposed to scan pages, follow links, and save them to a database.

To avoid duplicates, I'm using a map in which I record all the URLs I've already saved.

The synchronous version works fine, but I run into trouble when I try to use goroutines.

I'm trying to use a mutex to synchronize access to the map, and a channel to coordinate the goroutines, but obviously I don't have a clear understanding of them.

The problem is that I have many duplicate entries, so my map store/check does not work properly.

Here is my code:

package main

import (
    "fmt"
    "net/http"
    "golang.org/x/net/html"
    "strings"
    "database/sql"
    _ "github.com/ziutek/mymysql/godrv"
    "io/ioutil"
    "runtime/debug"
    "sync"
)

const maxDepth = 2

var workers = make(chan bool)

type Pages struct {
    mu sync.Mutex
    pagesMap map[string]bool
}

func main() {
    var pagesMutex Pages
    fmt.Println("Start")
    const database = "gotest"
    const user = "root"
    const password = "123"

    //open connection to DB
    con, err := sql.Open("mymysql", database + "/" + user + "/" + password)
    if err != nil { /* error handling */
        fmt.Printf("%s", err)
        debug.PrintStack()
    }

    fmt.Println("call 1st save site")
    pagesMutex.pagesMap = make(map[string]bool)
    go pagesMutex.saveSite(con, "http://golang.org/", 0)

    fmt.Println("saving true to channel")
    workers <- true

    fmt.Println("finishing in main")
    defer con.Close()
}


func (p *Pages) saveSite(con *sql.DB, url string, depth int) {
    fmt.Println("Save ", url, depth)
    fmt.Println("trying to lock")
    p.mu.Lock()
    fmt.Println("locked on mutex")
    pageDownloaded := p.pagesMap[url] == true
    if pageDownloaded {
        p.mu.Unlock()
        return
    } else {
        p.pagesMap[url] = true
    }
    p.mu.Unlock()

    response, err := http.Get(url)
    if err != nil {
        fmt.Printf("%s", err)
        debug.PrintStack()
    } else {
        defer response.Body.Close()

        contents, err := ioutil.ReadAll(response.Body)
        if err != nil {
            fmt.Printf("%s", err)
            debug.PrintStack()
        }

        _, err = con.Exec("insert into pages (url) values (?)", string(url))
        if err != nil {
            fmt.Printf("%s", err)
            debug.PrintStack()
        }
        z := html.NewTokenizer(strings.NewReader(string(contents)))

        for {
            tokenType := z.Next()
            if tokenType == html.ErrorToken {
                return
            }

            token := z.Token()
            switch tokenType {
            case html.StartTagToken: // <tag>

                tagName := token.Data
                if strings.Compare(string(tagName), "a") == 0 {
                    for _, attr := range token.Attr {
                        if strings.Compare(attr.Key, "href") == 0 {
                            if depth < maxDepth  {
                                urlNew := attr.Val
                                if !strings.HasPrefix(urlNew, "http")  {
                                    if strings.HasPrefix(urlNew, "/")  {
                                        urlNew = urlNew[1:]
                                    }
                                    urlNew = url + urlNew
                                }
                                //urlNew = path.Clean(urlNew)
                                go  p.saveSite(con, urlNew, depth + 1)

                            }
                        }
                    }

                }
            case html.TextToken: // text between start and end tag
            case html.EndTagToken: // </tag>
            case html.SelfClosingTagToken: // <tag/>

            }

        }

    }
    val := <-workers
    fmt.Println("finished Save Site", val)
}

Could someone explain to me how to do this properly, please?


Answer 1:


Well, you have two choices. For a small and simple implementation, I would recommend moving the operations on the map into a separate structure.

// Index is a shared page index
type Index struct {
    access sync.Mutex
    pages map[string]bool
}

// Mark records that a site has been visited. Note the pointer receiver:
// with a value receiver the sync.Mutex would be copied on every call and
// would provide no mutual exclusion (go vet reports this).
func (i *Index) Mark(name string) {
    i.access.Lock()
    i.pages[name] = true
    i.access.Unlock()
}

// Visited returns true if a site has been visited
func (i *Index) Visited(name string) bool {
    i.access.Lock()
    defer i.access.Unlock()

    return i.pages[name]
}

Then, add another structure like this:

// Crawler is a web spider :D
type Crawler struct {
    index *Index // a pointer, so every crawler shares one index (and one mutex)
    /* ... other important stuff like visited sites ... */
}

// Crawl looks for content
func (c *Crawler) Crawl(site string) {
    // Implement your logic here
    // For example:
    if !c.index.Visited(site) {
        c.index.Mark(site) // When marked
    }
}

That way you keep things nice and clear; it is probably a little more code, but definitely more readable. You instantiate the crawlers like this:

sameIndex := &Index{pages: make(map[string]bool)}
asManyAsYouWant := Crawler{index: sameIndex} // they will all share sameIndex

If you want to go further with a higher-level solution, then I would recommend a producer/consumer architecture.



Source: https://stackoverflow.com/questions/35443781/multiple-goroutines-access-modify-a-list-map
