Best practices for parallelizing a web crawler in .NET 4.0


Question


I need to download a lot of pages through proxies. What is the best practice for building a multi-threaded web crawler?

Is Parallel.For/ForEach good enough, or is it better suited to heavy CPU-bound tasks?

What do you say about the following code?

var multyProxy = new MultyProxy();
multyProxy.LoadProxyList();

Task[] taskArray = new Task[1000];

for (int i = 0; i < taskArray.Length; i++)
{
    taskArray[i] = new Task(obj =>
    {
        multyProxy.GetPage((string)obj);
    },
    "http://google.com");

    taskArray[i].Start();
}

Task.WaitAll(taskArray);

It works horribly: it's very slow, and I don't know why.

This code also performs badly.

System.Threading.Tasks.Parallel.For(0, 1000,
    new System.Threading.Tasks.ParallelOptions() { MaxDegreeOfParallelism = 30 },
    loop =>
    {
        multyProxy.GetPage("http://google.com");
    });

Well, I think I am doing something wrong.

When I start my script, it uses the network at only 2%-4%.


Answer 1:


You are basically using up CPU-bound threads for IO-bound tasks - i.e. even though you're parallelizing your operations, each one still occupies what is essentially a ThreadPool thread, which is mainly intended for CPU-bound work.

Basically you need to use an async pattern for downloading the data so that it runs on IO completion ports - if you're using WebRequest, that means the BeginGetResponse() and EndGetResponse() methods.
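For example, a minimal sketch of that callback-based pattern (the DownloadPage helper and onCompleted callback are illustrative names, not part of your code) might look like:

using System;
using System.IO;
using System.Net;

static void DownloadPage(string url, Action<string> onCompleted)
{
    var req = (HttpWebRequest)WebRequest.Create(url);

    // BeginGetResponse schedules the download on an IO completion port
    // instead of blocking a ThreadPool thread while waiting for the network.
    req.BeginGetResponse(ar =>
    {
        using (var rsp = req.EndGetResponse(ar))
        using (var reader = new StreamReader(rsp.GetResponseStream()))
        {
            onCompleted(reader.ReadToEnd());
        }
    }, null);
}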

I would suggest looking at Reactive Extensions to do this, e.g.:

IEnumerable<string> urls = ... get your urls here...;
var results = from url in urls.ToObservable()
              let req = WebRequest.Create(url)
              from rsp in Observable.FromAsyncPattern<WebResponse>(
                  req.BeginGetResponse, req.EndGetResponse)()
              select ExtractResponse(rsp);

where ExtractResponse probably just uses StreamReader.ReadToEnd to get the string result, if that's what you're after.
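If it helps, ExtractResponse could be as simple as this sketch (assuming the response body as a string is all you need):

static string ExtractResponse(WebResponse rsp)
{
    // Read the whole response body and dispose the response when done.
    using (rsp)
    using (var reader = new StreamReader(rsp.GetResponseStream()))
    {
        return reader.ReadToEnd();
    }
}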

You can also look at using the .Retry operator, which will easily allow you to retry a few times if you get connection issues, etc.
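As an illustration (a sketch, not tested), wrapping each request in Observable.Defer lets .Retry re-create the WebRequest on every attempt rather than reusing the failed one:

var results = from url in urls.ToObservable()
              from rsp in Observable.Defer(() =>
                  {
                      var req = WebRequest.Create(url);
                      return Observable.FromAsyncPattern<WebResponse>(
                          req.BeginGetResponse, req.EndGetResponse)();
                  })
                  .Retry(3)   // retry each URL up to 3 times on failure
              select ExtractResponse(rsp);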




Answer 2:


Add this at the beginning of your main method:

System.Net.ServicePointManager.DefaultConnectionLimit = 100;

This way you will not be limited to a tiny number of concurrent connections.
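For context, a minimal sketch of where that line goes (assuming a console crawler with a Main entry point):

static void Main(string[] args)
{
    // Raise the per-host connection limit before any requests are started;
    // the default of 2 connections per host throttles a crawler badly.
    System.Net.ServicePointManager.DefaultConnectionLimit = 100;

    // ... start the crawler tasks here ...
}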




Answer 3:


This might help you when you use a lot of connections (add to app.config or web.config):

<?xml version="1.0" encoding="utf-8" ?>
<configuration>
  <system.net>
    <connectionManagement>
      <add address="*" maxconnection="50"/>
    </connectionManagement>
  </system.net>
</configuration>

Set your own number of concurrent connections in place of 50.

Read more about it at http://msdn.microsoft.com/en-us/library/fb6y0fyc.aspx



Source: https://stackoverflow.com/questions/10688359/best-practics-for-parallelize-web-crawler-in-net-4-0
