doParallel, cluster vs cores

Submitted by 喜夏-厌秋 on 2019-11-29 09:35:27

Yes, from the software point of view that is right:

on a single machine the two are interchangeable and you will get the same results.


To understand 'cluster' and 'cores' clearly, I suggest thinking about them at the 'hardware' and 'software' levels.

At the hardware level, 'cluster' means network-connected machines that work together by communicating, e.g. over sockets (which requires extra setup/teardown operations, such as the stopCluster call you pointed out). 'Cores' means several hardware cores in the local CPU, which typically cooperate through shared memory (no need to explicitly send a message from A to B).

At the software level, the boundary between cluster and cores is sometimes not that clear. A program can run locally on cores or remotely on a cluster, and high-level software does not need to know the details. So we can mix the two modes: we can use explicit communication locally, by creating a cl on a single machine, and we can also run on multiple cores within each of the remote machines.


Back to your question: is setting cl equivalent to setting cores?

From the software side, yes: the program will be run by the same number of workers and will produce the same results.

From the hardware side, it may be different: cl implies explicit communication and cores implies shared memory, although if the high-level software is well optimized, both settings may follow the same code path on a local machine. I have not looked into doParallel very deeply, so I am not sure whether these two end up identical there.

But in practice, it is better to specify cores for a single machine and cl for a cluster.
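A minimal sketch of the two registration styles described above (worker count of 4 is arbitrary):

```r
library(doParallel)

## Shared-memory style: just give a core count (fork-based on Unix-like systems)
registerDoParallel(cores = 4)

## Explicit-communication style: create and register a cluster object
cl <- makeCluster(4)
registerDoParallel(cl)
## ... foreach(...) %dopar% ... work goes here ...
stopCluster(cl)  # socket clusters need an explicit stop
```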

Hope this helps.

The behavior of doParallel::registerDoParallel(<numeric>) depends on the operating system, see print(doParallel::registerDoParallel) for details.

On Windows machines,

doParallel::registerDoParallel(4)

effectively does

cl <- makeCluster(4)
doParallel::registerDoParallel(cl)

i.e. it sets up four ("PSOCK") workers that run in background R sessions. Then, %dopar% will basically utilize the parallel::parLapply() machinery. With this setup, you do have to worry about global variables and packages being attached on each of the workers.
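A small sketch of that concern: with PSOCK workers, foreach auto-exports variables referenced in the loop body, but packages must be named explicitly via .packages (here stats is attached by default in every R session and listed only for illustration):

```r
library(doParallel)

cl <- makeCluster(4)        # four background "PSOCK" R sessions
registerDoParallel(cl)

x <- 10  # referenced in the loop body, so foreach exports it to each worker
res <- foreach(i = 1:4, .combine = c,
               .packages = "stats") %dopar% {  # packages to attach on workers
  rnorm(1, mean = x + i)
}

stopCluster(cl)
```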

However, on non-Windows machines,

doParallel::registerDoParallel(4)

the result is that %dopar% will utilize the parallel::mclapply() machinery, which in turn relies on forked processes. Since forking is used, you don't have to worry about globals and packages.
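The contrast can be sketched like this: with fork-based workers, objects and attached packages in the main session are inherited by each worker at fork time, so nothing needs to be exported (Unix-like systems only):

```r
library(doParallel)

registerDoParallel(cores = 4)   # fork-based backend on non-Windows

big <- rnorm(1e6)  # visible inside each forked worker without exporting

res <- foreach(i = 1:4, .combine = c) %dopar% {
  mean(big) + i    # 'big' is inherited via the fork
}
```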

I think the chosen answer is too general and actually not accurate, since it doesn't touch on the details of the doParallel package itself. If you read the vignette, it's actually pretty clear.

The parallel package is essentially a merger of the multicore package, which was written by Simon Urbanek, and the snow package, which was written by Luke Tierney and others. The multicore functionality supports multiple workers only on those operating systems that support the fork system call; this excludes Windows. By default, doParallel uses multicore functionality on Unix-like systems and snow functionality on Windows.

We will use snow-like functionality in this vignette, so we start by loading the package and starting a cluster

To use multicore-like functionality, we would specify the number of cores to use instead

In summary, this is system dependent: cluster is the more general mode covering all platforms, while cores is only for Unix-like systems.

To make the interface consistent, the package uses the same function for these two modes.

> library(doParallel)
> cl <- makeCluster(4)
> registerDoParallel(cl)
> getDoParName()
[1] "doParallelSNOW"

> registerDoParallel(cores=4)
> getDoParName()
[1] "doParallelMC"