mclapply with big objects - “serialization is too large to store in a raw vector”

The integer limit is rumored to be addressed very soon in R. In my experience that limit can block datasets with just under 2 billion cells (around the maximum integer value), because low-level functions like sendMaster in the multicore package rely on passing results back as raw vectors. I had around 1 million processes representing about 400 million rows and 800 million cells in data.table format, and when mclapply was sending the results back it ran into this limit.
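If you want a rough sense of how close a result is to that ceiling, one way (a sketch, not part of the original workflow; the data.table below is just a stand-in for your real object) is to measure the serialized size of a representative chunk yourself:

# Sketch: gauge how large a result serializes to before mclapply has to
# ship it back to the parent process. The example object is illustrative.
library(data.table)

obj <- data.table(x = rnorm(1e6), y = rnorm(1e6))
length(serialize(obj, NULL))   # size in bytes of the raw vector R would pass around
.Machine$integer.max           # 2147483647, the ceiling being discussed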

A divide-and-conquer strategy is not that hard, and it works. I realize this is a hack, and one should be able to rely on mclapply.

Instead of one big list, create a list of lists. Each sub-list is smaller than the one that broke, and you feed them into mclapply split by split. Call this file_map (one way to build it is sketched below). The results come back as a list of lists, which you can flatten with a double do.call("c", ...). That way, each time mclapply finishes, the serialized raw vector it returns stays at a manageable size.
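For concreteness, here is one way to build such a file_map. This is only a sketch: the bam_files vector, the directory name, and the batch size of 100 are placeholders for however your inputs are actually organized.

# Sketch: split a long vector of input files into a list of smaller batches.
# "bam_dir" and the batch size are illustrative, not from the original answer.
bam_files <- list.files("bam_dir", pattern = "\\.bam$", full.names = TRUE)
file_map  <- split(bam_files, ceiling(seq_along(bam_files) / 100))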

Just loop over the smaller pieces:

library(parallel)    # mclapply
library(Rsamtools)   # readBamGappedAlignments

collector <- vector("list", length(file_map))  # pre-allocate for speed

for (index in seq_along(file_map)) {
  reduced_set <- mclapply(file_map[[index]], function(x) {
    on.exit(message(sprintf("Completed: %s", x)))
    message(sprintf("Started: '%s'", x))
    readBamGappedAlignments(x)
  }, mc.cores = 10)
  collector[[index]] <- reduced_set
}

output <- do.call("c", do.call("c", collector))  # double concatenate of the list of lists

Alternatively, save the output to a database, such as SQLite, as you go; a rough sketch is below.
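For example, one way to do that with RSQLite (a sketch only: the table name, the coercion to a data frame, and the reuse of file_map and readBamGappedAlignments from above are all assumptions about your setup):

# Sketch: write each chunk of results to SQLite as it finishes, so nothing
# large has to be held in memory or serialized in one piece at the end.
library(parallel)
library(Rsamtools)
library(DBI)
library(RSQLite)

con <- dbConnect(RSQLite::SQLite(), "results.sqlite")
for (index in seq_along(file_map)) {
  reduced_set <- mclapply(file_map[[index]], readBamGappedAlignments, mc.cores = 10)
  # assumes the alignment objects coerce cleanly with as.data.frame
  chunk_df <- do.call(rbind, lapply(reduced_set, as.data.frame))
  dbWriteTable(con, "results", chunk_df, append = TRUE)
}
dbDisconnect(con)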
