speed up large result set processing using rmongodb

Submitted by 左心房为你撑大大i on 2019-12-09 12:09:35

Question


I'm using rmongodb to get every document in a particular collection. It works, but I'm working with millions of small documents, potentially 100M or more. I'm using the method suggested by the author on the website: cnub.org/rmongodb.ashx

library(rmongodb)

count <- mongo.count(mongo, ns, query)
cursor <- mongo.find(mongo, ns, query)

# Pre-allocate one vector per field, then fill them while walking the cursor
name <- vector("character", count)
age <- vector("numeric", count)
i <- 1
while (mongo.cursor.next(cursor)) {
    b <- mongo.cursor.value(cursor)
    name[i] <- mongo.bson.value(b, "name")
    age[i] <- mongo.bson.value(b, "age")
    i <- i + 1
}
df <- as.data.frame(list(name=name, age=age))

This works fine for hundreds or thousands of results, but that while loop is VERY VERY slow. Is there some way to speed this up? Maybe an opportunity for multiprocessing? Any suggestions would be appreciated. I'm averaging 1M per hour, and at this rate I'll need a week just to build the data frame.

EDIT: I've noticed that the more vectors there are in the while loop, the slower it gets. I'm now trying to loop separately for each vector. It still seems like a hack, though; there must be a better way.
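For reference, a minimal sketch of what looping separately per field could look like (rmongodb cursors are forward-only, so each field gets its own mongo.find() pass):

# One cursor pass per field; more round trips to the server, but each loop
# body only touches a single pre-allocated vector.
name <- vector("character", count)
cursor <- mongo.find(mongo, ns, query)
i <- 1
while (mongo.cursor.next(cursor)) {
  name[i] <- mongo.bson.value(mongo.cursor.value(cursor), "name")
  i <- i + 1
}

age <- vector("numeric", count)
cursor <- mongo.find(mongo, ns, query)
i <- 1
while (mongo.cursor.next(cursor)) {
  age[i] <- mongo.bson.value(mongo.cursor.value(cursor), "age")
  i <- i + 1
}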

Edit 2: I'm having some luck with data.table. It's still running, but it looks like it will finish the 12M (my current test set) in 4 hours. That's progress, but far from ideal.

library(data.table)

# Pre-allocate the data.table, then fill it in place with set(), which
# avoids the per-call overhead of [.data.table inside a loop
dt <- data.table(uri=rep("NA",count),
                 time=rep(0,count),
                 action=rep("NA",count),
                 bytes=rep(0,count),
                 dur=rep(0,count))

i <- 1
while (mongo.cursor.next(cursor)) {
  b <- mongo.cursor.value(cursor)
  set(dt, i, 1L,  mongo.bson.value(b, "cache"))
  set(dt, i, 2L,  mongo.bson.value(b, "path"))
  set(dt, i, 3L,  mongo.bson.value(b, "time"))
  set(dt, i, 4L,  mongo.bson.value(b, "bytes"))
  set(dt, i, 5L,  mongo.bson.value(b, "elaps"))
  i <- i + 1
}


Answer 1:


You might want to try the mongo.find.exhaust option:

cursor <- mongo.find(mongo, ns, query, options=mongo.find.exhaust)

This would be the easiest fix, if it actually works for your use case.

However, the rmongodb driver seems to be missing some extra features available in other drivers. For example, the JavaScript driver has a Cursor.toArray method, which dumps all the find results directly into an array. The R driver has a mongo.bson.to.list function, but a mongo.cursor.to.list is probably what you really want. It's probably worth pinging the driver developer for advice.
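Until something like that exists, a rough do-it-yourself version could look like this (a sketch only; whether it beats the explicit while loop depends on how much of the cost is in the repeated mongo.bson.value() calls):

library(rmongodb)

# Sketch of a home-grown "cursor to list": convert each BSON document to
# an R list and collect the results in a pre-allocated list.
cursor.to.list <- function(cursor, expected = 1e5) {
  out <- vector("list", expected)
  i <- 1
  while (mongo.cursor.next(cursor)) {
    out[[i]] <- mongo.bson.to.list(mongo.cursor.value(cursor))
    i <- i + 1
  }
  length(out) <- i - 1   # trim the unused tail
  out
}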

A hacky solution could be to create a new collection whose documents are data "chunks" of 100,000 of the original documents each. Each of these could then be read efficiently with mongo.bson.to.list. The chunked collection could be constructed using the Mongo server's MapReduce functionality.
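A rough sketch of what that server-side step might look like from R, assuming (purely for illustration) that the original documents carry a numeric field named seq that can be bucketed into groups of 100,000, and that the source and target collections are called mycollection and mycollection_chunks in a database called mydb:

# Sketch only: bucket documents by a hypothetical numeric "seq" field and
# write the chunks out to a new collection via the mapReduce command.
map_js <- "function () { emit(Math.floor(this.seq / 100000), { docs: [this] }); }"
reduce_js <- "function (key, values) {
  var all = [];
  values.forEach(function (v) { all = all.concat(v.docs); });
  return { docs: all };
}"

cmd <- mongo.bson.from.list(list(
  mapreduce = "mycollection",        # hypothetical source collection
  map       = map_js,
  reduce    = reduce_js,
  out       = "mycollection_chunks"  # hypothetical target collection
))
mongo.command(mongo, "mydb", cmd)    # "mydb" is a placeholder database name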




Answer 2:


I know of no faster way to do this in a general manner. You are importing data from a foreign application and working with an interpreted language, and there's no way rmongodb can anticipate the structure of the documents in the collection. The process is inherently slow when you are dealing with thousands of documents.



Source: https://stackoverflow.com/questions/13965261/speed-up-large-result-set-processing-using-rmongodb
