filehash

difference between ff and filehash package in R [closed]

人盡茶涼 submitted on 2020-01-14 08:19:25
Question (closed 7 years ago as not a good fit for the Q&A format): I have a data frame of 25 columns and ~1M rows, split across 12 files. I now need to import them and then use some reshape package to …
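Although the excerpt is cut off, the question is essentially about combining several CSV files that together are too large to handle comfortably in RAM. As a rough illustration of how ff approaches this, the sketch below appends each file into a single on-disk ffdf object; the file names, and the assumption that all 12 files share the same 25 columns, are made up for illustration:

    library(ff)

    files <- sprintf("data_%02d.csv", 1:12)   # hypothetical file names

    # read.csv.ffdf() keeps each column on disk as an ff vector; passing the
    # growing ffdf back in as `x` appends the next file instead of reloading.
    big <- NULL
    for (f in files) {
      big <- read.csv.ffdf(x = big, file = f, header = TRUE)
    }

    dim(big)       # ~1M rows x 25 columns, only a small buffer held in RAM
    big[1:5, ]     # indexing a slice returns an ordinary data.frame

filehash, by contrast, is a key-value store: arbitrary R objects are serialized into a database file and fetched back by name, which suits heterogeneous collections of objects better than rectangular, column-oriented data like the above.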

Interactively work with list objects that take up massive memory

自古美人都是妖i submitted on 2019-12-04 03:47:21
Question: I have recently discovered the wonders of the packages bigmemory, ff and filehash for handling very large matrices. How can I handle very large (300MB+) lists? I work with these lists all day, every day. I can use band-aid solutions with save() and load() hacks everywhere, but I would prefer a bigmemory-like solution. Something like a bigmemory bigmatrix would be ideal, where I work with it basically as if it were a matrix, except that it takes up something like 660 bytes of my RAM. These lists are mostly >1000-length lists of lm() objects (or similar regression objects). For example, Y <- …
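Although the excerpt cuts off, the use case it describes maps fairly directly onto filehash's dumpList(), which writes each list element to its own key in an on-disk database so that only the models you actually fetch occupy RAM. The sketch below is a minimal illustration with a made-up formula and data set, not the asker's actual models:

    library(filehash)

    # Toy data standing in for the real regressions: 100 groups of 10 rows.
    set.seed(1)
    dat <- data.frame(x = rnorm(1000), g = gl(100, 10))
    dat$y <- 2 * dat$x + rnorm(1000)

    # One lm() per group -- a stand-in for the ">1000 regression objects".
    models <- lapply(split(dat, dat$g), function(d) lm(y ~ x, data = d))

    # dumpList() serializes each (named) list element to its own key on disk.
    db <- dumpList(models, dbName = "models.db")
    rm(models)                     # the full list no longer has to sit in RAM

    dbList(db)[1:5]                # keys, one per fitted model
    coef(dbFetch(db, "7"))         # pull a single model back in on demand

If environment-like access is preferred, db2env(db) wraps the same database in an environment so individual models can be read with get().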