mmap

mmap, msync(MS_ASYNC) and munmap

余生颓废 submitted on 2019-12-24 00:24:56
Question: If I call msync() with MS_ASYNC on a memory-mapped region, the sync is handled asynchronously. However, if I call munmap() on that region immediately afterwards, can I assume that the msync will still be carried out safely? Or do I have to call msync() before munmap()?

Answer 1: The short answer is yes: the changes to the contents will eventually (and safely) make their way to the file, even if you never call msync. From man 2 mmap:

MAP_SHARED
    Share this mapping. Updates to the mapping are visible to other processes mapping the same region, and (in the case of file-backed mappings) are carried through to the underlying file.
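A minimal C sketch of that guarantee, assuming a file named data.bin that is at least one page long already exists: the page is dirtied and immediately unmapped with no msync() at all, and the change still reaches the file, because munmap() does not discard dirty pages; it leaves them in the page cache for the kernel's normal writeback.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.bin", O_RDWR);
    if (fd < 0) { perror("open"); return 1; }

    char *p = mmap(NULL, 4096, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[0] = 'X';           /* dirty the shared page */
    munmap(p, 4096);      /* unmap immediately; no msync() anywhere */

    char c;
    pread(fd, &c, 1, 0);  /* read back through the file descriptor */
    printf("first byte is now: %c\n", c);
    close(fd);
    return 0;
}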

Haskell Read Last Line with a Lazy mmap

瘦欲@ submitted on 2019-12-23 23:24:09
Question: I want to read the last line of my file and make sure it has the same number of fields as my first line; I don't care about anything in the middle. I'm using mmap because it's fast for random access on large files, but I'm running into problems from not understanding Haskell or laziness.

λ> import qualified Data.ByteString.Lazy.Char8 as LB
λ> import System.IO.MMap
λ> outh <- mmapFileByteStringLazy fname Nothing
λ> LB.length outh
87094896
λ> LB.takeWhile (`notElem` "\n") outh
"\"Field1\",\"Field2\",
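For comparison, here is the access pattern the question is after, sketched in C for concreteness (the filename is a placeholder): map the file and scan backwards from the end, so only the pages holding the last line are ever faulted in, however large the file is.

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void) {
    int fd = open("data.csv", O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0 || st.st_size == 0) return 1;

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* skip any trailing newlines, then walk back to the previous one */
    off_t end = st.st_size;
    while (end > 0 && p[end - 1] == '\n') end--;
    off_t start = end;
    while (start > 0 && p[start - 1] != '\n') start--;

    printf("last line: %.*s\n", (int)(end - start), p + start);
    munmap(p, st.st_size);
    close(fd);
    return 0;
}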

Sharing an array of structs using mmap

拟墨画扇 submitted on 2019-12-23 20:11:43
Question: I am trying to create an array of structs that is shared between a parent and child process. I am getting a segmentation fault when trying to access the array data. I feel certain that the problem has something to do with the way I'm using pointers, as this is an area I'm not very comfortable with. Please note that I edited out most of the code that didn't seem relevant.

/* structure of Registration Table */
struct registrationTable {
    int port;
    char name[MAXNAME];
    int req_no;
};

main() {
    /*
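The excerpt breaks off above, so here is a hedged sketch of the usual pattern (MAXNAME and the table length below are placeholder values): allocate the array itself with a shared anonymous mapping before fork(), so parent and child address the same memory rather than copy-on-write duplicates.

#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/wait.h>
#include <unistd.h>

#define MAXNAME   32
#define TABLE_LEN 10

struct registrationTable {
    int  port;
    char name[MAXNAME];
    int  req_no;
};

int main(void) {
    /* the array lives in a shared anonymous mapping, not in malloc'd memory */
    struct registrationTable *table =
        mmap(NULL, TABLE_LEN * sizeof *table, PROT_READ | PROT_WRITE,
             MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (table == MAP_FAILED) { perror("mmap"); return 1; }

    if (fork() == 0) {                       /* child writes... */
        table[0].port = 8080;
        strcpy(table[0].name, "child");
        table[0].req_no = 1;
        _exit(0);
    }
    wait(NULL);                              /* ...parent reads */
    printf("port=%d name=%s req_no=%d\n",
           table[0].port, table[0].name, table[0].req_no);

    munmap(table, TABLE_LEN * sizeof *table);
    return 0;
}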

Reading memory mapped bzip2 compressed file

拈花ヽ惹草 submitted on 2019-12-23 15:02:38
Question: So I'm playing with the Wikipedia dump file. It's an XML file that has been bzipped. I can write all the files to directories, but then when I want to do analysis, I have to reread all the files from disk. That gives me random access, but it's slow. I have the RAM to put the entire bzipped file into memory. I can load the dump file just fine and read all the lines, but I cannot seek in it, as it's gigantic. From what it seems, the bz2 library has to read and capture the offset before it can
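One way around the seek problem, sketched in C with libbz2 (the filename and buffer sizes here are assumptions; link with -lbz2): decompress the whole file into RAM once, after which the plain buffer can be indexed at any offset. Note that BZ2_bzBuffToBuffDecompress takes 32-bit lengths, so a full Wikipedia dump would need the streaming BZ2_bzDecompress interface instead.

#include <bzlib.h>
#include <stdio.h>
#include <stdlib.h>

int main(void) {
    FILE *f = fopen("dump.xml.bz2", "rb");
    if (!f) { perror("fopen"); return 1; }
    fseek(f, 0, SEEK_END);
    long clen = ftell(f);                 /* compressed size */
    rewind(f);

    char *cbuf = malloc(clen);
    if (fread(cbuf, 1, clen, f) != (size_t)clen) { perror("fread"); return 1; }
    fclose(f);

    unsigned int dlen = 1u << 30;         /* guess an upper bound for plain size */
    char *dbuf = malloc(dlen);
    int rc = BZ2_bzBuffToBuffDecompress(dbuf, &dlen, cbuf,
                                        (unsigned int)clen, 0, 0);
    if (rc != BZ_OK) { fprintf(stderr, "bzip2 error %d\n", rc); return 1; }

    printf("decompressed %u bytes; dbuf is now freely seekable in memory\n", dlen);
    free(cbuf);
    free(dbuf);
    return 0;
}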

Increasing a file's size using mmap

有些话、适合烂在心里 submitted on 2019-12-23 10:08:08
Question: In Python on Windows I can create a large file with

from mmap import mmap
f = open('big.file', 'w')
f.close()
f = open('big.file', 'r+')
m = mmap(f.fileno(), 10**9)

and now big.file is (about) 1 gigabyte. On Linux, though, this raises ValueError: mmap length is greater than file size. Is there a way to get the same behavior on Linux as on Windows? That is, to be able to increase a file's size using mmap?

Answer 1: On POSIX systems at least, mmap() cannot be used to increase (or decrease) the size of a file.
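The usual workaround is to grow the file explicitly before mapping it: in Python, call f.truncate(10**9) before the mmap call. A minimal C sketch of the same pattern (the file name and size are chosen to mirror the example above):

#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    int fd = open("big.file", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }

    off_t size = 1000000000;             /* ~1 GB */
    if (ftruncate(fd, size) < 0) {       /* grow the file first */
        perror("ftruncate");
        return 1;
    }

    char *p = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    p[size - 1] = '\0';                  /* the mapping covers the whole file */
    munmap(p, size);
    close(fd);
    return 0;
}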

php-fpm7.1 mmap/munmap (very) slow performance on virtualized systems (hugepage)

核能气质少年 submitted on 2019-12-23 07:47:34
Question: My php-fpm process is facing performance issues on Ubuntu 14.04 LTS (Nginx server, MariaDB database).

strace -f $(pidof php-fpm7.1 | sed 's/\([0-9]*\)/\-p \1/g')

gave me:

<... epoll_wait resumed> {}, 1, 1000) = 0
[pid 32533] epoll_wait(8, {}, 1, 103) = 0
[pid 32533] epoll_wait(8, <unfinished ...>
[pid 32535] mmap(NULL, 2097152, PROT_READ|PROT_WRITE, MAP_PRIVATE|MAP_ANONYMOUS, -1, 0) = 0x7fd933fdd000
[pid 32535] munmap(0x7fd933fdd000, 2097152) = 0
[pid 32535] mmap(NULL, 4190208, PROT_READ|PROT

How can one add text files in MongoDB?

只谈情不闲聊 submitted on 2019-12-23 04:19:12
Question: I have a requirement to insert a text file into MongoDB, retrieve it back, and then check whether the files are the same. I am hoping to do it without GridFS, as the files I want to use are smaller than 16 MB. Can you suggest ways to do this, considering that the MongoDB setup is a very basic one? Thanks.

Answer 1: A text file that is less than 16 MB can be stored as a simple key-value pair in a plain document. No need for GridFS and no need for Binary or JSON objects. If in doubt, try it.
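A hedged sketch of that plain-document approach using the MongoDB C driver, libmongoc (the connection string, database and collection names, and the inline file body are all placeholders; a real program would read the file contents into a buffer first):

#include <mongoc/mongoc.h>
#include <stdio.h>

int main(void) {
    mongoc_init();
    mongoc_client_t *client = mongoc_client_new("mongodb://localhost:27017");
    mongoc_collection_t *coll =
        mongoc_client_get_collection(client, "test", "files");

    /* the file body is stored as an ordinary string field */
    bson_t *doc = BCON_NEW("name", BCON_UTF8("notes.txt"),
                           "contents", BCON_UTF8("file body goes here"));

    bson_error_t error;
    if (!mongoc_collection_insert_one(coll, doc, NULL, NULL, &error))
        fprintf(stderr, "insert failed: %s\n", error.message);

    bson_destroy(doc);
    mongoc_collection_destroy(coll);
    mongoc_client_destroy(client);
    mongoc_cleanup();
    return 0;
}

Reading the document back and comparing the "contents" field against the original file then verifies the round trip.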

vm.max_map_count and mmapfs

﹥>﹥吖頭↗ submitted on 2019-12-23 03:26:19
Question: What are the pros and cons of increasing vm.max_map_count from 64k to 256k? Does vm.max_map_count = 65530 imply that 64k addresses * 64 KB page size = up to 4 GB of data can be referenced by the process? And if I exceed 4 GB, the addressable space implied by the vm.max_map_count limit, will the OS need to page out some of the older accessed index data? Maybe my understanding above is not correct, as the FS cache can be pretty huge. How does this limit result in OOM? I posted a similar question on elasticsearch
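One clarifying point: the limit counts distinct memory mappings (VMAs), not bytes, so the addresses-times-page-size arithmetic does not follow; each mapping can be any size. The small Linux-only C sketch below (assuming default sysctl settings) runs into the limit directly, which is also where the ENOMEM/OOM-looking failures come from: once a process owns vm.max_map_count mappings, further mmap() calls fail with ENOMEM however little memory they request.

#include <errno.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void) {
    long pagesize = sysconf(_SC_PAGESIZE);
    long count = 0;
    for (;;) {
        /* alternate protections so the kernel cannot merge adjacent
           mappings into a single VMA */
        int prot = (count % 2) ? PROT_READ : PROT_NONE;
        void *p = mmap(NULL, pagesize, prot,
                       MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
        if (p == MAP_FAILED) {
            printf("mmap failed after %ld mappings: %s\n",
                   count, strerror(errno));
            return 0;
        }
        count++;
    }
}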

having linux persist memory changes to disk

十年热恋 submitted on 2019-12-23 02:28:25
Question: I was trying to see if I could have the OS (Linux) persist memory changes to disk for me. I would map certain sections of a file into memory; the file, let's say, would be a circular queue. I figured it would be more efficient if I let the OS handle writing the changed pages to disk. I started looking into mmap(), msync() and munmap(). I found the following article: c linux msync(MS_ASYNC) flush order, in which one of the posts indicates that MS_ASYNC of msync() is a no-op, since the OS already writes dirty pages back to disk on its own.
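A minimal sketch of that approach, assuming Linux and a placeholder file name queue.dat: with a MAP_SHARED file-backed mapping, plain stores dirty the page cache and the kernel writes those pages back on its own schedule; msync() with MS_SYNC is only needed at the points where the data must be known to be on disk.

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <unistd.h>

#define QUEUE_SIZE 4096

int main(void) {
    int fd = open("queue.dat", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, QUEUE_SIZE) < 0) { perror("ftruncate"); return 1; }

    char *q = mmap(NULL, QUEUE_SIZE, PROT_READ | PROT_WRITE,
                   MAP_SHARED, fd, 0);
    if (q == MAP_FAILED) { perror("mmap"); return 1; }

    memcpy(q, "head-of-queue", 14);   /* plain stores, no write() calls */

    /* Optional: force the data to disk at a known point instead of
       waiting for the kernel's periodic dirty-page writeback. */
    msync(q, QUEUE_SIZE, MS_SYNC);

    munmap(q, QUEUE_SIZE);
    close(fd);
    return 0;
}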