mmap

mmap vs O_DIRECT for random reads (what are the buffers involved?)

Submitted by Anonymous (unverified) on 2019-12-03 08:48:34
Question: I am implementing a disk-based hashtable supporting a large number of keys (26+ million). The value is deserialized. Reads are essentially random throughout the file, values are smaller than the page size, and I am optimising for SSDs. Safety/consistency are not such huge issues (performance matters). My current solution involves using an mmap()'d file with MADV_RANDOM | MADV_DONTNEED set to disable prefetching by the kernel and only load data as needed on demand. I get the idea that the kernel reads from disk to a memory buffer, and I deserialize
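
A minimal sketch of the setup described, not the asker's code: map the data file read-only and hint that access is random so readahead is skipped. The file name, record size, and offset are placeholders. (Note that MADV_DONTNEED discards currently resident pages rather than turning off caching, so only MADV_RANDOM is used here.)

```c
#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(void)
{
    int fd = open("table.dat", O_RDONLY);          /* hypothetical data file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *base = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (base == MAP_FAILED) { perror("mmap"); return 1; }

    /* Disable readahead; pages are faulted in individually on access. */
    madvise(base, st.st_size, MADV_RANDOM);

    /* Example random lookup: copy one 64-byte record out of the mapping. */
    off_t offset = 4096 * 1234;                    /* hypothetical position */
    char record[64];
    if (offset + (off_t)sizeof record <= st.st_size)
        memcpy(record, base + offset, sizeof record);

    munmap(base, st.st_size);
    close(fd);
    return 0;
}
```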

Docker memory limit causes SLUB unable to allocate with large page cache

Submitted by ≡放荡痞女 on 2019-12-03 07:53:00
Question: Given a process that creates a large Linux kernel page cache via mmap'd files, running it in a Docker container (cgroup) with a memory limit causes kernel slab allocation errors:
Jul 18 21:29:01 ip-10-10-17-135 kernel: [186998.252395] SLUB: Unable to allocate memory on node -1 (gfp=0x2080020)
Jul 18 21:29:01 ip-10-10-17-135 kernel: [186998.252402] cache: kmalloc-2048(2412:6c2c4ef2026a77599d279450517cb061545fa963ff9faab731daab2a1f672915), object size: 2048, buffer size: 2048, default order: 3,
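
A sketch of the kind of workload being described, under assumptions (the file name is a placeholder, and this is not the asker's program): mapping a large file and touching every page populates the kernel page cache, which is charged against the container's cgroup memory limit.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

int main(int argc, char **argv)
{
    const char *path = argc > 1 ? argv[1] : "big.dat";   /* hypothetical file */
    int fd = open(path, O_RDONLY);
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    char *p = mmap(NULL, st.st_size, PROT_READ, MAP_SHARED, fd, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    /* Touch one byte per page so every page is faulted into the page cache. */
    long page = sysconf(_SC_PAGESIZE);
    volatile char sink = 0;
    for (off_t off = 0; off < st.st_size; off += page)
        sink += p[off];

    munmap(p, st.st_size);
    close(fd);
    return 0;
}
```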

mmap slower than ioremap

Submitted by 谁都会走 on 2019-12-03 07:26:04
Question: I am developing for an ARM device running Linux 2.6.37. I am trying to toggle an I/O pin as fast as possible. I made a small kernel module and a user-space application. I tried two things: manipulating the GPIO control registers directly from kernel space using ioremap, and mmap()'ing the GPIO control registers without caching and using them from user space. Both methods work, but the second is about 3 times slower than the first (observed on an oscilloscope). I think I disabled all caching
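
A user-space sketch of the second approach, with loud assumptions: GPIO_BASE and the register offset are placeholders and must come from the SoC manual; opening /dev/mem with O_SYNC is the conventional way to ask for an uncached device mapping, but behaviour is platform-dependent.

```c
#include <fcntl.h>
#include <stdint.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define GPIO_BASE 0x48000000UL   /* hypothetical physical base address */
#define GPIO_SIZE 0x1000UL

int main(void)
{
    /* O_SYNC requests an uncached mapping of the device registers. */
    int fd = open("/dev/mem", O_RDWR | O_SYNC);
    if (fd < 0) { perror("open /dev/mem"); return 1; }

    volatile uint32_t *gpio = mmap(NULL, GPIO_SIZE, PROT_READ | PROT_WRITE,
                                   MAP_SHARED, fd, GPIO_BASE);
    if (gpio == MAP_FAILED) { perror("mmap"); return 1; }

    /* Toggle a pin via the (hypothetical) data register at offset 0;
     * every access is an uncached bus transaction. */
    for (int i = 0; i < 1000000; i++)
        gpio[0] ^= 1u;

    munmap((void *)gpio, GPIO_SIZE);
    close(fd);
    return 0;
}
```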

Is it possible to “punch holes” through mmap'ed anonymous memory?

Submitted by 我的未来我决定 on 2019-12-03 06:42:20
Consider a program which uses a large number of roughly page-sized memory regions (say 64 kB or so), each of which is rather short-lived. (In my particular case, these are alternate stacks for green threads.) How would one best allocate these regions, so that their pages can be returned to the kernel once a region isn't in use anymore? The naïve solution would clearly be to simply mmap each of the regions individually, and munmap them again as soon as I'm done with them. I feel this is a bad idea, though, since there are so many of them. I suspect that the VMM may start scaling badly
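
A minimal sketch of the "hole punching" idea for anonymous memory: reserve one large anonymous mapping up front and release individual regions back to the kernel with madvise(MADV_DONTNEED) instead of munmap'ing each one. Region count and size are illustrative, not taken from the question.

```c
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define REGION_SIZE (64 * 1024)          /* one green-thread stack */
#define NREGIONS    1024

int main(void)
{
    size_t total = (size_t)REGION_SIZE * NREGIONS;
    char *pool = mmap(NULL, total, PROT_READ | PROT_WRITE,
                      MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (pool == MAP_FAILED) { perror("mmap"); return 1; }

    char *region = pool + 5 * REGION_SIZE;   /* hand out region #5 */
    memset(region, 0xab, REGION_SIZE);       /* use it: pages get faulted in */

    /* "Punch a hole": the pages are freed, but the address range stays
     * reserved and reads back as zeroes if touched again. */
    if (madvise(region, REGION_SIZE, MADV_DONTNEED) != 0)
        perror("madvise");

    munmap(pool, total);
    return 0;
}
```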

Anonymous mmap zero-filled?

Submitted by 倖福魔咒の on 2019-12-03 06:24:11
In Linux, the mmap(2) man page explains that an anonymous mapping . . . is not backed by any file; its contents are initialized to zero. The FreeBSD mmap(2) man page does not make a similar guarantee about zero-filling, though it does promise that bytes after the end of a file in a non-anonymous mapping are zero-filled. Which flavors of Unix promise to return zero-initialized memory from anonymous mmaps? Which ones return zero-initialized memory in practice, but make no such promise on their man pages? It is my impression that zero-filling is partially for security reasons. I wonder if any
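
A small check in POSIX-style C, assuming MAP_ANONYMOUS is available: map one page anonymously and verify that every byte reads as zero. On Linux the man page guarantees this; on other systems the check only demonstrates observed behaviour, not a documented promise.

```c
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    unsigned char *p = mmap(NULL, page, PROT_READ | PROT_WRITE,
                            MAP_PRIVATE | MAP_ANONYMOUS, -1, 0);
    if (p == MAP_FAILED) { perror("mmap"); return 1; }

    for (long i = 0; i < page; i++) {
        if (p[i] != 0) {
            printf("non-zero byte at offset %ld\n", i);
            return 1;
        }
    }
    puts("page is zero-filled");
    munmap(p, page);
    return 0;
}
```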

mmap and memory usage

Submitted by 感情迁移 on 2019-12-03 05:34:07
I am writing a program that receives huge amounts of data (in pieces of different sizes) from the network, processes them and writes them to memory. Since some pieces of data can be very large, my current approach is limiting the buffer size used. If a piece is larger than the maximum buffer size, I write the data to a temporary file and later read the file in chunks for processing and permanent storage. I'm wondering if this can be improved. I've been reading about mmap for a while but I'm not one hundred percent sure if it can help me. My idea is to use mmap for reading the temporary file.
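
A sketch of the idea being considered, with an assumed temporary-file name and a placeholder processing step: instead of read()ing the spill file back in chunks, map it and let the kernel page the data in on demand while it is processed in place.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

static void process(const char *data, size_t len)
{
    (void)data; (void)len;      /* placeholder for the real processing step */
}

int main(void)
{
    int fd = open("spill.tmp", O_RDONLY);        /* hypothetical temp file */
    if (fd < 0) { perror("open"); return 1; }

    struct stat st;
    if (fstat(fd, &st) < 0) { perror("fstat"); return 1; }

    const char *data = mmap(NULL, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    if (data == MAP_FAILED) { perror("mmap"); return 1; }

    /* Access is sequential here, so let the kernel read ahead aggressively. */
    madvise((void *)data, st.st_size, MADV_SEQUENTIAL);

    process(data, st.st_size);

    munmap((void *)data, st.st_size);
    close(fd);
    return 0;
}
```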

Resizing numpy.memmap arrays

Submitted by 匆匆过客 on 2019-12-03 04:39:10
I'm working with a bunch of large numpy arrays, and as these started to chew up too much memory lately, I wanted to replace them with numpy.memmap instances. The problem is, now and then I have to resize the arrays, and I'd preferably do that in place. This worked quite well with ordinary arrays, but trying that on memmaps raises a complaint that the data might be shared, and even disabling the refcheck does not help. a = np.arange(10) a.resize(20) a >>> array([0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) a = np.memmap('bla.bin', dtype=int) a >>> memmap([0, 0, 0, 0, 0, 0, 0, 0, 0, 0]) a
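
Not a numpy answer, but a sketch of what growing a file-backed mapping in place amounts to at the C level on Linux: extend the backing file with ftruncate() and the mapping with mremap(). The file name and sizes mirror the example above only loosely and are illustrative.

```c
#define _GNU_SOURCE              /* for mremap() */
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    size_t old_size = 10 * sizeof(long);
    size_t new_size = 20 * sizeof(long);

    int fd = open("bla.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, old_size) < 0) { perror("ftruncate"); return 1; }

    long *a = mmap(NULL, old_size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (a == MAP_FAILED) { perror("mmap"); return 1; }

    /* Grow the file first, then grow the mapping; the new tail reads as 0. */
    if (ftruncate(fd, new_size) < 0) { perror("ftruncate"); return 1; }
    a = mremap(a, old_size, new_size, MREMAP_MAYMOVE);
    if (a == MAP_FAILED) { perror("mremap"); return 1; }

    a[19] = 42;                  /* the resized region is usable immediately */
    printf("a[19] = %ld\n", a[19]);

    munmap(a, new_size);
    close(fd);
    return 0;
}
```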

Mmap and struct in C [closed]

Submitted by 冷暖自知 on 2019-12-03 04:05:20
Closed. This question is off-topic and is not currently accepting answers. I would like to read and write structs with mmap in C. I have a function named insert_med which allows the insertion of a new struct med into the mmap, and each struct (with a unique key) has to be written to a different position of the array (when a new struct is added, it has to be added in the last empty position of the array). Two structs med CAN'T have the same key, as you can see in the code below. The key is
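
A hedged sketch of that setup, since the asker's code is not shown here: an array of struct med kept in a file-backed mapping, with an insert_med() that appends at the first empty slot and rejects duplicate keys. The struct layout, file name, and capacity are assumptions for illustration.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

#define MAX_MEDS 100

struct med {
    int  key;                  /* 0 is treated as "empty slot" */
    char name[32];
};

static struct med *table;      /* points into the mapping */

static int insert_med(int key, const char *name)
{
    int empty = -1;
    for (int i = 0; i < MAX_MEDS; i++) {
        if (table[i].key == key)
            return -1;                        /* duplicate key: reject */
        if (table[i].key == 0 && empty < 0)
            empty = i;                        /* remember first free slot */
    }
    if (empty < 0)
        return -1;                            /* table full */
    table[empty].key = key;
    snprintf(table[empty].name, sizeof table[empty].name, "%s", name);
    return empty;
}

int main(void)
{
    size_t size = MAX_MEDS * sizeof(struct med);
    int fd = open("meds.dat", O_RDWR | O_CREAT, 0644);   /* placeholder file */
    if (fd < 0) { perror("open"); return 1; }
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    table = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (table == MAP_FAILED) { perror("mmap"); return 1; }

    printf("inserted at %d\n", insert_med(42, "aspirin"));
    printf("duplicate -> %d\n", insert_med(42, "aspirin"));   /* prints -1 */

    munmap(table, size);
    close(fd);
    return 0;
}
```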

Mapping an array to a file via Mmap in Go

Submitted by 穿精又带淫゛_ on 2019-12-03 03:47:29
Question: I'm trying to map an array to a file via Mmap; the array could be any type, like float64. In C, I find this one. After reading some texts, I wrote this sample. I don't know if it is correct, and it is not writing the values to the file. If I increase the size of the array a lot, e.g. from 1000 to 10000, it crashes. If someone knows how to do this correctly, please tell me. Thanks! Answer 1: For example, revising your sample program, package main import ( "fmt" "os" "syscall" "unsafe" ) func
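
The Go answer above is truncated; as a point of reference for the C approach the asker mentions, here is a hedged sketch (file name and array length are illustrative) of mapping an array of doubles onto a file and writing through the mapping.

```c
#include <fcntl.h>
#include <stdio.h>
#include <sys/mman.h>
#include <unistd.h>

int main(void)
{
    const size_t n = 1000;
    size_t size = n * sizeof(double);

    int fd = open("array.bin", O_RDWR | O_CREAT, 0644);
    if (fd < 0) { perror("open"); return 1; }
    /* The file must be at least as large as the mapping, or stores beyond
     * the end fault with SIGBUS - one plausible cause of the crash above. */
    if (ftruncate(fd, size) < 0) { perror("ftruncate"); return 1; }

    double *a = mmap(NULL, size, PROT_READ | PROT_WRITE, MAP_SHARED, fd, 0);
    if (a == MAP_FAILED) { perror("mmap"); return 1; }

    for (size_t i = 0; i < n; i++)
        a[i] = (double)i * 0.5;              /* writes land in the file */

    msync(a, size, MS_SYNC);                 /* flush before unmapping */
    munmap(a, size);
    close(fd);
    return 0;
}
```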

Windows fsync (FlushFileBuffers) performance with large files

Submitted by Anonymous (unverified) on 2019-12-03 03:08:02
Question: From information on ensuring data is on disk ( http://winntfs.com/2012/11/29/windows-write-caching-part-2-an-overview-for-application-developers/ ), even in the case of e.g. a power outage, it appears that on Windows platforms you need to rely on its "fsync" equivalent, FlushFileBuffers, to have the best guarantee that buffers are actually flushed from disk device caches onto the storage medium itself. The combination of FILE_FLAG_NO_BUFFERING with FILE_FLAG_WRITE_THROUGH does not ensure flushing the device cache, but merely has an effect on the
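
A minimal Win32 sketch in C of the call under discussion, with "data.bin" and the payload as placeholders: write to a file opened without write-through flags, then call FlushFileBuffers() to request that both the OS file buffers and the drive cache are committed to the medium.

```c
#include <stdio.h>
#include <windows.h>

int main(void)
{
    HANDLE h = CreateFileA("data.bin", GENERIC_WRITE, 0, NULL,
                           CREATE_ALWAYS, FILE_ATTRIBUTE_NORMAL, NULL);
    if (h == INVALID_HANDLE_VALUE) {
        fprintf(stderr, "CreateFile failed: %lu\n", GetLastError());
        return 1;
    }

    const char buf[] = "some payload";
    DWORD written = 0;
    if (!WriteFile(h, buf, sizeof buf, &written, NULL)) {
        fprintf(stderr, "WriteFile failed: %lu\n", GetLastError());
        return 1;
    }

    /* The expensive call under discussion: flush the file's buffers and ask
     * the storage device to commit its cache. */
    if (!FlushFileBuffers(h)) {
        fprintf(stderr, "FlushFileBuffers failed: %lu\n", GetLastError());
        return 1;
    }

    CloseHandle(h);
    return 0;
}
```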