Question
In our application we have to be able to map several (maybe up to 4) files into memory (via MapViewOfFile). For a long time this was not a problem, but as the files have been getting bigger over the last few years, memory fragmentation now prevents us from mapping those big files (each file will be about 200 MB). The problem can already occur even if no other files are loaded at that moment.
I am now looking for a way to make sure that the mapping always succeeds. My idea was to reserve a block of memory at program start exclusively for the mappings, so that it would suffer much less from fragmentation.
My first approach was to create a private heap with HeapCreate, then HeapAlloc a block of memory large enough to hold the mapping for one file, and then use MapViewOfFileEx with the address of that block. Of course the address had to match the memory allocation granularity. But the mapping still failed with error code ERROR_INVALID_ADDRESS (487).
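A minimal sketch of that first attempt (file name, sizes and error handling are placeholders, not the original code) looks roughly like this; the heap manager has already committed the pages it hands out, which is why MapViewOfFileEx rejects the address:

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    const SIZE_T viewSize = 200 * 1024 * 1024;   // ~200 MB view (placeholder)

    HANDLE hFile = CreateFileA("file.bin", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE hMapping = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0, NULL);

    // Private heap large enough for one view.
    HANDLE hHeap = HeapCreate(0, viewSize + 0x10000, 0);
    void*  block = HeapAlloc(hHeap, 0, viewSize);

    // Round the block address up to the allocation granularity (usually 64 KB).
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    ULONG_PTR addr = ((ULONG_PTR)block + si.dwAllocationGranularity - 1)
                     & ~((ULONG_PTR)si.dwAllocationGranularity - 1);

    // Fails: the region is already committed/owned by the heap.
    void* view = MapViewOfFileEx(hMapping, FILE_MAP_READ, 0, 0, 0, (void*)addr);
    if (view == NULL)
        printf("MapViewOfFileEx failed: %lu\n", GetLastError()); // 487

    return 0;
}
```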
Next I tried the same thing with VirtualAlloc. My understanding was that passing MEM_RESERVE would let me use that memory for whatever I wanted, e.g. to map a view of a file into it. But I found out that this is not possible (same error code as above) until I completely release the whole block with VirtualFree again. So there would be no reserved memory left for the next files.
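Condensed to a sketch (again with a placeholder file name and no error handling), that experiment behaves like this:

```cpp
#include <windows.h>
#include <stdio.h>

int main()
{
    const SIZE_T viewSize = 200 * 1024 * 1024;

    HANDLE hFile = CreateFileA("file.bin", GENERIC_READ, FILE_SHARE_READ,
                               NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);
    HANDLE hMapping = CreateFileMappingA(hFile, NULL, PAGE_READONLY, 0, 0, NULL);

    // Reserve (not commit) a block at program start.
    void* reserved = VirtualAlloc(NULL, viewSize, MEM_RESERVE, PAGE_NOACCESS);

    // Attempt 1: map into the still-reserved region -> fails with 487.
    void* view = MapViewOfFileEx(hMapping, FILE_MAP_READ, 0, 0, 0, reserved);
    if (view == NULL)
        printf("map into reserved block failed: %lu\n", GetLastError());

    // Attempt 2: release the whole reservation first, then map at the same
    // address. This works, but the address range is no longer protected, and
    // nothing of the reservation is left for further files.
    VirtualFree(reserved, 0, MEM_RELEASE);
    view = MapViewOfFileEx(hMapping, FILE_MAP_READ, 0, 0, 0, reserved);
    printf("map after release: %p\n", view);

    return 0;
}
```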
I'm already using the low-fragmentation heap feature and it is of nearly no use to us. Rewriting our code to use only smaller views of the files is not an option at the moment. I also took a look at this post: Can address space be recycled for multiple calls to MapViewOfFileEx without chance of failure? but didn't find it very useful and was hoping for another possibility.
Do you have any suggestions for what I can do, or where my design may be wrong? Thank you.
Answer 1:
Well, the documentation for MapViewOfFileEx is clear: "The suggested address is used to specify that a file should be mapped at the same address in multiple processes. This requires the region of address space to be available in all involved processes. No other memory allocation can take place in the region that is used for mapping, including the use of the VirtualAlloc or VirtualAllocEx function to reserve memory."
The low-fragmentation heap is intended to prevent even relatively small allocations from failing, i.e. it avoids 1-byte holes so that 2-byte allocations remain possible for longer. Your allocations are not small by 32-bit standards.
Realistically, this is going to hurt. If you really, really need it, reimplement memory-mapped files yourself; all the necessary functions are available. Use a vectored exception handler to page in the source, and use QueryWorkingSet to figure out which pages are dirty.
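A rough, read-only sketch of that idea could look like the following; all names are illustrative, and a real implementation would also need write-back (e.g. using QueryWorkingSet to find dirty pages), eviction, and error handling:

```cpp
#include <windows.h>
#include <stdio.h>

static BYTE*  g_base;       // start of the reserved region
static SIZE_T g_size;       // size of the reserved region
static HANDLE g_file;       // backing file
static DWORD  g_pageSize;

static LONG CALLBACK PageInHandler(PEXCEPTION_POINTERS info)
{
    if (info->ExceptionRecord->ExceptionCode != EXCEPTION_ACCESS_VIOLATION)
        return EXCEPTION_CONTINUE_SEARCH;

    BYTE* fault = (BYTE*)info->ExceptionRecord->ExceptionInformation[1];
    if (fault < g_base || fault >= g_base + g_size)
        return EXCEPTION_CONTINUE_SEARCH;           // not our region

    // Commit the faulting page and fill it from the file.
    BYTE* page = (BYTE*)((ULONG_PTR)fault & ~((ULONG_PTR)g_pageSize - 1));
    VirtualAlloc(page, g_pageSize, MEM_COMMIT, PAGE_READWRITE);

    OVERLAPPED ov = {};
    ULONGLONG offset = (ULONGLONG)(page - g_base);
    ov.Offset     = (DWORD)(offset & 0xFFFFFFFF);
    ov.OffsetHigh = (DWORD)(offset >> 32);
    DWORD read = 0;
    ReadFile(g_file, page, g_pageSize, &read, &ov); // synchronous read at offset

    return EXCEPTION_CONTINUE_EXECUTION;            // retry the faulting access
}

int main()
{
    SYSTEM_INFO si;
    GetSystemInfo(&si);
    g_pageSize = si.dwPageSize;

    g_file = CreateFileA("file.bin", GENERIC_READ, FILE_SHARE_READ,
                         NULL, OPEN_EXISTING, FILE_ATTRIBUTE_NORMAL, NULL);

    // Reserve the address space once, up front, so later fragmentation cannot
    // steal it; pages are committed lazily in the handler.
    g_size = 200 * 1024 * 1024;
    g_base = (BYTE*)VirtualAlloc(NULL, g_size, MEM_RESERVE, PAGE_NOACCESS);

    AddVectoredExceptionHandler(1, PageInHandler);

    printf("first byte: %d\n", g_base[0]);          // faults, gets paged in
    return 0;
}
```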
Source: https://stackoverflow.com/questions/10291864/mapping-of-several-big-files-into-memory