How to avoid running out of memory in a high-memory-usage application? C / C++

天命终不由人 2021-02-19 14:04

I have written a converter that takes OpenStreetMap XML files and converts them to a binary runtime rendering format that is typically about 10% of the original size. Input file

15 Answers
  •  别那么骄傲
    2021-02-19 14:30

    You don't need to switch to 64-bit machines, nor do you need most of the 1,000 things suggested by others. What you need is a more thoughtful algorithm.

    Here are some things you can do to help out with this situation:

    • If you're on Windows, use file mappings (CreateFileMapping/MapViewOfFile). This gives you access to the file through a single buffer pointer, as though you had read the whole file into memory, without actually doing so. Linux provides the same mechanism via mmap; see the sketch after this list.
    • If you can, and it looks like you can, scan the file sequentially and avoid creating an in-memory DOM. This will greatly reduce both your load time and your memory requirements; a SAX-style sketch follows the list.
    • Use pooled memory. You will probably have many tiny objects, such as nodes and points. Use a memory pool to cut per-allocation overhead (I'm assuming you're using an unmanaged language; search for "pooled allocation" and "memory pools"). A minimal pool sketch also follows the list.
    • If you're using a managed language, at least move this particular part into an unmanaged language and take control of the memory and file reading. Managed languages carry non-trivial overhead in both memory footprint and performance. (Yes, I know this is tagged "C++"...)
    • Attempt to design an in-place algorithm, where you read and process only the minimum amount of data at a time, so that your memory requirements go down.
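
    Here is a minimal sketch of the file-mapping approach, assuming POSIX (on Windows you'd use CreateFileMapping/MapViewOfFile instead). The file name "input.osm" and the tag-counting loop are placeholders for the real parsing work:

    ```cpp
    #include <sys/mman.h>
    #include <sys/stat.h>
    #include <fcntl.h>
    #include <unistd.h>
    #include <cstdio>

    int main() {
        int fd = open("input.osm", O_RDONLY);          // placeholder file name
        if (fd < 0) { perror("open"); return 1; }

        struct stat st;
        if (fstat(fd, &st) < 0) { perror("fstat"); close(fd); return 1; }

        // Map the file: the kernel pages data in on demand, so the file is
        // never copied wholesale into the process heap.
        void* map = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
        if (map == MAP_FAILED) { perror("mmap"); close(fd); return 1; }

        // Tell the kernel we read front to back, so it can read ahead of us
        // and drop pages behind us.
        madvise(map, st.st_size, MADV_SEQUENTIAL);

        const char* data = static_cast<const char*>(map);
        size_t tags = 0;
        for (off_t i = 0; i < st.st_size; ++i)
            if (data[i] == '<') ++tags;                // stand-in for real work

        printf("saw %zu opening angle brackets\n", tags);

        munmap(map, st.st_size);
        close(fd);
        return 0;
    }
    ```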
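
    And a sketch of sequential, DOM-free scanning, here using the Expat SAX parser as one option (link with -lexpat); counting <node> elements stands in for your actual conversion step:

    ```cpp
    #include <expat.h>
    #include <cstdio>
    #include <cstring>

    static void XMLCALL on_start(void* userData, const XML_Char* name,
                                 const XML_Char** /*atts*/) {
        // Process each element as it streams past; nothing is retained.
        if (std::strcmp(name, "node") == 0)
            ++*static_cast<std::size_t*>(userData);
    }

    int main() {
        std::size_t nodes = 0;
        XML_Parser p = XML_ParserCreate(nullptr);
        XML_SetUserData(p, &nodes);
        XML_SetStartElementHandler(p, on_start);

        std::FILE* f = std::fopen("input.osm", "rb");  // placeholder file name
        if (!f) { std::perror("fopen"); return 1; }

        char buf[1 << 16];                             // only 64 KiB resident
        std::size_t n;
        while ((n = std::fread(buf, 1, sizeof buf, f)) > 0) {
            if (XML_Parse(p, buf, static_cast<int>(n), 0) == XML_STATUS_ERROR) {
                std::fprintf(stderr, "parse error: %s\n",
                             XML_ErrorString(XML_GetErrorCode(p)));
                break;
            }
        }
        XML_Parse(p, nullptr, 0, 1);                   // signal end of document
        std::fclose(f);
        XML_ParserFree(p);
        std::printf("nodes seen: %zu\n", nodes);
        return 0;
    }
    ```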
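
    Finally, a minimal pool sketch (the Pool and Point names are illustrative, not from the question). Objects come out of large slabs instead of one heap allocation each, which removes per-allocation header overhead and fragmentation:

    ```cpp
    #include <cstddef>
    #include <cstdio>
    #include <memory>
    #include <vector>

    struct Point { double lat, lon; };

    class Pool {
        static constexpr std::size_t kSlab = 4096;     // objects per slab
        std::vector<std::unique_ptr<Point[]>> slabs_;
        std::size_t used_ = kSlab;                     // forces a slab on first use
    public:
        Point* alloc() {
            if (used_ == kSlab) {                      // current slab exhausted
                slabs_.push_back(std::make_unique<Point[]>(kSlab));
                used_ = 0;
            }
            return &slabs_.back()[used_++];
        }
        // No per-object free: everything is reclaimed when the Pool dies,
        // which fits a "parse, convert, write, discard" workload.
    };

    int main() {
        Pool pool;
        for (int i = 0; i < 10000; ++i) {
            Point* p = pool.alloc();
            p->lat = i * 0.001;
            p->lon = -i * 0.001;
        }
        std::printf("allocated 10000 points from slab pool\n");
        return 0;
    }
    ```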

    Finally, let me point out that complex tasks require complex measures. If you think you can afford a 64-bit machine with 8 GB of RAM, then just use the "read file into memory, process data, write output" approach, even if it takes a day to finish.
