I'm currently working on a medical image processing project that needs a huge amount of memory. Is there anything I can do to avoid heap fragmentation and to speed up memory allocation?
I guess you're using something unmanaged, because on managed platforms the system (the garbage collector) takes care of fragmentation.
For C/C++ you can use an allocator other than the default one (there have already been some threads about allocators on Stack Overflow).
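As a minimal sketch of what that looks like in C++, here is a custom allocator plugged into a standard container. The `PoolAllocator` name is hypothetical, and this version just forwards to `malloc`; a real one would carve blocks out of a preallocated arena:

```cpp
#include <cstdlib>
#include <vector>

// Hypothetical allocator type; satisfies the minimal C++11 allocator
// requirements so std::allocator_traits can fill in the rest.
template <typename T>
struct PoolAllocator {
    using value_type = T;

    PoolAllocator() = default;
    template <typename U>
    PoolAllocator(const PoolAllocator<U>&) {}

    T* allocate(std::size_t n) {
        // Forwards to malloc here; a real pool would hand out blocks
        // from its own contiguous arena instead.
        return static_cast<T*>(std::malloc(n * sizeof(T)));
    }
    void deallocate(T* p, std::size_t) { std::free(p); }
};

template <typename T, typename U>
bool operator==(const PoolAllocator<T>&, const PoolAllocator<U>&) { return true; }
template <typename T, typename U>
bool operator!=(const PoolAllocator<T>&, const PoolAllocator<U>&) { return false; }

int main() {
    // Pixel buffers now draw from the custom allocator.
    std::vector<unsigned char, PoolAllocator<unsigned char>> pixels(1024 * 1024);
    pixels[0] = 255;
}
```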
You can also create your own data storage. For example, in the project I'm currently working on, we have a custom storage (pool) for bitmaps: we store them in one large contiguous hunk of memory, because we have a lot of them, and we keep track of heap fragmentation and defragment the pool when the fragmentation gets too big.
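A very simplified sketch of that idea, assuming fixed-size blocks (all names like `BitmapPool` are made up): one contiguous hunk, carved into equal blocks handed out through a free list. With equal block sizes there is nothing to defragment; variable-size blocks are where the compaction step in our project comes in.

```cpp
#include <cstddef>
#include <vector>

class BitmapPool {
public:
    BitmapPool(std::size_t blockSize, std::size_t blockCount)
        : storage_(blockSize * blockCount), blockSize_(blockSize) {
        // Thread every block onto the free list up front.
        for (std::size_t i = 0; i < blockCount; ++i)
            freeList_.push_back(storage_.data() + i * blockSize);
    }

    void* allocate() {
        if (freeList_.empty()) return nullptr;  // pool exhausted
        void* block = freeList_.back();
        freeList_.pop_back();
        return block;
    }

    void release(void* block) {
        freeList_.push_back(static_cast<char*>(block));
    }

private:
    std::vector<char> storage_;    // the contiguous hunk of memory
    std::size_t blockSize_;
    std::vector<char*> freeList_;  // blocks currently available for reuse
};

int main() {
    BitmapPool pool(512 * 512, 16);  // room for 16 bitmaps of 256 KiB each
    void* bmp = pool.allocate();
    // ... use the block as bitmap storage ...
    pool.release(bmp);
}
```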