Question
As far as I know, the STL manages memory automatically. But when I use something like top or ps -aux to show the memory usage of a process, it shows that even after the STL objects go out of scope, the memory is still held by the process.
Here is an example:
#include <map>
#include <unistd.h> // sleep

using namespace std;

void run()
{
    map<int, int> a;
    for (int i = 0; i < 1000000; i++)
    {
        a[i] = i;
    } // 64376K memory used by process
}

int main()
{
    run();
    sleep(5); // still 64376K memory used

    map<int, int> a;
    for (int i = 0; i < 1000000; i++)
    {
        a[i] = i;
    } // still 64376K memory used

    return 0;
}
The process holds 64376 KB of memory inside run(), and that memory is not released after run() returns. However, the memory appears to be reused by the second map.
After using valgrind --tool=massif to check what happened, I got a normal result. So here are my questions:
- Why doesn't the process memory trend match the code and the valgrind output?
- How do different STL objects share the same allocated memory?
Answer 1:
This is completely normal. That's how operating systems work. If they spent all their time reclaiming tiny portions of memory from tiny processes, they'd never do anything else.
You just have to trust that their complicated algorithms know what they're doing to get the best performance for your system.
There are layers upon layers of logic managing memory, from physical RAM all the way up to the process's virtual address space.
Going into extreme detail about how operating systems work would be both beyond the scope of this post, and pointless. However, you could enrol on a relevant teaching course if you really wanted to grok it all.
If we were to forget about caching and virtual memory and such, even a simple rule of thumb might be summarised as follows: releasing memory from your program tells the OS it can have it back; that doesn't mean the OS must take it back.
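To make that rule of thumb concrete, here is a minimal sketch, assuming Linux with the glibc allocator; the rss() helper is just a hypothetical illustration that reads the VmRSS line from /proc/self/status. When the map is destroyed, its nodes go back to glibc's heap arenas rather than to the kernel, so the resident set size stays high and a later container can reuse that memory; malloc_trim() explicitly asks glibc to hand unused pages back to the OS.

#include <map>
#include <malloc.h>  // malloc_trim (glibc-specific)
#include <fstream>
#include <iostream>
#include <string>

// Hypothetical helper for illustration: read the resident set size of this process.
static std::string rss()
{
    std::ifstream status("/proc/self/status");
    for (std::string line; std::getline(status, line); )
        if (line.rfind("VmRSS", 0) == 0)
            return line;
    return "VmRSS: unknown";
}

int main()
{
    {
        std::map<int, int> a;
        for (int i = 0; i < 1000000; i++)
            a[i] = i;
        std::cout << "after filling map:   " << rss() << '\n';
    } // map destroyed here; its nodes are returned to the allocator, not to the kernel
    std::cout << "after map destroyed: " << rss() << '\n'; // RSS is typically unchanged

    malloc_trim(0); // ask glibc to release free arena pages back to the OS
    std::cout << "after malloc_trim:   " << rss() << '\n'; // RSS often drops only here
}

Running this typically shows VmRSS staying roughly constant after the map is destroyed and only dropping after malloc_trim(0), which matches what top reports for the program in the question.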
Source: https://stackoverflow.com/questions/53724942/stl-not-release-memory-from-system-level