Why does GCC's ifstream >> double allocate so much memory?


Question


I need to read a series of numbers from a space-separated human-readable file and do some math, but I've run into some truly bizarre memory behavior just reading the file.

If I read the numbers and immediately discard them...

#include <fstream>

int main(int, char**) {
    std::ifstream ww15mgh("ww15mgh.grd");
    double value;
    while (ww15mgh >> value);
    return 0;
}

My program allocates 59MB of memory according to valgrind, scaling linearly with respect to the size of the file:

$ g++ stackoverflow.cpp
$ valgrind --tool=memcheck --leak-check=yes ./a.out 2>&1 | grep total
==523661==   total heap usage: 1,038,970 allocs, 1,038,970 frees, 59,302,487 bytes allocated

But, if I use ifstream >> string instead and then use sscanf to parse the string, my memory usage looks a lot more sane:

#include <fstream>
#include <string>
#include <cstdio>

int main(int, char**) {
    std::ifstream ww15mgh("ww15mgh.grd");
    double value;
    std::string text;
    while (ww15mgh >> text)
        std::sscanf(text.c_str(), "%lf", &value);
    return 0;
}
$ g++ stackoverflow2.cpp
$ valgrind --tool=memcheck --leak-check=yes ./a.out 2>&1 | grep total
==534531==   total heap usage: 3 allocs, 3 frees, 81,368 bytes allocated

To rule out the IO buffer as the issue, I've tried both ww15mgh.rdbuf()->pubsetbuf(0, 0); (which makes the program take ages and still perform 59MB worth of allocations) and pubsetbuf with an enormous stack-allocated buffer (still 59MB). The behavior reproduces when compiled with both gcc 10.2.0 and clang 11.0.1, using /usr/lib/libstdc++.so.6 from gcc-libs 10.2.0 and /usr/lib/libc.so.6 from glibc 2.32. The system locale is set to en_US.UTF-8, but this also reproduces if I set the environment variable LC_ALL=C.
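For reference, the large-buffer variant looked roughly like this (a minimal sketch; the 1 MiB size is an arbitrary choice, and pubsetbuf has to be called before any input for it to have an effect):

#include <fstream>

int main(int, char**) {
    char buf[1 << 20];                            // "enormous" stack buffer; size chosen arbitrarily
    std::ifstream ww15mgh("ww15mgh.grd");
    ww15mgh.rdbuf()->pubsetbuf(buf, sizeof buf);  // must precede the first read to take effect
    double value;
    while (ww15mgh >> value);
    return 0;
}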

The ARM CI environment where I first noticed the problem is cross-compiled on Ubuntu Focal using GCC 9.3.0, libstdc++6 10.2.0 and libc 2.31.

Following advice in the comments, I tried LLVM's libc++ and got perfectly sane behavior with the original program:

$ clang++ -std=c++14 -stdlib=libc++ -I/usr/include/c++/v1 stackoverflow.cpp
$ valgrind --tool=memcheck --leak-check=yes ./a.out 2>&1 | grep total
==700627==   total heap usage: 3 allocs, 3 frees, 8,664 bytes allocated

So, this behavior seems to be unique to GCC's implementation of fstream. Is there something I could do differently in constructing or using the ifstream that would avoid allocating tons of heap memory when compiled in a GNU environment? Is this a bug in their <fstream>?

As discovered in the comments discussion, the actual memory footprint of the program is perfectly sane (84 KB); it's just allocating and freeing the same small bit of memory hundreds of thousands of times, which becomes a problem under allocators like ASAN's that deliberately avoid re-using freed heap space. I posted a follow-up question asking how to cope with this kind of problem at the "ASAN" level.

A gitlab project that reproduces the issue in its CI pipeline was generously contributed by Stack Overflow user @KamilCuk.


Answer 1:


It really doesn't. The number 59,302,487 shown by valgrind is the sum of all allocations, and does not represent the actual memory consumption of the program.

It turns out that the libstdc++ implementation of the relevant operator>> creates a temporary std::string for scratch space, and reserves 32 bytes for it. This is then deallocated immediately after being used. See num_get::do_get. With overhead, this perhaps actually allocates 56 bytes or so, which multiplied by about 1 million repetitions does mean, in a sense, that a total of 59 megabytes were allocated, and of course this is why that number scales linearly with the number of inputs. But it was the same 56 bytes being allocated and freed over and over again. This is perfectly innocent behavior by libstdc++ and isn't a leak or excessive memory consumption.
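To make that concrete, here is a small self-contained sketch (not libstdc++'s actual code) that counts calls to the global operator new around the extraction loop; with libstdc++ it reports roughly one short-lived heap allocation per >> double, which is exactly the pattern valgrind sums up (the exact count is implementation-dependent):

#include <cstdio>
#include <cstdlib>
#include <new>
#include <sstream>

static std::size_t g_allocs = 0;

void* operator new(std::size_t n) {
    ++g_allocs;                                   // count every heap allocation
    if (void* p = std::malloc(n)) return p;
    throw std::bad_alloc();
}
void operator delete(void* p) noexcept { std::free(p); }
void operator delete(void* p, std::size_t) noexcept { std::free(p); }

int main() {
    std::istringstream in("1.5 2.5 3.5 4.5");     // small stand-in for the file
    double value;
    std::size_t before = g_allocs;
    while (in >> value);                          // same extraction as the original loop
    std::printf("allocations during extraction: %zu\n", g_allocs - before);
    return 0;
}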

I didn't check the libc++ source, but a good bet would be that it uses scratch space on the stack instead of the heap.

As determined in comments, your real problem is that you are running this under AddressSanitizer, which delays the reuse of freed memory in order to help catch use-after-free errors. I have some thoughts about how to address that (no pun intended) and will post them on "How do I exclude allocations in a tight loop from ASAN?"
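(Not part of that follow-up, just an aside: the mechanism that delays reuse is ASAN's quarantine, and its size can be shrunk at run time via a runtime flag, e.g.:

$ ASAN_OPTIONS=quarantine_size_mb=4 ./a.out

This only reduces how long freed blocks are held back; it does not change the allocation pattern itself.)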




Answer 2:


Unfortunately, the C++ stream-based I/O library is generally underused, since everybody "knows" it performs poorly, so there's a chicken-and-egg problem: a bad reputation leads to little use, which leads to sparse bug reports, which leads to little pressure for a fix.

I'd say the largest users of C++ streams are the basic CS/IT education sector and "quick one-off scripts" (which invariably outlive their authors), and in neither case does anyone really care about performance.

What you're seeing is just a wasteful implementation - it constantly allocates and deallocates somewhere in the guts, but it doesn't leak memory as far as I can tell. I don't think that there's any sort of a "pattern" that will guarantee better performance in a non-brittle way while using stream I/O.

The best strategy to win at this in an embedded setting is not to play the game at all. Forget about C++ stream I/O and all will be well. There are alternative formatted I/O libraries that bring back C++'s type safety and perform much better, and then you're not beholden to standard library implementation bugs/inefficiencies. Or just use sscanf if you don't want to add dependencies.
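As a concrete example of "not playing the game", here is a minimal sketch of the same loop using C stdio's fscanf (the file-reading cousin of the sscanf call in the question); it assumes the same whitespace-separated ww15mgh.grd layout and performs no per-token heap allocation:

#include <cstdio>

int main(int, char**) {
    std::FILE* f = std::fopen("ww15mgh.grd", "r");
    if (!f) return 1;
    double value;
    while (std::fscanf(f, "%lf", &value) == 1) {
        // ... do the math on value here ...
    }
    std::fclose(f);
    return 0;
}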



Source: https://stackoverflow.com/questions/65703206/why-does-gccs-ifstream-double-allocate-so-much-memory
