C++ return value optimization
Question

This code:

    #include <vector>

    std::vector<float> getstdvec() {
        std::vector<float> v(4);
        v[0] = 1;
        v[1] = 2;
        v[2] = 3;
        v[3] = 4;
        return v;
    }

    int main() {
        std::vector<float> v(4);
        for (int i = 0; i != 1000; ++i) {
            v = getstdvec();
        }
    }

My (apparently incorrect) understanding is that, thanks to return value optimization, getstdvec shouldn't have to actually allocate the vector it returns. When I run this under valgrind/callgrind, I see 1001 calls to malloc: 1 for the initial vector declaration in main, and 1000 for the calls to getstdvec, one per loop iteration.
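For comparison, here is a minimal sketch of the allocation pattern I expected to be achievable (the helper name fillVec is purely illustrative, not part of my real program): filling a caller-owned vector through a reference parameter reuses the same buffer on every iteration, so only the first resize should allocate.

    #include <vector>

    // Illustrative sketch: RVO/move semantics avoid copying the vector
    // *object*, but each call to getstdvec above still constructs a new
    // vector with its own heap buffer. Writing into a caller-owned vector
    // by reference reuses one buffer across all iterations.
    void fillVec(std::vector<float>& v) {
        v.resize(4);   // allocates only on the first call, when capacity grows to 4
        v[0] = 1;
        v[1] = 2;
        v[2] = 3;
        v[3] = 4;
    }

    int main() {
        std::vector<float> v;             // empty; first resize allocates once
        for (int i = 0; i != 1000; ++i) {
            fillVec(v);                   // later calls reuse v's existing storage
        }
    }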