I know that manual dynamic memory allocation is generally a bad idea, but is it sometimes a better solution than using, say, `std::vector`?
To give a crude example:
It is always better to use `std::vector`/`std::array`, at least until you can conclusively prove (through profiling) that the `T* a = new T[100];` solution is considerably faster in your specific situation. This is unlikely to happen: `vector`/`array` is an extremely thin layer around a plain old array. There is some overhead to bounds checking with `vector::at`, but you can circumvent that by using `operator[]`.