In order to give functions the option to modify the vector, I can't do this:
curr = myvec.at( i );
doThis( curr );
doThat( curr );
doStuffWith( curr );
You can use a reference:
int &curr = myvec.at(i);
// do stuff with curr
The at member function does bounds checking to make sure the argument is within the size of the vector. Profiling is the only way to know exactly how much slower it is compared to operator[]. Using a reference here allows you to do the lookup once and then use the result in other places. And you can make it a reference-to-const if you want to protect yourself from accidentally changing the value.
operator[] might be faster than at, because it isn't required to do bounds checking.
You can make curr a reference to do what you want:
MyClass & curr = myvec.at(i);
You might also do some benchmarking before getting worried. Modern processors can handle billions of operations per second quite easily.
The reason the first doesn't work is that you're not setting a pointer or iterator to the address of the ith element. Instead, you're setting curr equal to the value of the ith element and then modifying curr. I'm assuming that doThis and doThat take their arguments by reference.
Do this:
MyObject& curr = myvec.at( i );
From my own tests with similar code (compiled under gcc and Linux), operator[] can be noticeably faster than at, not because of the bounds checking, but because of the overhead of exception handling. Replacing at (which throws an exception on out-of-bounds) with my own bounds checking that raised an assert on out-of-bounds gave a measurable improvement.
Using a reference, as Kristo said, lets you only incur the bounds checking overhead once.
Ignoring bounds checking and exception handling overhead, both operator[] and at should be optimized to be equivalent to direct array access or direct access via pointer.
As Chris Becke said, though, there's no substitute for profiling.
The complexity of at() is constant; in practice, this means it must be designed so that it carries no relevant performance penalty.
You can use [], which also has constant complexity but does not check bounds. This would be equivalent to using pointer arithmetic and, thus, potentially a bit faster than at().
In any case, vector is specifically designed for constant-time access to any of its elements. So this should be the least of your worries.
If you load up a vector and then process it without adding or deleting any more elements, consider getting a pointer to the underlying array and using array operations on that to 'avoid the vector overhead'.
If you are adding or deleting elements as part of your processing, then this is not safe to do, as the underlying array may be moved at any point by the vector itself.