Incrementing: x++ vs x += 1

Any sane or insane compiler will produce identical machine code for both.

Assuming you're talking about applying these to built-in types, and not to your own classes where they could make a huge difference, they can produce the same output, especially when optimization is turned on. To my surprise, I often found in decompiled applications that x += 1 is used rather than x++ at the assembler level (add vs. inc).

Any decent compiler should be able to recognize that the two are the same, so in the end there should be no performance difference between them.

If you want to convince yourself, just run a benchmark.
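For instance, a minimal C benchmark sketch along these lines (my own illustration, not from the original answers; the loop count N and the use of volatile are only there so the optimizer cannot delete the loops entirely) should show no measurable difference between the two forms:

#include <stdio.h>
#include <time.h>

#define N 100000000UL

int main(void)
{
    volatile unsigned long x = 0;   /* volatile keeps the loops from being optimized away */
    clock_t t0;

    t0 = clock();
    for (unsigned long i = 0; i < N; i++)
        x++;                        /* post-increment form */
    printf("x++   : %f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    t0 = clock();
    for (unsigned long i = 0; i < N; i++)
        x += 1;                     /* compound-assignment form */
    printf("x += 1: %f s\n", (double)(clock() - t0) / CLOCKS_PER_SEC);

    return 0;
}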

Mike Dunlavey

When you say "it could add up in the long run" - don't think about it that way.

Rather, think in terms of percentages. When you find the program counter is in that exact code 10% or more of the time, then worry about it. The reason is, if the percent is small, then the most you could conceivably save by improving it is also small.

If the percent of time is less than 10%, you almost certainly have much bigger opportunities for speedup in other parts of the code, almost always in the form of function calls you could avoid.
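To put rough numbers on that (my own illustration, not from the original answer): if the increment accounts for 2% of total run time, then even making it infinitely fast speeds the whole program up by at most a factor of 1 / (1 - 0.02) ≈ 1.02, i.e. about 2%; avoiding a function call that accounts for 30% of run time can buy you up to 1 / 0.7 ≈ 1.43.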

Here's an example.

Consider you're a lazy compiler implementer who wouldn't bother writing OPTIMIZATION routines in the machine-code-gen module.

x = x + 1;

would get translated to THIS code:

mov $[x],$ACC  ;load x into the accumulator
iadd $1,$ACC   ;add 1 to it
mov $ACC,$[x]  ;store the result back into x

And x++ would get translated to:

incr $[x] ;increment by 1

If ONE instruction executes in 1 machine cycle, then x = x + 1 would take 3 machine cycles, whereas x++ would take only 1. (Hypothetical machine used here.)

BUT luckily, most compiler implementers are NOT lazy and will write optimizations into the machine-code-gen module. So x = x + 1 and x++ SHOULD take equal time to execute. :-P
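If you want to check this on a real compiler (my own sketch, not part of the original answer), put each form in its own small function and compare the generated assembly, e.g. with gcc -O2 -S; on a typical optimizing compiler the two functions come out identical:

/* increment.c -- compile with: gcc -O2 -S increment.c and compare
   the assembly emitted for the two functions (names are illustrative). */
void inc_postfix(int *x) { (*x)++; }
void inc_plus_eq(int *x) { *x += 1; }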
