Float versus Integer arithmetic performance on modern chips
Question: Consider a Viterbi decoder on an additive model. It spends its time doing additions and comparisons. Now consider two implementations: one with C/C++ float as the data type, and another with int. On modern chips, would you expect int to run significantly faster than float? Or will the wonders of pipelining (and the absence of multiplication and division) make it all come out about even?

Answer 1: Depends on what you mean by significantly. I usually expect to see ints perform about 2x faster, but it all