When to use `short` over `int`?

Submitted by 那年仲夏 on 2019-12-03 05:36:44

Question


There are many questions that ask for the difference between the short and int integer types in C++, but practically speaking, when do you choose short over int?


Answer 1:


(See Eric's answer for a more detailed explanation.)

Notes:

  • Generally, int is set to the 'natural size' - the integer form that the hardware handles most efficiently
  • When a short is used in arithmetic (including when read out of an array), it is first promoted to int, so this conversion can introduce a speed penalty when processing short values (see the sketch after these notes)
  • Using short can conserve memory if it is narrower than int, which can be important when using a large array
  • Your program will use more memory in a 32-bit int system compared to a 16-bit int system
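A minimal sketch of both points, assuming a typical platform where short is 16 bits and int is 32 bits; the static_asserts rely only on guarantees the answers below also state (C++17 for std::is_same_v):

```cpp
#include <type_traits>

int main() {
    short a = 1, b = 2;

    // Arithmetic on short operands promotes them to int first,
    // so the type of a + b is int, not short.
    static_assert(std::is_same_v<decltype(a + b), int>,
                  "short operands promote to int");

    // short is never wider than int, so an array of short takes at most
    // as much memory as an array of int (half as much on typical platforms).
    static_assert(sizeof(short[1000]) <= sizeof(int[1000]),
                  "short is never wider than int");

    // Converting the int result back to short is a narrowing conversion.
    short sum = static_cast<short>(a + b);
    return sum == 3 ? 0 : 1;
}
```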

Conclusion:

  • Use int unless conserving memory is critical, or your program uses a lot of memory (e.g. many large arrays). In that case, use short.



Answer 2:


You choose short over int when:

Either

  • You want to decrease the memory footprint of the values you're storing (for instance, if you're targeting a low-memory platform),
  • You want to increase performance by packing more values into a single memory page (reducing page faults when accessing your values) and/or into the memory caches (reducing cache misses when accessing values), and profiling has revealed that there are performance gains to be had here,
  • Or you are sending data over a network or storing it to disk, and want to decrease your footprint (to take up less disk space or network bandwidth). Although for these cases, you should prefer types which specify exactly the size in bits rather than int or short, which can vary based on platform (as you want a platform with a 32-bit short to be able to read a file written on a platform with a 16-bit short). Good candidates are the types defined in stdint.h.

And:

  • You have a numeric value which does not need to take on any values that can't be stored in a short on your target platform (for a 16-bit short, this is -32768 to 32767, or 0 to 65535 for a 16-bit unsigned short).
  • Your target platform (or one of your target platforms) uses less memory for a short than for an int. The standard only guarantees that short is not larger than int, so implementations are allowed to use the same size for a short and for an int (a quick compile-time check is sketched after this list).
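One way to verify both conditions at build time is with <limits> and static_assert; the required range below (-20000 to 20000) is just an assumed example, not anything from the question:

```cpp
#include <climits>   // SHRT_MIN, SHRT_MAX
#include <iostream>
#include <limits>    // std::numeric_limits

// Hypothetical range this program needs to store; adjust for your data.
constexpr long kMinNeeded = -20000;
constexpr long kMaxNeeded =  20000;

// Fail the build if short cannot represent the required range on this target.
static_assert(kMinNeeded >= std::numeric_limits<short>::min() &&
              kMaxNeeded <= std::numeric_limits<short>::max(),
              "short cannot hold the required range on this platform");

int main() {
    // short only saves memory where it is actually narrower than int.
    std::cout << "sizeof(short) = " << sizeof(short)
              << ", sizeof(int) = " << sizeof(int) << '\n';
    std::cout << "short range: " << SHRT_MIN << " to " << SHRT_MAX << '\n';
}
```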

Note:

A char can also be used as an arithmetic type. An answer to "When should I use char instead of short or int?" would read very similarly to this one, but with different numbers (-128 to 127 for an 8-bit char, 0 to 255 for an 8-bit unsigned char).

In reality, you likely don't actually want to use the short type specifically. If you want an integer of specific size, there are types defined in <cstdint> that should be preferred, as, for example, an int16_t will be 16 bits on every system, whereas you cannot guarantee the size of a short will be the same across all targets your code will be compiled for.
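For example, here is a minimal sketch of writing a fixed-layout record with <cstdint> types; the SensorSample struct, its fields, and the file name are made up for illustration, and byte order is deliberately left out of scope:

```cpp
#include <cstdint>
#include <cstdio>

// A hypothetical record written to disk field by field. Using int16_t and
// uint32_t fixes each field's width regardless of how wide short or int
// happen to be on the platform doing the writing or the reading.
struct SensorSample {
    int16_t  temperature_centi;  // e.g. 2150 == 21.50 degrees
    uint32_t timestamp;          // seconds since some epoch
};

// Write the fields individually and in a fixed order rather than dumping the
// whole struct, so padding does not leak into the file format.
bool writeSample(std::FILE* out, const SensorSample& s) {
    // Note: this sketch ignores endianness; a real format would pin that down too.
    return std::fwrite(&s.temperature_centi, sizeof s.temperature_centi, 1, out) == 1 &&
           std::fwrite(&s.timestamp, sizeof s.timestamp, 1, out) == 1;
}

int main() {
    SensorSample s{2150, 1700000000u};
    std::FILE* out = std::fopen("sample.bin", "wb");
    if (!out) return 1;
    const bool ok = writeSample(out, s);
    std::fclose(out);
    return ok ? 0 : 1;
}
```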




Answer 3:


In general, you don't prefer short over int.

The int type is the processor's native word size
Usually, an int is the processor's word size.

For example, on a processor with a 32-bit word size, an int would be 32 bits, and the processor is most efficient working with 32-bit values. Assuming short is 16 bits, the processor still fetches 32 bits from memory, so there is no efficiency gain; in fact it can take longer, because the processor may have to shift the bits into the correct position within a 32-bit word.

Choosing a smaller data type
There are standardized data types with specific bit widths, such as uint16_t. These are preferred to the ambiguously sized char, short, and int. These width-specific data types are usually used for accessing hardware or for compressing space (such as in message protocols).
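As a rough illustration of the hardware case, this is the common idiom of mapping a fixed-width type onto a device register; the address, register layout, and "ready" bit are all invented for the example and would come from the device's datasheet in real code (and it only makes sense on the actual device, not a desktop):

```cpp
#include <cstdint>

// Hypothetical memory-mapped 16-bit status register; the address is made up.
constexpr std::uintptr_t kStatusRegAddr = 0x40001000;

// volatile tells the compiler every access really touches the hardware.
inline volatile std::uint16_t& statusReg() {
    return *reinterpret_cast<volatile std::uint16_t*>(kStatusRegAddr);
}

bool deviceReady() {
    // Bit 0 is assumed to be the "ready" flag of this imaginary device.
    return (statusReg() & 0x0001u) != 0;
}
```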

Choosing a smaller range
The short data type is based on range not bit width. On a 32-bit system, both short and int may have the same 32-bit length.

One reason for using short is that the value will never go past a given range. This is usually a fallacy, because programs change and the data type could then overflow.

Summary
Presently, I do not use short anymore. I use uint16_t when I access 16-bit hardware devices. I use unsigned int for quantities, including loop indices. I use uint8_t, uint16_t and uint32_t when size matters for data storage. The short data type is ambiguous for data storage, since it is a minimum. With the advent of stdint header files, there is no longer any need for short.




Answer 4:


If you don't have any specific constraints imposed by your architecture, I would say you can always use int. The type short is meant for specific systems where memory is a precious resource.



Source: https://stackoverflow.com/questions/24371077/when-to-use-short-over-int
