Why weren't new (bit-width-specific) printf() format option strings adopted as part of C99?


Question


While researching how to do cross-platform printf() format strings in C (that is, taking into account the number of bits I expect each integer argument to printf() should be) I ran across this section of the Wikipedia article on printf(). The article discusses non-standard options that can be passed to printf() format strings, such as (what seems to be a Microsoft-specific extension):

printf("%I32d\n", my32bitInt);

It goes on to state that:

ISO C99 includes the inttypes.h header file that includes a number of macros for use in platform-independent printf coding.

... and then lists a set of macros that can be found in said header. Looking at the header file, to use them I would have to write:

 printf("%"PRId32"\n", my32bitInt);

My question is: am I missing something? Is this really the standard C99 way to do it? If so, why? (Though I'm not surprised that I have never seen code that uses the format strings this way, since it seems so cumbersome...)
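For concreteness, here is a minimal, self-contained sketch of that usage (the variable names and values are illustrative, not from the original question):

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int32_t my32bitInt = -42;                       /* fixed-width types come via <stdint.h>, which <inttypes.h> includes */
    uint64_t my64bitUint = 18446744073709551615ULL; /* UINT64_MAX */

    /* The PRI* macros expand to string literals ("d", "ld", "lld", ...)
       that concatenate with the adjacent pieces of the format string. */
    printf("%" PRId32 "\n", my32bitInt);
    printf("%" PRIu64 "\n", my64bitUint);
    return 0;
}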


Answer 1:


The C Rationale seems to imply that <inttypes.h> is standardizing existing practice:

<inttypes.h> was derived from the header of the same name found on several existing 64-bit systems.

but the remainder of the text says nothing more about those macros, and I don't remember them being existing practice at the time.

What follows is just speculation, but educated by experience of how standardization committees work.

One advantage of the C99 macros over standardizing additional format specifiers for printf (note that C99 did also add some) is that, when an implementation already supports the required features in an implementation-specific way, providing <inttypes.h> and <stdint.h> is just a matter of writing two files with the appropriate typedefs and macros. That reduces the cost of making an existing implementation conformant, reduces the risk of breaking existing programs that relied on implementation-specific features (the standard way doesn't interfere with them), and makes it easier to port conformant programs to implementations that don't have these headers (the headers can be supplied with the program). Additionally, since the implementation-specific approaches already varied at the time, it doesn't favor one implementation over another.
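To make that concrete, here is a hypothetical fragment of such a vendor-provided header; the macro bodies are what one would expect on a typical ILP32 or LP64 platform and are not taken from any actual implementation:

/* hypothetical <inttypes.h> fragment for a platform where int is 32 bits
   and long long is 64 bits */
#include <stdint.h>   /* a conforming <inttypes.h> also provides the <stdint.h> types */

#define PRId32 "d"
#define PRIu32 "u"
#define PRId64 "lld"
#define PRIu64 "llu"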




Answer 2:


Correct, this is how the C99 standard says you should use them. If you want truly portable code that is 100% standards-conformant to the letter, you should always print an int using "%d" and an int32_t using "%" PRId32.

Most people won't bother, though, since there are very few cases where failure to do so would matter. Unless you're porting your code to Win16 or DOS, you can assume that sizeof(int32_t) <= sizeof(int), so it's harmless to accidentally printf an int32_t as an int. Likewise, a long long is pretty much universally 64 bits (although it is not guaranteed to be so), so printing an int64_t as a long long (e.g. with a %llx specifier) is safe as well.
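A short sketch of what that looks like in practice (the values are made up; the casts make the assumptions explicit rather than relying on them silently):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t a = -7;
    int64_t b = 1234567890123LL;

    printf("%d\n", a);               /* relies on int being at least 32 bits */
    printf("%lld\n", (long long)b);  /* relies on long long being 64 bits; the cast keeps it well-defined regardless */
    return 0;
}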

The types int_fast32_t, int_least32_t, et al are hardly ever used, so you can imagine that their corresponding format specifiers are used even more rarely.
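For completeness, those corresponding macros do exist; a minimal sketch of their rarely-seen usage:

#include <inttypes.h>
#include <stdio.h>

int main(void)
{
    int_fast32_t f = 100;
    int_least32_t l = 200;

    printf("%" PRIdFAST32 " %" PRIdLEAST32 "\n", f, l);
    return 0;
}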




Answer 3:


You can always cast upwards and use %jd, which is the intmax_t format specifier.

printf("%jd\n", (intmax_t)(-2));

I used intmax_t to show that any intXX_t can be handled this way, but for the int32_t case simply casting to long and using %ld is much better.
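A sketch of that simpler alternative for the int32_t case (the variable is illustrative):

#include <stdint.h>
#include <stdio.h>

int main(void)
{
    int32_t x = -2;

    /* long is guaranteed to be at least 32 bits, so the cast never truncates */
    printf("%ld\n", (long)x);
    return 0;
}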




Answer 4:


I can only speculate about why. I like AProgrammer's answer above, but there's one aspect it overlooks: what would you add to printf as a format modifier? There are already two different ways that numbers are used in a printf format string (width and precision). Adding a third kind of number to say how many bits of precision are in the argument would be great, but where would you put it without confusing people? Unfortunately, one of the flaws in C is that printf was not designed to be extensible.

The macros are awful, but when you have to write code that is portable across 32-bit and 64-bit platforms, they are a godsend. Definitely saved my bacon.

I think the answer to your "why" question is either

  • Nobody could think of a better way to do it, or
  • The standards committee couldn't agree on anything they felt was clearly better.



Answer 5:


Another possibility: backward compatibility. If you add more format specifiers or options to printf, it is possible that a format string in some pre-C99 code would be interpreted differently.

With the C99 change, you're not changing the functionality of printf.



Source: https://stackoverflow.com/questions/1183679/why-werent-new-bit-width-specific-printf-format-option-strings-adoped-as-pa
