What are the applications/benefits of an 80-bit extended precision data type?

故里飘歌 2020-12-09 04:34

Yeah, I meant to say 80-bit. That's not a typo...

My experience with floating-point variables has always involved 4-byte multiples, like singles (32 bit) and doubles (64 bit).

5 Answers
  •  予麋鹿
     予麋鹿 (OP)
     2020-12-09 05:06

    For me the use of 80 bits is ESSENTIAL. This way I get high-order (30,000) eigenvalues and eigenvectors of symmetric matrices with four more significant figures when using the GOTO library for vector inner products, viz., 13 instead of 9 significant figures for the kind of matrices that I use in relativistic atomic calculations, which is necessary to avoid falling into the sea of negative-energy states. My other option is quadruple-precision arithmetic, which increases CPU time 60-70 times and also increases RAM requirements.

    Any calculation relying on inner products of large vectors will benefit. Of course, in order to keep partial inner-product results within registers it is necessary to use assembler language, as in the GOTO libraries. This is how I came to love my old Opteron 850 processors, which I will be using as long as they last for that part of my calculations.

    The reason 80 bits is fast, whereas greater precision is so much slower, is that the CPU's standard x87 floating-point hardware has 80-bit registers. Therefore, if you want the extra 16 bits (11 extra bits of mantissa, four extra bits of exponent, and one extra bit effectively unused), it doesn't cost you much to extend from 64 to 80 bits -- whereas extending beyond 80 bits is extremely costly in run time. So you might as well use 80-bit precision if you want it. It is not cost-free, but it comes pretty cheap.
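    The point about keeping partial inner-product results in 80-bit registers can be sketched in plain C: on x86, `long double` typically maps to the 80-bit x87 format (64-bit significand versus 53 bits for `double`), so accumulating in `long double` and rounding once at the end preserves low-order contributions that a `double` accumulator would lose. This is a minimal illustration, not the GOTO library's assembler code; the function name `dot_extended` is made up for the example, and the behavior assumes an x86 platform where `long double` really is 80-bit extended.

    ```c
    #include <assert.h>
    #include <float.h>
    #include <stdio.h>

    /* Accumulate an inner product in extended precision (80-bit long
       double on x86), rounding to double only once at the end.
       The name dot_extended is hypothetical, for illustration only. */
    static double dot_extended(const double *a, const double *b, int n) {
        long double acc = 0.0L;   /* held in an 80-bit x87 register */
        for (int i = 0; i < n; i++)
            acc += (long double)a[i] * (long double)b[i];
        return (double)acc;       /* single rounding back to 64 bits */
    }

    int main(void) {
        /* On x86, long double usually has a 64-bit significand,
           versus 53 bits for double. */
        printf("double mantissa bits:      %d\n", DBL_MANT_DIG);
        printf("long double mantissa bits: %d\n", LDBL_MANT_DIG);

        /* In plain double arithmetic, 1e16 + 1.0 rounds the 1.0 away
           (the spacing between doubles near 1e16 is 2.0); the 80-bit
           accumulator represents 1e16 + 1 exactly, so the small term
           survives the cancellation with -1e16. */
        double a[] = {1e16, 1.0, -1e16};
        double b[] = {1.0,  1.0,  1.0};
        assert(dot_extended(a, b, 3) == 1.0);
        return 0;
    }
    ```

    Note that compilers differ: some keep intermediates in x87 registers automatically (`FLT_EVAL_METHOD == 2`), while on SSE-based ABIs you must ask for `long double` explicitly, as above, to get the 80-bit accumulation.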
