How are floating point operations emulated in software? [closed]

Submitted by 故事扮演 on 2020-01-20 07:04:26

Question


How does software perform floating point arithmetic when the CPU has no (or a buggy) floating point unit? Examples would be the PIC, AVR, and 8051 microcontroller architectures.


Answer 1:


"Emulated" is the wrong term in the context of PIC, AVR and 8051. Floating-point emulation refers to the emulation of FPU hardware on architectures that have an FPU option but for which not all parts include the FPU. This allows a binary containing floating point instructions to run on a variant without an FPU. Where used, FPU emulation is implemented as an invalid-instruction exception handler; when an FPU instruction is encountered but no FPU is present, an exception occurs, and the handler reads the instruction value and implements the operation in software.

However, none of the architectures you have listed defines an FPU or FPU instructions, so there is nothing to emulate. Instead, in these cases floating-point operations are implemented entirely in software, and the compiler generates code to invoke floating-point routines as necessary. For example, the expression x = y * z ; will generate code that is equivalent to a function call x = _fmul( y, z ) ;. In fact, if you look at the linker map output from a build containing floating-point operations you will probably see routine symbol names such as _fmul, _fdiv and the like - these functions are intrinsic to the compiler.
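A minimal sketch of that lowering, assuming the routine name _fmul from the paragraph above (actual runtime names vary by toolchain, e.g. __mulsf3 in GCC's libgcc or __aeabi_fmul under the ARM EABI); the stub body here merely stands in for the real software routine:

    #include <stdio.h>

    /* Stand-in for the compiler's runtime routine: on an FPU-less target the
       compiler replaces the '*' operator with a call like this. The body is a
       placeholder; the real routine works on the integer bit patterns. */
    static float _fmul(float a, float b)
    {
        return a * b;
    }

    int main(void)
    {
        float y = 1.5f, z = 2.0f;

        /* What you write:            x = y * z;        */
        /* What the compiler emits:   x = _fmul(y, z);  */
        float x = _fmul(y, z);

        printf("%f\n", x);   /* prints 3.000000 */
        return 0;
    }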




Answer 2:


Floating-point is just scientific notation in base 2. Both the mantissa and the exponent are integers, and softfloat libraries break floating-point operations down into operations on the mantissa and exponent, which can use the CPU's integer support.

For example, (x * 2^n) * (y * 2^m) = (x * y) * 2^(n+m).

Often a normalization step will also be needed to keep the floating point representation canonical, but it might be possible to perform multiple operations before normalization. Also since IEEE-754 stores the exponent with a bias, that will have to be considered.
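As an illustration of that recipe, here is a simplified soft-float multiply for IEEE-754 single precision. It is only a sketch: it handles normal, finite numbers only and truncates instead of rounding, but it shows the mantissa/exponent split, the purely integer multiply, the exponent addition, the renormalization step and the bias handling mentioned above. The name soft_fmul is made up for this example:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified soft-float multiply for IEEE-754 single precision.
       Handles only normal, finite numbers and truncates instead of rounding;
       real library routines also deal with NaN, infinities, zeros, subnormals
       and correct rounding. */
    static float soft_fmul(float a, float b)
    {
        uint32_t ua, ub;
        memcpy(&ua, &a, sizeof ua);          /* reinterpret the bit patterns */
        memcpy(&ub, &b, sizeof ub);

        uint32_t sign = (ua ^ ub) & 0x80000000u;            /* sign of the product */
        int32_t  ea   = (int32_t)((ua >> 23) & 0xFF) - 127; /* unbiased exponents  */
        int32_t  eb   = (int32_t)((ub >> 23) & 0xFF) - 127;
        uint32_t ma   = (ua & 0x007FFFFFu) | 0x00800000u;   /* restore implicit 1  */
        uint32_t mb   = (ub & 0x007FFFFFu) | 0x00800000u;

        /* (ma * 2^ea) * (mb * 2^eb) = (ma * mb) * 2^(ea+eb) -- pure integer work */
        uint64_t prod = (uint64_t)ma * mb;   /* 48-bit product, value in [1,4)    */
        int32_t  e    = ea + eb;

        if (prod & (1ULL << 47)) {           /* product >= 2.0: renormalize       */
            prod >>= 1;
            e += 1;
        }

        uint32_t frac = (uint32_t)(prod >> 23) & 0x007FFFFFu;     /* drop implicit 1 */
        uint32_t ur   = sign | ((uint32_t)(e + 127) << 23) | frac; /* re-bias        */

        float r;
        memcpy(&r, &ur, sizeof r);
        return r;
    }

    int main(void)
    {
        printf("%f\n", soft_fmul(3.25f, -2.0f));   /* prints -6.500000 */
        return 0;
    }

memcpy is used here to reinterpret the float's bit pattern as an integer without violating aliasing rules; a real library routine typically receives the raw bit pattern directly.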




Answer 3:


Floating-point numbers are not "emulated". In general they are stored as described in IEEE 754.

Fixed point is a different representation. The number 2.54 can be represented in either fixed point or floating point.

Software implementation vs. FPU (floating point unit)

Some modern MCUs, like the ARM Cortex-M4F, have a floating point unit and can do floating point operations (multiplication, division, addition, ...) in hardware, much faster than software would.

In 8-bit MCUs like the AVR, PIC and 8051, the operations are done entirely in software (a division may take hundreds of instructions). The software has to treat the mantissa (fraction) part and the exponent part separately, plus all the special cases (e.g. NaN). The compiler often has several routines for the same operation (e.g. division) and will choose between them depending on the optimization settings (size/speed) and other parameters (e.g. if it knows the numbers are always positive ...).
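For illustration, here is a small sketch of the first part of that work: pulling the sign, exponent and mantissa (fraction) fields out of an IEEE-754 single with integer shifts and masks, and recognising the special encodings (NaN, infinity, zero, subnormal) from the exponent field. The function name classify is made up for this example:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Split an IEEE-754 single into sign, biased exponent and fraction, and
       detect the special encodings from the exponent field. */
    static void classify(float f)
    {
        uint32_t u;
        memcpy(&u, &f, sizeof u);

        uint32_t sign = u >> 31;
        uint32_t exp  = (u >> 23) & 0xFFu;    /* biased exponent (bias 127) */
        uint32_t frac = u & 0x007FFFFFu;      /* 23 fraction bits           */

        if (exp == 0xFF)
            printf("%s\n", frac ? "NaN" : (sign ? "-inf" : "+inf"));
        else if (exp == 0)
            printf("%s\n", frac ? "subnormal" : "zero");
        else
            printf("normal: sign=%u exponent=%d fraction=0x%06X\n",
                   sign, (int)exp - 127, (unsigned)frac);
    }

    int main(void)
    {
        volatile float zero = 0.0f;   /* volatile avoids constant folding */
        classify(6.5f);               /* normal: exponent 2               */
        classify(0.0f);               /* zero                             */
        classify(1.0f / zero);        /* +inf                             */
        classify(zero / zero);        /* NaN                              */
        return 0;
    }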




Answer 4:


There is another SO question that covers what the C/C++ standards require from floating-point numbers. So, strictly speaking, a float can be represented in any form the compiler prefers. But in practice, if your floating-point implementation differs significantly from IEEE 754, you can expect a lot of bugs caused by programmers who are used to IEEE 754. A compiler has to be programmer friendly and should not cause trouble by exploiting underspecified corners of the standard. So in most cases floating-point numbers will be represented the same way as they are on all other architectures, including x86. Fixed-point arithmetic is just too different.

In the case of AVRs and PICs, the compiler knows that there is no FPU available, so it will translate every single operation into a bunch of instructions that the CPU does support. It has to normalize both operands to a common exponent, then perform the operation on the mantissas as on integers, then adjust the exponent. This is quite a lot of work, so emulated floating point is slow. Besides that, if you optimize for size, every floating-point operation may become a function call.
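A simplified sketch of that addition sequence for IEEE-754 single precision, restricted to positive, normal, finite operands (a real routine also handles signs, rounding, and the NaN/infinity/zero/subnormal cases); the name soft_fadd_pos is made up for this example:

    #include <stdint.h>
    #include <stdio.h>
    #include <string.h>

    /* Simplified soft-float addition: align to a common exponent, add the
       mantissas with integer arithmetic, then renormalize and re-bias. */
    static float soft_fadd_pos(float a, float b)
    {
        uint32_t ua, ub;
        memcpy(&ua, &a, sizeof ua);
        memcpy(&ub, &b, sizeof ub);

        int32_t  ea = (int32_t)((ua >> 23) & 0xFF) - 127;
        int32_t  eb = (int32_t)((ub >> 23) & 0xFF) - 127;
        uint32_t ma = (ua & 0x007FFFFFu) | 0x00800000u;  /* restore implicit 1 */
        uint32_t mb = (ub & 0x007FFFFFu) | 0x00800000u;

        /* Step 1: bring both operands to the larger exponent by shifting the
           smaller mantissa right (bits shifted out are simply dropped here). */
        int32_t d = ea - eb;
        if (d > 0)      mb = (d < 24) ? (mb >> d) : 0;
        else if (d < 0) { ma = (-d < 24) ? (ma >> -d) : 0; ea = eb; }

        /* Step 2: plain integer addition of the aligned mantissas */
        uint32_t m = ma + mb;            /* at most 25 bits */
        int32_t  e = ea;

        /* Step 3: renormalize if the sum reached [2.0, 4.0) */
        if (m & 0x01000000u) { m >>= 1; e += 1; }

        /* Re-bias the exponent and drop the implicit 1 again */
        uint32_t ur = ((uint32_t)(e + 127) << 23) | (m & 0x007FFFFFu);
        float r;
        memcpy(&r, &ur, sizeof r);
        return r;
    }

    int main(void)
    {
        printf("%f\n", soft_fadd_pos(1.5f, 2.25f));   /* prints 3.750000 */
        return 0;
    }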

On the ARM architecture things can be a bit odd. There are ARMs with an FPU and without, and you may want a universal application that runs on both. In that case there is a tricky (and slow) scheme: the application uses FPU instructions, and if the CPU does not have an FPU, such an instruction triggers an exception, in which the OS emulates the instruction, clears the error bit and returns control to the application. But that scheme turned out to be very slow and is not commonly used.



Source: https://stackoverflow.com/questions/39810751/how-are-floating-point-operations-emulated-in-software
