Double-precision operations: 32-bit vs 64-bit machines
Why don't we see twice the performance when executing a 64-bit operation (e.g. a double-precision operation) on a 64-bit machine, compared to executing it on a 32-bit machine? On a 32-bit machine, don't we need to fetch twice as much from memory? More importantly, don't we need twice as many cycles to execute a 64-bit operation?

“64-bit machine” is an ambiguous term, but it usually means that the processor's General-Purpose Registers are 64 bits wide. Compare the 8086 and 8088, which have the same instruction set and can both be called 16-bit processors in this sense. When the phrase is used in this sense, it refers to register width, not to the width of the external data bus (16 bits on the 8086 but only 8 bits on the 8088).
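As a rough illustration of the question's premise (a sketch, not part of the answer above): a 64-bit *integer* add really does take two instructions on a 32-bit x86 target, but only one on x86-64. The function name is hypothetical, and the assembly in the comments is typical, not guaranteed, compiler output. Note that double-precision *floating-point* is a different story: even on 32-bit x86, the x87/SSE2 floating-point registers are at least 64 bits wide, which is one reason the expected 2x gap does not show up for `double` math.

```c
#include <stdint.h>

/* Adding two 64-bit integers.
 *
 * On a 32-bit x86 target (e.g. gcc -m32 -O2), the operands live in
 * register PAIRS, so the add typically compiles to two instructions:
 *     addl  ...   ; add the low 32-bit halves
 *     adcl  ...   ; add the high halves plus the carry
 *
 * On x86-64 the operands fit in single 64-bit registers, so it is
 * one instruction:
 *     addq  %rsi, %rdi
 */
uint64_t add64(uint64_t a, uint64_t b)
{
    return a + b;
}
```

Even in this integer case, the 32-bit version is not automatically "twice as slow": both instructions are cheap, and memory access, pipelining, and superscalar execution usually dominate the overall timing.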