What is the benefit of SIMD on a superscalar out-of-order CPU?

Submitted by 霸气de小男生 on 2020-07-31 03:24:08

Question


I've been reading up on the recently available AVX-512 instructions, and I feel like there is a basic concept that I'm not understanding. What is the benefit of SIMD on a superscalar CPU that already performs out-of-order execution?

Consider the following pseudo assembly code. With SIMD:

load 16 floats to register simd-a
load 16 floats to register simd-b
multiply register simd-a by simd-b as 16 floats to register simd-c
store the results to memory

And this without SIMD:

load a float to register a
load a float to register b
multiply register a and register b as floats to c
store register c to memory

load a float to register a (contiguous to prior load to a)
load a float to register b (contiguous to prior load to b)
multiply register a and register b as floats to c
store register c to memory (contiguous to previous stored result)

[continued for 16 floats]
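
For concreteness, here is a rough C rendering of the two versions above. This is my own sketch, not part of the original question; it assumes AVX-512F intrinsics and 64-byte-aligned buffers a, b, and c:

#include <immintrin.h>

/* SIMD version: one load per operand, one multiply, one store for all 16 floats. */
void mul16_simd(const float *a, const float *b, float *c) {
    __m512 va = _mm512_load_ps(a);      /* load 16 floats into one 512-bit register */
    __m512 vb = _mm512_load_ps(b);      /* load 16 floats */
    __m512 vc = _mm512_mul_ps(va, vb);  /* 16 multiplies in a single instruction */
    _mm512_store_ps(c, vc);             /* store 16 results */
}

/* Scalar version: 16 separate loads, multiplies, and stores that the hardware
   has to fetch, decode, rename, schedule, and retire individually. */
void mul16_scalar(const float *a, const float *b, float *c) {
    for (int i = 0; i < 16; i++)
        c[i] = a[i] * b[i];
}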

It's been a while since I've done low-level work like this, but it seems to me that the CPU could convert the non-SIMD example to run it like this, in dataflow order:

  1. 32 load instructions processed in parallel (likely as just two requests to cache/memory if memory is properly aligned)
  2. 16 multiply instructions executed in parallel once the loads complete
  3. 16 stores to memory which again would be only a single request to cache/memory if things are properly aligned

Essentially, it feels like the CPU could be intelligent enough to perform at the same speed in both cases. Obviously there's something I'm missing here, since we keep adding more and wider SIMD instructions to ISAs, so where does the practical value of these instructions come from?


Answer 1:


The difference is mainly the feasibility of realizing such a design in hardware. Superscalar architectures aren't very scalable, for various reasons. For example, it would be difficult to rename that many registers in one cycle, because the things you're renaming might be dependent (if it really were translated SIMD code they wouldn't be, but the hardware can't know that). The physical register file would need a boatload of extra read and write ports, which is pretty annoying. Wider registers, by contrast, are easy. The forwarding network would explode in size. A lot of µops would have to be inserted into the active window every cycle, a lot of them would have to be woken up and dispatched, and a lot of them would have to retire. Since the machine is now being flooded with an order of magnitude more µops, you'd probably want to support a bigger active window, otherwise it has effectively become smaller (for equivalent code it becomes less effective).

The whole memory business is harder too, since now you'd have to support a lot of accesses in a cycle (that all have to go through separate translations, have ordering constraints applied to them, participate in forwarding, and so forth), instead of just wider accesses (which is relatively easy).
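
As a rough, hypothetical illustration: a core that can start, say, two loads and one store per cycle needs at least 16 cycles just to issue the 32 scalar loads in the example above, whereas the SIMD version issues its two 512-bit loads in a single cycle, even though both versions move the same number of bytes.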

Basically this hypothetical design takes a lot of things that are already hard to implement efficiently with a reasonable power and area budget, and then makes them even harder. The complexity of many of those things scales approximately quadratically with the number of µops that you want to put through them in a cycle, not linearly.
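
To put a hypothetical number on that: a fully connected bypass network between N results produced per cycle and the inputs waiting for them needs on the order of N² forwarding paths, so going from 4 µops per cycle to 16 means roughly 16 times the comparators and wiring, not 4 times.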

Adding wider SIMD, the way it has been done, is largely just copy-pasting the SIMD unit (hence the annoying lane-by-lane semantics of most AVX and AVX2 instructions) and giving some things a higher bit-width. Nothing scales badly if you do it that way.
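
As an aside, that lane-by-lane copy-pasting is visible in the instruction semantics themselves. The following is my own small illustration (not from the original answer) using AVX2's vpshufb, which shuffles bytes within each 128-bit lane independently rather than across the full 256-bit register:

#include <immintrin.h>
#include <stdint.h>
#include <stdio.h>

int main(void) {
    uint8_t src[32], idx[32], out[32];
    for (int i = 0; i < 32; i++) {
        src[i] = (uint8_t)i;  /* bytes 0..31 */
        idx[i] = 0;           /* every index asks for "byte 0" */
    }
    __m256i v   = _mm256_loadu_si256((const __m256i *)src);
    __m256i sel = _mm256_loadu_si256((const __m256i *)idx);
    __m256i res = _mm256_shuffle_epi8(v, sel);  /* one shuffle per 128-bit lane */
    _mm256_storeu_si256((__m256i *)out, res);

    /* The lower lane picks its byte 0 (value 0), but the upper lane can only
       pick from itself, so it gets value 16 instead of 0. */
    printf("out[0]  = %d\n", out[0]);   /* prints 0  */
    printf("out[16] = %d\n", out[16]);  /* prints 16 */
    return 0;
}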



Source: https://stackoverflow.com/questions/42793823/what-is-the-benefit-of-simd-on-a-superscalar-out-of-order-cpu
