When should we use prefetch?

花落未央 2020-12-03 12:27

Some CPUs and compilers supply prefetch instructions, e.g. __builtin_prefetch in GCC. GCC's documentation has a comment about it, but it's too brief for me. When should prefetch actually be used?

6 Answers
  •  悲&欢浪女
    2020-12-03 13:26

    Here's a brief summary of cases that I'm aware of in which software prefetching may prove especially useful. Some may not apply to all hardware.

    This list should be read with a caveat in mind: the most obvious place for software prefetches is where the stream of accesses can be predicted in software, and yet even that case isn't necessarily an obvious win for SW prefetch, because out-of-order execution often ends up having a similar effect: it can keep executing behind existing misses and so get more misses in flight.

    So this list is more an "in light of the fact that SW prefetch isn't as obviously useful as it might first seem, here are some places it might still be useful anyway" list, often judged against the alternative of just letting out-of-order execution do its thing, or of just using "plain loads" to load values before they are needed.

    Fitting more loads in the out-of-order window

    Although out-of-order processing can potentially expose the same type of MLP (Memory-Level Parallelism) as software prefetches, there are limits inherent to the total possible lookahead distance after a cache miss. These include reorder-buffer capacity, load buffer capacity, scheduler capacity and so on. See this blog post for an example of where extra work seriously hinders MLP because the CPU can't run ahead far enough to get enough loads executing at once.

    In this case, software prefetch allows you to effectively stuff more loads earlier into the instruction stream. As an example, imagine you have a loop which performs one load and then 20 instructions' worth of work on the loaded data, that your CPU has an out-of-order buffer of 100 instructions, and that the loads are independent of each other (e.g., accessing an array with a known stride).

    After the first miss, you can run ahead 99 more instructions, which will be composed of 95 non-load and 5 load instructions (including the first load). So your MLP is inherently limited to 5 by the size of the out-of-order buffer. If you instead paired every load with a software prefetch to a location, say, 6 or more iterations ahead, you'd end up with 90 non-load instructions, 5 loads and 5 software prefetches, and since all of those prefetches are also misses in flight, you've just doubled your MLP to 10.[2]

    There is of course no limit of one additional prefetch per load: you could add more to hit higher numbers, but there is a point of diminishing and then negative returns as you hit the MLP limits of your machine and the prefetches take up resources you'd rather spend on other things.
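
    As a minimal C sketch of the pattern above (assuming a simple array reduction; the distance of 6 iterations and the loop body are illustrative, not tuned):

        #include <stddef.h>

        #define PREFETCH_DIST 6  /* hypothetical distance, in iterations */

        long sum_with_prefetch(const long *a, size_t n) {
            long sum = 0;
            for (size_t i = 0; i < n; i++) {
                /* Request the element needed PREFETCH_DIST iterations
                   from now; the bounds check keeps the address in range. */
                if (i + PREFETCH_DIST < n)
                    __builtin_prefetch(&a[i + PREFETCH_DIST]);
                sum += a[i] * 3;  /* stand-in for the "20 instructions of work" */
            }
            return sum;
        }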

    This is similar to software pipelining, where you load data for a future iteration, and then don't touch that register until after a significant amount of other work. This was mostly used on in-order machines to hide latency of computation as well as memory. Even on a RISC with 32 architectural registers, software-pipelining typically can't place the loads as far ahead of use as an optimal prefetch-distance on a modern machine; the amount of work a CPU can do during one memory latency has grown a lot since the early days of in-order RISCs.

    In-order machines

    Not all machines are big out-of-order cores: in-order CPUs are still common in some places (especially outside x86), and you'll also find "weak" out-of-order cores that can't run ahead very far and so partly act like in-order machines.

    On these machines, software prefetches may help you gain MLP that you wouldn't otherwise be able to access (of course, an in-order machine probably doesn't support a lot of inherent MLP anyway).

    Working around hardware prefetch restrictions

    Hardware prefetch may have restrictions which you could work around using software prefetch.

    For example, Leeor's answer has an example of hardware prefetch stopping at page boundaries, while software prefetch doesn't have any such restriction.

    Another example might be any time that hardware prefetch is too aggressive or too conservative (after all it has to guess at your intentions): you might use software prefetch instead since you know exactly how your application will behave.

    Examples of the latter include prefetching discontiguous areas, such as the rows of a sub-matrix embedded in a larger matrix: hardware prefetching won't understand the boundaries of the "rectangular" region, so it will constantly prefetch beyond the end of each row and then take a bit of time to pick up the new row pattern. Software prefetching can get this exactly right, never issuing a useless prefetch at all (though it often requires ugly splitting of loops).
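
    A minimal sketch of that sub-matrix case, assuming a row-major parent matrix whose rows are stride elements apart (a real version might prefetch every cache line of a row several rows ahead):

        #include <stddef.h>

        /* Sum a rows x cols sub-matrix embedded in a larger matrix.
           While scanning row r, prefetch the start of row r + 1 so the
           stride jump is already in flight; a hardware prefetcher would
           instead run past the end of each row and then relearn the
           pattern on the next one. */
        double sum_submatrix(const double *m, size_t rows, size_t cols,
                             size_t stride) {
            double sum = 0.0;
            for (size_t r = 0; r < rows; r++) {
                const double *row = m + r * stride;
                if (r + 1 < rows)
                    __builtin_prefetch(row + stride);  /* first line of next row */
                for (size_t c = 0; c < cols; c++)
                    sum += row[c];
            }
            return sum;
        }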

    If you issue enough software prefetches, the hardware prefetchers should, in theory, mostly shut down, because the activity of the memory subsystem is one heuristic they use to decide whether to activate.

    Counterpoint

    I should note here that software prefetching is not equivalent to hardware prefetching when it comes to possible speedups for cases the hardware prefetching can pick up: hardware prefetching can be considerably faster. That is because hardware prefetching can start working closer to memory (e.g., from the L2) where it has a lower latency to memory and also access to more buffers (in the so-called "superqueue" on Intel chips) and so more concurrency. So if you turn off hardware prefetching and try to implement a memcpy or some other streaming load with pure software prefetching, you'll find that it is likely slower.

    Special load hints

    Prefetching may give you access to special hints that you can't achieve with regular loads. For example x86 has the prefetchnta, prefetcht0, prefetcht1, and prefetchw instructions which hint to the processor how to treat the loaded data in the caching subsystem. You can't achieve the same effect with plain loads (at least on x86).
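
    From C, GCC's __builtin_prefetch(addr, rw, locality) exposes these hints; the sketch below shows the usual x86 mapping (the exact instruction chosen is up to the compiler and target):

        /* Typical x86 mapping of __builtin_prefetch's arguments:
             locality 0 -> prefetchnta  (non-temporal, minimize cache pollution)
             locality 1 -> prefetcht2
             locality 2 -> prefetcht1
             locality 3 -> prefetcht0   (the default: keep in all levels)
             rw == 1    -> prefetchw, on targets that support it */
        void hint_examples(const char *p, char *q) {
            __builtin_prefetch(p, 0, 0);  /* read once, streaming: avoid pollution */
            __builtin_prefetch(p, 0, 3);  /* read, high temporal locality */
            __builtin_prefetch(q, 1, 3);  /* prefetch with intent to write */
        }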


    [2] It's not actually as simple as just adding a single prefetch to the loop, since after the first five iterations the loads will start hitting already-prefetched values, reducing your MLP back to 5 - but the idea still holds. A real implementation would also involve reorganizing the loop so that the MLP can be sustained (e.g., "jamming" the loads and prefetches together every few iterations).
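
    One possible shape for that "jamming", assuming 64-byte cache lines and 8-byte elements (constants illustrative): process one cache line per iteration and issue exactly one prefetch per line, a fixed number of lines ahead, so each block keeps bringing a fresh miss into the window.

        #include <stddef.h>

        #define LINE 8   /* longs per 64-byte cache line (assumption) */
        #define DIST 6   /* prefetch distance, in cache lines */

        long sum_jammed(const long *a, size_t n) {
            long sum = 0;
            size_t i = 0;
            /* At most one prefetch per line (assuming a is line-aligned),
               always DIST lines ahead of the line being consumed. */
            for (; i + LINE * (DIST + 1) <= n; i += LINE) {
                __builtin_prefetch(&a[i + LINE * DIST]);
                for (size_t j = 0; j < LINE; j++)
                    sum += a[i + j];
            }
            for (; i < n; i++)  /* remainder, no prefetching */
                sum += a[i];
            return sum;
        }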
