Question
I read the following statement in Patterson & Hennessy's Computer Organization and Design textbook:
As processors go to both longer pipelines and issuing multiple instructions per clock cycle, the branch delay becomes longer, and a single delay slot is insufficient.
I can understand why "issuing multiple instructions per clock cycle" can make a single delay slot insufficient, but I don't know why "longer pipelines" cause it.
Also, I do not understand why longer pipelines cause the branch delay to become longer. Even with a longer pipeline (more stages to finish one instruction), there's no guarantee that the number of cycles before the branch is resolved will increase, so why does the branch delay increase?
Answer 1:
If you add any stages before the stage that detects branches (and evaluates taken/not-taken for conditional branches), one delay slot is no longer enough to hide the latency between the branch entering the first stage of the pipeline and the correct next program-counter address being known.
The first fetch stage needs info from later in the pipeline to know what to fetch next, because it doesn't itself detect branches. For example, superscalar CPUs with branch prediction need to predict which block of instructions to fetch next, separately from (and earlier than) predicting which way a branch goes after it has been decoded.
One delay slot is only sufficient in MIPS because branch conditions are evaluated in the ID stage, before the normal EX stage. (Original MIPS is a classic 5-stage RISC: IF ID EX MEM WB.) See Wikipedia's article on the classic RISC pipeline for much more detail, specifically the control hazards section.
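To make the arithmetic concrete, here is a toy Python sketch (my own illustration, not from the answer) of how wrong-path fetch slots grow with the stage that resolves the branch and with issue width:

```python
# Hypothetical model: the number of wasted fetch slots after a branch is
# the number of pipeline stages before the one that resolves it, times the
# issue width, minus any architectural delay slots.
def wasted_fetches(resolve_stage, delay_slots=1, issue_width=1):
    """Wrong-path instruction slots fetched before the branch target is
    known (stages are 1-indexed: IF=1, ID=2, EX=3, ...)."""
    return max((resolve_stage - 1) * issue_width - delay_slots, 0)

# Classic 5-stage MIPS resolves branches in ID (stage 2):
print(wasted_fetches(resolve_stage=2))                 # 0: one delay slot suffices
# Resolve in EX (stage 3) instead, as a deeper front-end might force:
print(wasted_fetches(resolve_stage=3))                 # 1 bubble despite the slot
# 2-wide superscalar with the branch resolved in EX:
print(wasted_fetches(resolve_stage=3, issue_width=2))  # 3 wrong-path slots
```

This is why both a deeper front-end and wider issue each make a single delay slot insufficient, and together make it much worse.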
That's why MIPS is limited to simple conditions like beq (detect any mismatch with an XOR) or bltz (check the sign bit). It cannot do anything that requires an adder for carry propagation, so a general blt between two registers is only a pseudo-instruction.
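A toy Python sketch (function names are mine, modeling the logic depth rather than real MIPS hardware) of why beq and bltz are cheap while a general blt is not:

```python
BITS = 32
MASK = (1 << BITS) - 1

def beq_taken(a, b):
    # Equality needs no carry chain: XOR the operands and OR-reduce.
    # Every bit position is checked in parallel.
    return ((a ^ b) & MASK) == 0

def bltz_taken(a):
    # Less-than-zero is just the sign bit.
    return (a >> (BITS - 1)) & 1 == 1

def blt_taken(a, b):
    # A general signed compare is equivalent to a subtraction, whose
    # carry must ripple across all bits -- too slow for MIPS's ID stage.
    # That's why blt rs, rt is only a pseudo-instruction (slt + bne).
    a &= MASK
    b &= MASK
    diff = (a - b) & MASK
    sa, sb, sd = a >> (BITS - 1), b >> (BITS - 1), diff >> (BITS - 1)
    # Signed less-than from the subtraction's sign bit, overflow-corrected.
    return bool((sa & ~sb & 1) | (~(sa ^ sb) & sd & 1))
```

The equality and sign-bit checks have constant logic depth, while the subtraction's carry chain is what pushes a general compare out of the tight ID-stage timing budget.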
This is very restrictive: doing register fetch plus branch-condition evaluation in the same cycle as decode is probably pretty tight for timing, whereas a longer front-end could absorb the latency of a larger or more associative L1 instruction cache.
Raising the clock speed or even just detecting data hazards (unlike original MIPS) would probably require adding another stage, or maybe moving branch evaluation to EX.
A superscalar pipeline would probably want some buffering in instruction fetch to avoid bubbles, and a multi-ported register file might be slightly slower.
So, as well as making 1 branch delay slot insufficient by the very nature of superscalar execution, a longer pipeline also makes it worse.
But instead of introducing more branch delay slots to hide this branch delay, the actual solution is branch prediction.
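As a minimal sketch of that solution (my own toy code, not from the answer), here is the standard 2-bit saturating-counter predictor: each branch's counter moves between 0 (strongly not-taken) and 3 (strongly taken), so a single surprise outcome doesn't flip a well-established prediction.

```python
class TwoBitPredictor:
    """Toy direct-mapped table of 2-bit saturating counters."""

    def __init__(self, entries=1024):
        self.entries = entries
        self.table = [1] * entries  # start every counter weakly not-taken

    def _index(self, pc):
        # Low bits of the branch address index the table (PCs are
        # word-aligned, so drop the bottom two bits first).
        return (pc >> 2) % self.entries

    def predict(self, pc):
        return self.table[self._index(pc)] >= 2  # True = predict taken

    def update(self, pc, taken):
        i = self._index(pc)
        if taken:
            self.table[i] = min(self.table[i] + 1, 3)
        else:
            self.table[i] = max(self.table[i] - 1, 0)
```

With this scheme, a loop-closing branch that is taken many times in a row mispredicts only at the loop exit, rather than on every iteration, which is why prediction scales to deep pipelines where delay slots cannot.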
Source: https://stackoverflow.com/questions/56285354/why-do-longer-pipelines-make-a-single-delay-slot-insufficient