So there is "int 3", an interrupt instruction used by debuggers for breakpoints. But then there is also "int 1", which is used for single stepping. Why does debugging need two separate interrupt vectors?
Below are some quotes from the Intel Software Developer's Manual, Vol. 3B, Chapter 17:
The Intel 64 and IA-32 architectures dedicate two interrupt vectors to handling debug exceptions: vector 1 (debug exception, #DB) and vector 3 (breakpoint exception, #BP).
For the debug exception:
The debug-exception handler is usually a debugger program or part of a larger software system. The processor generates a debug exception for any of several conditions. The debugger checks flags in the DR6 and DR7 registers to determine which condition caused the exception and which other conditions might apply.
For the breakpoint exception:
The breakpoint exception (interrupt 3) is caused by execution of an INT 3 instruction. Debuggers use breakpoint exceptions ... as a mechanism for suspending program execution to examine registers and memory locations.
With the Intel386 and later IA-32 processors, it is more convenient to set breakpoints with the breakpoint-address registers (DR0 through DR3). However, the breakpoint exception still is useful for breakpointing debuggers, because a breakpoint exception can call a separate exception handler. The breakpoint exception is also useful when it is necessary to set more breakpoints than there are debug registers or when breakpoints are being placed in the source code of a program under development.
So we can see that the breakpoint exception lets you suspend program execution, while the debug-exception handler checks for several conditions and treats them differently.
With the debug exception alone, you would not be able to break at an arbitrary location of your choosing. Only after you have broken in somewhere can you configure the processor for single-stepping or the other conditions that are then reported through the debug exception.
INT 3 is a single-byte opcode (0xCC), so it can overwrite the first byte of any existing instruction, with a controllable side effect, to break into the current program flow. Without it, how would you ever get the chance to set the single-step flag in EFLAGS at the appropriate moment, with no side effects?
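To make that concrete, here is a rough sketch of how a Windows debugger might plant such a breakpoint in a debuggee it is attached to, using the documented Win32 memory APIs. It is only an illustration: error handling, page-protection details and the bookkeeping a real debugger needs are left out.

```c
/* Sketch: plant a software breakpoint in a debuggee by saving the
 * original byte and replacing it with the single-byte INT 3 opcode. */
#include <windows.h>

typedef struct {
    LPVOID address;        /* where the breakpoint lives            */
    BYTE   original_byte;  /* byte we replaced, needed to resume    */
} BREAKPOINT;

BOOL set_breakpoint(HANDLE process, LPVOID address, BREAKPOINT *bp)
{
    static const BYTE INT3 = 0xCC;   /* single-byte INT 3 opcode */
    SIZE_T n;

    bp->address = address;
    if (!ReadProcessMemory(process, address, &bp->original_byte, 1, &n))
        return FALSE;
    if (!WriteProcessMemory(process, address, &INT3, 1, &n))
        return FALSE;
    /* Make sure the CPU does not execute a stale copy of the old byte. */
    FlushInstructionCache(process, address, 1);
    return TRUE;
}

BOOL clear_breakpoint(HANDLE process, const BREAKPOINT *bp)
{
    SIZE_T n;
    if (!WriteProcessMemory(process, bp->address, &bp->original_byte, 1, &n))
        return FALSE;
    FlushInstructionCache(process, bp->address, 1);
    return TRUE;
}
```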
So the two-step break-and-then-debug mechanism is necessary.
The whole flow is:
First, wire up a debugger as the handler for both int 1 (#DB) and int 3 (#BP).
Then put an int 3 at the location where you want to break in. That gives the debugger the chance to step in.
Once the debugger starts handling the int 3 (#BP), and you want single-stepping, tell the debugger to set the Trap Flag (TF) in EFLAGS (sketched below). The CPU will then generate an int 1 (#DB) after every single instruction. Since the debugger is also wired to int 1 (#DB), it gets a chance to step in there, too.
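For the last step, the debugger does not execute anything inside the debuggee itself; it just flips bit 8 (TF) in the EFLAGS image saved for the suspended thread. A minimal sketch, assuming a Win32 debugger that holds a handle to the stopped thread:

```c
/* Sketch: request a single step by setting the Trap Flag (bit 8 of
 * EFLAGS) in the suspended thread's saved context. When the thread
 * resumes, the CPU raises #DB (vector 1) after one instruction, which
 * the debugger receives as EXCEPTION_SINGLE_STEP. */
#include <windows.h>

BOOL request_single_step(HANDLE thread)
{
    CONTEXT ctx;
    ctx.ContextFlags = CONTEXT_CONTROL;   /* EFLAGS lives in this part */
    if (!GetThreadContext(thread, &ctx))
        return FALSE;
    ctx.EFlags |= 0x100;                  /* TF: trap after the next instruction */
    return SetThreadContext(thread, &ctx);
}
```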
(I discussed how debuggers work with a friend of mine; he has written a debugger before.)
It seems the INT 3 (#BP) is the most important one. You can explicitly place an INT 3 instruction at the location you want to break into, or you can let the debugger do that for you.
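If you want the explicit version in your own source, MSVC exposes it as the __debugbreak() intrinsic, and with GCC or Clang you can emit the instruction yourself, roughly like this:

```c
/* Hard-coding a breakpoint in your own source. If a debugger is
 * attached it receives the #BP; if not, the process typically dies
 * with an unhandled breakpoint exception. */
#ifdef _MSC_VER
#include <intrin.h>
#define BREAK_HERE() __debugbreak()            /* emits INT 3 */
#else
#define BREAK_HERE() __asm__ volatile("int3")  /* same single-byte 0xCC */
#endif

int main(void)
{
    BREAK_HERE();   /* the debugger stops on this instruction */
    return 0;
}
```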
Once the INT 3 is hit, the CPU saves the context of the interrupted program and switches to the INT 3 handler, which is usually part of the debugger. The interrupted program is now suspended, because execution has moved into the vector-3 exception handler. The debugger itself is just a normal Windows (or other desktop) application; it can sit in an ordinary message loop waiting for the user's commands about what to do with the program being debugged. So both the debuggee and the debugger are waiting now, but for very different reasons.
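For a debugger built directly on the Win32 debugging API (rather than a GUI message loop), the "waiting" on the debugger side looks roughly like the skeleton below; the debuggee stays frozen between WaitForDebugEvent and ContinueDebugEvent. This sketch assumes the debuggee was created with DEBUG_PROCESS or attached via DebugActiveProcess.

```c
/* Skeleton of a debugger's event loop using the Win32 debugging API.
 * The debuggee is suspended while the debugger is between
 * WaitForDebugEvent and ContinueDebugEvent; that is the window in
 * which its context can be inspected or modified. */
#include <windows.h>
#include <stdio.h>

void debug_loop(void)
{
    DEBUG_EVENT ev;
    for (;;) {
        if (!WaitForDebugEvent(&ev, INFINITE))
            break;

        DWORD continue_status = DBG_EXCEPTION_NOT_HANDLED;

        if (ev.dwDebugEventCode == EXCEPTION_DEBUG_EVENT) {
            DWORD code = ev.u.Exception.ExceptionRecord.ExceptionCode;
            if (code == EXCEPTION_BREAKPOINT) {
                printf("#BP at %p\n", ev.u.Exception.ExceptionRecord.ExceptionAddress);
                /* ...examine registers/memory, maybe set TF here... */
                continue_status = DBG_CONTINUE;
            } else if (code == EXCEPTION_SINGLE_STEP) {
                printf("#DB (single step) at %p\n", ev.u.Exception.ExceptionRecord.ExceptionAddress);
                continue_status = DBG_CONTINUE;
            }
        } else if (ev.dwDebugEventCode == EXIT_PROCESS_DEBUG_EVENT) {
            ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, DBG_CONTINUE);
            break;
        }

        ContinueDebugEvent(ev.dwProcessId, ev.dwThreadId, continue_status);
    }
}
```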
Then the programmer (the human) can instruct the debugger (the software) to examine the debuggee's saved context, or simply restore that saved context and let the debuggee resume, or set the TF flag in EFLAGS so that the processor generates a #DB after each instruction.
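One detail worth spelling out: when the planted INT 3 fires, the saved instruction pointer already points one byte past it, so before resuming, the debugger puts the original byte back and rewinds the instruction pointer. A sketch, assuming the x64 CONTEXT layout (on 32-bit x86 the field is Eip instead of Rip):

```c
/* Sketch: resume the debuggee after its planted INT 3 was hit.
 * Restore the overwritten byte, then back Rip up so the original
 * instruction is executed from its real start. */
#include <windows.h>

BOOL resume_from_breakpoint(HANDLE process, HANDLE thread,
                            LPVOID bp_address, BYTE original_byte)
{
    SIZE_T n;
    CONTEXT ctx;
    ctx.ContextFlags = CONTEXT_CONTROL;
    if (!GetThreadContext(thread, &ctx))
        return FALSE;

    /* Put the original instruction byte back... */
    if (!WriteProcessMemory(process, bp_address, &original_byte, 1, &n))
        return FALSE;
    FlushInstructionCache(process, bp_address, 1);

    /* ...and rewind the saved instruction pointer onto it. */
    ctx.Rip = (DWORD64)(ULONG_PTR)bp_address;
    return SetThreadContext(thread, &ctx);
}
```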
But often users do not want to single-step at the instruction level. They want to debug at the level of C statements, each of which may compile to many instructions. So the debugger uses the debug information, such as the PDB file, to map addresses to source locations. If the user wants to step one C statement, the debugger finds the first instruction of the next statement and rewrites its first byte with an INT 3, and then the whole cycle starts over.
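A sketch of that statement-level step. The line-table lookup is the complicated part; find_next_statement() below is a hypothetical placeholder for whatever PDB/line-number query the debugger actually uses (for example via the DbgHelp symbol APIs). Once the target address is known, it is the same INT 3 patching as before:

```c
/* Sketch: "step one C statement" as a temporary software breakpoint
 * at the start of the next statement. find_next_statement() is a
 * hypothetical stand-in for a real PDB / line-table lookup. */
#include <windows.h>

extern LPVOID find_next_statement(HANDLE process, LPVOID current_ip); /* hypothetical */

BOOL step_one_statement(HANDLE process, LPVOID current_ip,
                        BYTE *saved_byte, LPVOID *bp_address)
{
    static const BYTE INT3 = 0xCC;
    SIZE_T n;

    LPVOID next = find_next_statement(process, current_ip);
    if (next == NULL)
        return FALSE;

    /* Plant a temporary INT 3 at the start of the next statement.
     * When it is hit, the debugger restores *saved_byte and rewinds
     * the instruction pointer, exactly as with a normal breakpoint. */
    if (!ReadProcessMemory(process, next, saved_byte, 1, &n))
        return FALSE;
    if (!WriteProcessMemory(process, next, &INT3, 1, &n))
        return FALSE;
    FlushInstructionCache(process, next, 1);

    *bp_address = next;
    return TRUE;
}
```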
It's just a delicate cooperation between the human, the debugger software, and the processor.
A related thread: Strange memory content display in Visual Studio debug mode