On many architectures, instructions are all fixed length. This makes program loading and execution straightforward. On x86/x64, instructions are variable length, so a disassembler (and the CPU itself) has to work out the length of each instruction as it decodes it.
When fetching an instruction, the CPU first analyses its first byte (the opcode). Sometimes that byte alone is enough to tell the total length of the instruction; sometimes it tells the CPU to analyse subsequent bytes to determine the length. But all in all, the encoding is not ambiguous.
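If it helps to see that logic spelled out, here's a toy sketch in Python - it models only the two opcodes that matter below (90 = NOP, 33 = XOR r32, r/m32) and ignores prefixes, SIB bytes, immediates and everything else a real decoder has to handle:

```python
def instr_length(code, i=0):
    """Toy length decoder: only NOP (90) and XOR r32, r/m32 (33) are modeled."""
    op = code[i]
    if op == 0x90:                  # NOP: the first byte alone settles the length
        return 1
    if op == 0x33:                  # XOR r32, r/m32: must also read the ModR/M byte
        mod = code[i + 1] >> 6
        if mod == 0b11:             # register, register: just opcode + ModR/M
            return 2
        if mod == 0b10:             # register, [reg + 32-bit offset]: 4 more bytes
            return 6
        raise NotImplementedError("addressing mode not modeled in this sketch")
    raise NotImplementedError("opcode not modeled in this sketch")

print(instr_length(bytes.fromhex("33C0")))          # 2 -> xor eax, eax
print(instr_length(bytes.fromhex("339000000000")))  # 6 -> xor edx, [eax+offset]
```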
Yes, the command stream gets screwed up if you insert random bytes in the middle willy-nilly. That's to be expected; not every byte sequence constitutes valid machine code.
Now, about your particular example. The original command was XOR EAX, EAX (33 C0). The encoding of XOR is one of those second-byte-dependent ones. The first byte - 33 - means XOR. The second byte is the ModR/M byte. It encodes the operands - whether it's a register pair, a register and a memory location, etc. The initial value C0 in 32-bit mode corresponds to the operands EAX, EAX. The value 90 that you've inserted corresponds to the operands EDX, [EAX+offset], and it means that the ModR/M byte is followed by 32 bits of offset. The next four bytes of the command stream are not interpreted as commands anymore - they're the offset in the mangled XOR command.
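The difference between C0 and 90 is easy to see if you split the ModR/M byte into its three fields - mod (2 bits), reg (3 bits), rm (3 bits). A quick sketch, assuming 32-bit operands and ignoring the SIB special cases:

```python
REGS = ["EAX", "ECX", "EDX", "EBX", "ESP", "EBP", "ESI", "EDI"]

def decode_modrm(byte):
    """Split a ModR/M byte into mod/reg/rm and report how many offset bytes follow."""
    mod = (byte >> 6) & 0b11
    reg = (byte >> 3) & 0b111
    rm  = byte & 0b111
    if mod == 0b11:                                     # register, register
        return f"{REGS[reg]}, {REGS[rm]}", 0
    if mod == 0b10:                                     # register, [register + offset]
        return f"{REGS[reg]}, [{REGS[rm]}+offset]", 4   # 32-bit offset follows
    raise NotImplementedError("mod values 00/01 not modeled here")

print(decode_modrm(0xC0))  # ('EAX, EAX', 0)            -> 33 C0 is 2 bytes total
print(decode_modrm(0x90))  # ('EDX, [EAX+offset]', 4)   -> 33 90 xx xx xx xx is 6 bytes
```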
So by messing with the second byte, you've turned a 2-byte command into a 6-byte one.
Then the CPU (and the disassembler) resumes fetching after those four offset bytes. It's in the middle of the ADD ESP, 4 instruction, but the CPU has no way of knowing that. It starts with the 04 byte, the third one in the ADD encoding. The first few bytes at that point still happen to make sense as commands, but since you've ended up in the middle of an instruction, the original instruction sequence is utterly lost.
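Laid out the way a disassembler would now see that stretch of the stream (?? stands for bytes of your original code that I'm not going to guess at):

```
33 90 ?? ?? 83 C4    xor edx, [eax + offset]   ; the 83 C4 of ADD ESP, 4 is swallowed
                                               ; into the 32-bit offset
04 ??                add al, ??                ; fetching resumes on the ADD's last byte,
                                               ; which happens to be a valid opcode
                                               ; (ADD AL, imm8) in its own right
...                                            ; everything after that is off the rails
```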