We have 8-bit, 16-bit, 32-bit and 64-bit hardware architectures and operating systems. But not, say, 42-bit or 69-bit ones.
Why? Is it something fundamental that makes powers of two the better choice?
As others have pointed out, in the early days things weren't so clear-cut: words came in all sorts of oddball sizes.
But the push to standardize on 8-bit bytes was also driven by memory chip technology. In the early days, many memory chips were organized as 1 bit per address. Memory for n-bit words was constructed by using memory chips in groups of n, with their corresponding address lines tied together and each chip's single data bit contributing one bit of the n-bit word.
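Here's a minimal sketch in Python, purely illustrative, of that arrangement: n 1-bit-per-address chips share the same address lines, and each chip supplies exactly one bit of the word. The class and method names (`OneBitChip`, `WordMemory`, `read_word`, `write_word`) are my own for illustration, not anything from a real part or library.

    class OneBitChip:
        """Models a memory chip that stores a single bit at each address."""
        def __init__(self, num_addresses):
            self.bits = [0] * num_addresses

        def write(self, address, bit):
            self.bits[address] = bit & 1

        def read(self, address):
            return self.bits[address]


    class WordMemory:
        """An n-bit-wide memory built from n OneBitChips sharing the address lines."""
        def __init__(self, word_size, num_addresses):
            self.word_size = word_size
            self.chips = [OneBitChip(num_addresses) for _ in range(word_size)]

        def write_word(self, address, word):
            # Chip i stores bit i of the word, all chips see the same address.
            for i, chip in enumerate(self.chips):
                chip.write(address, (word >> i) & 1)

        def read_word(self, address):
            # Reassemble the word from one bit per chip.
            word = 0
            for i, chip in enumerate(self.chips):
                word |= chip.read(address) << i
            return word


    mem = WordMemory(word_size=8, num_addresses=1024)   # an 8-bit-wide memory
    mem.write_word(42, 0b10110001)
    assert mem.read_word(42) == 0b10110001

The point is that the word width is just "how many chips you wire in parallel", which is why the economics of the most common chip packages ended up steering everyone toward the same widths.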
As memory chip densities got higher, manufacturers packed multiple chips into a single package. Because the most popular word sizes in use were multiples of 8 bits, 8-bit-wide memory packages were particularly popular, which also made them the cheapest. As more and more architectures jumped on the 8-bit byte bandwagon, the price premium for memory chips that didn't use 8-bit bytes grew bigger and bigger. Similar arguments account for the moves from 8->16, 16->32, and 32->64.
You can still design a system with 24-bit memory, but that memory will probably be much more expensive than a similar design using 32-bit memory. Unless there is a really good reason to stick with 24 bits, most designers will opt for 32 bits when it's both cheaper and more capable.