We know that codepoints lie in the interval 0..0x10FFFF, which is less than 2^21. Then why do we need UTF-32 when every codepoint fits in 3 bytes? Shouldn't a UTF-24 encoding suffice?
UTF-24 has no added value.
If space matters, UTF-8 is the better choice: it encodes every BMP codepoint (0..0xFFFF) in 3 bytes or fewer, and most real-world text needs far fewer, since ASCII takes only 1 byte. Only supplementary-plane codepoints (0x10000..0x10FFFF) need 4 bytes, and those are rare in practice, so UTF-8 is almost always more compact than a fixed 3-byte UTF-24.
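You can see the byte counts directly from the codepoint ranges. Here is a minimal sketch; `utf8_len` is a hypothetical helper written for illustration, not a standard library function:

```c
#include <stdio.h>
#include <stdint.h>

/* Number of bytes UTF-8 needs for a given codepoint. */
static int utf8_len(uint32_t cp) {
    if (cp < 0x80)    return 1;  /* ASCII                        */
    if (cp < 0x800)   return 2;  /* Latin supplements, Cyrillic  */
    if (cp < 0x10000) return 3;  /* rest of the BMP (CJK, ...)   */
    return 4;                    /* supplementary planes         */
}

int main(void) {
    uint32_t samples[] = { 0x41, 0xE9, 0x4E2D, 0x1F600 };  /* A, é, 中, 😀 */
    for (int i = 0; i < 4; i++)
        printf("U+%04X -> %d byte(s) in UTF-8 vs. 3 in UTF-24\n",
               (unsigned)samples[i], utf8_len(samples[i]));
    return 0;
}
```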
If space doesn't matter, UTF-32 is faster than UTF-24, because CPUs handle power-of-2 sized, naturally aligned data much better: a 4-byte code unit is a single aligned load, while a 3-byte unit forces unaligned or byte-by-byte access.
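To illustrate the alignment point, here is a sketch of what random access into each encoding would look like. The function names are hypothetical, and a little-endian byte order is assumed for the imagined UTF-24:

```c
#include <stddef.h>
#include <stdint.h>

/* UTF-32: codepoint i is a single naturally aligned 4-byte load. */
static uint32_t read_utf32(const uint32_t *buf, size_t i) {
    return buf[i];
}

/* Hypothetical little-endian UTF-24: no 3-byte C type exists, so
   codepoint i must be assembled from individual bytes, and the
   offset is a multiply by 3 rather than a shift by 2. */
static uint32_t read_utf24(const uint8_t *buf, size_t i) {
    const uint8_t *p = buf + 3 * i;
    return (uint32_t)p[0]
         | ((uint32_t)p[1] << 8)
         | ((uint32_t)p[2] << 16);
}

int main(void) {
    uint8_t  u24[] = { 0x00, 0xF6, 0x01 };  /* U+1F600 as 3 LE bytes */
    uint32_t u32[] = { 0x1F600 };
    return read_utf24(u24, 0) == read_utf32(u32, 0) ? 0 : 1;
}
```

The shift-and-OR reassembly in `read_utf24` runs on every access, which is exactly the overhead UTF-32 avoids by paying one extra byte per codepoint.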