Why does a character in Java take twice as much space to store as a character in C?
In Java, characters are 16-bit; in C, they are 8-bit.
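You can confirm the Java side of this directly; a minimal sketch (the C side, sizeof(char) == 1, is noted in a comment since this example is Java):

```java
public class CharSize {
    public static void main(String[] args) {
        // Java defines char as a 16-bit UTF-16 code unit.
        System.out.println(Character.SIZE);  // 16 (bits)
        System.out.println(Character.BYTES); // 2 (bytes)
        // In C, by contrast, sizeof(char) is 1 by definition,
        // and CHAR_BIT is 8 on virtually all modern platforms.
    }
}
```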
A more general question is: why is this so? To find out, you need to look at the history and draw your own conclusions.
When C was developed in the USA, ASCII was pretty standard there, and you only really needed 7 bits, but with 8 you could handle some non-ASCII characters as well. That might have seemed more than enough. Many text-based protocols like SMTP (email), XML and FIX still only use ASCII characters; email and XML encode non-ASCII characters. Binary files, sockets and streams are still natively 8-bit bytes.
BTW: C can support wider characters (e.g. wchar_t), but that is not plain char.
When Java was developed, 16 bits seemed like enough to support most languages. Since then Unicode has been extended to characters above 65535, so Java has had to add support for code points, which UTF-16 encodes as one or two 16-bit char values (surrogate pairs).
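You can see the char/code-point split in practice. A minimal sketch using a character outside the Basic Multilingual Plane:

```java
public class CodePoints {
    public static void main(String[] args) {
        // U+1D11E MUSICAL SYMBOL G CLEF is above 65535,
        // so UTF-16 stores it as a surrogate pair of two char values.
        String s = "\uD834\uDD1E";
        System.out.println(s.length());                      // 2 (char values)
        System.out.println(s.codePointCount(0, s.length())); // 1 (code point)
        System.out.println(Integer.toHexString(s.codePointAt(0))); // 1d11e
    }
}
```

This is why String.length() can differ from the number of visible characters: it counts 16-bit char values, not code points.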
So making byte an 8-bit value and char an unsigned 16-bit value made sense at the time.
BTW: If your JVM supports -XX:+UseCompressedStrings, it can use bytes instead of chars for Strings which only contain 8-bit characters.