After reading this old article measuring the memory consumption of several object types, I was amazed to see how much memory Strings use in Java:
The article points out two things:
The overhead is due to including a char[] object reference, and three ints: an offset, a length, and space for storing the String's hashcode, plus the standard overhead of simply being an object.
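The pieces of that overhead can be sketched in code. The field names below mirror the pre-Java-7u6 java.lang.String source; the byte counts are my assumptions for a typical 32-bit HotSpot layout, not figures from the article:

```java
// Sketch of the old String layout the article measures.
public class StringOverhead {
    // Fields of the pre-Java-7u6 java.lang.String:
    //   char[] value;  // reference to the backing array
    //   int offset;    // start index within value
    //   int count;     // number of chars
    //   int hash;      // cached hashCode, lazily computed

    /** Estimated bytes for a String of n chars, assuming a 32-bit HotSpot JVM. */
    static int estimatedBytes(int n) {
        int objectHeader = 8;          // String's own object header
        int fields = 4 + 4 + 4 + 4;    // char[] ref + offset + count + hash
        int arrayHeader = 12;          // char[] header, including the array length
        int chars = 2 * n;             // UTF-16 code units, two bytes each
        int raw = objectHeader + fields + arrayHeader + chars;
        return (raw + 7) / 8 * 8;      // round up to an 8-byte boundary
    }

    public static void main(String[] args) {
        // Even an empty String costs roughly 40 bytes under these assumptions.
        System.out.println(estimatedBytes(0));
    }
}
```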
A different approach (slightly different from String.intern(), or from the character-array sharing that String.substring() relied on) is to use a single char[] for all Strings. Then your wrapper String-like object no longer needs to store the array reference at all. You would still need the offset, and you introduce a (large but real) limit on how many characters you can have in total.
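A minimal sketch of that idea, assuming a single shared char[] pool (the class and field names here are hypothetical, not from the article):

```java
// Sketch: all string data lives in one shared char[], so each wrapper
// stores only an offset and a length, with no per-object array reference.
public class PooledString {
    // One pool for every string; its size caps the total characters you can hold.
    static final char[] POOL = new char[1 << 20];
    static int used = 0;

    final int offset;
    final int length;

    PooledString(String s) {
        this.offset = used;
        this.length = s.length();
        s.getChars(0, length, POOL, used); // copy into the shared pool
        used += length;
    }

    char charAt(int i) {
        return POOL[offset + i];
    }

    @Override
    public String toString() {
        return new String(POOL, offset, length);
    }

    public static void main(String[] args) {
        PooledString a = new PooledString("hello");
        PooledString b = new PooledString("world");
        System.out.println(a + " " + b); // hello world
    }
}
```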
You would no longer need the length if you use a special end-of-string marker. That saves four bytes for the length, but costs two bytes for the marker and adds time, complexity, and buffer-overrun risk.
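Sketching the marker variant: the length field disappears and is recovered by scanning for a sentinel character (here '\u0000', an assumption; any value guaranteed absent from the data would do):

```java
// Sketch: drop the stored length and scan for a terminator instead.
public class TerminatedString {
    static final char MARKER = '\u0000'; // must never occur in real data
    static final char[] POOL = new char[1 << 20];
    static int used = 0;

    final int offset; // the only per-string field

    TerminatedString(String s) {
        this.offset = used;
        s.getChars(0, s.length(), POOL, used);
        used += s.length();
        POOL[used++] = MARKER; // the two bytes spent per string on the marker
    }

    /** O(n) on every call: the time cost of saving four bytes. */
    int length() {
        int i = offset;
        while (POOL[i] != MARKER) {
            i++; // a missing marker would run off the pool: the overrun risk
        }
        return i - offset;
    }

    public static void main(String[] args) {
        TerminatedString t = new TerminatedString("hello");
        System.out.println(t.length()); // 5
    }
}
```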
Not storing the hash is a space-time trade-off that can pay off if you rarely need it.
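Recomputing on demand might look like this sketch, which reuses the polynomial hash formula that java.lang.String documents (h = 31*h + c):

```java
// Sketch: trade four bytes of storage for O(n) hashing on each call.
public class UncachedHash {
    /** Same formula java.lang.String documents: s[0]*31^(n-1) + ... + s[n-1]. */
    static int hashOf(char[] chars, int offset, int length) {
        int h = 0;
        for (int i = 0; i < length; i++) {
            h = 31 * h + chars[offset + i];
        }
        return h;
    }

    public static void main(String[] args) {
        char[] data = "abc".toCharArray();
        // Matches "abc".hashCode() because the formula is identical.
        System.out.println(hashOf(data, 0, data.length) == "abc".hashCode());
    }
}
```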
For an application I worked on that needed very fast, memory-efficient handling of a large number of strings, I was able to leave the data in its encoded form and work directly with byte arrays. My output encoding was the same as my input encoding, so I never had to decode bytes to characters, nor encode them back to bytes for output.
In addition, I could leave the input data in the byte array it was originally read into: a memory-mapped file.
My objects consisted of an int offset (the limit suited my situation), an int length, and an int hashcode.
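Under those constraints, the objects might look like this sketch, assuming the raw bytes sit in a shared ByteBuffer standing in for the memory-mapped file (the names and details are my reconstruction, not the original code):

```java
import java.nio.ByteBuffer;
import java.nio.charset.StandardCharsets;

// Sketch: a string-like view over bytes that are never decoded to chars.
public class ByteString {
    static ByteBuffer data; // in the real application, a memory-mapped file

    final int offset; // position within the shared buffer (int range sufficed)
    final int length; // number of bytes, not chars
    final int hash;   // precomputed once, up front

    ByteString(int offset, int length) {
        this.offset = offset;
        this.length = length;
        int h = 0;
        for (int i = 0; i < length; i++) {
            h = 31 * h + data.get(offset + i); // hash over the raw bytes
        }
        this.hash = h;
    }

    @Override
    public int hashCode() {
        return hash;
    }

    @Override
    public boolean equals(Object o) {
        if (!(o instanceof ByteString)) return false;
        ByteString other = (ByteString) o;
        if (length != other.length || hash != other.hash) return false;
        for (int i = 0; i < length; i++) {
            if (data.get(offset + i) != data.get(other.offset + i)) return false;
        }
        return true;
    }

    public static void main(String[] args) {
        // Stand-in for the memory-mapped input file.
        data = ByteBuffer.wrap("hello hello".getBytes(StandardCharsets.US_ASCII));
        ByteString a = new ByteString(0, 5);
        ByteString b = new ByteString(6, 5);
        System.out.println(a.equals(b)); // true: same bytes, never decoded
    }
}
```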
java.lang.String was the familiar hammer for what I wanted to do, but not the best tool for the job.