Best way to shorten UTF8 string based on byte length


Here are two possible solutions - a LINQ one-liner processing the input left to right and a traditional for-loop processing the input from right to left. Which processing direction is faster depends on the string length, the allowed byte length, and the number and distribution of multibyte characters, and it is hard to give a general suggestion. The decision between LINQ and traditional code is probably a matter of taste (or maybe speed).

If speed matters, one could think about just accumulating the byte length of each character until reaching the maximum length instead of calculating the byte length of the whole string in each iteration. But I am not sure if this will work because I don't know UTF-8 encoding well enough. I could theoretically imagine that the byte length of a string does not equal the sum of the byte lengths of all characters. (That can indeed happen when a character is stored as a surrogate pair: counting the two halves separately does not give the byte count of the pair. See the sketch after the two functions below.)

public static String LimitByteLength(String input, Int32 maxLength)
{
    return new String(input
        .TakeWhile((c, i) =>
            Encoding.UTF8.GetByteCount(input.Substring(0, i + 1)) <= maxLength)
        .ToArray());
}

public static String LimitByteLength2(String input, Int32 maxLength)
{
    for (Int32 i = input.Length - 1; i >= 0; i--)
    {
        if (Encoding.UTF8.GetByteCount(input.Substring(0, i + 1)) <= maxLength)
        {
            return input.Substring(0, i + 1);
        }
    }

    return String.Empty;
}
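As mentioned above, if speed matters one could accumulate per-character byte counts instead of re-encoding the whole prefix on every iteration. Here is a minimal sketch of that idea (the method name is mine, not from the answer above); it counts a surrogate pair as one unit so that the running total matches the byte count of the whole string:

public static String LimitByteLengthAccumulating(String input, Int32 maxLength)
{
    Int32 byteTotal = 0;

    for (Int32 i = 0; i < input.Length; i++)
    {
        // Count a surrogate pair as a single unit so its 4 UTF-8 bytes are added together.
        Int32 charsInElement =
            Char.IsHighSurrogate(input[i]) && i + 1 < input.Length && Char.IsLowSurrogate(input[i + 1])
                ? 2 : 1;
        Int32 bytesInElement = Encoding.UTF8.GetByteCount(input.Substring(i, charsInElement));

        if (byteTotal + bytesInElement > maxLength)
        {
            return input.Substring(0, i);
        }

        byteTotal += bytesInElement;
        i += charsInElement - 1;
    }

    return input;
}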
ruffin

I think we can do better than naively counting the total length of a string with each addition. LINQ is cool, but it can accidentally encourage inefficient code. What if I wanted the first 80,000 bytes of a giant UTF string? That's a lot of unnecessary counting. "I've got 1 byte. Now I've got 2. Now I've got 13... Now I have 52,384..."

That's silly. Most of the time, at least in l'anglais, we can cut exactly on that nth byte. Even in another language, we're at most a few bytes away from a good cutting point; a UTF-8 sequence is never longer than 4 bytes, so we never have to back up more than 3.

So I'm going to start from @Oren's suggestion, which is to key off of the leading bit of a UTF8 char value. Let's start by cutting right at the n+1th byte, and use Oren's trick to figure out if we need to cut a few bytes earlier.

Three possibilities

If the first byte after the cut has a 0 in the leading bit, I know I'm cutting precisely before a single byte (conventional ASCII) character, and can cut cleanly.

If I have a 11 following the cut, the next byte after the cut is the start of a multi-byte character, so that's a good place to cut too!

If I have a 10, however, I know I'm in the middle of a multi-byte character, and need to go back to check to see where it really starts.

That is, though I want to cut the string after the nth byte, if that n+1th byte comes in the middle of a multi-byte character, cutting would create an invalid UTF8 value. I need to back up until I get to one that starts with 11 and cut just before it.

Code

Notes: I'm using stuff like Convert.ToByte("11000000", 2) so that it's easy to tell what bits I'm masking (a little more about bit masking here). In a nutshell, I'm &ing to return what's in the byte's first two bits and bringing back 0s for the rest. Then I check the XX from XX000000 to see if it's 10 or 11, where appropriate.

I found out today that C# was getting support for binary literals (they ended up shipping in C# 7.0), which is cool, but we'll keep using this kludge for now to illustrate what's going on.
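In case it helps, this is what those masks look like with the binary literals that arrived in C# 7.0 (the constant names are mine):

const byte HighBitMask      = 0b1000_0000; // set for every byte of a multi-byte sequence
const byte TopTwoBitsMask   = 0b1100_0000;
const byte MultiByteLead    = 0b1100_0000; // 11xxxxxx: first byte of a multi-byte character
const byte ContinuationByte = 0b1000_0000; // 10xxxxxx: 'middle' of a multi-byte character

// e.g. (byteArray[bytePointer] & TopTwoBitsMask) == ContinuationByte  // "am I mid-character?"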

The PadLeft is just because I'm overly OCD about output to the Console.

So here's a function that'll cut you down to a string that's n bytes long, or the greatest length less than n that ends with a "complete" UTF-8 character.

public static string CutToUTF8Length(string str, int byteLength)
{
    byte[] byteArray = Encoding.UTF8.GetBytes(str);
    string returnValue = string.Empty;

    if (byteArray.Length > byteLength)
    {
        int bytePointer = byteLength;

        // Check high bit to see if we're [potentially] in the middle of a multi-byte char
        if (bytePointer >= 0 
            && (byteArray[bytePointer] & Convert.ToByte("10000000", 2)) > 0)
        {
            // If so, keep walking back until we have a byte starting with `11`,
            // which means the first byte of a multi-byte UTF8 character.
            while (bytePointer >= 0 
                && Convert.ToByte("11000000", 2) != (byteArray[bytePointer] & Convert.ToByte("11000000", 2)))
            {
                bytePointer--;
            }
        }

        // See if we had 1s in the high bit all the way back. If so, we're toast. Return empty string.
        if (0 != bytePointer)
        {
            returnValue = Encoding.UTF8.GetString(byteArray, 0, bytePointer); // hat tip to @NealEhardt! Well played. ;^)
        }
    }
    else
    {
        returnValue = str;
    }

    return returnValue;
}

I initially wrote this as a string extension. Just add back the this before string str to put it back into extension format, of course. I removed the this so that we could just slap the method into Program.cs in a simple console app to demonstrate.

Test and expected output

Here's a good test case, with the output it creates below; it's written to be the Main method of a simple console app's Program.cs.

static void Main(string[] args)
{
    string testValue = "12345“”67890”";

    for (int i = 0; i < 15; i++)
    {
        string cutValue = Program.CutToUTF8Length(testValue, i);
        Console.WriteLine(i.ToString().PadLeft(2) +
            ": " + Encoding.UTF8.GetByteCount(cutValue).ToString().PadLeft(2) +
            ":: " + cutValue);
    }

    Console.WriteLine();
    Console.WriteLine();

    foreach (byte b in Encoding.UTF8.GetBytes(testValue))
    {
        Console.WriteLine(b.ToString().PadLeft(3) + " " + (char)b);
    }

    Console.WriteLine("Return to end.");
    Console.ReadLine();
}

Output follows. Notice that the "smart quotes" in testValue are three bytes long in UTF8 (though when we write the chars to the console in ASCII, it outputs dumb quotes). Also note the ?s output for the second and third bytes of each smart quote in the output.

The first five characters of our testValue are single bytes in UTF-8, so 0-5 byte values should be 0-5 characters. Then we have a three-byte smart quote, which can't be included in its entirety until 5 + 3 bytes. Sure enough, we see that pop out at the call for 8. Our next smart quote pops out at 8 + 3 = 11, and then we're back to single-byte characters through 14.

 0:  0::
 1:  1:: 1
 2:  2:: 12
 3:  3:: 123
 4:  4:: 1234
 5:  5:: 12345
 6:  5:: 12345
 7:  5:: 12345
 8:  8:: 12345"
 9:  8:: 12345"
10:  8:: 12345"
11: 11:: 12345""
12: 12:: 12345""6
13: 13:: 12345""67
14: 14:: 12345""678


 49 1
 50 2
 51 3
 52 4
 53 5
226 â
128 ?
156 ?
226 â
128 ?
157 ?
 54 6
 55 7
 56 8
 57 9
 48 0
226 â
128 ?
157 ?
Return to end.

So that's kind of fun, and I'm in just before the question's five year anniversary. Though Oren's description of the bits had a small error, that's exactly the trick you want to use. Thanks for the question; neat.

If a UTF-8 byte has a zero-valued high-order bit, it's a single-byte (ASCII) character. If its two high-order bits are 10, it's a continuation byte in the 'middle' of a multi-byte character; if they are 11, it's the first byte of a multi-byte character. The ability to detect the beginning of a character was an explicit design goal of UTF-8.

Check out the Description section of the wikipedia article for more detail.
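To make that rule concrete, here is a small helper (my own naming, not from the answer) that classifies a UTF-8 byte by its high bits, using the same masks as the answers above:

public static class Utf8Bytes
{
    // 0xxxxxxx: a single-byte (ASCII) character.
    public static bool IsSingleByte(byte b) => (b & 0x80) == 0;

    // 10xxxxxx: a continuation byte, i.e. the 'middle' of a multi-byte character.
    public static bool IsContinuation(byte b) => (b & 0xC0) == 0x80;

    // 11xxxxxx: the first byte of a multi-byte character.
    public static bool IsMultiByteLead(byte b) => (b & 0xC0) == 0xC0;

    // Any byte that is not a continuation byte starts a character, so it is a safe place to cut.
    public static bool IsCharBoundary(byte b) => !IsContinuation(b);
}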

firda

Shorter version of ruffin's answer. Takes advantage of the design of UTF8:

    public static string LimitUtf8ByteCount(this string s, int n)
    {
        // quick test (we probably won't be trimming most of the time)
        if (Encoding.UTF8.GetByteCount(s) <= n)
            return s;
        // get the bytes
        var a = Encoding.UTF8.GetBytes(s);
        // if we are in the middle of a character (highest two bits are 10)
        if (n > 0 && ( a[n]&0xC0 ) == 0x80)
        {
            // remove all bytes whose two highest bits are 10
            // and one more (start of multi-byte sequence - highest bits should be 11)
            while (--n > 0 && ( a[n]&0xC0 ) == 0x80)
                ;
        }
        // convert back to string (with the limit adjusted)
        return Encoding.UTF8.GetString(a, 0, n);
    }
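For example (my own test value, not from the answer), limiting "déjà vu" to 5 bytes lands in the middle of the two-byte 'à', so the method backs up to the previous boundary and returns "déj":

// 'é' and 'à' are 2 bytes each in UTF-8, so "déjà vu" is 9 bytes.
// A 5-byte limit would split 'à'; the loop walks back to the start of that sequence.
string cut = "déjà vu".LimitUtf8ByteCount(5);
Console.WriteLine(cut);                             // déj
Console.WriteLine(Encoding.UTF8.GetByteCount(cut)); // 4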

Is there a reason that you need the database column to be declared in terms of bytes? That's the default, but it's not a particularly useful default if the database character set is variable width. I'd strongly prefer declaring the column in terms of characters.

CREATE TABLE length_example (
  col1 VARCHAR2( 10 BYTE ),
  col2 VARCHAR2( 10 CHAR )
);

This will create a table where COL1 will store 10 bytes of data and COL2 will store 10 characters' worth of data. Character length semantics make far more sense in a UTF8 database.

Assuming you want all the tables you create to use character length semantics by default, you can set the initialization parameter NLS_LENGTH_SEMANTICS to CHAR. At that point, any tables you create will default to using character length semantics rather than byte length semantics if you don't specify CHAR or BYTE in the field length.

Avi Pinto

Following Oren Trutner's comment, here are two more solutions to the problem.
In the first one, we count the number of bytes to remove from the end of the string, character by character from the end, so we don't re-evaluate the entire string in every iteration.

string str = "朣楢琴执执 瑩浻牡楧硰执执獧浻牡楧敬瑦 瀰 絸朣杢执獧扻捡杫潲湵 潣";
int maxBytesLength = 30;
var bytesArr = Encoding.UTF8.GetBytes(str);
int bytesToRemove = 0;
int lastIndexInString = str.Length -1;
while(bytesArr.Length - bytesToRemove > maxBytesLength)
{
   bytesToRemove += Encoding.UTF8.GetByteCount(new char[] {str[lastIndexInString]} );
   --lastIndexInString;
}
string trimmedString = Encoding.UTF8.GetString(bytesArr,0,bytesArr.Length - bytesToRemove);
//Encoding.UTF8.GetByteCount(trimmedString); // the actual byte length, will be <= maxBytesLength

And an even more efficient (and maintainable) solution: get the string from the byte array according to the desired length and cut the last character, because it might be corrupted:

string str = "朣楢琴执执 瑩浻牡楧硰执执獧浻牡楧敬瑦 瀰 絸朣杢执獧扻捡杫潲湵 潣";
int maxBytesLength = 30;    
string trimmedWithDirtyLastChar = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(str),0,maxBytesLength);
string trimmedString = trimmedWithDirtyLastChar.Substring(0,trimmedWithDirtyLastChar.Length - 1);

The only downside with the second solution is that we might cut a perfectly fine last character, but we are already cutting the string, so it might still fit the requirements (see the sketch below for one way to avoid trimming a good character).
Thanks to Shhade, who thought of the second solution.
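If dropping a good character is a concern, one possible tweak (my own suggestion, not part of the answer above) is to trim only when the decode actually produced the U+FFFD replacement character that Encoding.UTF8.GetString emits for an incomplete trailing sequence:

string trimmedWithDirtyLastChar = Encoding.UTF8.GetString(Encoding.UTF8.GetBytes(str), 0, maxBytesLength);
// Only strip the tail if it is the replacement character left by a broken multi-byte sequence.
string trimmedString = trimmedWithDirtyLastChar.TrimEnd('\uFFFD');
// Caveat: this would also trim a U+FFFD that was genuinely the last character of the input.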

This is another solution based on binary search:

public string LimitToUTF8ByteLength(string text, int size)
{
    if (size <= 0)
    {
        return string.Empty;
    }

    int maxLength = text.Length;
    int minLength = 0;
    int length = maxLength;

    while (maxLength >= minLength)
    {
        length = (maxLength + minLength) / 2;
        int byteLength = Encoding.UTF8.GetByteCount(text.Substring(0, length));

        if (byteLength > size)
        {
            maxLength = length - 1;
        }
        else if (byteLength < size)
        {
            minLength = length + 1;
        }
        else
        {
            return text.Substring(0, length); 
        }
    }

    // Round down the result
    string result = text.Substring(0, length);
    if (size >= Encoding.UTF8.GetByteCount(result))
    {
        return result;
    }
    else
    {
        return text.Substring(0, length - 1);
    }
}

All of the other answers appear to miss the fact that this functionality is already built into .NET, in the Encoder class. For bonus points, this approach will also work for other encodings.

public static String LimitByteLength(string input, int maxLength)
{
    if (string.IsNullOrEmpty(input) || Encoding.UTF8.GetByteCount(input) <= maxLength)
    {
        return input;
    }

    var encoder = Encoding.UTF8.GetEncoder();
    byte[] buffer = new byte[maxLength];
    char[] messageChars = input.ToCharArray();
    encoder.Convert(
        chars: messageChars,
        charIndex: 0,
        charCount: messageChars.Length,
        bytes: buffer,
        byteIndex: 0,
        byteCount: buffer.Length,
        flush: false,
        charsUsed: out int charsUsed,
        bytesUsed: out int bytesUsed,
        completed: out bool completed);

    // I don't think we can return message.Substring(0, charsUsed)
    // as that's the number of UTF-16 chars, not the number of codepoints
    // (think about surrogate pairs). Therefore I think we need to
    // actually convert bytes back into a new string
    return Encoding.UTF8.GetString(buffer, 0, bytesUsed);
}
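A quick check (my own example, assuming the corrected method above) of why the Encoder approach is nice: it never writes a partial character, so a 4-byte emoji that doesn't fit into the remaining budget is dropped whole instead of being split:

// "ab" takes 2 bytes; U+1F600 needs 4 more, which don't fit in a 5-byte budget,
// so Encoder.Convert stops after "ab" rather than emitting half a character.
Console.WriteLine(LimitByteLength("ab\U0001F600cd", 5)); // ab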
public static string LimitByteLength3(string input, Int32 maxLength)
{
    string result = input;

    int byteCount = Encoding.UTF8.GetByteCount(input);
    if (byteCount > maxLength)
    {
        var byteArray = Encoding.UTF8.GetBytes(input);
        // Note: a cut at an arbitrary byte offset can land in the middle of a multi-byte
        // sequence, in which case the decoded string ends with a replacement character.
        result = Encoding.UTF8.GetString(byteArray, 0, maxLength);
    }

    return result;
}