Question
I'm wondering how to write Unicode (UTF-8) text to a binary file. Here's the background: I've got a 40-byte header (10 ints) and a table with a variable number of triple-int structs. Writing these was cake.
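Roughly, packing that fixed-size part looks like this (a sketch with hypothetical values and file name, assuming 4-byte ints):
import struct

header = struct.pack('10i', *range(10))   # ten 4-byte ints = 40-byte header
triple = struct.pack('3i', 1, 2, 3)       # one triple-int record
with open('data.bin', 'wb') as f:
    f.write(header)
    f.write(triple)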
Now, I want to add a bunch of strings to the end of the file.
Writing regular ASCII-based strings is easy:
import struct

value = 'ab'
s = struct.Struct('2s')   # '2s' packs exactly two bytes
packed_data = s.pack(value)
I learned how to do this from "Interpret strings as packed binary data" (the struct module documentation).
But is there a way to do this for Unicode (UTF-8) strings?
Any ideas? Has anyone done this before?
Answer 1:
Unicode != UTF-8. UTF-8 is a binary encoding of Unicode, so write a UTF-8-encoded string exactly as you would an ASCII string. There's no need to pack an encoded string either; it's already "just a bunch of bytes".
# coding: utf8
import struct
text = u'我是美国人。'
encoded_text = text.encode('utf8')
# proof packing is redundant...
format = '{0}s'.format(len(encoded_text))
packed_text = struct.pack(format, encoded_text)
print encoded_text == packed_text # result: True
So just encode your Unicode strings and append them to the file after writing your packed ints.
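A minimal sketch of that last step (the file name and strings are hypothetical, and writing the header and table is elided):
# coding: utf8

strings = [u'我是美国人。', u'hello']

# ... the packed header and triple-int records have already been written ...
with open('data.bin', 'ab') as f:      # append in binary mode
    for s in strings:
        f.write(s.encode('utf8'))      # encoded text is just bytes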
Answer 2:
unicode.encode('utf-8') returns a byte string encoded as UTF-8; just take the length of that encoded result before packing, since its byte length can differ from the character count.
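For instance, a length prefix makes the string recoverable later (a sketch; the unsigned-int 'I' prefix is my own choice, not part of the answer):
# coding: utf8
import struct

text = u'我是美国人。'
encoded = text.encode('utf-8')
# len(encoded) counts bytes, not characters, which is what the format needs.
record = struct.pack('I{0}s'.format(len(encoded)), len(encoded), encoded)
Reading it back is the reverse: unpack the length first, then read that many bytes and decode them as UTF-8.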
Source: https://stackoverflow.com/questions/4542961/writing-unicode-to-binary-file-in-python