endianness

No UTF-32 big-endian in C#?

拥有回忆 submitted on 2019-12-05 06:44:12
In C#, Encoding.UTF32 is UTF-32 little-endian, Encoding.BigEndianUnicode is UTF-16 big-endian, and Encoding.Unicode is UTF-16 little-endian, but I can't find anything for UTF-32 big-endian. I'm developing a simple text viewer, and while I don't think many documents are encoded in UTF-32 big-endian, I want to be prepared for that too, just in case. Doesn't C# support UTF-32 big-endian? By the way, Java supports it.

C# does support big-endian UTF-32. Just create the encoding yourself using the overloaded constructor:

    Encoding e = new UTF32Encoding(true /*bigEndian*/, true /*byteOrderMark*/);

The encodings
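For comparison, the byte layout itself is easy to produce by hand. A minimal C++ sketch (not the C# API) that encodes a single code point as UTF-32 big-endian, assuming the input is a valid Unicode scalar value:

    #include <array>
    #include <cstdint>

    // UTF-32 big-endian is just the code point's 4 bytes, most significant first.
    std::array<std::uint8_t, 4> utf32_be(char32_t cp) {
        return { std::uint8_t(cp >> 24), std::uint8_t(cp >> 16),
                 std::uint8_t(cp >> 8),  std::uint8_t(cp) };
    }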

ByteBuffer getInt() question

谁都会走 submitted on 2019-12-05 06:03:37
We are using a Java ByteBuffer for socket communication with a C++ server. We know Java is big-endian and socket communication is also big-endian, so whenever a byte stream is received and put into a ByteBuffer, we call getInt() to get the value. No problem, no conversion.

But if we specifically set the ByteBuffer's byte order to little-endian (my co-worker actually did this), will Java automatically convert the big-endian data into little-endian when it is put into the ByteBuffer? Will getInt() on the little-endian buffer then return the right value? I guess the
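The short answer is no: ByteBuffer.order() only changes how getInt() interprets the bytes already in the buffer; nothing is converted on the way in. A C++ sketch of the effect (hypothetical wire data), showing that reading big-endian bytes with little-endian rules yields a byte-swapped value:

    #include <cstdint>
    #include <cstdio>

    std::uint32_t read_be(const std::uint8_t* p) {   // network order: MSB first
        return (std::uint32_t(p[0]) << 24) | (std::uint32_t(p[1]) << 16) |
               (std::uint32_t(p[2]) << 8)  |  std::uint32_t(p[3]);
    }

    std::uint32_t read_le(const std::uint8_t* p) {   // LSB first
        return (std::uint32_t(p[3]) << 24) | (std::uint32_t(p[2]) << 16) |
               (std::uint32_t(p[1]) << 8)  |  std::uint32_t(p[0]);
    }

    int main() {
        std::uint8_t wire[4] = {0x00, 0x00, 0x00, 0x2A}; // 42, sent in network (big-endian) order
        std::printf("as big-endian:    %u\n", read_be(wire)); // 42
        std::printf("as little-endian: %u\n", read_le(wire)); // 704643072 -- wrong interpretation
    }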

How to transform phrases and words into an MD5 hash?

会有一股神秘感。 submitted on 2019-12-05 05:03:26
Question: Can anyone please explain to me how to transform a phrase like "I want to buy some milk" into MD5? I read the Wikipedia article on MD5, but the explanation given there is beyond my comprehension:

"MD5 processes a variable-length message into a fixed-length output of 128 bits. The input message is broken up into chunks of 512-bit blocks (sixteen 32-bit little endian integers)"

"Sixteen 32-bit little endian integers" is already hard for me. I checked the Wikipedia article on little endian and didn't
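To unpack that sentence: after padding, the message is processed in 64-byte (512-bit) blocks, and each block is read as sixteen 32-bit words assembled least significant byte first. A C++ sketch of just that unpacking step, not a full MD5 implementation:

    #include <cstdint>

    // Split one 64-byte MD5 block into sixteen 32-bit little-endian words.
    void block_to_words(const std::uint8_t block[64], std::uint32_t w[16]) {
        for (int i = 0; i < 16; ++i) {
            const std::uint8_t* p = block + 4 * i;
            w[i] = std::uint32_t(p[0])         // least significant byte first
                 | std::uint32_t(p[1]) << 8
                 | std::uint32_t(p[2]) << 16
                 | std::uint32_t(p[3]) << 24;
        }
    }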

What endianness does Python use to write into files?

社会主义新天地 submitted on 2019-12-05 04:30:11
Question: When using file.write() with the 'wb' flag, does Python use big-endian, little-endian, or the sys.byteorder value? How can I be sure the endianness is not random? I am asking because I am mixing ASCII and binary data in the same file; for the binary data I use struct.pack() and force it to little-endian, but I am not sure what happens to the ASCII data!

Edit 1: Since the downvote, I'll explain my question further. I am writing a file with ASCII and binary data on an x86 PC; the file will be sent over
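The point generalizes beyond Python: individual bytes (ASCII characters) have no byte order; endianness only arises when a multi-byte value is serialized, which is why forcing it in struct.pack() is enough. A C++ sketch of the same mixed ASCII-plus-binary layout with an explicit little-endian field (the file name is made up for illustration):

    #include <cstdint>
    #include <cstdio>

    int main() {
        std::FILE* f = std::fopen("mixed.bin", "wb");  // hypothetical output file
        if (!f) return 1;

        std::fputs("HDR:", f);                 // ASCII bytes: written in order, no endianness

        std::uint32_t value = 0x01020304;      // binary field: serialize little-endian by hand
        std::uint8_t le[4] = {
            std::uint8_t(value),       std::uint8_t(value >> 8),
            std::uint8_t(value >> 16), std::uint8_t(value >> 24),
        };
        std::fwrite(le, 1, sizeof le, f);      // same bytes on any host

        std::fclose(f);
    }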

When is htonl(x) != ntohl(x)? (Or when is converting to and from Network Byte Order not equivalent on the same machine?)

青春壹個敷衍的年華 submitted on 2019-12-05 04:28:07
Regarding htonl and ntohl: when would either of these two lines of code evaluate to false?

    htonl(x) == ntohl(x);
    htonl(ntohl(x)) == htonl(htonl(x));

In other words, when are these two operations not equivalent on the same machine? The only scenario I can think of is a machine that does not use two's complement to represent integers. Is the reason largely historical, for coding clarity, or for something else? Do any modern architectures or environments exist today where converting to and from network byte order on the same machine is not the same code in either direction? I
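One way to see why both expressions hold on mainstream hardware: on a big-endian host, htonl and ntohl are both the identity; on a little-endian host, both are the same unconditional 32-bit byte swap, which is its own inverse. A sketch of that swap, essentially what a little-endian libc does for both functions:

    #include <cstdint>

    // On little-endian hosts, htonl and ntohl are both this same involution;
    // on big-endian hosts, both are the identity. Either way htonl(x) == ntohl(x).
    std::uint32_t bswap32(std::uint32_t x) {
        return (x << 24) | ((x & 0x0000FF00u) << 8) |
               ((x >> 8) & 0x0000FF00u) | (x >> 24);
    }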

How is data stored at the bit level according to “Endianness”?

余生颓废 submitted on 2019-12-05 03:38:59
Question: I read about endianness and understood squat... so I wrote this:

    main() {
        int k = 0xA5B9BF9F;
        BYTE *b = (BYTE*)&k; // value at *b is 9F
        b++;                 // value at *b is BF
        b++;                 // value at *b is B9
        b++;                 // value at *b is A5
    }

k was equal to A5 B9 BF 9F, and the byte-pointer "walk" output was 9F BF B9 A5, so I get it: bytes are stored backwards... OK. So now I wondered how it is stored at the BIT level... I mean, is "9F" (1001 1111) stored as "F9" (1111 1001)? So I wrote this:

    int _tmain(int argc, _TCHAR* argv[]) { int k
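On the bit-level question: endianness orders addressable bytes, and bits within a byte are not individually addressable, so 0x9F is never stored "reversed" as 0xF9. A portable C++ sketch of the byte walk above, using unsigned char instead of the Windows BYTE typedef:

    #include <cstdio>

    int main() {
        unsigned int k = 0xA5B9BF9F;
        const unsigned char* b = reinterpret_cast<const unsigned char*>(&k);
        for (unsigned i = 0; i < sizeof k; ++i)
            std::printf("byte %u: %02X\n", i, b[i]);  // 9F BF B9 A5 on little-endian
        // Each printed byte is still 0x9F etc. -- the bits inside a byte don't reverse.
    }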

SQL Server binary(128) convert from little endian to big endian

橙三吉。 submitted on 2019-12-05 02:48:53
How do you convert a binary(128) from little-endian to big-endian in SQL Server? Try something like this:

    declare @little binary(4)
    set @little = 0x02010000
    select @little [bigEndian], cast(reverse(@little) as binary(4)) [littleEndian]

Output:

    bigEndian  littleEndian
    ---------- ------------
    0x02010000 0x00000102

    (1 row(s) affected)

Source: https://stackoverflow.com/questions/2416557/sql-server-binary128-convert-from-little-endian-to-big-endian
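The same trick, reversing the byte sequence, is how you would do it outside SQL as well. A C++ sketch of the equivalent in-memory operation:

    #include <algorithm>
    #include <cstddef>
    #include <cstdint>

    // Flip a buffer between little- and big-endian by reversing its bytes,
    // the in-memory equivalent of T-SQL's REVERSE on a binary(n) value.
    void flip_endianness(std::uint8_t* buf, std::size_t len) {
        std::reverse(buf, buf + len);
    }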

Convert uint64_t to byte array portably and optimally in Clang

China☆狼群 submitted on 2019-12-05 00:20:19
Suppose you want to convert a uint64_t to a uint8_t[8] (little-endian). On a little-endian architecture you can just do an ugly reinterpret_cast<> or memcpy(), e.g.:

    void from_memcpy(const std::uint64_t &x, uint8_t* bytes) {
        std::memcpy(bytes, &x, sizeof(x));
    }

This generates efficient assembly:

    mov rax, qword ptr [rdi]
    mov qword ptr [rsi], rax
    ret

However, it is not portable: it will have different behaviour on a big-endian machine. For converting uint8_t[8] to uint64_t there is a great solution - just do this:

    void to(const std::uint8_t* bytes, std::uint64_t &x) {
        x = (std::uint64_t(bytes[0]) <<
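A portable counterpart for the original direction can be written the same shift-based way; this is a sketch, but mainstream compilers (Clang, GCC) typically recognize the pattern and emit the same two-instruction sequence on little-endian targets:

    #include <cstdint>

    // Portable uint64_t -> little-endian bytes: no reinterpretation of memory,
    // so it produces identical output on big- and little-endian hosts.
    void from(std::uint64_t x, std::uint8_t* bytes) {
        for (int i = 0; i < 8; ++i)
            bytes[i] = std::uint8_t(x >> (8 * i));
    }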

Finding endian-ness programmatically at compile-time using C++11

本秂侑毒 submitted on 2019-12-04 23:50:29
I have referred to many questions on SO on this topic, but couldn't find any solution so far. One natural solution was mentioned here: Determining endianness at compile time. However, related problems are mentioned in the comments on that same answer. With some modifications, I am able to compile a similar solution with g++ and clang++ (-std=c++11) without any warning:

    static_assert(sizeof(char) == 1, "sizeof(char) != 1");

    union U1 {
        int i;
        char c[sizeof(int)];
    };

    union U2 {
        char c[sizeof(int)];
        int i;
    };

    constexpr U1 u1 = {1};
    constexpr U2 u2 = {{1}};

    constexpr bool IsLittleEndian() {
        return u1
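For reference, newer standards make the union trick unnecessary. A sketch assuming a C++20 compiler, where std::endian in <bit> answers the question directly at compile time:

    #include <bit>  // C++20

    constexpr bool kLittleEndian = (std::endian::native == std::endian::little);

    // Fails to compile on exotic mixed-endian targets, where native matches neither.
    static_assert(std::endian::native == std::endian::little ||
                  std::endian::native == std::endian::big,
                  "mixed-endian platform");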

What should I #include to use 'htonl'?

♀尐吖头ヾ submitted on 2019-12-04 23:40:48
I want to use the htonl function in my Ruby C extension, but I don't want to pull in any of the other internet stuff that comes with it. What would be the most minimal file to #include that is still portable? Looking through the header files on my computer, I can see that either machine/endian.h or sys/_endian.h would let me use it, although I am not sure that is a good idea.

The standard header is:

    #include <arpa/inet.h>

You don't have to worry about the other stuff defined in that header. It won't affect your compiled code, and should have only a minor effect on compilation time. EDIT:
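A minimal usage sketch, assuming a POSIX system (where <arpa/inet.h> is the portable home of htonl/ntohl):

    #include <arpa/inet.h>  // htonl, ntohl
    #include <cstdint>
    #include <cstdio>

    int main() {
        std::uint32_t host = 0x11223344u;
        std::uint32_t net  = htonl(host);  // byte swap on little-endian hosts, no-op on big-endian
        std::printf("host = 0x%08x, network = 0x%08x\n", host, net);
        return ntohl(net) == host ? 0 : 1; // the round trip always restores the original
    }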