endianness

When to use hton/ntoh and when to convert data myself?

Submitted by 杀马特。学长 韩版系。学妹 on 2019-12-18 09:04:23
Question: To convert a byte array from another machine which is big-endian, we can use:

    long long convert(unsigned char data[]) {
        long long res = 0;
        for (int i = 0; i < DATA_SIZE; ++i)
            res = (res << 8) + data[i];
        return res;
    }

If the other machine is little-endian, we can use:

    long long convert(unsigned char data[]) {
        long long res = 0;
        for (int i = DATA_SIZE - 1; i >= 0; --i)
            res = (res << 8) + data[i];
        return res;
    }

Why do we need the above functions? Shouldn't we use hton at the sender and ntoh when…

How to test your code on a machine with big-endian architecture?

Submitted by 試著忘記壹切 on 2019-12-18 04:35:08
Question: Both ideone.com and codepad.org have little-endian architectures. I want to test my code on some machine with a big-endian architecture (for example Solaris, which I don't have). Is there some easy way that you know about?

Answer 1: Googling "big endian online emulator" led me to PearPC. I assume that if you have the patience you can install Mandrake Linux, get gcc, and go party.

Answer 2: QEMU supports emulating several big-endian architectures. Note that some architectures support both…

Dealing with endianness in C++

Submitted by 非 Y 不嫁゛ on 2019-12-18 04:26:16
Question: I am working on translating a system from Python to C++. I need to be able to perform actions in C++ that are generally performed using Python's struct.unpack (interpreting binary strings as numerical values). For integer values, I am able to get this to (sort of) work using the data types in stdint.h:

    struct.unpack("i", str) ==> *(int32_t*) str; // str is a char* containing the data

This works properly for little-endian binary strings, but fails on big-endian binary strings. Basically, I…

How do you write (portably) reverse network byte order?

Submitted by 最后都变了- on 2019-12-18 02:48:26
Question: Background: when designing binary file formats, it's generally recommended to write integers in network byte order. For that, there are macros like htonl(). But a format such as WAV actually uses the little-endian format.

Question: How do you portably write little-endian values, regardless of whether the CPU your code runs on has a big-endian or little-endian architecture? (Ideas: can the standard macros ntohl() and htonl() be used "in reverse" somehow? Or should the code just test at runtime…

Why are both little- and big-endian in use?

Submitted by 只愿长相守 on 2019-12-17 22:13:34
Question: Why are both little- and big-endian still in use today, after ~40 years of binary computer science? Are there algorithms or storage formats that work better with one and much worse with the other? Wouldn't it be better if we all switched to one and stuck with it?

Answer 1: When adding two numbers (on paper or in a machine), you start with the least significant digits and work towards the most significant digits. (The same goes for many other operations.) On the Intel 8088, which had 16-bit registers…

When does Endianness become a factor?

Submitted by 霸气de小男生 on 2019-12-17 21:42:17
Question: Endianness, from what I understand, is when the bytes that compose a multibyte word differ in their order, at least in the most typical case, so that a 16-bit integer may be stored as either 0xHHLL or 0xLLHH. Assuming I don't have that wrong, what I would like to know is when endianness becomes a major factor when sending information between two computers whose endianness may or may not differ. If I transmit a short integer of 1, in the form of a char array and with no correction,…

What are the benefits of the different endiannesses?

Submitted by 生来就可爱ヽ(ⅴ&lt;●) on 2019-12-17 19:19:50
Question: Why did some processor manufacturers decide to use little endian, big endian, middle endian, or any others? I've heard that with big endian one can find out faster whether a number is negative or positive, because that bit comes first. (This doesn't matter on modern CPUs, as individual bits can't be accessed anymore.)

Answer 1: The benefit of little-endianness is that a variable can be read at any length using the same address. For example, a 32-bit variable can be read as an 8-bit or 16-bit variable…

Java : DataInputStream replacement for endianness

Submitted by 半腔热情 on 2019-12-17 19:19:47
Question: Below is my code that replaces DataInputStream: it wraps an InputStream but provides extra methods to read little-endian data types in addition to the normal methods that read big-endian types. Feel free to use it if you'd like. I have a few reservations, as follows. Notice the methods that do not change functionality (the ones that read big-endian types). There is no way I could use DataInputStream as the base class and reuse its methods, like read(), readInt(), readChar(),…

How to convert double between host and network byte order?

Submitted by 徘徊边缘 on 2019-12-17 16:39:33
Question: Could somebody tell me how to convert double-precision values into network byte order? I tried the

    uint32_t htonl(uint32_t hostlong);
    uint16_t htons(uint16_t hostshort);
    uint32_t ntohl(uint32_t netlong);
    uint16_t ntohs(uint16_t netshort);

functions and they worked well, but none of them handles double (float) conversion, because these types are different on every architecture. Through XDR I found the double-precision format representation (http://en.wikipedia.org/wiki/Double_precision), but no…

Is using a union in place of a cast well defined?

Submitted by 余生长醉 on 2019-12-17 12:47:35
Question: I had a discussion this morning with a colleague regarding the correctness of a "coding trick" to detect endianness. The trick was:

    bool is_big_endian() {
        union {
            int i;
            char c[sizeof(int)];
        } foo;
        foo.i = 1;
        return (foo.c[0] == 1);
    }

To me, this usage of a union seems incorrect, because setting one member of a union and reading another is not well-defined. But I have to admit that this is just a feeling, and I lack actual proof to strengthen my point. Is this trick correct? Who…