endianness

How are array values stored in Little Endian vs. Big Endian architecture

心已入冬 submitted on 2019-12-08 00:21:46
Question: I am inquiring about how to tell when one element in an array ends and the next begins on a given endian architecture. I have two arrays, where sizeof(long) is 8 and sizeof(char) is 1: long x[2] = {0x012345, 0xFEDC}; char c[12] = {'a','b','c','d','e','f','g','h','0','1','2','3'}; I was wondering how these values would be stored on the different endian architectures, assuming x starts at memory address 0x100 and c starts at memory address 0x200. I thought that the Big…

Endianness conversion and g++ warnings

夙愿已清 submitted on 2019-12-07 20:52:05
Question: I've got the following C++ code: template <int isBigEndian, typename val> struct EndiannessConv { inline static val fromLittleEndianToHost( val v ) { union { val outVal __attribute__ ((used)); uint8_t bytes[ sizeof( val ) ] __attribute__ ((used)); } ; outVal = v; std::reverse( &bytes[0], &bytes[ sizeof(val) ] ); return outVal; } inline static void convertArray( val v[], uint32_t size ) { // TODO : find a way to map the array for (uint32_t i = 0; i < size; i++)…

How to properly get little-endian integer in java

自闭症网瘾萝莉.ら submitted on 2019-12-07 19:39:49
Question: I need to get a 64-bit little-endian integer as a byte array, with the upper 32 bits zeroed and the lower 32 bits containing some integer, say 51. Currently I am doing it this way: byte[] header = ByteBuffer.allocate(8).order(ByteOrder.LITTLE_ENDIAN).putInt(51).array(); But I'm not sure whether that is the right way. Am I doing it right? Answer 1: What about trying the following: private static byte[] encodeHeader(long size) { if (size < 0 || size >= (1L << Integer.SIZE)) { throw new…

how are integers stored in memory?

冷暖自知 submitted on 2019-12-07 12:23:29
Question: I got confused while reading an article about big/little endian. Code goes below: #include <iostream> using namespace std; int i = 12345678; int main() { char *p = (char*)&i; //line-1 if(*p == 78) //line-2 cout << "little endian" << endl; if(*p == 12) cout << "big endian" << endl; } Question: in line-1, can I do the conversion using static_cast<char*>(&i)? In line-2, according to the code, if it's little-endian then 78 is stored in the lowest byte; otherwise 12 is stored in the lowest byte.

Inline ntohs() / ntohl() in C++ / Boost ASIO

妖精的绣舞 submitted on 2019-12-07 11:57:00
Question: Hi, I'm using C++ / Boost ASIO and I have to inline ntohl() for performance reasons. Each data packet contains 256 int32s, hence a lot of calls to ntohl(). Has anyone done this? Here is the compiled assembly output from VC10++ with all optimizations turned on: ; int32_t d = boost::asio::detail::socket_ops::network_to_host_long(*pdw++); mov esi, DWORD PTR _pdw$[esp+64] mov eax, DWORD PTR [esi] push eax call DWORD PTR __imp__ntohl@4 I've also tried the regular ntohl() provided by Winsock. Any…

Bitwise Not Operator (~ in C) with regards to little endian and big endian

我们两清 submitted on 2019-12-07 08:30:41
Question: This relates to a homework assignment, but it is not the assignment itself. I'm having difficulty understanding whether there is a difference in how bitwise NOT (~ in C) affects signed int and unsigned int when compiled on a big-endian machine vs. a little-endian machine. Are the bytes really "backwards", and if so, does bitwise NOT (and other operators) produce different resulting ints depending on the machine type? While we are at it, is the answer the same…

Bitfield endianness in gcc

随声附和 submitted on 2019-12-07 06:03:01
Question: The endianness of bit-fields is implementation-defined. Is there a way to check at compile time, via some macro or compiler flag, what gcc's bit-field endianness actually is? In other words, given something like: struct X { uint32_t a : 8; uint32_t b : 24; }; is there a way to know at compile time whether a is the first or last byte in X? Answer 1: On Linux systems you can check the __BYTE_ORDER macro to see whether it is __LITTLE_ENDIAN or __BIG_ENDIAN. While this is not…

Fast little-endian to big-endian conversion in ASM

北城余情 submitted on 2019-12-07 04:10:05
Question: I have an array of uint types in C#. After checking whether the program is running on a little-endian machine, I want to convert the data to big-endian. Because the amount of data can become very large, but its length is always even, I was thinking of treating two uint values as one ulong for better performance and programming it in ASM, so I am searching for a very fast (the fastest possible) assembler algorithm to convert little-endian to big-endian. Answer 1: For a large amount of data, the bswap…

How can I reverse the byte order of an NSInteger or NSUInteger in objective-c

时间秒杀一切 submitted on 2019-12-07 02:54:38
Question: This is somewhat of a follow-up to this posting, but with a different question, so I felt I should ask in a separate thread. I am at the point where I have four consecutive bytes in memory that I have read in from a file. I'd like to store these as a bit array (their actual int value does not matter until later). When I print out what is in my int, I notice that it seems to be stored in reverse order (little-endian). Does anyone have a good method for reversing the order of the bytes?

No UTF-32 big-endian in C#?

放肆的年华 submitted on 2019-12-07 02:05:41
Question: In C#, Encoding.UTF32 is UTF-32 little-endian, Encoding.BigEndianUnicode is UTF-16 big-endian, and Encoding.Unicode is UTF-16 little-endian, but I can't find one for UTF-32 big-endian. I'm developing a simple text viewer and don't think many documents are encoded in UTF-32 big-endian, but I want to be prepared for that too, just in case. Doesn't C# support UTF-32 big-endian? BTW, Java supports it. Answer 1: It does support big-endian UTF-32. Just create the encoding yourself using the overloaded…