endianness

How to BSWAP the lower 32-bit of 64-bit register?

Submitted by 拜拜、爱过 on 2019-12-05 23:22:42
Question: I've been looking for the answer for how to use BSWAP on the lower 32-bit sub-register of a 64-bit register. For example, 0x0123456789abcdef is inside the RAX register, and I want to change it to 0x01234567efcdab89 with a single instruction (because of performance). So I tried the following inline function:

    #define BSWAP(T) { \
        __asm__ __volatile__ ( \
            "bswap %k0" \
            : "=q" (T) \
            : "q" (T)); \
    }

And the result was 0x00000000efcdab89. I don't understand why the compiler acts like this. Does anybody know
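For context (my note, not part of the original question): on x86-64, any instruction that writes a 32-bit register zero-extends the result into the full 64-bit register, and bswap %k0 is no exception, which is why the upper half of RAX comes back as zero. A minimal sketch of one way to compute the intended value without inline asm, assuming the GCC/Clang __builtin_bswap32 builtin is available:

    #include <stdint.h>

    /* Byte-swap only the low 32 bits of a 64-bit value, preserving the high half. */
    static inline uint64_t bswap_low32(uint64_t x)
    {
        uint64_t high = x & 0xffffffff00000000ULL;       /* keep upper 32 bits      */
        uint32_t low  = __builtin_bswap32((uint32_t)x);  /* swap lower 32-bit bytes */
        return high | low;
    }

    /* bswap_low32(0x0123456789abcdefULL) == 0x01234567efcdab89ULL */

As far as I know there is no single x86-64 instruction for this, precisely because 32-bit destinations always zero-extend.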

Best approach to write/read binary data in Little or Big Endian with C#?

Submitted by 99封情书 on 2019-12-05 20:55:12
OK, if I've got a binary file encoded in either little endian or big endian under .NET, what is the best way to read/write it? In the .NET Framework I've only managed to find BinaryWriter/BinaryReader, which use little endian by default, so my approach was to implement my own BinaryReader/BinaryWriter for reading/writing data in big endian, but I wonder if there is a better approach. I like this one: Miscellaneous Utility Library

Source: https://stackoverflow.com/questions/80784/best-approach-to-write-read-binary-data-in-little-or-big-endian-with-c
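The underlying shift-based technique is language-independent, so here is a minimal C sketch of it (my illustration, not the .NET or Miscellaneous Utility Library API): assemble the value from individual bytes, and the host's endianness never matters.

    #include <stdint.h>

    /* Read a 32-bit big-endian value from a byte buffer, regardless of host order. */
    uint32_t read_u32_be(const uint8_t *p)
    {
        return ((uint32_t)p[0] << 24) |
               ((uint32_t)p[1] << 16) |
               ((uint32_t)p[2] <<  8) |
                (uint32_t)p[3];
    }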

Reading double to platform endianness with union and bit shift, is it safe?

Submitted by 末鹿安然 on 2019-12-05 20:13:39
All the examples I've seen of reading a double of known endianness from a buffer to the platform endianness involve detecting the current platform's endianness and performing byte-swapping when necessary. On the other hand, I've seen another way of doing the same thing for integers that uses bit shifting (one such example). This got me thinking that it might be possible to use a union and the bit-shift technique to read doubles (and floats) from buffers, and a quick test implementation seemed to work (at least with clang on x86_64):

    #include <stdio.h>
    #include <stdint.h>
    #include
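A minimal sketch of the technique the question describes (my reconstruction, not the asker's full test program): build a uint64_t from the buffer with shifts, then reinterpret it as a double through a union. Type-punning through a union is well-defined in C, though formally undefined in C++.

    #include <stdint.h>

    /* Read a big-endian double from a byte buffer via shifts and a union. */
    double read_double_be(const uint8_t *p)
    {
        union { uint64_t u; double d; } conv = { 0 };
        for (int i = 0; i < 8; i++)
            conv.u = (conv.u << 8) | p[i];   /* most significant byte first */
        return conv.d;
    }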

Swap bits in c++ for a double

Submitted by 时光总嘲笑我的痴心妄想 on 2019-12-05 17:37:40
I'm trying to change from big endian to little endian on a double. One way to go is to use:

    double val, tmp = 5.55;
    ((unsigned int *)&val)[0] = ntohl(((unsigned int *)&tmp)[1]);
    ((unsigned int *)&val)[1] = ntohl(((unsigned int *)&tmp)[0]);

But then I get a warning: "dereferencing type-punned pointer will break strict-aliasing rules", and I don't want to turn this warning off. Another way to go is:

    #define ntohll(x) ( ( (uint64_t)(ntohl( (uint32_t)((x << 32) >> 32) )) << 32) | ntohl( ((uint32_t)(x >> 32)) ) )
    val = (double)bswap_64(unsigned long long(tmp)); //or
    val = (double)ntohll(unsigned long
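One common way to sidestep the strict-aliasing warning (a sketch I'm adding, not from the question) is to move the bytes through memcpy, which modern compilers optimize away entirely; this assumes the GCC/Clang __builtin_bswap64 builtin:

    #include <stdint.h>
    #include <string.h>

    /* Byte-swap a double without violating strict-aliasing rules. */
    double swap_double(double d)
    {
        uint64_t u;
        memcpy(&u, &d, sizeof u);   /* safe reinterpretation */
        u = __builtin_bswap64(u);   /* swap all 8 bytes      */
        memcpy(&d, &u, sizeof d);
        return d;
    }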

How to determine the endian mode the processor is running in?

Submitted by 会有一股神秘感。 on 2019-12-05 17:15:41
How do I determine the endian mode the ARM processor is running in, using only assembly language? I can easily see the Thumb/ARM state by reading bit 5 of the CPSR, but I don't know if there is a corresponding bit in the CPSR or elsewhere for endianness.

    ; silly example trying to execute ARM code when I may be in Thumb mode....
    MRS  R0, CPSR
    ANDS R0, #0x20
    BNE  ThumbModeIsActive
    B    ARMModeIsActive

I've got access to the ARM7TDMI data sheet, but this document does not tell me how to read the current state. What assembly code do I use to determine the endianness? Let's assume I'm using an ARM9 processor.
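The question asks for pure assembly, but the underlying trick is the same in any language: store a known multi-byte value to memory and inspect which byte comes first. A C sketch of that trick, offered only as an illustration of the idea rather than an answer to the CPSR question:

    #include <stdint.h>

    /* Returns 1 on a little-endian machine, 0 on a big-endian one. */
    int is_little_endian(void)
    {
        uint32_t probe = 1;
        return *(const uint8_t *)&probe == 1;   /* is the low byte stored first? */
    }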

Command-line to reverse byte order/change endianess

Submitted by 回眸只為那壹抹淺笑 on 2019-12-05 14:51:24
Question: I'm hacking around in some scripts trying to parse some data written by Java's DataOutputStream#writeLong(...). Since Java always seems to write big endian, I have a problem feeding the bytes to od. This is due to the fact that od always assumes that the endianness matches the endianness of the arch that you are currently on, and I'm on a little-endian machine. I'm looking for an easy one-liner to reverse the byte order. Let's say that you know that the last 8 bytes of a file is a long written
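Not a one-liner, but for context here is how Java's big-endian long decodes portably in C (a sketch under the assumption that the 8 bytes have already been read into a buffer):

    #include <stdint.h>

    /* Decode the output of DataOutputStream#writeLong: 8 big-endian bytes. */
    uint64_t decode_java_long(const uint8_t *p)
    {
        uint64_t v = 0;
        for (int i = 0; i < 8; i++)
            v = (v << 8) | p[i];   /* most significant byte first */
        return v;
    }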

Supporting byte ordering in Linux user space

Submitted by 巧了我就是萌 on 2019-12-05 14:26:55
I'm writing a program on Linux in C to analyze core files produced by an embedded system. The core files might be little endian (ARM) or big endian (MIPS), and the program to analyze them might be running on a little-endian host (x86) or a big-endian one (PowerPC). By looking at the headers I know whether the core is LE or BE. I'd rather my program not need to know whether the host it runs on is little or big endian; I'd like to use an API to handle it for me. If there is no better option, I guess I'll start relying on #ifdef __BIG_ENDIAN__. In the Linux kernel we have cpu_to_le32 et al. to convert
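On glibc (and compatible libcs), the user-space analogue of the kernel helpers lives in <endian.h>: htobe16/32/64, htole16/32/64, and the matching be16toh/le16toh families. A minimal sketch, assuming a glibc-style <endian.h> is available:

    #define _DEFAULT_SOURCE   /* expose the byte-order conversion macros on glibc */
    #include <endian.h>
    #include <stdint.h>

    /* Convert fields read from a core file to host byte order. */
    uint32_t field_from_le(uint32_t raw) { return le32toh(raw); }  /* ARM core  */
    uint32_t field_from_be(uint32_t raw) { return be32toh(raw); }  /* MIPS core */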

Converting grouped hex characters into a bitstring in Perl

Submitted by 六月ゝ 毕业季﹏ on 2019-12-05 13:05:43
I have some 256-character strings of hexadecimal characters which represent a sequence of bit flags, and I'm trying to convert them back into a bitstring so I can manipulate them with &, |, vec and the like. The hex strings are written in integer-wide big-endian groups, such that a group of 8 hex characters like "76543210" should translate to the bitstring "\x10\x32\x54\x76", i.e. the lowest 8 bits are 00001000. The problem is that pack's "h" format works on one byte of input at a time, rather than 8, so the results from just using it directly won't be in the right order. At the moment I'm doing
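To make the regrouping concrete, here is a C sketch of the same transformation (my illustration; the ordering problem is independent of Perl): each 8-character group is a 32-bit big-endian hex number whose bytes must be emitted lowest byte first.

    #include <stdint.h>
    #include <stdlib.h>

    /* Turn one 8-hex-char big-endian group into 4 little-endian bytes,
       e.g. "76543210" -> 0x10, 0x32, 0x54, 0x76. */
    void group_to_bytes(const char *group, uint8_t out[4])
    {
        uint32_t v = (uint32_t)strtoul(group, NULL, 16);
        for (int i = 0; i < 4; i++)
            out[i] = (uint8_t)(v >> (8 * i));   /* lowest byte first */
    }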

Relation between endianness and stack-growth direction

Submitted by 不问归期 on 2019-12-05 13:02:48
Is there a relation between the endianness of a processor and the direction of stack growth? For example, the x86 architecture is little endian and its stack grows downwards (i.e. it starts at the highest address and grows towards lower addresses with each push operation). Similarly, in the SPARC architecture, which is big endian, the stack starts at the lowest address and grows upwards towards higher addresses. This pattern is seen in almost all architectures. I believe there must be a reason for this unstated convention. Can this be explained from a computer-architecture or OS point of view? Is this

Send a struct over a socket with correct padding and endianness in C

Submitted by 元气小坏坏 on 2019-12-05 10:53:02
I have several structures defined to send over different operating systems (TCP networks). The defined structures are:

    struct Struct1 {
        uint32_t num;
        char str[10];
        char str2[10];
    };
    struct Struct2 {
        uint16_t num;
        char str[10];
    };
    typedef struct Struct1 a;
    typedef struct Struct2 b;

The data is stored in a text file, in a format such as:

    123
    Pie
    Crust

Struct1 (a) is stored as three separate parameters. Struct2, however, takes two parameters, with both the 2nd and 3rd lines stored in char str[]. The problem is that when I write to a server over the multiple networks, the data is not received correctly. There are numerous
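A common fix for both problems (a sketch I'm adding, not the asker's code) is to serialize each field explicitly into a byte buffer in network byte order, so neither compiler padding nor host endianness reaches the wire:

    #include <stdint.h>
    #include <string.h>
    #include <arpa/inet.h>   /* htonl */

    struct Struct1 { uint32_t num; char str[10]; char str2[10]; };

    /* Serialize Struct1 field by field: fixed layout, big-endian integer. */
    size_t pack_struct1(const struct Struct1 *s, uint8_t buf[24])
    {
        uint32_t num = htonl(s->num);
        memcpy(buf,      &num,    sizeof num);      /* 4 bytes, network order */
        memcpy(buf + 4,  s->str,  sizeof s->str);   /* 10 raw bytes           */
        memcpy(buf + 14, s->str2, sizeof s->str2);  /* 10 raw bytes           */
        return 24;                                  /* total wire size        */
    }

The receiver does the mirror image with ntohl, so the bytes on the wire are identical on every platform.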