endianness

Extracting record from big endian data

空扰寡人 submitted on 2019-12-10 17:35:48
Question: I have the following code for a network protocol implementation. As the protocol is big endian, I wanted to use the Bit_Order attribute with the High_Order_First value, but it seems I made a mistake.

    with Ada.Unchecked_Conversion;
    with Ada.Text_IO; use Ada.Text_IO;
    with System;      use System;

    procedure Bit_Extraction is
       type Byte is range 0 .. (2**8) - 1 with Size => 8;
       type Command is (Read_Coils, Read_Discrete_Inputs) with Size => 7;
       for Command use (Read_Coils => 1, Read_Discrete_Inputs => 4);
       type
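Whatever fix the Ada attributes need, the portable fallback is to extract big-endian fields from the raw bytes with explicit shifts, which works regardless of compiler bit-numbering support. A minimal sketch of that idea, in C like the other examples on this page (the two-byte frame and the field position are illustrative, not the asker's actual protocol):

    #include <stdint.h>
    #include <stdio.h>

    /* Read a 16-bit big-endian field starting at p. */
    static uint16_t be16(const uint8_t *p)
    {
        return (uint16_t)((p[0] << 8) | p[1]);
    }

    int main(void)
    {
        const uint8_t frame[] = { 0x00, 0x04 };  /* e.g. command code 4 */
        printf("field = %u\n", be16(frame));     /* prints 4 */
        return 0;
    }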

BigEndian, LittleEndian Confusion In Xuggler

我们两清 submitted on 2019-12-10 17:08:27
Question: Previously I posed a question about converting a byte[] to short[], and a new problem I encountered is converting (or not converting) the data from byte[] to big endian. Here is what is going on: I am using TargetDataLine to read data into a byte[10000]. The AudioFormat object has bigEndian set to true, arbitrarily. This byte[] needs to be converted to short[] so that it can be encoded using Xuggler. I don't know whether the AudioFormat bigEndian should be set to true or false. I have tried both
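Stated language-neutrally, the conversion itself is simple: each pair of bytes is one 16-bit sample, and the format's byte-order flag decides which byte of the pair is the high-order one. A small C sketch of both interpretations (the buffer contents are illustrative):

    #include <stdint.h>
    #include <stdio.h>

    /* Combine two bytes into a signed 16-bit sample. */
    static int16_t sample_be(const uint8_t *p) { return (int16_t)((p[0] << 8) | p[1]); }
    static int16_t sample_le(const uint8_t *p) { return (int16_t)((p[1] << 8) | p[0]); }

    int main(void)
    {
        const uint8_t buf[] = { 0x12, 0x34 };
        printf("big endian:    %d\n", sample_be(buf)); /* 0x1234 =  4660 */
        printf("little endian: %d\n", sample_le(buf)); /* 0x3412 = 13330 */
        return 0;
    }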

How will this code work on a big-endian machine?

爷,独闯天下 submitted on 2019-12-10 14:46:30
Question: If I have the code:

    uint64_t a = 0x1111222233334444;
    uint32_t b = 0;
    b = a;
    printf("a is %llx ", a);
    printf("b is %x ", b);

and the output is:

    a is 1111222233334444
    b is 33334444

Questions: Will the behavior be the same on a big-endian machine? If I assign a's value to b or do a typecast, will the result be the same on big endian?

Answer 1: The code you have there will work the same way. This is because the behavior of downcasting is defined by the C standard. However, if you did this:

    uint64_t a =
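The distinction the truncated answer appears to be heading toward: converting by value is arithmetic defined by the standard (reduction modulo 2^32), so b is 0x33334444 on any machine, whereas reading the same object through memory picks up whichever bytes sit at the lowest address and is therefore endian-dependent. A sketch of both, using memcpy for the memory view to keep it well-defined:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>
    #include <inttypes.h>

    int main(void)
    {
        uint64_t a = 0x1111222233334444;

        uint32_t by_value = (uint32_t)a;  /* modulo 2^32: 0x33334444 on any machine */

        uint32_t low_addressed;           /* whatever sits at a's lowest address */
        memcpy(&low_addressed, &a, sizeof low_addressed);
        /* little endian: 0x33334444; big endian: 0x11112222 */

        printf("by value:      %" PRIx32 "\n", by_value);
        printf("low-addressed: %" PRIx32 "\n", low_addressed);
        return 0;
    }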

Big Endian and Little Endian support for byte ordering

房东的猫 submitted on 2019-12-10 14:34:38
Question: We need to support 3 hardware platforms - Windows (little endian) and Linux Embedded (big and little endian). Our data stream is dependent on the machine it uses, and the data needs to be broken into bit fields. I would like to write a single macro (if possible) to abstract away the detail. On Linux I can use bswap_16 / bswap_32 / bswap_64 for little-endian conversions. However, I can't find these in my Visual C++ includes. Is there a generic built-in for both platforms (Windows and Linux)? If
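There is no name common to both toolchains, but each has an intrinsic, so the usual answer is a thin wrapper macro: MSVC provides _byteswap_ulong in <stdlib.h> and GCC/Clang provide __builtin_bswap32. A sketch, with a plain-C fallback for anything else:

    #include <stdint.h>
    #include <stdio.h>

    #if defined(_MSC_VER)
    #  include <stdlib.h>
    #  define BSWAP32(x) _byteswap_ulong(x)
    #elif defined(__GNUC__) || defined(__clang__)
    #  define BSWAP32(x) __builtin_bswap32(x)
    #else
    /* Portable fallback: swap the four bytes by hand. */
    static uint32_t bswap32_fallback(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0x0000FF00u)
             | ((x << 8) & 0x00FF0000u) | (x << 24);
    }
    #  define BSWAP32(x) bswap32_fallback(x)
    #endif

    int main(void)
    {
        printf("%08x\n", BSWAP32(0x11223344u)); /* prints 44332211 */
        return 0;
    }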

Endianness, “Most Significant”, and “Least Significant”

北城余情 submitted on 2019-12-10 13:14:55
Question: I've read descriptions online describing big and little endian. However, they all seem to read basically the same way, and I am still confused about the actual implementation regarding "most" and "least" significant bytes. I understand that little-endian values store the "least significant" bytes first, and under big endian the "most significant" bytes come first. However, I'm unclear as to the meaning of "most" and "least" significant. I think it would help me to understand if I
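"Most significant" is just place value, exactly as in decimal: in the 32-bit value 0x12345678, the byte 0x12 is multiplied by 2^24 (most significant) and 0x78 by 2^0 (least significant). Endianness only decides which of those bytes is stored at the lowest memory address. A small C demo of that layout:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t v = 0x12345678;  /* 0x12 is most significant, 0x78 least */
        uint8_t bytes[4];
        memcpy(bytes, &v, sizeof bytes);

        /* little endian prints 78 56 34 12; big endian prints 12 34 56 78 */
        for (int i = 0; i < 4; i++)
            printf("%02x ", bytes[i]);
        printf("\n");
        return 0;
    }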

Is C# Endian sensitive?

穿精又带淫゛_ submitted on 2019-12-10 12:33:39
Question: Is C# ever endian sensitive? For example, will code such as this:

    int a = 1234567;
    short b = *(short*)&a;

always assign the same value to b? If so, what value will it be? If not, what good ways are there to deal with endianness in code that uses pointers?

Answer 1: C# doesn't define the endianness. In reality, yes, it will probably always be little endian (IIRC even on IA64, but I haven't checked), but you should ideally check BitConverter.IsLittleEndian if endianness is important - or just use bit
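The C analogue of the BitConverter.IsLittleEndian check the answer recommends is to inspect the first byte of a known multi-byte value. A minimal sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Returns 1 on a little-endian machine, 0 on big endian. */
    static int is_little_endian(void)
    {
        uint16_t probe = 1;
        return *(uint8_t *)&probe == 1;  /* low byte at the low address? */
    }

    int main(void)
    {
        printf(is_little_endian() ? "little endian\n" : "big endian\n");
        return 0;
    }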

C Endian Conversion: bit by bit

那年仲夏 submitted on 2019-12-10 12:26:07
Question: I have a special unsigned long (32 bits) and I need to convert its endianness bit by bit - my long represents several things all smooshed together into one piece of binary. How do I do it?

Answer 1: Endianness is a word-level concept where the bytes are either stored most-significant byte first (big endian) or least-significant byte first (little endian). Data transferred over a network is typically big endian (so-called network byte order). Data stored in memory on a machine can be in
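Since endianness reorders whole bytes, not bits, "bit by bit" conversion normally reduces to swapping the four bytes and then re-reading the packed fields from the swapped word. A sketch with a hypothetical field layout (the real packing in the question is not shown):

    #include <stdint.h>
    #include <stdio.h>

    /* Reverse the byte order of a 32-bit word with shifts and masks. */
    static uint32_t swap32(uint32_t x)
    {
        return (x >> 24) | ((x >> 8) & 0x0000FF00u)
             | ((x << 8) & 0x00FF0000u) | (x << 24);
    }

    int main(void)
    {
        /* Hypothetical packing: 12-bit id | 4-bit type | 16-bit length. */
        uint32_t raw  = 0x40302010;  /* word as loaded on the wrong-endian host */
        uint32_t host = swap32(raw); /* bytes back in intended order: 0x10203040 */

        unsigned id     = (host >> 20) & 0xFFF;
        unsigned type   = (host >> 16) & 0xF;
        unsigned length =  host        & 0xFFFF;
        printf("id=%u type=%u length=%u\n", id, type, length);
        return 0;
    }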

Reading double to platform endianness with union and bit shift, is it safe?

末鹿安然 submitted on 2019-12-10 10:37:44
Question: All the examples I've seen of reading a double of known endianness from a buffer into the platform endianness involve detecting the current platform's endianness and performing byte swapping when necessary. On the other hand, I've seen another way of doing the same thing for integers that uses bit shifting (one such example). This got me thinking that it might be possible to use a union and the bit-shift technique to read doubles (and floats) from buffers, and a quick test implementation
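A variant of that idea that stays well-defined in both C and C++ skips the union: build the 64-bit pattern with shifts (which is endian-independent) and copy the bits into a double with memcpy. A sketch, assuming the buffer holds a little-endian IEEE-754 double and that the platform's integer and floating-point byte orders agree, as on mainstream hardware:

    #include <stdint.h>
    #include <string.h>
    #include <stdio.h>

    /* Read a little-endian IEEE-754 double from an 8-byte buffer. */
    static double read_double_le(const uint8_t *p)
    {
        uint64_t bits = 0;
        for (int i = 7; i >= 0; i--)  /* shifts make this endian-independent */
            bits = (bits << 8) | p[i];

        double d;
        memcpy(&d, &bits, sizeof d);  /* well-defined bit copy, unlike type punning */
        return d;
    }

    int main(void)
    {
        /* 1.0 as a little-endian IEEE-754 double: 00 00 00 00 00 00 f0 3f */
        const uint8_t buf[8] = { 0, 0, 0, 0, 0, 0, 0xf0, 0x3f };
        printf("%f\n", read_double_le(buf));  /* prints 1.000000 */
        return 0;
    }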

Converting grouped hex characters into a bitstring in Perl

痴心易碎 submitted on 2019-12-10 06:29:57
Question: I have some 256-character strings of hexadecimal characters which represent a sequence of bit flags, and I'm trying to convert them back into a bitstring so I can manipulate them with &, |, vec and the like. The hex strings are written in integer-wide big-endian groups, such that a group of 8 bytes like "76543210" should translate to the bitstring "\x10\x32\x54\x76", i.e. the lowest 8 bits are 00001000. The problem is that pack's "h" format works on one byte of input at a time, rather
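The question is about Perl's pack, but the transformation itself is easy to pin down in C, the language of the other sketches here: parse each big-endian 8-hex-digit group as one 32-bit integer and emit its bytes least-significant first (the 32-bit group width is taken from the question's example):

    #include <stdint.h>
    #include <stdio.h>

    /* Convert one 8-hex-digit big-endian group, e.g. "76543210",
     * into 4 little-endian bytes: 0x10 0x32 0x54 0x76. */
    static void group_to_bytes(const char *hex8, uint8_t out[4])
    {
        unsigned int v = 0;
        sscanf(hex8, "%8x", &v);               /* parse the group as one integer */
        for (int i = 0; i < 4; i++)
            out[i] = (uint8_t)(v >> (8 * i));  /* least-significant byte first */
    }

    int main(void)
    {
        uint8_t b[4];
        group_to_bytes("76543210", b);
        printf("%02x %02x %02x %02x\n", b[0], b[1], b[2], b[3]); /* 10 32 54 76 */
        return 0;
    }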

Optimal and portable conversion of endian in C/C++

北战南征 submitted on 2019-12-10 04:03:06
Question: Given a binary file with 32-bit little-endian fields that I need to parse, I want to write parsing code that compiles correctly independent of the endianness of the machine that executes it. Currently I use

    uint32_t fromLittleEndian(const char* data) {
        return uint32_t(data[3]) << (CHAR_BIT * 3) |
               uint32_t(data[2]) << (CHAR_BIT * 2) |
               uint32_t(data[1]) << CHAR_BIT |
               data[0];
    }

This, however, generates suboptimal assembly. On my machine g++ -O3 -S produces:

    _Z16fromLittleEndianPKc:
    .LFB4:
        .cfi_startproc
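Two notes on that function: indexing through a plain char* risks sign extension (data[0] with its high bit set would smear ones across the result), and the single-instruction code the question wants usually comes from writing the shift idiom over unsigned bytes, which GCC and Clang recognize and fold into one 32-bit load (plus a bswap on big-endian targets). A sketch:

    #include <stdint.h>
    #include <stdio.h>

    /* Portable 32-bit little-endian load; unsigned char avoids the
     * sign-extension trap of indexing through a plain char*. */
    static uint32_t from_little_endian(const unsigned char *data)
    {
        return (uint32_t)data[0]
             | (uint32_t)data[1] << 8
             | (uint32_t)data[2] << 16
             | (uint32_t)data[3] << 24;
        /* GCC and Clang at -O2/-O3 typically emit a single 32-bit load here. */
    }

    int main(void)
    {
        const unsigned char buf[] = { 0x44, 0x33, 0x22, 0x11 };
        printf("%x\n", from_little_endian(buf)); /* prints 11223344 */
        return 0;
    }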