Endianness

Little-Endian Byte Order (iOS, BLE scan response)

Submitted by ⅰ亾dé卋堺 on 2019-12-11 07:16:56
Question: In basic terms, I am using NativeScript for a cross-platform application. This application interacts with BLE devices. The BLE devices send a scan response after the advertisement packet, which I am able to retrieve as an NSData object on the iOS side. Here is the (pseudo) output from the description property: <54fca267 0b00>. The output represents the hardware address of the peripheral. The hardware guys tell me this is little-endian, and I know it is a 48-bit string. It is coming…
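
The reversal itself is straightforward. Below is a minimal C++ sketch, assuming the six bytes arrive exactly as shown in the description output (the buffer contents here are hypothetical): little-endian puts the least significant byte first, so printing the array in reverse yields the conventional hardware-address notation.

    #include <cstdint>
    #include <cstdio>

    int main() {
        // Hypothetical bytes as they appear in the NSData description: <54fca267 0b00>
        uint8_t addr[6] = {0x54, 0xfc, 0xa2, 0x67, 0x0b, 0x00};

        // Little-endian stores the least significant byte first, so print
        // in reverse to get the usual notation: 00:0B:67:A2:FC:54
        for (int i = 5; i >= 0; --i)
            printf("%02X%s", addr[i], i ? ":" : "\n");
        return 0;
    }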

Java vs. C#: BigInteger hex string yields different result?

Submitted by 点点圈 on 2019-12-11 03:05:56
Question: This code in Java: BigInteger mod = new BigInteger("86f71688cdd2612ca117d1f54bdae029", 16); produces (in Java) the number 179399505810976971998364784462504058921. However, when I use C#, BigInteger mod = BigInteger.Parse("86f71688cdd2612ca117d1f54bdae029", System.Globalization.NumberStyles.HexNumber); // base 16 I don't get the same number; I get -160882861109961491465009822969264152535. However, when I create the number directly from decimal, it works: BigInteger mod = BigInteger…
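
The mismatch is a sign-interpretation difference, not a parsing bug: with NumberStyles.HexNumber, C#'s BigInteger.Parse reads the hex string as two's complement, so a leading digit of 8 or above (here, the 8 in 86f7…) makes the result negative, while Java's BigInteger(String, 16) always parses an unsigned magnitude. Prepending a "0" to the hex string on the C# side clears the sign bit and makes the two agree. A C++ sketch of the same two's-complement effect on the first 32 bits of the value:

    #include <cstdint>
    #include <cstdio>

    int main() {
        // First 32 bits of the hex string: 0x86F71688. The top bit is set,
        // so a signed reading goes negative while an unsigned one does not.
        uint32_t raw = 0x86F71688u;
        printf("unsigned: %u\n", raw);                        // Java-style magnitude
        printf("signed:   %d\n", static_cast<int32_t>(raw));  // C#-style two's complement
        return 0;
    }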

mmap big endian vs. little endian

Submitted by 感情迁移 on 2019-12-11 02:55:13
Question: If I use mmap to write uint32_t's, will I run into issues with big-endian/little-endian conventions? In particular, if I write some mmap'ed data on a big-endian machine, will I run into issues when I try to read that data on a little-endian machine? Answer 1: If you're using mmap, you're probably concerned about speed and efficiency. You basically have a few choices. Wrap all your reads and writes with the htonl, htons, ntohl, ntohs functions. Calling htonl (host-to-network order) on Windows will…
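
A C++ sketch of that first option: keep the mmap'ed region in network (big-endian) order and convert on every access, so data written on one architecture reads correctly on the other. The helper names here are illustrative; on POSIX systems the conversion functions come from arpa/inet.h (on Windows, winsock2.h).

    #include <arpa/inet.h>  // htonl / ntohl
    #include <cstdint>
    #include <cstring>

    // Store a uint32_t into the mapped region in network (big-endian) order.
    void store_be32(void* mapped, uint32_t value) {
        uint32_t be = htonl(value);
        std::memcpy(mapped, &be, sizeof be);
    }

    // Load it back, converting to the host's native order.
    uint32_t load_be32(const void* mapped) {
        uint32_t be;
        std::memcpy(&be, mapped, sizeof be);
        return ntohl(be);
    }

The memcpy also sidesteps alignment problems if the offsets into the mapping are not 4-byte aligned.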

Convert char from big endian to little endian in C

Submitted by 和自甴很熟 on 2019-12-11 01:59:34
Question: I'm trying to convert a char variable from big-endian to little-endian. Here it is exactly: char name[12]; I know how to convert an int between big and little endian, but the char is messing me up. I know I have to convert it to integer form first, which I have. For converting an int this is what I used: (item.age >> 24) | ((item.age >> 8) & 0x0000ff00) | ((item.age << 8) & 0x00ff0000) | (item.age << 24); For converting the char, I'd like to do it the same way, if possible, just because…
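
The short answer is that there is nothing to convert: endianness describes the byte order within a multi-byte integer, and each element of char name[12] is a single byte, so the array's memory layout is identical on both architectures. A C++ sketch contrasting that with the int case from the question:

    #include <cstdint>

    // A 4-byte integer has internal byte order, so swapping it makes sense:
    uint32_t swap32(uint32_t x) {
        return (x >> 24) | ((x >> 8) & 0x0000ff00u)
             | ((x << 8) & 0x00ff0000u) | (x << 24);
    }

    int main() {
        // A char array is a sequence of single bytes; it reads the same on
        // big- and little-endian machines, so no conversion is needed.
        char name[12] = "example";
        (void)name;
        (void)swap32(0x12345678u);  // only multi-byte fields need this
        return 0;
    }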

Sending UDP packets in the correct Endianness

Submitted by 泪湿孤枕 on 2019-12-11 01:05:07
Question: Hi guys, I'm having trouble understanding network byte ordering and the order in which data is sent and received over UDP. I'm using C#. I have a structure holding: message.start_id = 0x7777CCCC; message.message_id = 0xBBB67000; …more data. The message definition has [StructLayout(LayoutKind.Sequential)]. I first convert the structure to a byte array using: public byte[] StructureToByteArray(object obj) { int len = Marshal.SizeOf(obj); byte[] arr = new byte[len]; IntPtr ptr = Marshal.AllocHGlobal…
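
Marshalling the struct verbatim copies each field in the host's byte order (little-endian on x86). Below is a C++ sketch of the usual alternative: serialize each field explicitly into network byte order before handing the buffer to the socket. The struct and values mirror the question; the serializer name is hypothetical.

    #include <arpa/inet.h>  // htonl
    #include <cstdint>
    #include <cstring>

    struct Message {
        uint32_t start_id;
        uint32_t message_id;
        // ... more data
    };

    // Write each field into the packet buffer in network (big-endian) order.
    std::size_t serialize(const Message& m, uint8_t* out) {
        uint32_t v = htonl(m.start_id);
        std::memcpy(out, &v, 4);
        v = htonl(m.message_id);
        std::memcpy(out + 4, &v, 4);
        return 8;  // bytes written so far
    }

    int main() {
        Message msg{0x7777CCCCu, 0xBBB67000u};
        uint8_t packet[64];
        std::size_t n = serialize(msg, packet);
        (void)n;  // hand packet and n to sendto(...)
        return 0;
    }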

Should I use bit-fields for mapping incoming serial data?

Submitted by 匆匆过客 on 2019-12-11 01:00:00
Question: We have data coming in over serial (Bluetooth), which maps to a particular structure. Some parts of the structure are sub-byte size, so the "obvious" solution is to map the incoming data to a bit-field. What I can't work out is whether the bit-endianness of the machine or compiler will affect it (which is difficult to test), and whether I should just abandon the bit-fields altogether. For example, we have a piece of data which is 1.5 bytes, so we used the struct: { uint8_t data1; // lsb uint8…
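
Since bit-field allocation order within a storage unit is implementation-defined in C, the portable route is to keep the wire bytes as plain uint8_t and extract fields with shifts and masks. A C++ sketch for the 1.5-byte value, assuming data1 carries the low 8 bits and the low nibble of data2 carries the top 4 (the split suggested by the question's struct):

    #include <cstdint>

    // Combine the 12-bit field: data1 is the LSB, the low nibble of data2
    // supplies the top four bits. No bit-field, so no layout ambiguity.
    uint16_t read12(uint8_t data1, uint8_t data2) {
        return static_cast<uint16_t>(data1 | ((data2 & 0x0F) << 8));
    }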

Endian issue with casting a packet to a struct

Submitted by 十年热恋 on 2019-12-11 00:59:23
Question: I'm using libtrace to parse network packets but am having what I think is an endian issue. Here is the libtrace definition of a Radiotap packet: typedef struct libtrace_radiotap_t { uint8_t it_version; /**< Radiotap version */ uint8_t it_pad; /**< Padding for natural alignment */ uint16_t it_len; /**< Length in bytes of the entire Radiotap header */ uint32_t it_present; /**< Which Radiotap fields are present */ } PACKED libtrace_radiotap_t; So I cast my libtrace_packet_t to this Radiotap…
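
For reference, the radiotap format is defined as little-endian on the wire, so the multi-byte fields it_len and it_present will appear byte-swapped when the cast struct is read directly on a big-endian host. A C++ sketch that loads those fields byte by byte, so the result is correct on either host (offsets follow the struct above):

    #include <cstdint>

    // Assemble little-endian fields from raw header bytes, independent of host order.
    uint16_t load_le16(const uint8_t* p) {
        return static_cast<uint16_t>(p[0] | (p[1] << 8));
    }
    uint32_t load_le32(const uint8_t* p) {
        return p[0] | (uint32_t(p[1]) << 8) | (uint32_t(p[2]) << 16) | (uint32_t(p[3]) << 24);
    }

    void parse_radiotap(const uint8_t* hdr) {
        uint8_t  it_version = hdr[0];             // single bytes need no conversion
        uint16_t it_len     = load_le16(hdr + 2); // skip it_version and it_pad
        uint32_t it_present = load_le32(hdr + 4);
        (void)it_version; (void)it_len; (void)it_present;
    }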

Difference between C# and java big endian bytes using miscutil

Submitted by 扶醉桌前 on 2019-12-10 23:57:19
Question: I'm using the miscutil library to communicate between a Java and a C# application using a socket. I am trying to figure out the difference between the following code (this is Groovy, but the Java result is the same): import java.io.* def baos = new ByteArrayOutputStream(); def stream = new DataOutputStream(baos); stream.writeInt(5000) baos.toByteArray().each { println it } /* outputs - 0, 0, 19, -120 */ and C#: using (var ms = new MemoryStream()) using (EndianBinaryWriter writer = new…
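
The four bytes are in fact identical in both languages; only their printed form differs. DataOutputStream.writeInt emits big-endian, and 5000 is 0x00001388, so the bytes are 00 00 13 88. Java's byte type is signed, so 0x88 prints as -120; C#'s byte is unsigned, so the same byte prints as 136. A C++ sketch making both views visible:

    #include <cstdint>
    #include <cstdio>

    int main() {
        int32_t value = 5000;  // 0x00001388
        // Walk the bytes in big-endian order, as DataOutputStream.writeInt writes them.
        for (int shift = 24; shift >= 0; shift -= 8) {
            uint8_t b = (value >> shift) & 0xFF;
            // unsigned view (C#'s byte) vs signed view (Java's byte)
            printf("%3u  %4d\n", b, static_cast<int8_t>(b));
        }
        // The last row prints: 136  -120
        return 0;
    }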

BMP file format contradiction between little and big endian

Submitted by ╄→尐↘猪︶ㄣ on 2019-12-10 23:37:11
Question: I have two BMP files, a Windows screenshot and a Linux file generated with GIMP. What I noticed is that all the data in the headers is stored in big-endian format. The biWidth, biHeight and biPlanes fields of the DIB header are all in big-endian, and so is "the size of the BMP file in bytes" (the second field of the Bitmap File Header), which contradicts Wikipedia, where it says: "All of the integer values are stored in little-endian format". I looked into GIMP's source code and I…
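
For what it's worth, the BMP header fields really are little-endian; a raw hex dump simply shows the bytes in file order, which is easy to misread as big-endian. A C++ sketch of decoding a 32-bit header field portably, using a hypothetical file whose size field (byte offset 2 in the Bitmap File Header) holds 1094:

    #include <cstdint>
    #include <cstdio>

    // Assemble a little-endian 32-bit field from raw file bytes.
    uint32_t read_le32(const uint8_t* p) {
        return p[0] | (uint32_t(p[1]) << 8) | (uint32_t(p[2]) << 16) | (uint32_t(p[3]) << 24);
    }

    int main() {
        // Hypothetical start of a BMP file: the "BM" magic, then the file size
        // 0x00000446 (1094) stored least significant byte first.
        uint8_t header[6] = {'B', 'M', 0x46, 0x04, 0x00, 0x00};
        printf("file size: %u bytes\n", read_le32(header + 2));  // prints 1094
        return 0;
    }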

What's the standard-defined endianness of std::wstring?

Submitted by 霸气de小男生 on 2019-12-10 20:46:58
Question: I know UTF-16 has two endiannesses: big-endian and little-endian. Does the C++ standard define the endianness of std::wstring, or is it implementation-defined? If it is standard-defined, which page of the C++ standard provides the rules on this issue? If it is implementation-defined, how do I determine it, e.g. under VC++? Does the compiler guarantee the endianness of std::wstring is strictly dependent on the processor? I have to know this, because I want to send the UTF-16 string…
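
The standard is silent on byte order: std::wstring holds wchar_t code units in the host's native endianness (and wchar_t is not even guaranteed to be 16 bits; it is 32 bits on most non-Windows platforms). A C++ sketch that determines the native order at runtime by inspecting the bytes of a known code unit:

    #include <cstdio>
    #include <cstring>
    #include <string>

    int main() {
        std::wstring s = L"A";  // code point U+0041
        unsigned char bytes[sizeof(wchar_t)];
        std::memcpy(bytes, s.data(), sizeof(wchar_t));

        // Little-endian hosts store the low byte (0x41) first.
        printf("wchar_t is %zu bytes, %s-endian\n", sizeof(wchar_t),
               bytes[0] == 0x41 ? "little" : "big");
        return 0;
    }

For sending UTF-16 over the wire, the robust approach is to serialize to an explicit encoding (UTF-16LE or UTF-16BE) rather than shipping the raw wstring buffer.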