endianness

Convert Little Endian to Big Endian

守給你的承諾、 submitted on 2019-12-17 06:36:14
Question: I just want to ask whether my method of converting from little endian to big endian is correct, just to make sure I understand the difference. I have a number stored in little endian; here are the binary and hex representations of the number: 0001 0010 0011 0100 0101 0110 0111 1000 (hex 12345678). In big-endian format I believe the bytes should be swapped, like this: 1000 0111 0110 0101 0100 0011 0010 0001 (hex 87654321). Is this correct? Also, the code below attempts to do this but fails. Is
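A byte swap reverses whole bytes rather than hex digits: swapping the bytes of 0x12345678 yields 0x78563412 (reversing the digits to 0x87654321 would be a nibble reversal, a different operation). Since the failing code is not shown in the excerpt, here is only a minimal C sketch of a 32-bit byte swap using shifts and masks:

    #include <stdint.h>
    #include <stdio.h>

    /* Reverse the byte order of a 32-bit value using shifts and masks. */
    static uint32_t swap32(uint32_t x)
    {
        return ((x & 0x000000FFu) << 24) |
               ((x & 0x0000FF00u) << 8)  |
               ((x & 0x00FF0000u) >> 8)  |
               ((x & 0xFF000000u) >> 24);
    }

    int main(void)
    {
        printf("%08X\n", swap32(0x12345678u)); /* prints 78563412 */
        return 0;
    }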

How to get little endian data from big endian in C# using the BitConverter.ToInt32 method?

喜夏-厌秋 submitted on 2019-12-17 05:06:12
Question: I am making an application in C#. In that application I have a byte array containing hex values. I receive the data as big endian but I want it as little endian, and I am using the BitConverter.ToInt32 method to convert that value to an integer. My problem is that before converting the value I have to copy those 4 bytes into a temporary array from the source byte array and then reverse that temporary byte array. I can't reverse the source array because it contains other data as well. Because of that my
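The question is about C#, but the underlying trick is language-agnostic: rather than copying four bytes into a temporary array and reversing it, the integer can be assembled straight from the source buffer at the given offset, leaving the source untouched. A sketch of that idea in C (read_int32_be is an illustrative name, not a library function):

    #include <stddef.h>
    #include <stdint.h>

    /* Assemble a 32-bit integer from four big-endian bytes at 'offset',
     * without copying or modifying the source buffer. */
    static int32_t read_int32_be(const unsigned char *buf, size_t offset)
    {
        uint32_t u = ((uint32_t)buf[offset]     << 24) |
                     ((uint32_t)buf[offset + 1] << 16) |
                     ((uint32_t)buf[offset + 2] << 8)  |
                      (uint32_t)buf[offset + 3];
        return (int32_t)u;
    }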

JavaScript Typed Arrays and Endianness

吃可爱长大的小学妹 submitted on 2019-12-17 04:47:31
Question: I'm using WebGL to render a binary-encoded mesh file. The binary file is written out in big-endian format (I can verify this by opening the file in a hex editor, or by viewing the network traffic using Fiddler). When I try to read the binary response using a Float32Array or Int32Array, the binary is interpreted as little-endian and my values are wrong: // Interpret first 32 bits in buffer as an int var wrongValue = new Int32Array(binaryArrayBuffer)[0]; I can't find any references to the default
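Typed arrays always use the host platform's byte order; in JavaScript the usual remedy is DataView, whose accessors take an explicit endianness flag (for example getInt32(offset, false) reads big-endian). The same idea sketched in C: assemble the value from individual bytes so the host order never matters (read_float32_be is an illustrative name):

    #include <stdint.h>
    #include <string.h>

    /* Read a big-endian 32-bit IEEE-754 float from a byte buffer,
     * independent of the host's byte order. */
    static float read_float32_be(const unsigned char *buf)
    {
        uint32_t bits = ((uint32_t)buf[0] << 24) |
                        ((uint32_t)buf[1] << 16) |
                        ((uint32_t)buf[2] << 8)  |
                         (uint32_t)buf[3];
        float value;
        memcpy(&value, &bits, sizeof value); /* reinterpret the bits as a float */
        return value;
    }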

Is there any “standard” htonl-like function for 64-bit integers in C++?

柔情痞子 submitted on 2019-12-17 04:23:14
Question: I'm working on an implementation of the memcache protocol which, at some points, uses 64-bit integer values. These values must be stored in "network byte order". I wish there were some uint64_t htonll(uint64_t value) function to do the conversion, but unfortunately, if it exists, I couldn't find it. So I have 1 or 2 questions: Is there any portable (Windows, Linux, AIX) standard function to do this? If there is no such function, how would you implement it? I have in mind a basic implementation
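There is no standard htonll, but one common portable approach is to build the 64-bit conversion from two htonl calls and fall back to a no-op on big-endian hosts. A sketch under that assumption (my_htonll is a made-up name, not a standard function):

    #include <stdint.h>
    #ifdef _WIN32
    #include <winsock2.h>   /* htonl */
    #else
    #include <arpa/inet.h>  /* htonl */
    #endif

    static uint64_t my_htonll(uint64_t value)
    {
        if (htonl(1) == 1)      /* host is already big-endian: nothing to do */
            return value;
        uint32_t hi = htonl((uint32_t)(value >> 32));
        uint32_t lo = htonl((uint32_t)(value & 0xFFFFFFFFu));
        return ((uint64_t)lo << 32) | hi;
    }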

C# little endian or big endian?

笑着哭i submitted on 2019-12-17 02:28:44
Question: In the documentation of hardware that allows us to control it via UDP/IP, I found the following fragment: "In this communication protocol, DWORD is a 4 bytes data, WORD is a 2 bytes data, BYTE is a single byte data. The storage format is little endian, namely 4 bytes (32bits) data is stored as: d7-d0, d15-d8, d23-d16, d31-d24; double bytes (16bits) data is stored as: d7-d0, d15-d8." I am wondering how this translates to C#? Do I have to convert stuff before sending it over? For example, if I
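The layout described (d7-d0 first, d31-d24 last) is least-significant-byte first, i.e. little endian, which is also what BitConverter.GetBytes produces on the usual little-endian x86/x64 .NET hosts. To stay independent of the host, the value can be serialized explicitly; a C sketch of the byte layout the documentation specifies (put_u32_le is an illustrative helper):

    #include <stdint.h>

    /* Write a 32-bit value in the protocol's little-endian layout:
     * least significant byte (d7-d0) first, most significant (d31-d24) last. */
    static void put_u32_le(unsigned char *out, uint32_t value)
    {
        out[0] = (unsigned char)( value        & 0xFF);  /* d7-d0   */
        out[1] = (unsigned char)((value >> 8)  & 0xFF);  /* d15-d8  */
        out[2] = (unsigned char)((value >> 16) & 0xFF);  /* d23-d16 */
        out[3] = (unsigned char)((value >> 24) & 0xFF);  /* d31-d24 */
    }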

Endian conversion of signed ints

和自甴很熟 submitted on 2019-12-14 01:32:36
Question: I am receiving big-endian data over UDP and converting it to little endian. The source says the integers are signed, but when I swap the bytes of the signed ints (specifically 16-bit) I get unrealistic values. When I swap them as unsigned ints I get what I expect. I suppose the source documentation could be incorrect and is actually sending unsigned 16-bit ints. But why would that matter? The values are all supposed to be positive and well under 16-bit INT_MAX, so overflow should not be an
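Whether a field is declared signed or unsigned does not change its byte layout on the wire, but performing the swap on a signed type can misbehave, since shifting negative values is implementation-defined or undefined in C. One safe pattern is to swap the unsigned representation and reinterpret afterwards; a C sketch:

    #include <stdint.h>
    #include <string.h>

    /* Swap the bytes of a signed 16-bit value via its unsigned representation. */
    static int16_t swap_int16(int16_t v)
    {
        uint16_t u;
        memcpy(&u, &v, sizeof u);             /* reinterpret the bits as unsigned */
        u = (uint16_t)((u << 8) | (u >> 8));  /* exchange the two bytes */
        memcpy(&v, &u, sizeof v);
        return v;
    }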

Is C Endian neutral?

|▌冷眼眸甩不掉的悲伤 submitted on 2019-12-13 23:21:27
Question: Is C endian-neutral? OK, another way of asking this question: I am currently translating a lot of code from C to Matlab on the same platform (PC). Do I need to care about endianness? Both are endian-neutral languages, I believe: C (not so sure), Matlab (pretty sure). By the same token, I am also translating C to Python. So my question: has anybody, in their experience translating from C to another endian-neutral language, met an unexpected problem with big/little endianness? Obviously we are only
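Endianness stays invisible as long as C code works with values; it only shows up when the raw object representation is inspected or exchanged (pointer casts, unions, binary file and socket I/O), so those are the spots worth checking when porting to Matlab or Python. A small illustration:

    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        uint32_t x = 0x01020304u;
        unsigned char *p = (unsigned char *)&x;   /* look at the raw bytes */
        /* Prints 04 on a little-endian machine, 01 on a big-endian one;
         * plain arithmetic on x gives identical results on both. */
        printf("first byte in memory: %02X\n", p[0]);
        return 0;
    }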

Are VTK files endian-independent when read in visualization software such as ParaView?

喜欢而已 submitted on 2019-12-13 21:15:39
Question: I am working on a file whose endianness is different from my desktop's and I need to convert it, but when I visualized the VTK file it worked. So are readers of VTK files endian-independent? Answer 1: ParaView can read VTK files written using either big- or little-endian byte ordering and will try to work out which format has been used when writing a given file. From the VTK file format documentation: Binary Files. Binary files in VTK are portable across different computer systems as long as you observe

char16_t and char32_t endianness

大兔子大兔子 submitted on 2019-12-13 16:43:21
Question: In C11, support for the portable wide character types char16_t and char32_t was added for UTF-16 and UTF-32 respectively. However, in the technical report, there is no mention of endianness for these two types. For example, take the following snippet with gcc-4.8.4 on my x86_64 computer, compiled with -std=c11: #include <stdio.h> #include <uchar.h> char16_t utf16_str[] = u"十六"; // U+5341 U+516D unsigned char *chars = (unsigned char *) utf16_str; printf("Bytes: %X %X %X %X\n", chars[0], chars[1], chars[2
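A self-contained version of that snippet (completed here as a sketch, since the excerpt above is cut off) shows that char16_t code units are stored in the host's native byte order:

    #include <stdio.h>
    #include <uchar.h>

    char16_t utf16_str[] = u"十六";  /* U+5341 U+516D */

    int main(void)
    {
        unsigned char *chars = (unsigned char *)utf16_str;
        printf("Bytes: %X %X %X %X\n", chars[0], chars[1], chars[2], chars[3]);
        /* little-endian host (e.g. x86_64): Bytes: 41 53 6D 51
           big-endian host:                  Bytes: 53 41 51 6D */
        return 0;
    }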