endianness

htonl() vs __builtin_bswap32()

Submitted on 2019-12-21 17:39:00
Question: __builtin_bswap32() is used to reverse bytes (it is a GCC built-in, used for little/big-endian issues). htonl() is used to reverse bytes too (conversion from host to network byte order). I checked both functions and they return the same result. Can someone confirm that both functions do the same thing? (Standard references are appreciated.)
Answer 1: Just look at the source code (example from glibc 2.18): #undef htonl #undef ntohl uint32_t htonl (x) uint32_t x; { #if BYTE_ORDER == BIG_ENDIAN return x;
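The distinction can be sketched in Python (modeling the C functions, not calling them): __builtin_bswap32() is an unconditional byte reversal, while htonl() is conditional on the host's byte order — the two only coincide on little-endian machines.

```python
import socket
import sys

def bswap32(x):
    """Unconditionally reverse the byte order of a 32-bit value,
    the way GCC's __builtin_bswap32() does."""
    return int.from_bytes(x.to_bytes(4, "little"), "big")

x = 0x12345678
if sys.byteorder == "little":
    # On a little-endian host, htonl() must swap, so it agrees with bswap.
    assert socket.htonl(x) == bswap32(x) == 0x78563412
else:
    # On a big-endian host, htonl() is a no-op while bswap still reverses.
    assert socket.htonl(x) == x
```

So "they return the same result" is an observation about the (little-endian) test machine, not a property of the functions.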

Netty and ByteOrder

Submitted on 2019-12-21 17:04:14
Question: Due to poor documentation and my lack of experience with Netty, I've run into a small problem: I have no clue how to set a default ByteOrder. I need little-endian set by default. I'd be glad if someone could give me some hints.
Answer 1: You could use Bootstrap.setOption() to do this. serverBootstrap.setOption("child.bufferFactory", new HeapChannelBufferFactory(ByteOrder.LITTLE_ENDIAN)); ... or ... clientBootstrap.setOption("bufferFactory", new HeapChannelBufferFactory(ByteOrder.LITTLE
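Netty specifics aside, all the buffer-factory option changes is how multi-byte values are decoded from the same bytes on the wire. A minimal Python sketch of that difference (the four-byte payload is hypothetical):

```python
import struct

raw = bytes([0x01, 0x00, 0x00, 0x00])  # four bytes as received off the wire

# A big-endian buffer (the usual network default) reads this as 0x01000000;
# a little-endian buffer reads the very same bytes as 1.
big = struct.unpack(">i", raw)[0]      # '>' = big-endian
little = struct.unpack("<i", raw)[0]   # '<' = little-endian
assert big == 0x01000000
assert little == 1
```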

Is vec_sld endian sensitive?

Submitted on 2019-12-21 16:47:13
Question: I'm working on a PowerPC machine with in-core crypto. I'm having trouble porting AES key expansion from big-endian to little-endian using built-ins. Big-endian works, but little-endian does not. The algorithm below is the snippet presented in an IBM blog article. I think I have the issue isolated to line 2 below: typedef __vector unsigned char uint8x16_p8; uint8x16_p8 r0 = {0}; r3 = vec_perm(r1, r1, r5); /* line 1 */ r6 = vec_sld(r0, r1, 12); /* line 2 */ r3 = vcipherlast(r3, r4); /* line 3 *
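The documented big-endian semantics of vec_sld(a, b, c) — concatenate the two 16-byte vectors, shift left by c bytes, keep the leftmost 16 — can be modeled in Python. This is a sketch of the BE definition only; on little-endian targets the compiler remaps element order, which is exactly where such ports usually break:

```python
def vec_sld_be(a: bytes, b: bytes, c: int) -> bytes:
    """Model of vec_sld(a, b, c) with big-endian element order:
    shift the 32-byte concatenation a||b left by c bytes, keep 16."""
    assert len(a) == len(b) == 16 and 0 <= c < 16
    return (a + b)[c:c + 16]

r0 = bytes(16)            # the zero vector from the snippet
r1 = bytes(range(16))     # sample data: 00 01 02 ... 0f

# vec_sld(r0, r1, 12): 4 trailing zero bytes of r0, then 12 bytes of r1 —
# i.e. r1 shifted right by 4 bytes with zero fill, in BE terms.
assert vec_sld_be(r0, r1, 12) == bytes(4) + bytes(range(12))
```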

Node.JS Big-Endian UCS-2

Submitted on 2019-12-21 13:08:38
Question: I'm working with Node.js. Node's buffers support little-endian UCS-2, but not big-endian, which I need. How would I do the conversion?
Answer 1: According to Wikipedia, UCS-2 should always be big-endian, so it's odd that Node only supports little-endian. You might consider filing a bug. That said, switching endianness is fairly straightforward, since it's just a matter of byte order. So just swap bytes around to go back and forth between little and big endian, like so: function swapBytes(buffer) { var l =
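The same pair-swapping idea as the answer's swapBytes, sketched in Python: UCS-2/UTF-16 code units are two bytes each, so converting between the two byte orders just swaps adjacent byte pairs.

```python
def swap16(buf: bytes) -> bytes:
    """Swap adjacent byte pairs, converting UCS-2/UTF-16 data between
    little- and big-endian. Length must be even."""
    assert len(buf) % 2 == 0
    out = bytearray(buf)
    out[0::2], out[1::2] = buf[1::2], buf[0::2]
    return bytes(out)

le = "hi".encode("utf-16-le")            # b'h\x00i\x00'
assert swap16(le) == "hi".encode("utf-16-be")
assert swap16(swap16(le)) == le          # swapping twice is the identity
```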

iPhone platform: endianness (detection & swapping)

Submitted on 2019-12-21 06:01:46
Question: I'm doing some endian-sensitive file manipulation on iPhone. Are there standard macros or #defines in that environment that indicate native endianness and offer swapping if necessary? I know I can check in advance and just do the right thing for this particular architecture, but I'm wondering if there are cleaner ways of doing the right thing. (The file format is little-endian; if it were big-endian, I'd probably just use the htons/htonl family.) Thanks.
Answer 1: There is a full set of standard

(java) Writing in file little endian

Submitted on 2019-12-20 17:36:50
Question: I'm trying to write TIFF IFDs, and I'm looking for a simple way to do the following (this code is obviously wrong, but it gets across the idea of what I want): out.writeChar(12) (bytes 0-1) out.writeChar(259) (bytes 2-3) out.writeChar(3) (bytes 4-5) out.writeInt(1) (bytes 6-9) out.writeInt(1) (bytes 10-13) Would write: 0c00 0301 0300 0100 0000 0100 0000 I know how to get the writing method to take up the correct number of bytes (writeInt, writeChar, etc.), but I don't know how to get it to write
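For comparison, the exact byte sequence the question asks for can be produced in a couple of lines of Python with the struct module (a sketch of the target output, not the Java solution itself — in Java the idiomatic route is a ByteBuffer with its order set to little-endian):

```python
import struct

# '<' = little-endian, 'H' = 2-byte unsigned field, 'I' = 4-byte unsigned.
# Fields: entry count 12, tag 259 (Compression), type 3, count 1, value 1.
data = struct.pack("<HHHII", 12, 259, 3, 1, 1)
assert data.hex() == "0c00030103000100000001000000"
```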

What's the most Pythonic way of determining endianness?

Submitted on 2019-12-20 10:59:39
Question: I'm trying to find the best way of working out whether the machine my code is running on is big-endian or little-endian. I have a solution that works (although I haven't tested it on a big-endian machine), but it seems a bit clunky: import struct little_endian = (struct.pack('@h', 1) == struct.pack('<h', 1)) This just compares a 'native' two-byte pack to a little-endian pack. Is there a prettier way?
Answer 1: The answer is in the sys module: >>> import sys >>> sys.byteorder 'little' Of course
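The question's struct trick and the answer's sys.byteorder can be checked against each other — '@' packs in native order and '<' in little-endian, so the two packs compare equal exactly when the host is little-endian:

```python
import struct
import sys

little_endian = struct.pack("@h", 1) == struct.pack("<h", 1)
# Both detection methods must agree on any host.
assert little_endian == (sys.byteorder == "little")
```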

Will a char array differ in ordering in a little endian or big endian system

Submitted on 2019-12-20 09:45:05
Question: I have an array char c[12] = {'a','b','c','d','e','f','g','h','0','1','2','3'}. In hexadecimal these values would be {0x61, 0x62, 0x63, 0x64, 0x65, 0x66, 0x67, 0x68, 0x30, 0x31, 0x32, 0x33}. What I was wondering is whether the array would be stored in memory differently on a big-endian or little-endian system. I thought they would be the same, because endianness determines how to store an element by the least or most significant bits of a single element in the array and
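The questioner's intuition is right: endianness orders the bytes *within* a multi-byte element, never *across* array elements, and a char is a single byte. A Python illustration of the contrast:

```python
# A byte (char) array has single-byte elements, so its in-memory layout is
# the same on every architecture; a 4-byte integer is where order shows up.
chars = b"abcd"                               # stored as 61 62 63 64 everywhere
n = 0x61626364
assert n.to_bytes(4, "big") == b"abcd"        # big-endian memory layout
assert n.to_bytes(4, "little") == b"dcba"     # little-endian memory layout
```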

convert int to char array Big Endian

Submitted on 2019-12-20 07:19:20
Question: I found this code on SO. unsigned char * SerializeInt(unsigned char *buffer, int value) { /* Write big-endian int value into buffer; assumes 32-bit int and 8-bit char. */ buffer[0] = value >> 24; buffer[1] = value >> 16; buffer[2] = value >> 8; buffer[3] = value; return buffer + 4; } You can see the code claims it writes an integer into the buffer in a big-endian way. My question is: does this function work correctly on both little-endian and big-endian machines? In other words, on both
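The short answer is yes: the function is portable because it operates on the *value* with shifts, never on the integer's memory representation, so `value >> 24` is the arithmetic most-significant byte on any host. A Python port of the same logic (hypothetical name serialize_int):

```python
def serialize_int(value: int) -> bytes:
    """Port of SerializeInt: emit a 32-bit value as 4 big-endian bytes.
    Shifts extract bytes by arithmetic significance, independent of how
    the host happens to store integers in memory."""
    return bytes([(value >> 24) & 0xFF, (value >> 16) & 0xFF,
                  (value >> 8) & 0xFF, value & 0xFF])

assert serialize_int(0x01020304) == b"\x01\x02\x03\x04"
assert serialize_int(0x01020304) == (0x01020304).to_bytes(4, "big")
```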

Convert Little-endian ByteArray to Big-endian in AS3

Submitted on 2019-12-20 05:50:36
Question: How do I convert a little-endian ByteArray to big-endian in AS3? I convert bitmapData to a big-endian ByteArray and then push it into memory with Adobe Alchemy. Then, when I read it back from memory, I get a little-endian ByteArray. How do I get big-endian? I use the example code from http://blog.debit.nl/2009/03/using-bytearrays-in-actionscript-and-alchemy/ (memory allocation in C with direct access from ActionScript (FAST!!)). Code: var ba:ByteArray = currentBitmapData.getPixels( currentBitmapData.rect ); ba
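Whatever the AS3-side fix (in AS3 one would normally set the ByteArray's endian property before reading), the underlying operation is reversing the bytes within each 32-bit pixel word. A Python sketch of that conversion (the sample pixel data is hypothetical):

```python
import array

# getPixels() yields 32-bit ARGB words; flipping the endianness of the
# whole buffer means reversing the byte order inside each 4-byte word.
data = bytes([0x01, 0x02, 0x03, 0x04, 0xAA, 0xBB, 0xCC, 0xDD])
words = array.array("I", data)
assert words.itemsize == 4          # assumes 'I' is 4 bytes on this host
words.byteswap()                    # reverse bytes within every word
assert words.tobytes() == bytes([0x04, 0x03, 0x02, 0x01,
                                 0xDD, 0xCC, 0xBB, 0xAA])
```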