endianness

Endianness of integers in Python

戏子无情 submitted on 2019-11-28 07:33:14
I'm working on a program where I store some data in an integer and process it bitwise. For example, I might receive the number 48, which I will process bit-by-bit. In general the endianness of integers depends on the machine representation of integers, but does Python do anything to guarantee that the ints will always be little-endian? Or do I need to check endianness like I would in C and then write separate code for the two cases? I ask because my code runs on a Sun machine and, although the one it's running on now uses Intel processors, I might have to switch to a machine with Sun…
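For what it's worth, Python's answer here is that a plain int has no observable byte order at all: bit operations act on the abstract value, and endianness only appears when you serialize. A minimal sketch (variable names are mine, not the asker's):

    # Bitwise processing of a Python int is endian-independent:
    # (n >> i) & 1 extracts the same bit on any machine.
    n = 48
    bits = [(n >> i) & 1 for i in range(8)]   # LSB first: [0,0,0,0,1,1,0,0]

    # Byte order only matters at (de)serialization time, where you
    # request it explicitly instead of inheriting it from the host:
    import struct
    little = struct.pack('<I', n)      # b'0\x00\x00\x00'
    big    = n.to_bytes(4, 'big')      # b'\x00\x00\x000'
    assert struct.unpack('>I', big)[0] == n

So the Sun-versus-Intel concern only applies to code that exchanges raw bytes with something else, not to bit-by-bit processing of the integer itself.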

How to make GCC generate bswap instruction for big endian store without builtins?

隐身守侯 submitted on 2019-11-28 07:32:09
I'm working on a function that stores a 64-bit value into memory in big-endian format. I was hoping that I could write portable C99 code that works on both little- and big-endian platforms and have modern x86 compilers generate a bswap instruction automatically without any builtins or intrinsics. So I started with the following function:

    #include <stdint.h>
    void encode_bigend_u64(uint64_t value, void *vdest) {
        uint64_t bigend;
        uint8_t *bytes = (uint8_t*)&bigend;
        bytes[0] = value >> 56;
        bytes[1] = value >> 48;
        bytes[2] = value >> 40;
        bytes[3] = value >> 32;
        bytes[4] = value >> 24;
        bytes[5] = …
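The excerpt cuts off mid-function; for context, here is a sketch of how that byte-store pattern is usually completed (my reconstruction under the question's stated goal, not necessarily the asker's exact code):

    #include <stdint.h>

    /* Portable C99 big-endian store. The shifts express byte
       significance, so this is correct on any host. GCC and Clang at
       -O2 can often recognize the whole pattern and emit a single
       bswap + store on x86-64, though recognition varies by compiler
       and version, which is exactly what the question is probing. */
    void encode_bigend_u64(uint64_t value, void *vdest)
    {
        uint8_t *bytes = vdest;
        bytes[0] = (uint8_t)(value >> 56);
        bytes[1] = (uint8_t)(value >> 48);
        bytes[2] = (uint8_t)(value >> 40);
        bytes[3] = (uint8_t)(value >> 32);
        bytes[4] = (uint8_t)(value >> 24);
        bytes[5] = (uint8_t)(value >> 16);
        bytes[6] = (uint8_t)(value >> 8);
        bytes[7] = (uint8_t)(value);
    }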

How do I convert a big-endian struct to a little endian-struct?

风格不统一 submitted on 2019-11-28 06:27:42
I have a binary file that was created on a Unix machine. It's just a bunch of records written one after another. The record is defined something like this:

    struct RECORD {
        UINT32 foo;
        UINT32 bar;
        CHAR fooword[11];
        CHAR barword[11];
        UINT16 baz;
    };

I am trying to figure out how I would read and interpret this data on a Windows machine. I have something like this:

    fstream f;
    f.open("file.bin", ios::in | ios::binary);
    RECORD r;
    f.read((char*)&r, sizeof(RECORD));
    cout << "fooword = " << r.fooword << endl;

I get a bunch of data, but it's not the data I expect. I suspect that my problem has to…
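Two separate portability hazards hide in that read: the byte order of the integers, and the struct layout itself (with 11-byte char arrays the compiler will almost certainly insert padding, so sizeof(RECORD) may not match the on-disk record size). Below is a sketch that sidesteps both by reading each field out of a packed buffer, assuming the file is big-endian and that UINT32/UINT16 are the usual Windows typedefs; the helper names are mine:

    #include <cstdint>
    #include <cstring>
    #include <fstream>

    // Assemble values from big-endian bytes; correct on any host.
    static uint32_t be32(const unsigned char *p) {
        return (uint32_t(p[0]) << 24) | (uint32_t(p[1]) << 16)
             | (uint32_t(p[2]) << 8)  |  uint32_t(p[3]);
    }
    static uint16_t be16(const unsigned char *p) {
        return uint16_t((p[0] << 8) | p[1]);
    }

    // Field-by-field read: immune to both padding and host endianness.
    bool read_record(std::ifstream &f, RECORD &r) {
        unsigned char buf[4 + 4 + 11 + 11 + 2];   // on-disk size: 32 bytes
        if (!f.read(reinterpret_cast<char*>(buf), sizeof buf)) return false;
        r.foo = be32(buf);
        r.bar = be32(buf + 4);
        std::memcpy(r.fooword, buf + 8, 11);
        std::memcpy(r.barword, buf + 19, 11);
        r.baz = be16(buf + 30);
        return true;
    }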

How to sort a list by byte-order for AWS-Calls

大憨熊 submitted on 2019-11-28 05:56:55
Question: Having a look at http://associates-amazon.s3.amazonaws.com/signed-requests/helper/index.html, the following name-value pairs:

    Service=AWSECommerceService
    Version=2011-08-01
    AssociateTag=PutYourAssociateTagHere
    Operation=ItemSearch
    SearchIndex=Books
    Keywords=harry+potter
    Timestamp=2015-09-26T14:10:56.000Z
    AWSAccessKeyId=123

sorted according to byte order should result in:

    AWSAccessKeyId=123
    AssociateTag=PutYourAssociateTagHere
    Keywords=harry%20potter
    Operation…
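In this context "byte order" means comparing the raw bytes of the percent-encoded parameter names, so uppercase sorts before lowercase, which is why AWSAccessKeyId lands before AssociateTag. A sketch of that canonicalization step in Python, assuming Signature Version 2 semantics (the values are the question's examples, stored decoded):

    from urllib.parse import quote

    params = {
        'Service': 'AWSECommerceService',
        'Version': '2011-08-01',
        'AssociateTag': 'PutYourAssociateTagHere',
        'Operation': 'ItemSearch',
        'SearchIndex': 'Books',
        'Keywords': 'harry potter',
        'Timestamp': '2015-09-26T14:10:56.000Z',
        'AWSAccessKeyId': '123',
    }

    # Percent-encode first (space -> %20, '~' left alone), then sort
    # the encoded pairs byte-wise; for ASCII names a plain string sort
    # compares code points, which is the same as comparing bytes.
    pairs = [(quote(k, safe='~'), quote(v, safe='~')) for k, v in params.items()]
    canonical = '&'.join(f'{k}={v}' for k, v in sorted(pairs))
    print(canonical)

This reproduces the helper's ordering, including Keywords=harry%20potter, because the encoding happens before the sort.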

Bit conversion tool in Objective-C

為{幸葍}努か submitted on 2019-11-28 05:28:45
Question: Are there any built-in utilities or macros in the Objective-C libraries for iOS that will allow you to convert bytes to and from integers with respect to endianness? Please don't tell me to use bit-shifting operations. I am trying to avoid writing custom code to do this if it already exists. I would like the code to convert NSData* to primitive types (int, uint, short, etc.) and to convert primitive types back to NSData*. Answer 1: You can get the bytes from NSData by accessing the bytes property.
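The built-ins the answer is presumably alluding to are the CFByteOrder.h swap functions (CFSwapInt16/32/64BigToHost and friends, pulled in by Foundation), which keep the bit-shifting out of your own code. A sketch, assuming the NSData holds a big-endian 32-bit value; the function names are mine:

    #import <Foundation/Foundation.h>

    // Read a big-endian uint32 out of an NSData without manual shifts.
    uint32_t u32FromBigEndianData(NSData *data) {
        uint32_t be = 0;
        [data getBytes:&be length:sizeof(be)];   // copy the raw bytes
        return CFSwapInt32BigToHost(be);         // no-op on big-endian hosts
    }

    // And back again:
    NSData *bigEndianDataFromU32(uint32_t value) {
        uint32_t be = CFSwapInt32HostToBig(value);
        return [NSData dataWithBytes:&be length:sizeof(be)];
    }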

Convert big endian to little endian when reading from a binary file [duplicate]

爱⌒轻易说出口 submitted on 2019-11-28 05:27:57
This question already has an answer here: How do I convert between big-endian and little-endian values in C++? (30 answers) I've been looking around for how to convert big-endian to little-endian, but I didn't find anything that could solve my problem. It seems there are many ways you can do this conversion. Anyway, the following code works OK on a big-endian system. But how should I write a conversion function so it will work on a little-endian system as well? This is homework, but it's just an extra, since the systems at school run big-endian. It's just that I got curious and wanted to…
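For reference, the unconditional swap looks like the sketch below (the function name is mine); calling it only when the file's byte order differs from the host's answers the asker's question directly, though assembling values byte-by-byte, as in the struct-reading sketch earlier, avoids the host check entirely:

    #include <cstdint>

    // Reverse the bytes of a 32-bit value: 0xAABBCCDD -> 0xDDCCBBAA.
    inline uint32_t bswap32(uint32_t x) {
        return (x >> 24) | ((x >> 8) & 0x0000FF00u)
             | ((x << 8) & 0x00FF0000u) | (x << 24);
    }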

Reverse byte order of EAX register

﹥>﹥吖頭↗ submitted on 2019-11-28 05:16:55
Question: Example: 0xAABBCCDD will turn into 0xDDCCBBAA. My program crashes due to an Access Violation exception right at the first XOR operation. It seems like there's a better naive solution, using shifting or rotating, but anyway, here's the code:

    ;; #########################################################################
    .486
    .model flat, stdcall
    option casemap :none   ; case sensitive
    ;; #########################################################################
    include \masm32\include\masm32.inc
    …
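Whatever the access violation turns out to be, x86 has had a dedicated instruction for this job since the 80486 (and the code above already declares .486), so no XOR trickery is needed. A minimal MASM sketch:

    ; bswap reverses the byte order of a 32-bit register in place.
    mov eax, 0AABBCCDDh
    bswap eax              ; eax = 0DDCCBBAAh

    ; Rotate-based alternative for pre-486 CPUs: swap the bytes of the
    ; low word, swap the two words, then swap the low-word bytes again.
    ; ror ax, 8
    ; ror eax, 16
    ; ror ax, 8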

Understanding htonl() and ntohl()

强颜欢笑 submitted on 2019-11-28 03:38:21
Question: I am trying to use Unix sockets to test sending some UDP packets to localhost. It is my understanding that when setting the IP address and port in order to send packets, I would fill my sockaddr_in with values converted to network byte order. I am on OS X, and I'm astonished that this

    printf("ntohl: %d\n", ntohl(4711));
    printf("htonl: %d\n", htonl(4711));
    printf("plain: %d\n", 4711);

prints

    ntohl: 1729232896
    htonl: 1729232896
    plain: 4711

So neither function actually returns the plain value. I would…
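The output is actually self-consistent: 4711 is 0x00001267, and reversing its four bytes gives 0x67120000, which is 1729232896. On a little-endian host (OS X on Intel) both htonl() and ntohl() perform that same swap, while on a big-endian host both are no-ops; either way, applying one after the other restores the original value. A small check:

    #include <stdio.h>
    #include <stdint.h>
    #include <arpa/inet.h>

    int main(void) {
        uint32_t h = 4711;            /* 0x00001267 */
        uint32_t n = htonl(h);        /* 0x67120000 on a little-endian host */
        printf("htonl(4711) = %u (0x%08x)\n", n, n);
        printf("round trip  = %u\n", ntohl(n));   /* 4711 again */
        return 0;
    }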

PostGIS Geometry saving: “Invalid endian flag value encountered.”

和自甴很熟 submitted on 2019-11-28 01:58:28
I have a Spring Roo + Hibernate project which takes a JTS well-known text (WKT) String input from the client application, converts it into a JTS Geometry object, and then attempts to write it to the PostGIS database. I had some problems with the JDBC connection and types, but these seem to have been resolved with:

    @Column(columnDefinition = "Geometry", nullable = true)
    private Geometry centerPoint;

And the conversion does:

    Geometry geom = new WKTReader(new GeometryFactory(new PrecisionModel(), 4326)).read(source);

However, now when Hibernate tries to write my Geometry object to the database, I…
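Some context on the error message: the "endian flag" is the very first byte of a WKB stream (0 for big-endian, 1 for little-endian), so PostGIS is being handed bytes that are not valid WKB at all, which typically means the mapping is serializing the Java object rather than producing WKB. One workaround sketch, under the assumption that the asker can drop to a native query, is to serialize the geometry to hex EWKB explicitly with JTS and let PostGIS parse that:

    import com.vividsolutions.jts.geom.Geometry;
    import com.vividsolutions.jts.io.WKBWriter;

    // Hex-encoded EWKB (2D, SRID included), which PostGIS accepts
    // directly, e.g. bound into a native query as :wkb with a cast:
    //   UPDATE places SET center_point = :wkb::geometry WHERE id = :id
    String wkbHex = WKBWriter.toHex(new WKBWriter(2, true).write(geom));

Whether this fits the asker's Roo/Hibernate mapping is an assumption; the usual in-mapping fix is a spatially aware Hibernate dialect or user type.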

Why does std::bitset expose bits in little-endian fashion?

∥☆過路亽.° submitted on 2019-11-28 01:57:12
When I use std::bitset<N>::bitset(unsigned long long), this constructs a bitset, and when I access it via operator[], the bits seem to be ordered in little-endian fashion. Example:

    std::bitset<4> b(3ULL);
    std::cout << b[0] << b[1] << b[2] << b[3];

prints 1100 instead of 0011, i.e. the little end (the LSB) is at the little (lower) address, index 0. Looking up the standard, it says "initializing the first M bit positions to the corresponding bit values in val". Programmers naturally think of binary digits from LSB to MSB (right to left). So the first M bit positions is understandably LSB → MSB…
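A quick demonstration of the two views side by side: operator[] counts from the least significant bit, while to_string() renders the conventional MSB-first string:

    #include <bitset>
    #include <iostream>

    int main() {
        std::bitset<4> b(3ULL);      // value 3 = binary 0011
        // operator[] indexes from the LSB: b[0] is the ones bit.
        std::cout << b[0] << b[1] << b[2] << b[3] << '\n';  // prints 1100
        // to_string() prints the conventional MSB-first view.
        std::cout << b.to_string() << '\n';                 // prints 0011
        return 0;
    }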