endianness

uint8 little endian array to uint16 big endian

╄→гoц情女王★ submitted on 2019-12-24 01:17:45
Question: In Python 2.7 I get an image frame from a camera via a USB bulk transfer: frame = dev.read(0x81, 0x2B6B0, 1000). I know that one frame is 342x260 = 88920 little-endian pixels, which is why I read 2x88920 = 177840 (0x2B6B0) bytes from the bulk transfer. How can I convert the contents of the frame array (typecode 'B') into a big-endian uint16 array?

Answer 1: Something like this should do the trick: frame_short_swapped = array.array('H', ((j << 8) | i for (i, j) in zip(frame[::2], frame[1::2]))) …
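The same byte-pairing idea written as a plain C loop, for readers coming from the other questions in this digest (a minimal sketch; only the byte arithmetic is meant to carry over, the buffer names are placeholders):

```c
#include <stdint.h>
#include <stddef.h>

/* Combine little-endian byte pairs (low byte first) into 16-bit values
 * and store them back out in big-endian (high byte first) order. */
void le16_to_be16(const uint8_t *src, uint8_t *dst, size_t nbytes)
{
    for (size_t i = 0; i + 1 < nbytes; i += 2) {
        uint16_t v = (uint16_t)(src[i] | (src[i + 1] << 8)); /* little-endian pair */
        dst[i]     = (uint8_t)(v >> 8);   /* high byte first */
        dst[i + 1] = (uint8_t)(v & 0xFF); /* low byte second */
    }
}
```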

Read and write file bit by bit

百般思念 submitted on 2019-12-24 00:58:39
Question: Take a .jpg file, for example, or some other file; I want to read it bit by bit. I do this: open(FH, "<", "red.jpg") or die "Error: $!\n"; my $str; while(<FH>) { $str .= unpack('B*', $_); } close FH; That gives me $str containing the 0s and 1s of the file. After that I do this: open(AB, ">", "new.jpg") or die "Error: $!\n"; binmode(AB); print AB $str; close AB; but it doesn't work. How can I do it, and how can I make it work regardless of byte order (cross-platform)?

Answer 1: Problems: You're …
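For comparison, a byte-exact binary copy in C illustrates two of the things the Perl version above is missing: opening both files in binary mode, and writing raw bytes back rather than a text representation of the bits (a minimal sketch; file names and the helper are placeholders, and error cleanup is kept deliberately short):

```c
#include <stdio.h>

/* Copy a file byte-for-byte. The "rb"/"wb" modes matter on platforms
 * that distinguish text and binary streams (e.g. Windows). */
int copy_binary(const char *in_path, const char *out_path)
{
    FILE *in = fopen(in_path, "rb");
    FILE *out = fopen(out_path, "wb");
    if (!in || !out) { perror("fopen"); return 1; }

    unsigned char buf[4096];
    size_t n;
    while ((n = fread(buf, 1, sizeof buf, in)) > 0)
        fwrite(buf, 1, n, out);

    fclose(in);
    fclose(out);
    return 0;
}
```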

Is there need to convert byte order for strings?

假装没事ソ submitted on 2019-12-23 23:06:55
Question: Is there a need to convert to network/host byte ordering when sending and receiving strings? The available functions (such as htons()) only work with 16- and 32-bit integers. I also know that a single char shouldn't make a difference, since it is generally one byte. But what about strings? The following is a code snippet: int len; recv(fd, &len, sizeof (int), 0); len = ntohl(len); char* string = malloc(sizeof (char) * (len + 1)); int received = recv(fd, string, sizeof (char) * …
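A minimal C sketch of the usual pattern: only the multi-byte length prefix goes through htonl/ntohl, while the character bytes themselves are sent as-is (error handling and partial-send loops are omitted for brevity):

```c
#include <stdint.h>
#include <string.h>
#include <stdlib.h>
#include <arpa/inet.h>   /* htonl, ntohl */
#include <sys/socket.h>  /* send, recv */

/* Sender: 4-byte length prefix in network byte order, then the raw bytes. */
void send_string(int fd, const char *s)
{
    uint32_t len = (uint32_t)strlen(s);
    uint32_t wire_len = htonl(len);
    send(fd, &wire_len, sizeof wire_len, 0); /* integer: needs conversion */
    send(fd, s, len, 0);                     /* chars: no conversion needed */
}

/* Receiver: convert the prefix back to host order, then read that many bytes. */
char *recv_string(int fd)
{
    uint32_t wire_len;
    recv(fd, &wire_len, sizeof wire_len, MSG_WAITALL);
    uint32_t len = ntohl(wire_len);

    char *s = malloc(len + 1);
    recv(fd, s, len, MSG_WAITALL);
    s[len] = '\0';
    return s;
}
```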

Sending the array of arbitrary length through a socket. Endianness

試著忘記壹切 submitted on 2019-12-23 17:50:13
Question: I'm wrestling with socket programming and have run into a problem I don't know how to solve portably. The task is simple: I need to send an array of 16 bytes over the network, receive it in a client application and parse it. I know there are functions like htonl, htons and so on for use with uint16 and uint32, but what should I do with chunks of data larger than that? Thank you.

Answer 1: You say an array of 16 bytes. That doesn't really help. Endianness only matters …
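If those 16 bytes are really a structure of multi-byte fields (say, four uint32 values, an assumption made only for illustration), each field is converted individually; truly opaque bytes would be sent untouched. A sketch:

```c
#include <stdint.h>
#include <string.h>
#include <arpa/inet.h>  /* htonl, ntohl */

/* Serialize four host-order uint32 values into a 16-byte wire buffer,
 * converting each field to network byte order. */
void pack4(uint8_t out[16], const uint32_t vals[4])
{
    for (int i = 0; i < 4; i++) {
        uint32_t be = htonl(vals[i]);
        memcpy(out + 4 * i, &be, sizeof be);
    }
}

/* Parse the buffer back into host-order values on the receiving side. */
void unpack4(const uint8_t in[16], uint32_t vals[4])
{
    for (int i = 0; i < 4; i++) {
        uint32_t be;
        memcpy(&be, in + 4 * i, sizeof be);
        vals[i] = ntohl(be);
    }
}
```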

BufferedImage bytes have a different byte order, when running from Eclipse and the command line

老子叫甜甜 submitted on 2019-12-23 13:14:04
Question: I was trying to convert a BufferedImage's byte[] from 32-bit RGBA to 24-bit RGB. According to this answer, the fastest way to get the byte[] from the image is: byte[] pixels = ((DataBufferByte) bufferedImage.getRaster().getDataBuffer()).getData(); So I iterate over all bytes, assuming their order is R G B A, and for every 4 bytes I write the first 3 into an output byte[] (i.e. ignoring the alpha value). This works fine when run from Eclipse and the bytes are converted correctly. However, when I …
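The byte copy itself, written as a language-neutral C sketch (it assumes the source layout really is R, G, B, A per pixel; in a BufferedImage the actual order in the DataBuffer depends on the image type, e.g. TYPE_4BYTE_ABGR stores alpha first, so that assumption has to be checked):

```c
#include <stdint.h>
#include <stddef.h>

/* Copy RGBA pixels to RGB by dropping every fourth (alpha) byte.
 * Assumes the source layout is R,G,B,A for each pixel. */
void rgba_to_rgb(const uint8_t *src, uint8_t *dst, size_t pixel_count)
{
    for (size_t p = 0; p < pixel_count; p++) {
        dst[3 * p + 0] = src[4 * p + 0]; /* R */
        dst[3 * p + 1] = src[4 * p + 1]; /* G */
        dst[3 * p + 2] = src[4 * p + 2]; /* B */
        /* src[4 * p + 3] (alpha) is discarded */
    }
}
```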

How can I change the byte order (from network to host, and vice versa) of an IPV6 address?

主宰稳场 submitted on 2019-12-23 12:26:09
Question: I am aware of ntoh{s,l} and hton{s,l}, which work on integers of 2 and 4 bytes. Now I am facing the problem of translating an IPv6 address, which is 16 bytes long. Is there a ready-made function for that purpose? TIA, Jir

Answer 1: I'm not sure that ntoh and hton are relevant in IPv6. You don't have a native 128-bit type, do you? According to http://www.mail-archive.com/users@ipv6.org/msg00195.html: IPv6 addresses are expected to be represented in network byte order whenever they are in binary form …
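A minimal C sketch of the practical upshot: inet_pton already produces the 16 address bytes in network byte order, so they go into the socket address unchanged, and only scalar fields such as the port still need htons (the address literal and port are just examples):

```c
#include <string.h>
#include <stdio.h>
#include <arpa/inet.h>   /* inet_pton, htons */
#include <netinet/in.h>  /* sockaddr_in6 */

int main(void)
{
    struct sockaddr_in6 sa;
    memset(&sa, 0, sizeof sa);
    sa.sin6_family = AF_INET6;
    sa.sin6_port = htons(8080);  /* 16-bit integer field: needs htons */

    /* inet_pton writes the 16 address bytes already in network byte
     * order, so no per-address byte swap is needed. */
    if (inet_pton(AF_INET6, "2001:db8::1", &sa.sin6_addr) != 1) {
        fprintf(stderr, "bad address\n");
        return 1;
    }
    return 0;
}
```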

How to convert a ByteString to an Int and dealing with endianness?

谁说我不能喝 submitted on 2019-12-23 10:15:57
Question: I need to read a binary format in Haskell. The format is fairly simple: four octets indicating the length of the data, followed by the data. The four octets represent an integer in network byte order. How can I convert a ByteString of four bytes to an integer? I want a direct cast (in C, that would be *(int*)&data), not a lexicographical conversion. Also, how should I deal with endianness? The serialized integer is in network byte order, but the machine may use a different byte order. I tried …
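For reference, here is the language-independent core of the conversion as a C sketch (in Haskell one common route is Data.Binary.Get's getWord32be, but the point below holds in any language: reassembling the octets by shifting avoids pointer casts and works regardless of host endianness):

```c
#include <stdint.h>

/* Reassemble four network-order (big-endian) octets into a host uint32.
 * No *(int*)&data cast is needed, and the result is correct on both
 * little- and big-endian machines. */
uint32_t be32_to_host(const uint8_t b[4])
{
    return ((uint32_t)b[0] << 24) |
           ((uint32_t)b[1] << 16) |
           ((uint32_t)b[2] << 8)  |
            (uint32_t)b[3];
}
```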

How to byte reverse NSData output in Swift the littleEndian way?

限于喜欢 submitted on 2019-12-23 09:30:25
Question: Hey guys, I have this output from NSData: <00000100 84000c00 071490fe 4dfbd7e9>. How could I byte-reverse it in Swift to get this output: <00000001 0084000c 1407fe90 fb4de9d7>?

Answer 1: This should work to swap each pair of adjacent bytes in the data. The idea is to interpret the bytes as an array of UInt16 integers and use the built-in byteSwapped property: func swapUInt16Data(data : NSData) -> NSData { // Copy data into UInt16 array: let count = data.length / sizeof(UInt16) var array = …
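The same adjacent-pair swap written as a plain C loop, to show what byteSwapped does to each 16-bit word (a sketch; it assumes the buffer length is even, as in the 16-byte example above):

```c
#include <stdint.h>
#include <stddef.h>

/* Swap each adjacent pair of bytes in place: b0 b1 b2 b3 -> b1 b0 b3 b2.
 * This is the per-word effect of applying UInt16.byteSwapped across the buffer. */
void swap_adjacent_bytes(uint8_t *buf, size_t len)
{
    for (size_t i = 0; i + 1 < len; i += 2) {
        uint8_t tmp = buf[i];
        buf[i] = buf[i + 1];
        buf[i + 1] = tmp;
    }
}
```

Applied to the question's data, 00 00 01 00 84 00 0c 00 becomes 00 00 00 01 00 84 00 0c, matching the desired output.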

Detecting endianness programmatically in a C++ program

混江龙づ霸主 submitted on 2019-12-23 09:03:17
Question: Is there a programmatic way to detect whether you are on a big-endian or little-endian architecture? I need to be able to write code that will execute on an Intel or PPC system using exactly the same code (i.e. no conditional compilation).

Answer 1: I don't like the method based on type punning through pointers; it will often be warned against by the compiler. That's exactly what unions are for! bool is_big_endian(void) { union { uint32_t i; char c[4]; } bint = {0x01020304}; return bint.c[0] == 1; } The …
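The answer's union check, expanded into a complete translation unit so it can be compiled and run as-is (a sketch; note that reading a union member other than the one last written is well defined in C, while in C++ it is widely supported but not guaranteed by the standard):

```c
#include <stdint.h>
#include <stdio.h>
#include <stdbool.h>

/* Store a known 32-bit pattern and inspect which byte lands first in memory. */
static bool is_big_endian(void)
{
    union {
        uint32_t i;
        char c[4];
    } bint = { 0x01020304 };

    return bint.c[0] == 1; /* first byte in memory is the most significant one */
}

int main(void)
{
    printf("%s-endian\n", is_big_endian() ? "big" : "little");
    return 0;
}
```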
