byte

how to cast an int array to a byte array in C

丶灬走出姿态 submitted on 2019-12-08 08:39:16
Question: Hey, I would like to know how to cast an int array in C to a byte array, and what the declaration would look like. I would appreciate it if it were simple and didn't use pointers. Thanks for the comments. Example: int addr[500] to byte[]. I would also like the resulting byte array to keep the same name. Answer 1: If you are trying to reinterpret the memory behind the int array as an array of bytes, and only then: int ints[500]; char *bytes = (char *) ints; You cannot do this without resorting to

Uploading files to file server

心不动则不痛 submitted on 2019-12-08 08:34:57
Question: Using the link below, I wrote the code for my application, but I am not able to get it right. Please refer to the link and help me out with it: Uploading files to file server using webclient class. The following is my code: protected void Button1_Click(object sender, EventArgs e) { filePath = FileUpload1.FileName; try { WebClient client = new WebClient(); NetworkCredential nc = new NetworkCredential(uName, password); Uri addy = new Uri("\\\\192.168.1.3\\upload\\"); client.Credentials = nc;

How to split 16-bit unsigned integer into array of bytes in python?

时光毁灭记忆、已成空白 submitted on 2019-12-08 08:24:33
Question: I need to split a 16-bit unsigned integer into an array of bytes (i.e. array.array('B')) in Python. For example: >>> reg_val = 0xABCD [insert python magic here] >>> print("0x%X" % myarray[0]) 0xCD >>> print("0x%X" % myarray[1]) 0xAB The way I'm currently doing it seems very complicated for something so simple: >>> import struct >>> import array >>> reg_val = 0xABCD >>> reg_val_msb, reg_val_lsb = struct.unpack("<BB", struct.pack("<H", (0xFFFF & reg_val))) >>> myarray = array.array('B') >>>
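
The round trip through struct can be avoided entirely. A minimal sketch with plain shift-and-mask arithmetic (and, as an alternative, Python 3's int.to_bytes, which handles width and byte order directly):

```python
import array

reg_val = 0xABCD

# Shift-and-mask split, least significant byte first (matches the example).
myarray = array.array('B', [reg_val & 0xFF, (reg_val >> 8) & 0xFF])
print("0x%X" % myarray[0])  # 0xCD
print("0x%X" % myarray[1])  # 0xAB

# Python 3 alternative: int.to_bytes gives the same two bytes in one call.
assert array.array('B', reg_val.to_bytes(2, 'little')) == myarray
```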

How to perform unpadding after decryption of stream using CryptoPP

↘锁芯ラ submitted on 2019-12-08 08:00:08
Question: I've got a stream to decrypt. I divide it into blocks and pass each block to the method below. The data I need to decrypt is encrypted in 16-byte blocks, and if the last block is shorter than 16 bytes, the remaining bytes are filled with padding. When decrypting, my last block's result therefore includes these additional padding bytes. How can I determine the length of the original data and return only that, or identify the padding bytes and remove them, considering
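
The question concerns Crypto++ in C++, but the unpadding logic itself is language-agnostic. Assuming the common PKCS#7 scheme (each pad byte carries the pad length), the strip step can be sketched in Python:

```python
def strip_pkcs7(block: bytes, block_size: int = 16) -> bytes:
    """Remove PKCS#7 padding from the final decrypted block.

    Assumes PKCS#7: the last byte gives the pad length, and every
    pad byte carries that same value. Other schemes (zero padding,
    ANSI X9.23) would need a different check.
    """
    pad = block[-1]
    if not 1 <= pad <= block_size:
        raise ValueError("invalid padding length")
    if block[-pad:] != bytes([pad]) * pad:
        raise ValueError("corrupt padding bytes")
    return block[:-pad]

# 11 data bytes + 5 bytes of 0x05 padding fill a 16-byte block:
padded = b"hello world" + b"\x05" * 5
assert strip_pkcs7(padded) == b"hello world"
```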

What is the best way (performance-wise) to test whether a value falls within a threshold?

别来无恙 submitted on 2019-12-08 07:58:12
Question: That is, what is the fastest way to perform the test if( a >= ( b - c > 0 ? b - c : 0 ) && a <= ( b + c < 255 ? b + c : 255 ) ) ... if a, b, and c are all unsigned char, aka BYTE? I am trying to optimize an image-scanning process to find a sub-image, and a comparison such as this is done about 3 million times per scan, so even minor optimizations could be helpful. Not sure, but maybe some sort of bitwise operation? Maybe adding 1 to c and testing for less-than and greater-than without the or-equal
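
One observation worth checking: because a is already confined to 0..255, the clamps to 0 and 255 never change the outcome, so the whole test collapses to abs(a - b) <= c. The original question is about C, but the equivalence can be verified exhaustively in Python:

```python
def original(a, b, c):
    # Direct transcription of the C expression (int arithmetic, as after
    # C's integer promotions).
    lo = b - c if b - c > 0 else 0
    hi = b + c if b + c < 255 else 255
    return lo <= a <= hi

def simplified(a, b, c):
    # Clamping is redundant: a >= max(b-c, 0) iff a >= b-c when a >= 0,
    # and a <= min(b+c, 255) iff a <= b+c when a <= 255.
    return abs(a - b) <= c

# Exhaustive over a and b, sampled over c (full 256**3 also passes).
for a in range(256):
    for b in range(256):
        for c in (0, 1, 7, 128, 255):
            assert original(a, b, c) == simplified(a, b, c)
```

In C the same idea is a single compare, e.g. `abs(a - b) <= c` with the operands promoted to int.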

How does Bitmap.Save(Stream, ImageFormat) format the data?

烈酒焚心 submitted on 2019-12-08 06:08:49
Question: I have a non-transparent colour bitmap, 2480 pixels by 3507. Using Bitmap.GetPixel(int x, int y) I am able to get the colour information of each pixel in the bitmap. If I squirt the bitmap into a byte[]: MemoryStream ms = new MemoryStream(); bmp.Save(ms, ImageFormat.Bmp); ms.Position = 0; byte[] bytes = ms.ToArray(); then I'd expect to have the same information, i.e. I could go to bytes[1000] and read the colour information for that pixel. It turns out that my array of bytes is
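
The key point is that Bitmap.Save writes a complete BMP *file*, not a raw pixel dump: the stream starts with a file header and info header, and the pixel array begins at the offset stored at byte 10 of the file (rows are also bottom-up and padded to 4-byte boundaries). A small Python sketch of reading that offset, using a fabricated minimal header for illustration:

```python
import struct

def bmp_pixel_data_offset(bmp_bytes: bytes) -> int:
    """Return the file offset where the pixel array starts in a BMP.

    A BMP file begins with the magic 'BM'; the 4-byte little-endian
    value at offset 10 (bfOffBits) says where pixel data starts, so
    everything before that offset is header/metadata, not pixels.
    """
    if bmp_bytes[:2] != b"BM":
        raise ValueError("not a BMP stream")
    (offset,) = struct.unpack_from("<I", bmp_bytes, 10)
    return offset

# Fabricated 14-byte file header claiming pixels start at byte 54
# (the usual value: 14-byte file header + 40-byte BITMAPINFOHEADER).
header = b"BM" + struct.pack("<IHHI", 58, 0, 0, 54)
assert bmp_pixel_data_offset(header) == 54
```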

Issues with Bytes from a Microcontroller in Python

亡梦爱人 submitted on 2019-12-08 06:01:56
Question: I am using Python to read microcontroller values in a Windows-based program. The encodings/byte decodings and values have begun to confuse me. Here is my situation: in the software, I am allowed to call a receive function once per byte received by the Python interpreter, once per line (not quite sure what that is), or once per message, which I assume is the entire transmission from the microcontroller. I am struggling with the best way to decode these values. The microcontroller is putting
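
The usual approach is to know the microcontroller's wire format and decode with struct. A sketch under an assumed format (little-endian unsigned 16-bit readings; the buffer contents here are made up for illustration), including the per-byte-callback case, where bytes are accumulated until a full value arrives:

```python
import struct

# Hypothetical raw buffer from the serial port; the format (three
# little-endian unsigned 16-bit readings) is an assumption here.
raw = b"\x10\x27\xe8\x03\x00\x00"

# Whole-message decoding: one unpack call.
values = struct.unpack("<3H", raw)
assert values == (10000, 1000, 0)

# Per-byte callback: accumulate until one complete reading is buffered.
buf = bytearray()
readings = []

def on_byte(b):
    buf.append(b)
    if len(buf) == 2:  # two bytes = one 16-bit reading
        readings.append(struct.unpack("<H", bytes(buf))[0])
        buf.clear()

for byte in raw:
    on_byte(byte)
assert readings == [10000, 1000, 0]
```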

Why is Python's .decode('cp037') not working on specific binary array?

别说谁变了你拦得住时间么 submitted on 2019-12-08 04:18:16
Question: When printing out DB2 query results I'm getting the following error on column 'F00002', which is a binary array. UnicodeEncodeError: 'ascii' codec can't encode character u'\xe3' in position 2: ordinal not in range(128) I am using the following line: print result[2].decode('cp037') ...just as I do for the first two columns, where the same code works fine. Why is this not working on the third column, and what is the proper decoding/encoding? Answer 1: Notice that the error is about encoding to ASCII, not
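
The distinction the answer is drawing can be demonstrated directly: decoding cp037 succeeds, and the UnicodeEncodeError only appears when the resulting text is encoded back to ASCII, which Python 2's print does implicitly on an ASCII-configured terminal. A sketch (the sample bytes are illustrative, not from the question's database):

```python
# cp037 (EBCDIC) bytes for "NAME", chosen as a sample containing only
# characters that happen to survive an ASCII round trip.
raw = b"\xd5\xc1\xd4\xc5"
text = raw.decode("cp037")
assert text == "NAME"  # the decode step itself succeeds

# The reported traceback comes from the *encode* step: any non-ASCII
# character in the decoded text reproduces it.
try:
    "caf\xe9".encode("ascii")
except UnicodeEncodeError as e:
    assert "ascii" in str(e)
```

The first two columns only "work" because their decoded text happens to be pure ASCII; the fix is to encode explicitly to the terminal's real encoding (e.g. UTF-8) rather than relying on the implicit ASCII default.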

how to work out payload size from html5 websocket

给你一囗甜甜゛ submitted on 2019-12-08 02:47:59
Question: How can I tell an HTML5 message payload length over WebSockets? I'm aware the base protocol consists of an opcode, then the length is determined by the next 1 to 8 bytes, then the masking key follows, and the rest is payload. I'm creating a Java server-side application that will receive a message from an HTML5 client and then handle it; I can handle messages up to 256 bytes. Any help to work this out, even manually, would be great (i.e. how to handle the bytes that determine the payload)
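
The length rules come from RFC 6455: the low 7 bits of the frame's second byte give the payload length; the value 126 means the real length follows in the next 2 bytes, and 127 means it follows in the next 8 bytes, both big-endian. The question targets Java, but the parsing logic can be sketched compactly in Python:

```python
import struct

def payload_length(frame: bytes):
    """Return (payload_length, header_bytes_consumed) for a WebSocket frame.

    Per RFC 6455: low 7 bits of byte 1 give the length; 126 means a
    2-byte extended length follows, 127 an 8-byte one (network order).
    Client-to-server frames also set the mask bit (0x80) in byte 1 and
    append a 4-byte masking key after these length bytes; the & 0x7F
    below strips the mask bit, but the key is not counted here.
    """
    length = frame[1] & 0x7F
    if length < 126:
        return length, 2
    if length == 126:
        return struct.unpack(">H", frame[2:4])[0], 4
    return struct.unpack(">Q", frame[2:10])[0], 10

# 0x81 = FIN + text opcode; second byte 126 -> 2-byte extended length.
frame = bytes([0x81, 126]) + struct.pack(">H", 300)
assert payload_length(frame) == (300, 4)
assert payload_length(bytes([0x81, 5])) == (5, 2)
```

This also shows why messages over 125 bytes break a parser that only reads the 7-bit field: anything from 126 bytes up requires reading the extended length.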

Typing Python sequence to Cython array (and back)

╄→гoц情女王★ submitted on 2019-12-08 02:34:43
Question: I have successfully used Cython for the first time to significantly speed up packing nibbles from one list of integers ( bytes ) into another (see Faster bit-level data packing), e.g. packing the two sequential bytes 0x0A and 0x0B into 0xAB . def pack(it): """Cythonize python nibble packing loop, typed""" cdef unsigned int n = len(it)//2 cdef unsigned int i return [ (it[i*2]//16)<<4 | it[i*2+1]//16 for i in range(n) ] While the resulting speed is satisfactory, I am curious whether this can be
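
For reference, the same loop in pure Python, useful as a correctness baseline while tuning the Cython version. Note that as written the loop takes the *high* nibble of each input byte (`it[i] // 16`), so inputs like 0xA0 and 0xB0 pack into 0xAB:

```python
def pack(it):
    """Pure-Python twin of the Cython loop: combine the high nibbles of
    consecutive byte pairs into single bytes."""
    n = len(it) // 2
    return [(it[i * 2] // 16) << 4 | it[i * 2 + 1] // 16 for i in range(n)]

# High nibbles 0xA and 0xB combine into 0xAB:
assert pack([0xA0, 0xB0]) == [0xAB]
assert pack([0x12, 0x34, 0x56, 0x78]) == [0x13, 0x57]
```

Common further speedups for loops like this are typed memoryviews over a bytes/bytearray input instead of a Python list, and returning a bytes object rather than a list of ints, so element access stays in C.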