
Linux: writes are split into 512K chunks

Submitted by 為{幸葍}努か on 2021-02-04 05:50:27

Question: I have a user-space application that generates big SCSI writes (details below). However, when I look at the SCSI commands that reach the SCSI target (i.e. the storage, connected over FC), something is splitting these writes into 512K chunks. The application basically does 1M-sized direct writes straight to the device:

fd = open("/dev/sdab", .. | O_DIRECT);
write(fd, ..., 1024 * 1024);

This code causes two SCSI WRITEs to be sent, 512K each. However, if I issue a direct SCSI command,
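The 512K boundary typically matches the block layer's per-request size limit for the device, exposed as max_sectors_kb in sysfs. A minimal sketch of the splitting arithmetic, assuming (hypothetically) that the limit is 512 KiB; on a real system the value can be read from /sys/block/<dev>/queue/max_sectors_kb:

```python
# Sketch: how a large write is split into requests no bigger than the
# block layer's max_sectors_kb limit (assumed to be 512 KiB here).

def split_request(size_bytes, max_sectors_kb=512):
    """Return the sizes of the SCSI commands a write of size_bytes becomes."""
    limit = max_sectors_kb * 1024
    chunks = []
    while size_bytes > 0:
        chunk = min(size_bytes, limit)
        chunks.append(chunk)
        size_bytes -= chunk
    return chunks

print(split_request(1024 * 1024))  # a 1M write -> two 512K commands
```

Raising max_sectors_kb (up to the device's max_hw_sectors_kb) is the usual knob for letting larger single commands through.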

Two variables in one shared memory

Submitted by 為{幸葍}努か on 2021-01-29 14:40:28

Question: Is there a way to use one shared memory segment,

shmid = shmget(shmkey, 2 * sizeof(int), 0644 | IPC_CREAT);

for two variables with different values?

int *a, *b;
a = (int *) shmat(shmid, NULL, 0);
b = (int *) shmat(shmid, NULL, 0); // use the same block of shared memory ??

Thank you very much!

Answer 1: Apparently (reading the manual) shmat gets you a single block of memory here, of size 2 * sizeof(int). If so, then you can just adjust the pointer:

int *a, *b;
a = shmat(shmid, NULL, 0);
b = a + 1;

Also,
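The same idea — one segment, two variables at different offsets — can be sketched with Python's standard multiprocessing.shared_memory module (this is an illustration of the pointer-offset trick, not the original System V answer):

```python
import struct
from multiprocessing import shared_memory

# One shared block big enough for two C ints (4 bytes each),
# mirroring shmget(shmkey, 2 * sizeof(int), ...).
shm = shared_memory.SharedMemory(create=True, size=2 * 4)

# "a" lives at offset 0, "b" at offset 4 -- same block, two variables.
struct.pack_into('i', shm.buf, 0, 11)   # a = 11
struct.pack_into('i', shm.buf, 4, 22)   # b = 22

a, b = struct.unpack_from('ii', shm.buf, 0)
print(a, b)  # -> 11 22

shm.close()
shm.unlink()
```

The second shmat in the question is unnecessary for the same reason: both variables already live inside the one mapped block, so a single attach plus an offset is enough.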

perl remove string block from file and save to file

Submitted by 我与影子孤独终老i on 2021-01-29 09:00:52

Question: I have a file that looks like this:

string 1 {
  abc {
    session 1
  }
  fairPrice {
    ID LU0432618274456
    Source 4
    service xyz
  }
}
string 2 {
  abc {
    session 23
  }
  fairPrice {
    ID LU036524565456171
    Source 4
    service tzu
  }
}

My program should read in the file with a given search parameter (for example "string 1"), match the complete block up to its closing "}", and remove that part from the file. Can someone assist with that? I have some code so far, but how can I do the removal and save back to the same file? my
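One way to handle the nested braces is to count them rather than regex-match; a sketch of that approach (in Python rather than the question's Perl, with an illustrative sample string):

```python
def remove_block(text, key):
    """Remove 'key { ... }' from text, honouring nested braces."""
    start = text.find(key)
    if start == -1:
        return text
    i = text.find('{', start)
    depth = 0
    for j in range(i, len(text)):
        if text[j] == '{':
            depth += 1
        elif text[j] == '}':
            depth -= 1
            if depth == 0:                       # matching close brace found
                return text[:start] + text[j + 1:]
    return text  # unbalanced braces: leave the text untouched

sample = "string 1 { abc { session 1 } } string 2 { abc { session 23 } }"
print(remove_block(sample, "string 1"))
```

Writing the result back is then a separate step: read the whole file, transform the string, and rewrite the file (or write to a temp file and rename, which is safer against crashes mid-write).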

Chrome blocks download at second attempt

Submitted by ↘锁芯ラ on 2021-01-29 07:04:47

Question: I have an anchor on my page with href set to a PDF file URL and download set to a string. On the first attempt the file downloads fine, but on the second and subsequent attempts it fails with the error "This site attempted to download multiple files automatically". When I try with Firefox, the download works every single time. What's going on here?

Answer 1: Settings -> Advanced -> Content settings -> Automatic downloads. You have two options. Use the slider next to Do not allow any site to

In C++ block scope, is re-using stack memory an optimization?

Submitted by 孤街醉人 on 2021-01-29 06:57:56

Question: I tested the following code:

void f1() {
    int x = 1;
    cout << "f1 : " << &x << endl;
}
void f2() {
    int x = 2;
    cout << "f2 : " << &x << endl;
}
void f3() {
    {
        int x = 3;
        cout << "f3_1: " << &x << endl;
    }
    {
        int x = 4;
        cout << "f3_2: " << &x << endl;
    }
}
int main() {
    f1();
    f2();
    f3();
}

In a release build, the output is:

f1 : 00FAF780
f2 : 00FAF780
f3_1: 00FAF780
f3_2: 00FAF780 <-- as I expected

but in a debug build:

f1 : 012FF908
f2 : 012FF908
f3_1: 012FF908
f3_2: 012FF8FC <-- what??

I thought the

Read blocks from a file object until x bytes from the end

Submitted by 十年热恋 on 2021-01-07 02:46:48

Question: I need to read chunks of 64KB in a loop and process them, but stop 16 bytes before the end of the file: the last 16 bytes are tag metadata. The file might be very large, so I can't read it all into RAM. All the solutions I've found are a bit clumsy and/or unpythonic.

with open('myfile', 'rb') as f:
    while True:
        block = f.read(65536)
        if not block:
            break
        process_block(block)

If 16 <= len(block) < 65536, it's easy: it's the last block, so useful_data = block[:-16] and tag = block[-16:]. If len
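One common pattern that avoids the awkward last-block case analysis is to hold back a 16-byte tail between reads, so the tag can never leak into the processed stream. A sketch, where process_block and the in-memory file are stand-ins for the question's real ones:

```python
import io

def read_until_tag(f, process_block, tag_len=16, block_size=65536):
    """Feed process_block everything except the last tag_len bytes;
    return those trailing bytes (the tag)."""
    tail = b''
    while True:
        chunk = f.read(block_size)
        if not chunk:
            break
        tail += chunk
        if len(tail) > tag_len:
            process_block(tail[:-tag_len])  # always keep tag_len bytes back
            tail = tail[-tag_len:]
    return tail  # may be shorter than tag_len if the file itself was

# Usage with an in-memory file standing in for 'myfile':
data = bytes(range(256)) * 300 + b'T' * 16   # payload + 16-byte tag
out = []
tag = read_until_tag(io.BytesIO(data), out.append, block_size=1000)
```

Memory use stays bounded at block_size + tag_len regardless of file size, since tail never grows past one chunk plus the held-back bytes.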

What kind of padding should AES use?

Submitted by 心不动则不痛 on 2020-12-03 07:42:09

Question: I have implemented AES encryption (homework), but I have stumbled on the problem of padding the messages. My messages are byte arrays, like this:

public byte[] encrypt(byte[] message) {
    int size = (int) Math.ceil(message.length / 16.0);
    byte[] result = new byte[size * 16];
    for (int i = 0; i < size; i++) {
        if ((i + 1) * 16 > message.length) {
            // padding here????
        } else {
            byte[] block = Arrays.copyOfRange(message, i * 16, (i + 1) * 16);
            byte[] encryptedBlock = encryptBlock(block);
            System
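The standard answer for AES is PKCS#7 padding: append N bytes, each with value N, where N is the number of bytes needed to reach the 16-byte block boundary (a full extra block when the message is already aligned, so unpadding is never ambiguous). A sketch of the scheme, in Python rather than the question's Java:

```python
def pkcs7_pad(message, block_size=16):
    """PKCS#7: append N copies of the byte value N, 1 <= N <= block_size."""
    n = block_size - (len(message) % block_size)
    return message + bytes([n]) * n

def pkcs7_unpad(padded):
    """Strip PKCS#7 padding: the last byte says how many bytes to drop."""
    n = padded[-1]
    return padded[:-n]

print(pkcs7_pad(b'hello'))                   # 11 padding bytes of value 0x0b
print(pkcs7_unpad(pkcs7_pad(b'hello')))      # -> b'hello'
```

Note also that this code uses the Math.ceil size for an aligned message, which leaves no room for the mandatory extra block; with PKCS#7 the output should be sized as (message.length / 16 + 1) * 16.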