checksum

Google Drive MD5 checksum for files

拟墨画扇 submitted on 2019-12-04 07:36:41
Question: I'm not a programmer, just a regular user of Google Drive. I want to see whether my files were uploaded correctly. I go through a whole process in the OAuth 2.0 Playground that lists all files and shows their MD5 checksums, but also a lot of other information per file. If I upload a new file, it's hard to search for it and verify its MD5 checksum. Is there an easier way (through an app, maybe?) to show/list MD5 checksums for the uploaded files? I wonder why the Details pane doesn't have it, only lists the file
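One way to avoid scrolling through the full Playground output is to ask the Drive API v3 for just the name and md5Checksum of each file, then compare that against a locally computed MD5 (e.g. from md5sum). A minimal sketch, assuming an access token pasted from the OAuth 2.0 Playground (the ACCESS_TOKEN value is a placeholder):

```python
import requests

ACCESS_TOKEN = "ya29...."   # hypothetical: paste a token from the OAuth 2.0 Playground

# Drive API v3 files.list, trimmed down to file names and MD5 checksums only.
resp = requests.get(
    "https://www.googleapis.com/drive/v3/files",
    params={"fields": "files(name, md5Checksum)", "pageSize": 1000},
    headers={"Authorization": f"Bearer {ACCESS_TOKEN}"},
)
resp.raise_for_status()
for f in resp.json().get("files", []):
    # md5Checksum is only reported for binary content stored in Drive.
    print(f.get("md5Checksum", "-"), f["name"])
```

Comparing the reported md5Checksum of a freshly uploaded file against md5sum of the local copy confirms that the upload is intact.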

Compare checksum of files between two servers and report mismatch

孤街浪徒 submitted on 2019-12-04 04:01:47
I have to compare the checksums of all files in the /primary and /secondary folders on machineA with the files in the /bat/snap/ folder on the remote server machineB. The remote server has many more files than the ones we have on machineA. If there is any checksum mismatch, I want to report all the files on machineA that have issues, with their full paths, and exit with a non-zero status code. If everything matches, exit zero. I wrote one command (not sure whether there is a better way to write it) that I am running on machineA, but it's very slow. Is there any way to make it
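A sketch of one way to do this in Python (the directory paths come from the question; the user@machineB login string and everything else is illustrative). It pulls all remote checksums in a single ssh/md5sum round trip, hashes the local files in chunks, prints mismatching local paths, and uses the exit codes described above.

```python
import hashlib
import subprocess
import sys
from pathlib import Path

REMOTE = "user@machineB"            # hypothetical ssh login for the remote server
REMOTE_DIR = "/bat/snap"
LOCAL_DIRS = ["/primary", "/secondary"]

def local_md5(path, chunk_size=1 << 20):
    """MD5 of a local file, read in chunks to keep memory use flat."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# One round trip: checksum every file in the remote folder with md5sum.
out = subprocess.run(
    ["ssh", REMOTE, f"cd {REMOTE_DIR} && md5sum *"],
    capture_output=True, text=True, check=True,
).stdout
remote = {}
for line in out.splitlines():
    digest, name = line.split(None, 1)
    remote[name.lstrip("*")] = digest   # md5sum may prefix '*' in binary mode

mismatched = [
    str(p)
    for d in LOCAL_DIRS
    for p in Path(d).iterdir()
    if p.is_file() and remote.get(p.name) != local_md5(p)
]

if mismatched:
    print("\n".join(mismatched))
    sys.exit(1)
sys.exit(0)
```

The single md5sum invocation is what makes this faster than checksumming the remote files one ssh call at a time.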

What algorithm to use to calculate a check digit?

混江龙づ霸主 submitted on 2019-12-04 03:26:15
What algorithm should I use to calculate a check digit for a list of digits? The length of the list is between 8 and 12 digits. See also: How to generate a verification code/number? The Luhn algorithm is good enough for the credit card industry... As RichieHindle points out, the Luhn algorithm is pretty good. It will detect (but not correct) any single error or transposition (except a transposition of 0 and 9). You could also consider the algorithm for ISBN check digits, although for old-style ISBNs the check digit is sometimes "X", which may be a problem for you if you're using integer fields. New
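To illustrate the ISBN point above, here is a minimal sketch of the ISBN-10 check digit: each of the nine body digits is weighted by its 1-based position, the sum is reduced mod 11, and a result of 10 is written as "X" (which is why an integer-only field can be a problem).

```python
def isbn10_check_digit(body):
    """Check digit for the 9-digit ISBN-10 body; a value of 10 is written as 'X'."""
    digits = [int(c) for c in body]
    total = sum(weight * d for weight, d in enumerate(digits, start=1))
    check = total % 11
    return "X" if check == 10 else str(check)

# Example: the body of ISBN 0-306-40615-2 is 030640615.
assert isbn10_check_digit("030640615") == "2"
```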

Get MD5 Checksum for Very Large Files

只愿长相守 submitted on 2019-12-04 03:08:36
Question: I've written a script that reads through all files in a directory and returns the MD5 hash for each file. However, it produces nothing for a rather large file. I assume that the interpreter has some maximum processing time set, and since it takes too long to get this value, it just skips along to the other files. Is there any way to get an MD5 checksum for large files through PHP? If not, could it be done through a cron job with cPanel? I gave it a shot there but it doesn't seem that my
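The usual fix is to hash the file in fixed-size chunks rather than loading it whole, so memory use stays flat and only the wall-clock limit matters. A minimal Python sketch of the chunked approach (the same idea applies in PHP with hash_init()/hash_update()/hash_final()):

```python
import hashlib

def md5_of_large_file(path, chunk_size=8 * 1024 * 1024):
    """MD5 of an arbitrarily large file, read 8 MB at a time."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)        # memory use stays at one chunk
    return h.hexdigest()
```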

Is there a checksum algorithm that also supports “subtracting” data from it?

心不动则不痛 submitted on 2019-12-04 02:25:28
I have a system with roughly 100 million documents, and I'd like to keep track of their modifications between mirrors. In order to exchange information about modifications efficiently, I want to send information about modified documents by day, not per individual document. Something like this: [ 2012/03/26, cs26], [ 2012/03/25, cs25], [ 2012/03/24, cs24], ... where each cs is the checksum of the timestamps of all documents created on a particular day. Now, the problem I'm running into is that I don't know of an algorithm that could "subtract" data from the checksum when a document is being
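One common approach (sketched below, not taken from the original thread) is to make the per-day checksum a plain modular sum, or XOR, of fixed-width hashes of the individual items; removing an item is then just subtracting (or XOR-ing) its hash back out.

```python
import hashlib

MASK = (1 << 64) - 1  # keep the running checksum to 64 bits

def item_hash(doc_id, timestamp):
    """64-bit hash of one document's id plus its modification timestamp."""
    digest = hashlib.sha256(f"{doc_id}:{timestamp}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

def add_item(day_checksum, doc_id, timestamp):
    return (day_checksum + item_hash(doc_id, timestamp)) & MASK

def remove_item(day_checksum, doc_id, timestamp):
    return (day_checksum - item_hash(doc_id, timestamp)) & MASK

# Adding a document and then removing it restores the original checksum.
cs = add_item(0, "doc-42", "2012-03-26T10:00:00")
assert remove_item(cs, "doc-42", "2012-03-26T10:00:00") == 0
```

Order independence comes for free because addition is commutative; the trade-off is weaker collision resistance than a cryptographic hash over the whole set.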

UDP checksum calculation

爱⌒轻易说出口 submitted on 2019-12-03 19:03:07
Question: The UDP header struct defined in /usr/include/netinet/udp.h is as follows: struct udphdr { u_int16_t source; u_int16_t dest; u_int16_t len; u_int16_t check; }; What value is stored in the check field of the header? How do I verify whether the checksum is correct? I mean, over what data is the checksum computed? (Is it just the UDP header, or the UDP header plus the payload that follows it?) Thanks. Answer 1: The UDP checksum is performed over the entire payload, and the other fields in the header, and some
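Per RFC 768, the checksum is the 16-bit one's complement of the one's-complement sum over an IPv4 pseudo-header (source address, destination address, protocol 17, UDP length), the UDP header with its check field zeroed, and the payload. A minimal sketch of that computation (illustrative, not taken from the thread):

```python
import socket
import struct

def ones_complement_sum(data):
    """16-bit one's-complement sum used by the Internet checksum."""
    if len(data) % 2:
        data += b"\x00"                              # pad odd-length input
    total = 0
    for (word,) in struct.iter_unpack("!H", data):
        total += word
        total = (total & 0xFFFF) + (total >> 16)     # fold carries back in
    return total

def udp_checksum(src_ip, dst_ip, udp_segment):
    """Checksum of a UDP header + payload whose check field is set to zero."""
    pseudo = struct.pack("!4s4sBBH",
                         socket.inet_aton(src_ip), socket.inet_aton(dst_ip),
                         0, socket.IPPROTO_UDP, len(udp_segment))
    checksum = ~ones_complement_sum(pseudo + udp_segment) & 0xFFFF
    return checksum or 0xFFFF                        # 0 is transmitted as all ones
```

To verify a received datagram, recompute the one's-complement sum over the pseudo-header and the segment as received (checksum field included); a correct datagram sums to 0xFFFF.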

Generating Luhn Checksums

隐身守侯 submitted on 2019-12-03 18:54:59
Question: There are lots of implementations for validating Luhn checksums but very few for generating them. I've come across this one; however, in my tests it has proven to be buggy, and I don't understand the logic behind the delta variable. I've written this function that supposedly should generate Luhn checksums, but for some reason I haven't yet understood, the generated checksums are invalid half of the time. function Luhn($number, $iterations = 1) { while ($iterations-- >= 1) { $stack = 0;
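For reference, a compact generation routine (a sketch in Python, not a fix of the truncated PHP above): walking the partial number from the right, double every second digit starting with the rightmost one, because the check digit that will be appended occupies the final, undoubled position; the check digit then brings the total to a multiple of 10.

```python
def luhn_check_digit(partial):
    """Digit that makes partial + digit pass the Luhn test."""
    total = 0
    for i, ch in enumerate(reversed(str(partial))):
        d = int(ch)
        if i % 2 == 0:            # rightmost digit of the partial gets doubled
            d *= 2
            if d > 9:
                d -= 9
        total += d
    return (10 - total % 10) % 10

def luhn_generate(partial):
    """Append the Luhn check digit to a partial number."""
    return f"{partial}{luhn_check_digit(partial)}"

# Classic example: 7992739871 gets check digit 3.
assert luhn_generate("7992739871") == "79927398713"
```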

What is the inverse of crc32_combine()'s matrix trick?

青春壹個敷衍的年華 submitted on 2019-12-03 16:33:56
zlib's crc32_combine() takes crcA, crcB, and lengthB to calculate crcAB. # returns crcAB crc32_combine(crcA, crcB, lenB) Using concepts from Mark Adler's awesome posts here and here, I was able to produce crc32_trim_trailing.pl, which takes crcAB, crcB, and lengthB to calculate crcA (I use this to peel off padding of a known length and value). # prints crcA perl crc32_trim_trailing.pl $crcAB $crcB $lenB Unfortunately, this uses the principles of the slow method described, where each null byte must be peeled off one at a time. It's slow, but it is a good proof of concept. I've been working to make a
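To make the "slow method" concrete, here is a sketch (my own illustration, not the Perl script above) of peeling one trailing byte of known value off a zlib-style CRC-32. It relies on the standard reflected CRC-32 table and on the fact that the high bytes of its 256 entries are distinct, which lets the forward table lookup be identified and undone.

```python
import zlib

POLY = 0xEDB88320                       # reflected CRC-32 polynomial used by zlib

TABLE = []
for n in range(256):
    c = n
    for _ in range(8):
        c = (c >> 1) ^ (POLY if c & 1 else 0)
    TABLE.append(c)

# The high bytes of the table entries are distinct, so they identify the index.
HIGH_BYTE_TO_INDEX = {t >> 24: i for i, t in enumerate(TABLE)}

def crc32_peel_byte(crc_ab, last_byte):
    """Given crc32(A + bytes([last_byte])), recover crc32(A)."""
    reg = crc_ab ^ 0xFFFFFFFF                       # undo the final xor
    i = HIGH_BYTE_TO_INDEX[reg >> 24]               # which table entry was xored in
    prev = (((reg ^ TABLE[i]) << 8) & 0xFFFFFFFF) | (i ^ last_byte)
    return prev ^ 0xFFFFFFFF                        # re-apply the final xor

# Peel a known run of trailing null padding one byte at a time (the slow way).
crc_a = zlib.crc32(b"some data")
crc_ab = zlib.crc32(b"some data" + b"\x00" * 4)
for _ in range(4):
    crc_ab = crc32_peel_byte(crc_ab, 0)
assert crc_ab == crc_a
```

Doing this lenB times is exactly why the approach is O(lenB); the matrix approach the question asks about replaces the per-byte loop with O(log lenB) GF(2) matrix operations.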

How reliable is the adler32 checksum?

半世苍凉 submitted on 2019-12-03 10:02:37
I wonder how reliable the Adler-32 checksum is compared to, e.g., MD5 checksums? Wikipedia says that Adler-32 is "much less reliable" than MD5, so I wonder how much less, and in what way? More specifically, I'm wondering whether it is reliable enough as a consistency check for long-term archiving of (tar) files of size 20GB+? For details on the error-checking capabilities of the Adler-32 checksum, see for example Revisiting Fletcher and Adler Checksums (Maxino, 2006). This paper contains an analysis of the Hamming distance provided by these two checksums and provides an indication of the
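Whatever the verdict on reliability, Adler-32 is cheap to compute incrementally over archives of that size; a minimal sketch using zlib's running API (illustrative, not from the answer):

```python
import zlib

def adler32_of_file(path, chunk_size=1 << 20):
    """Running Adler-32 of an arbitrarily large file, one chunk at a time."""
    value = 1                                   # Adler-32 starts at 1, not 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            value = zlib.adler32(chunk, value)  # carry the running value forward
    return value & 0xFFFFFFFF
```

zlib.crc32 has the same calling convention (with an initial value of 0), so switching the archive check to CRC-32 is a one-line change.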

What hash algorithms are parallelizable? Optimizing the hashing of large files on multi-core CPUs

坚强是说给别人听的谎言 submitted on 2019-12-03 09:35:12
I'm interested in optimizing the hashing of some large files (optimizing wall clock time). The I/O has been optimized well enough already and the I/O device (local SSD) is only tapped at about 25% of capacity, while one of the CPU cores is completely maxed-out. I have more cores available, and in the future will likely have even more cores. So far I've only been able to tap into more cores if I happen to need multiple hashes of the same file, say an MD5 AND a SHA256 at the same time. I can use the same I/O stream to feed two or more hash algorithms, and I get the faster algorithms done for
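As a concrete illustration of the "one I/O stream, several hash algorithms" pattern (my sketch, not the asker's code): read the file once and hand each chunk to one thread per algorithm. In CPython this does use multiple cores, because hashlib releases the GIL while hashing buffers larger than roughly 2 KB.

```python
import hashlib
import queue
import threading

def _hash_worker(algo, chunks, results):
    """Consume chunks from a queue and feed them into one hash object."""
    h = hashlib.new(algo)
    while True:
        chunk = chunks.get()
        if chunk is None:                    # sentinel: end of file
            break
        h.update(chunk)                      # GIL is released for large buffers
    results[algo] = h.hexdigest()

def multi_hash(path, algorithms=("md5", "sha256"), chunk_size=1 << 20):
    """Hash a file once from disk, computing several digests concurrently."""
    queues = {a: queue.Queue(maxsize=8) for a in algorithms}
    results = {}
    workers = [threading.Thread(target=_hash_worker, args=(a, queues[a], results))
               for a in algorithms]
    for w in workers:
        w.start()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            for q in queues.values():
                q.put(chunk)
    for q in queues.values():
        q.put(None)
    for w in workers:
        w.join()
    return results
```

This only helps when multiple digests are wanted; making a single hash use several cores requires a tree or chunked construction, which changes the resulting digest.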