checksum

8-bit XOR checksum using JavaScript

感情迁移 submitted on 2019-12-11 04:15:32
Question: I'm trying to mimic a Windows application that formats a message and sends it over UART via USB to a device that displays that message. The application calculates a checksum and appends it after the message; otherwise the device will not accept the command. The checksum is NOT a CRC-8 checksum, but what is it, then? Using a USB monitor, I've seen the following test cases: ASCII: <L1><PA><IB><MA><WC><OM>Test! HEX: 3c4c313e3c50413e3c49423e3c4d413e3c57433e3c4f4d3e5465737421 Checksum: 6A ASCII: <L1>
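The bytes in the captured test case do in fact XOR together to the stated checksum, so a plain 8-bit XOR over every message byte matches this sample (more captures would be needed to confirm it generally). A minimal Python sketch, using the hex frame from the question:

```python
import functools

def xor8(frame: bytes) -> int:
    """8-bit checksum: XOR of every byte in the frame."""
    return functools.reduce(lambda acc, b: acc ^ b, frame, 0)

# The captured test case from the question:
frame = bytes.fromhex(
    "3c4c313e3c50413e3c49423e3c4d413e3c57433e3c4f4d3e5465737421"
)
print(f"{xor8(frame):02X}")  # → 6A, matching the monitored checksum
```

The same fold is a one-liner in JavaScript (`bytes.reduce((a, b) => a ^ b, 0)`) once the frame is in a byte array.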

Rails: Storing a 256 bit checksum as binary in database

夙愿已清 submitted on 2019-12-11 03:35:18
Question: I'm trying to store a SHA-2 256-bit checksum in a column: create_table :checksums do |t| t.binary :value, :null => false, :limit => 32 end I'm storing the value like so: c = Checksum.new big_num = Digest::SHA2.new.update("some string to be checksum'd").hexdigest.to_i(16) c.value = big_num On the assignment of big_num to c.value I get: NoMethodError: undefined method `gsub' for #<Bignum:0x00000001ea48f8> Anybody know what I'm doing wrong? Answer 1: If you're going to be storing your SHA2 in a
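The error comes from handing ActiveRecord a Bignum where it expects a string: a binary column wants the raw 32-byte digest (in Ruby, `digest` instead of `hexdigest`), not an integer. A Python sketch of the size difference between the two representations, as an illustration of why the column limits differ:

```python
import hashlib

data = b"some string to be checksum'd"
digest = hashlib.sha256(data).digest()      # raw bytes, for a binary column
hexstr = hashlib.sha256(data).hexdigest()   # text, for a string column

print(len(digest))  # 32 bytes  → fits a binary column with :limit => 32
print(len(hexstr))  # 64 chars  → would need a 64-character string column
```

Storing the raw digest halves the storage and avoids any integer conversion round trip.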

Image file checksum as a unique content comparison optimization

偶尔善良 submitted on 2019-12-11 02:33:45
Question: Users are uploading photos to our PHP-based system. Some of them we mark as forbidden because of irrelevant content. I'm looking for an optimization of an 'AUTO-COMPARE' algorithm that skips these photos marked as forbidden. Every upload needs to be compared against many forbidden ones. Possible solutions: 1/ Store forbidden files and compare whole content - works well but is slow. 2/ Store image file checksums and compare the checksums - this is the idea to improve the speed. 3/ Any

Checksum mismatch on Subversion merge

自闭症网瘾萝莉.ら submitted on 2019-12-10 22:16:26
Question: I'm using Subversion 1.6.17 on a SuSE host to try to merge a single branch into a local, updated working copy. I get part of the way through the merge, then it stops on the same file every time I try, with: svn: Checksum mismatch for 'path/to/javascript/files/myjavascriptfile.js': expected checksum: 685b3a63667d3eb4cc4a09ccc960ea2c actual checksum: 7c4dfb8a7065aa2c616a1680c1703914 I've checked the .svn/text-base/ version of the file, and md5sum shows the correct "expected" checksum, as does an

Scapy TCP checksum recalculation: odd behaviour

那年仲夏 submitted on 2019-12-10 19:55:33
Question: I'm trying to do TCP ACK spoofing. I sniff one ACK packet from a pcap file and send it in a loop, incrementing its ACK number as well as another option field. Sniffing part (pre-spoofing): from scapy.all import * from struct import unpack, pack pkt = sniff(offline="mptcpdemo.pcap", filter="tcp", count=15) i=6 while True: ack_pkt = pkt[i] if ack_pkt.sprintf('%TCP.flags%') == 'A': break i+=1 del ack_pkt.chksum del ack_pkt[TCP].chksum print ack_pkt.chksum, ack_pkt[TCP].chksum hex2pkt = ack_pkt._
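Deleting both `chksum` fields, as the question does, is the usual way to make Scapy recompute them when the packet is rebuilt. The underlying checksum is the Internet one's-complement sum, and after changing a single 16-bit field (such as part of an ACK number) it can also be patched incrementally per RFC 1624: HC' = ~(~HC + ~m + m'). A pure-Python sketch, independent of Scapy; the example values are from a textbook IPv4-header checksum exercise:

```python
def incremental_checksum(old_cksum: int, old_word: int, new_word: int) -> int:
    """RFC 1624 incremental update: HC' = ~(~HC + ~m + m'),
    all arithmetic in 16-bit one's complement."""
    s = (~old_cksum & 0xFFFF) + (~old_word & 0xFFFF) + (new_word & 0xFFFF)
    s = (s & 0xFFFF) + (s >> 16)   # fold carries back into 16 bits
    s = (s & 0xFFFF) + (s >> 16)
    return ~s & 0xFFFF

# Header checksum 0xB861; one word changes from 0xC0A8 to 0xC0A9:
print(hex(incremental_checksum(0xB861, 0xC0A8, 0xC0A9)))  # → 0xb860
```

In a tight spoofing loop this avoids rebuilding the whole packet per iteration, though letting Scapy recompute is simpler and fine at low rates.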

TCP checksum and TCP offloading

巧了我就是萌 submitted on 2019-12-10 18:08:22
Question: I am using raw sockets to create my own socket, and I need to set the TCP checksum. I have tried a lot of references but none of them work (I am using Wireshark for testing). Could you help me, please? By the way, I read somewhere that if you set the TCP checksum to 0, the hardware will calculate the checksum automatically for you. Is this true? I tried it, but in Wireshark the TCP checksum shows a value of 0x0000 and says TCP offload. I also read about TCP offloading, and didn't understand: is it
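A zeroed checksum is only filled in when the NIC or driver performs checksum offload, which is exactly why Wireshark, capturing on the sending host before the NIC, shows 0x0000 with an offload note. With raw sockets you normally compute it yourself: build the TCP pseudo-header (source IP, destination IP, zero byte, protocol, TCP length), prepend it to the TCP segment with the checksum field zeroed, and run the RFC 1071 Internet checksum over the whole thing. The core routine, sketched here with a classic IPv4-header test vector:

```python
import struct

def internet_checksum(data: bytes) -> int:
    """RFC 1071 Internet checksum: one's-complement sum of 16-bit words."""
    if len(data) % 2:                   # pad odd-length data with a zero byte
        data += b"\x00"
    total = sum(struct.unpack(f"!{len(data) // 2}H", data))
    while total >> 16:                  # fold carries back into 16 bits
        total = (total & 0xFFFF) + (total >> 16)
    return ~total & 0xFFFF

# Well-known IPv4 header example with the checksum field zeroed:
header = bytes.fromhex("45000073000040004011" "0000" "c0a80001c0a800c7")
print(f"{internet_checksum(header):04x}")  # → b861
```

A useful self-check: running the routine over the header with the computed checksum inserted must return 0.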

wkhtmltopdf generates a different checksum on every run

喜欢而已 submitted on 2019-12-10 15:41:40
Question: I'm trying to verify that the content generated by wkhtmltopdf is the same from run to run; however, every time I run wkhtmltopdf I get a different hash / checksum value for the same page. We are talking about something really basic, like an HTML page of: <html> <body> <p> This is some text</p> </body> </html> I get a different MD5 or SHA-256 hash every time I run wkhtmltopdf with a simple line of: ./wkhtmltopdf example.html ~/Documents/a.pdf And using a Python hasher of: def shasum
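The varying hash is expected: PDF output embeds run-dependent metadata, notably the /CreationDate string and the document /ID, so byte-identical output is not guaranteed even for identical input. One workaround is to blank those fields before hashing; the regexes below are a sketch against the textual PDF syntax and may need adjusting for a given wkhtmltopdf version:

```python
import hashlib
import re

def pdf_content_hash(pdf_bytes: bytes) -> str:
    """Hash a PDF after blanking run-dependent metadata (/CreationDate, /ID)."""
    stripped = re.sub(rb"/CreationDate\s*\(D:[^)]*\)", b"/CreationDate()", pdf_bytes)
    stripped = re.sub(rb"/ID\s*\[[^\]]*\]", b"/ID[]", stripped)
    return hashlib.sha256(stripped).hexdigest()
```

A more robust alternative is to compare the rendered pages (e.g. rasterize both PDFs and hash the pixels), since compressed object streams can also hide timestamp differences.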

Use a combination of SHA1+MD5

送分小仙女□ submitted on 2019-12-10 15:22:29
Question: I'm trying to use a secure way to create checksums for files (larger than 10 GB!). SHA-256 is secure enough for me, but the algorithm is so computationally expensive that it is not suitable. Well, I know that both SHA-1 and MD5 checksums are insecure due to collisions. So I think the fastest and safest way is to combine MD5 with SHA-1, like SHA1+MD5, and I don't think there is a way to get a file (collision) with the same MD5 and SHA-1 at the same time. So is combining SHA1+MD5 secure enough for
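Whatever the security verdict on the combination (Joux's multicollision result shows a concatenation of iterated hashes is not much stronger than its strongest component, so SHA1+MD5 should not be treated as collision-resistant against a determined attacker), computing both digests in a single pass over the file keeps the I/O cost the same as one hash, which matters for 10 GB inputs. A sketch:

```python
import hashlib

def md5_sha1(path: str, chunk_size: int = 1 << 20) -> str:
    """One pass over the file, feeding both hashes, then concatenating
    the hex digests (MD5 first, then SHA-1)."""
    md5, sha1 = hashlib.md5(), hashlib.sha1()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            md5.update(chunk)
            sha1.update(chunk)
    return md5.hexdigest() + sha1.hexdigest()
```

For integrity checking against accidental corruption either hash alone is plenty; the combination only buys a margin against deliberate tampering, and BLAKE2/SHA-256 with hardware support may be the better answer there.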

Getting a file checksum directly from the filesystem instead of calculating it explicitly

依然范特西╮ submitted on 2019-12-10 14:55:41
Question: I'm guessing that a typical filesystem tends to keep some kind of checksum/CRC/hash of every file it manages, so it can detect file corruption. Is that guess correct? And if yes, is there a way to access it? I'm primarily interested in Windows and NTFS, but comments on other platforms would be welcome as well... Language is unimportant at this point, but I'd like to avoid assembler if possible. Thanks. Answer 1: OK, it appears that what I'm asking is impossible. BTW, this was also discussed here:

Checksums on zip files

廉价感情. submitted on 2019-12-10 13:59:33
Question: I am currently working on a tool that uploads a group of files, then uses MD5 checksums to compare the files to the last batch that were uploaded and tells you which files have changed. For regular files this works fine, but some of the uploaded files are zip archives, which almost always show up as changed, even when the files inside them are the same. Is there a way to perform a different type of checksum to check whether these files have changed without having to unzip each one individually and
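Zip archives already store a CRC-32 for every member in the central directory, so the contents can be compared without extracting anything: fingerprint each archive by its sorted (name, CRC) pairs, ignoring the timestamps and ordering that make whole-file checksums differ on every re-zip. A sketch using Python's standard `zipfile` module:

```python
import hashlib
import zipfile

def zip_content_fingerprint(path_or_file) -> str:
    """Fingerprint a zip by its members' names and stored CRC-32s,
    ignoring metadata (timestamps, entry order) that changes on re-zip."""
    with zipfile.ZipFile(path_or_file) as zf:
        entries = sorted((info.filename, info.CRC) for info in zf.infolist())
    h = hashlib.md5()
    for name, crc in entries:
        h.update(f"{name}:{crc:08x}".encode())
    return h.hexdigest()
```

Two archives built from identical files at different times then compare equal under this fingerprint even though their raw MD5 sums differ. (CRC-32 is weak against deliberate tampering; for change detection it is fine.)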