crc

Storing CRC into an AXF/ELF file

青春壹個敷衍的年華 submitted on 2019-11-30 21:22:01
I'm currently working on a C program in the LPCXpresso (Eclipse-based) tool-chain on Windows 7, an IDE with gcc targeting an NXP Cortex-M3 microprocessor. It provides a simple way to compile-link-program the microprocessor over JTAG. The result of a build is an AXF file (ELF format) that is loaded by a debug configuration. The loaded program resides in Flash memory from 0x00000 to 0x3FFFB. I'd like to include a 4-byte CRC-32 at 0x3FFFC to validate the program at start-up. I added another section and use the gcc __attribute__ directive to access that memory location. uint32_t crc32_build _
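A common way to get the CRC into the image is a post-build step rather than the linker itself. The sketch below is illustrative only (the helper name `append_crc32` and the padding choice are assumptions, and the flash size is taken from the question): pad the raw binary up to the CRC slot, compute a zlib-style CRC-32, and append it little-endian. A useful property of this CRC-32 convention is that the CRC computed over the patched image (data plus appended CRC) is always the fixed residue 0x2144DF1C, which the start-up check can test without knowing the image length in advance.

```python
import struct
import zlib

CRC_OFFSET = 0x3FFFC  # flash address of the 4-byte CRC slot (from the question)

def append_crc32(image: bytes, crc_offset: int = CRC_OFFSET) -> bytes:
    """Pad the raw binary up to crc_offset, then append CRC-32 little-endian."""
    assert len(image) <= crc_offset, "image overlaps the CRC slot"
    padded = image.ljust(crc_offset, b"\xff")     # fill with erased-flash value
    crc = zlib.crc32(padded) & 0xFFFFFFFF
    return padded + struct.pack("<I", crc)

# Start-up check (on the micro, in C): CRC-32 over the whole region
# including the stored CRC equals the constant residue 0x2144DF1C.
```

The same residue trick works in the C start-up code: run the CRC over 0x00000 through 0x3FFFF inclusive and compare against 0x2144DF1C.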

CEIWEI CheckSum CRC Verification Wizard v2.1: CRC3/CRC4/CRC5/CRC6/CRC8/CRC10/CRC11/CRC16/CRC24/CRC32/CRC40/CRC64/CRC82/Adler32

我的未来我决定 submitted on 2019-11-30 20:53:57
CEIWEI CheckSum (CRC Verification Wizard) is a general-purpose professional tool for computing cyclic redundancy check (CRC) codes and hashes such as MD5, SHA1, SHA2, SHA3, HAVAL, SHAKE, TIGER, BLAKE, RIPEMD, and GOST. Its CRC support covers 105 algorithms in total, including CRC3, CRC4, CRC5, CRC6, CRC7, CRC8, CRC11, CRC12, CRC13, CRC14, CRC15, CRC16, CRC17, CRC21, CRC24, CRC30, CRC31, CRC32, CRC40, CRC64, CRC82, and Adler32, and it displays each algorithm's standard polynomial, initial value, input/output reflection, and final XOR value. A Windows Explorer shell extension makes it convenient to display a file's CRC/hash information. Input data may be hexadecimal (HEX), strings, or files; strings may be encoded as ANSI, UTF8, Unicode, or Unicode big-endian. Supported Windows versions: WinXP, Win2003, WinVista, Win7, Win2008, Win8, Win2012, Win2016, Win10, 32/64-bit. Supported languages: Simplified Chinese, Traditional Chinese, and English. Download: local download. Source: http://www.ceiwei.com/mt/news/shownews.php?id=26

_mm_crc32_u64 poorly defined

核能气质少年 submitted on 2019-11-30 20:40:50
Why in the world was _mm_crc32_u64(...) defined like this? unsigned __int64 _mm_crc32_u64( unsigned __int64 crc, unsigned __int64 v ); The "crc32" instruction always accumulates a 32-bit CRC, never a 64-bit CRC (it is, after all, CRC32, not CRC64). If the machine instruction CRC32 happens to have a 64-bit destination operand, the upper 32 bits are ignored and filled with 0's on completion, so there is NO use to EVER have a 64-bit destination. I understand why Intel allowed a 64-bit destination operand on the instruction (for uniformity), but if I want to process data quickly, I want a source
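The asker's point, that the running CRC always fits in 32 bits no matter how wide the source operand is, can be checked against a bit-level model of the CRC-32C (Castagnoli) polynomial that the SSE4.2 crc32 instruction implements. This is a sketch of the polynomial arithmetic only; it ignores the intrinsic's calling convention, and the function names are made up for illustration.

```python
def crc32c_update(crc: int, data: bytes) -> int:
    """Bit-serial CRC-32C update (reflected polynomial 0x82F63B78), as
    performed by the SSE4.2 crc32 instruction. Note the state is shifted
    right each step, so it can never grow beyond 32 bits."""
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0x82F63B78 if crc & 1 else crc >> 1
    return crc

def crc32c(data: bytes) -> int:
    # Conventional pre/post inversion wrapped around the raw update.
    return crc32c_update(0xFFFFFFFF, data) ^ 0xFFFFFFFF
```

Even when the update consumes 8 source bytes at a time (the _mm_crc32_u64 case), the accumulator stays within 32 bits, which is why the upper half of the 64-bit destination register is always zeroed.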

Need help in correcting issues in CRC-ITU check method written in Javascript (node.js)

☆樱花仙子☆ submitted on 2019-11-30 18:11:38
Question: We are trying to code a GPS device listener in JavaScript. While doing this, we are unable to develop the right script for the CRC-ITU error check. The explanation of the CRC code generation from the protocol document is as follows: a check code may be used by the terminal or the server to determine whether the received information contains errors. To prevent errors that occur during data transmission, an error check is added to guard against data corruption, so as to increase the security and efficiency of
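In GPS-tracker protocols of this family, "CRC-ITU" usually refers to what CRC catalogs call CRC-16/X-25: the CCITT polynomial 0x1021 in bit-reversed form (0x8408), initial value 0xFFFF, and a final bitwise inversion. Whether this particular protocol document means exactly that variant is an assumption; the algorithm is sketched here in Python (porting the loop to node.js is mechanical).

```python
def crc_itu(data: bytes) -> int:
    """CRC-16/X-25: reflected poly 0x8408, init 0xFFFF, final XOR 0xFFFF."""
    crc = 0xFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc ^ 0xFFFF
```

A quick sanity check when debugging: any implementation of this variant must produce 0x906E for the ASCII string "123456789".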

Getting the CRC checksum of a byte array and adding it to that byte array

六月ゝ 毕业季﹏ submitted on 2019-11-30 13:48:52
I have this byte array:

static byte[] buf = new byte[] {
    (byte) 0x01, (byte) 0x04, (byte) 0x00,
    (byte) 0x01, (byte) 0x00, (byte) 0x01
};

Now, the CRC checksum of this byte array is supposed to be 0x60, 0x0A. I want the Java code to recreate this checksum; however, I can't seem to recreate it. I have tried crc16:

static int crc16(final byte[] buffer) {
    int crc = 0xFFFF;
    for (int j = 0; j < buffer.length; j++) {
        crc = ((crc >>> 8) | (crc << 8)) & 0xffff;
        crc ^= (buffer[j] & 0xff);  // byte to int, trunc sign
        crc ^= ((crc & 0xff) >> 4);
        crc ^= (crc << 12) & 0xffff;
        crc ^= ((crc & 0xFF) << 5) & 0xffff;
    }
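The expected bytes 0x60, 0x0A are what CRC-16/MODBUS produces for that frame: reflected polynomial 0xA001, initial value 0xFFFF, no final XOR, with the low CRC byte transmitted first. The snippet above is a different variant (CCITT-style), which is why it never matches. A Python sketch of the Modbus variant (the Java port is mechanical):

```python
def crc16_modbus(data: bytes) -> int:
    """CRC-16/MODBUS: reflected poly 0xA001, init 0xFFFF, no final XOR."""
    crc = 0xFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0xA001 if crc & 1 else crc >> 1
    return crc

frame = bytes([0x01, 0x04, 0x00, 0x01, 0x00, 0x01])
crc = crc16_modbus(frame)
wire = frame + bytes([crc & 0xFF, crc >> 8])  # low byte first: ..., 0x60, 0x0A
```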

Compute CRC of a file in Python

心已入冬 submitted on 2019-11-30 12:37:01
Question: I want to calculate the CRC of a file and get output like E45A12AC. Here's my code:

#!/usr/bin/env python
import os, sys
import zlib

def crc(fileName):
    fd = open(fileName, "rb")
    content = fd.readlines()
    fd.close()
    for eachLine in content:
        zlib.crc32(eachLine)

for eachFile in sys.argv[1:]:
    crc(eachFile)

This calculates the CRC for each line, but its output (e.g. -1767935985) is not what I want. hashlib works the way I want, but it computes the md5:

import hashlib
m = hashlib.md5()
for line in
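The snippet above discards each per-line result instead of accumulating. The fix is to thread the running CRC back into successive zlib.crc32 calls and mask to unsigned before formatting; reading fixed-size binary chunks also avoids line-splitting and loading the whole file. The chunk size below is an illustrative choice.

```python
import zlib

def file_crc32(path: str, chunk_size: int = 65536) -> str:
    crc = 0
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            crc = zlib.crc32(chunk, crc)   # feed previous CRC back in
    return "%08X" % (crc & 0xFFFFFFFF)     # unsigned, zero-padded hex
```

The `& 0xFFFFFFFF` mask is what turns a negative value like -1767935985 into the expected E45A12AC-style output.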

CRC16 ISO 13239 Implementation

帅比萌擦擦* submitted on 2019-11-30 10:20:06
I'm trying to implement CRC16 in C#. I have already tried many different implementations, but most of them give me different values. Here are some of the codes that I have already used:

private static int POLYNOMIAL = 0x8408;
private static int PRESET_VALUE = 0xFFFF;

public static int crc16(byte[] data) {
    int current_crc_value = PRESET_VALUE;
    for (int i = 0; i < data.Length; i++) {
        current_crc_value ^= data[i] & 0xFF;
        for (int j = 0; j < 8; j++) {
            if ((current_crc_value & 1) != 0) {
                current_crc_value = (current_crc_value >> 1) ^ POLYNOMIAL;
            } else {
                current_crc_value = current_crc_value >> 1;
            }
        }
    }
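For reference, the reflected-0x8408 / init-0xFFFF loop in the snippet corresponds to what CRC catalogs call CRC-16/MCRF4XX. "Different values" from different implementations almost always means different parameters: Kermit uses init 0x0000, and X-25 adds a final inversion, all on the same polynomial. Whether the target device (ISO 13239 framing) expects MCRF4XX or X-25 is an assumption to verify against its documentation. A Python rendering of the same loop:

```python
def crc16_mcrf4xx(data: bytes) -> int:
    """Reflected poly 0x8408, init 0xFFFF, no final XOR (CRC-16/MCRF4XX)."""
    crc = 0xFFFF
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc
```

Inverting the result (`crc ^ 0xFFFF`) turns this into the X-25 variant, which is the usual ISO/IEC 13239 FCS.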

CRC16 checksum: HCS08 vs. Kermit vs. XMODEM

空扰寡人 submitted on 2019-11-30 07:09:57
I'm trying to add CRC16 error detection to a Motorola HCS08 microcontroller application. My checksums don't match, though. One online CRC calculator provides both the result I see in my PC program and the result I see on the micro. It calls the micro's result "XModem" and the PC's result "Kermit." What is the difference between the way those two ancient protocols specify the use of CRC16? You can implement 16-bit IBM, CCITT, XModem, Kermit, and CCITT 1D0F using the same basic code base; see http://www.acooke.org/cute/16bitCRCAl0.html which uses code from http://www.barrgroup.com/Embedded
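Both protocols use the CCITT polynomial 0x1021 with initial value 0x0000; the difference is bit order. XModem processes bits most-significant-first (shift the CRC left), while Kermit reflects input and output (least-significant-first, implemented with the reversed polynomial 0x8408) and transmits the low CRC byte first. A side-by-side sketch of the two variants:

```python
def crc16_xmodem(data: bytes) -> int:
    """Poly 0x1021, init 0x0000, MSB-first (unreflected)."""
    crc = 0
    for b in data:
        crc ^= b << 8
        for _ in range(8):
            crc = ((crc << 1) ^ 0x1021) & 0xFFFF if crc & 0x8000 else (crc << 1) & 0xFFFF
    return crc

def crc16_kermit(data: bytes) -> int:
    """Same polynomial bit-reversed (0x8408), init 0x0000, LSB-first."""
    crc = 0
    for b in data:
        crc ^= b
        for _ in range(8):
            crc = (crc >> 1) ^ 0x8408 if crc & 1 else crc >> 1
    return crc
```

Running both over the test string "123456789" gives the two catalog check values (0x31C3 for XModem, 0x2189 for Kermit), which is a quick way to tell which variant each side is computing.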
