

Subject: [Boost-users] [CRC] Differences in crc_16_type and crc_32_type
From: Ryan McConnehey (mccorywork_at_[hidden])
Date: 2010-03-05 00:35:15


Work wants to use standard CRC algorithms for both 16- and 32-bit
results. This would simplify our documentation, since the process would
not need to be defined by example. The Boost CRC library predefines four
common CRC algorithms; I've included three of the definitions for ease
of discussion. Using Wikipedia and the truncated polynomial values, I
identified the following correspondences.

typedef crc_optimal<16, 0x8005, 0, 0, true, true> crc_16_type;
typedef crc_optimal<16, 0x1021, 0xFFFF, 0, false, false> crc_ccitt_type;
typedef crc_optimal<32, 0x04C11DB7, 0xFFFFFFFF, 0xFFFFFFFF, true, true> crc_32_type;

crc_16_type = CRC-16-IBM
crc_ccitt_type = CRC-16-CCITT
crc_32_type = CRC-32 (IEEE 802.3)

Now for the questions. How do I know the CRC parameters for each
algorithm? For example, if the input reflection for crc_ccitt_type were
changed from false to true, would the result still be CRC-16-CCITT?
Why are the parameters for crc_16_type and crc_32_type different? I
expected only the truncated polynomial to change, not the initial
remainder and final XOR value. Is it true that CRC-32 (IEEE 802.3) is
also called CRC-32-CCITT? If that is correct, why is crc_16_type so
different?

Any clarification to my understanding would be helpful.

Ryan


Boost-users list run by williamkempf at hotmail.com, kalb at libertysoft.com, bjorn.karlsson at readsoft.com, gregod at cs.rpi.edu, wekempf at cox.net