Subject: Re: [boost] Any interest in hashing algorithms SHA and/or FNV1a?
From: Jeff Flinn (Jeffrey.Flinn_at_[hidden])
Date: 2013-11-13 09:43:36
On 11/12/2013 1:52 PM, foster brereton wrote:
> I have been working on implementations of the SHA and FNV-1a class of
> hashing algorithms. Both are header-only.
> SHA (Secure Hash Algorithms)
> Implementations for SHA-1, -224, -256, -384, and -512. It does not
> implement the more recent SHA-3 (Keccak) algorithm. It would not be
> difficult to add support for 512/224 and 512/256 if they were desired.
> The routines support bit-level messages, meaning they can correctly
> digest messages that do not end on a byte boundary. More information
> FNV (Fowler–Noll–Vo)
> The header supports the FNV-1a algorithm, which has better
> distribution characteristics than its cousin, FNV-1. I have added
> support for 32-, 64-, 128-, 256-, and 1024-bit variants. This is not a
> cryptographically secure algorithm, but it is much faster than its
> crypto cousins, and as such it is a solid algorithm for obtaining
> unique hash values for, e.g., data structures. I have also implemented
> a constexpr variant of the algorithm (32-bit and 64-bit only) for
> const char*, which can improve performance further when needed by
> pushing the hash computation to compile time.
> More information here:
> The algorithms are currently part of the Adobe Source Libraries on GitHub:
> (note: the master branch version needs updating to this one.)
> Would these algorithms be of general use to the Boost community? If so
> I would be willing to submit them formally.
Have you seen:
I'll take a look at your ASL links today. Do you have any comparisons
with openssl or libcryptocpp? I'm primarily interested in MD5 & SHA1 for
legacy reasons. My understanding is that these algorithms are not
parallelizable due to inherent data dependencies. In my situation I need
both MD5 & SHA1 over the same sequence, and I'd like to treat them as a
composite hash. My thought is that interleaving the two should allow the
compiler/processor to schedule instructions better, leading to better
performance. That's my hypothesis, anyway.
The other issue I have is that each of the above-mentioned APIs does its
own duplicate buffering of incoming data. So in my case the disk, the
OS, std::streambuf, MD5, and SHA1 are each buffering. It seems these
could be merged, and there would be a benefit to interleaving disk I/O
with processing the composite hash on a 64-byte block basis. I know the
number of incoming full blocks, and would like to call the vault's
crypto_v01 process_block functions for those, then call update with the
remaining partial block.
So, bottom line - I'm interested.