Is calculating an MD5 hash less CPU intensive than SHA-1 or SHA-2 on "standard" laptop x86 hardware? I'm interested in general information, not specific to a certain chip.
As someone who's spent a bit of time optimizing MD5 performance, I thought I'd supply a more technical explanation than the benchmarks already given here, for anyone who happens to find this in the future.
MD5 does less "work" than SHA1 (e.g. fewer compression rounds), so one might expect it to be faster. However, the MD5 algorithm is mostly one big dependency chain, which means it doesn't exploit modern superscalar processors particularly well (i.e. it exhibits low instructions-per-clock). SHA1 has more parallelism available, so despite needing more "computational work" done, it often ends up being faster than MD5 on modern superscalar processors. The sketch below illustrates the difference.
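To make that concrete, here's a minimal C sketch (not a usable hash implementation; `md5_round1_fragment` and `sha1_schedule` are just my names, though the constants are the genuine ones from RFC 1321 and FIPS 180-4). Notice how each MD5 step consumes the result of the step before it, while SHA1's message schedule never touches the round state:

```c
#include <stdint.h>

static inline uint32_t rotl32(uint32_t x, int s) {
    return (x << s) | (x >> (32 - s));
}

/* The first four MD5 steps (RFC 1321, round 1). Step 2 needs the new a,
 * step 3 needs the new d, step 4 needs the new c: one serial chain that
 * a superscalar core cannot overlap, whatever its issue width. */
#define F(x, y, z) (((x) & (y)) | (~(x) & (z)))
static void md5_round1_fragment(uint32_t st[4], const uint32_t m[16]) {
    uint32_t a = st[0], b = st[1], c = st[2], d = st[3];
    a = b + rotl32(a + F(b, c, d) + m[0] + 0xd76aa478,  7); /* step 1 */
    d = a + rotl32(d + F(a, b, c) + m[1] + 0xe8c7b756, 12); /* step 2 */
    c = d + rotl32(c + F(d, a, b) + m[2] + 0x242070db, 17); /* step 3 */
    b = c + rotl32(b + F(c, d, a) + m[3] + 0xc1bdceee, 22); /* step 4 */
    st[0] = a; st[1] = b; st[2] = c; st[3] = d;
}

/* SHA1's message schedule (FIPS 180-4) depends only on earlier W values,
 * never on the a..e round state, so the CPU can compute it alongside the
 * rounds -- one source of SHA1's higher instruction-level parallelism. */
static void sha1_schedule(uint32_t w[80]) {
    for (int t = 16; t < 80; t++)
        w[t] = rotl32(w[t-3] ^ w[t-8] ^ w[t-14] ^ w[t-16], 1);
}
```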
If you do the MD5 vs SHA1 comparison on older processors, or on ones with less superscalar "width" (such as a Silvermont-based Atom CPU), you'll generally find MD5 is faster than SHA1.
SHA2 and SHA3 are even more compute-intensive than SHA1, and generally much slower.
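If you'd rather measure than take anyone's word for it, a rough micro-benchmark sketch along these lines (assuming OpenSSL 1.1+; compile with `-lcrypto`) will show the relative throughput on your own hardware. Running `openssl speed md5 sha1 sha256 sha512` gives comparable figures without writing any code.

```c
#include <openssl/evp.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>
#include <time.h>

/* Hash one large buffer with the named digest and return MB/s. */
static double bench(const char *name, const unsigned char *buf, size_t len) {
    const EVP_MD *md = EVP_get_digestbyname(name);
    unsigned char out[EVP_MAX_MD_SIZE];
    unsigned int outlen;
    struct timespec t0, t1;
    if (!md) return 0.0;

    EVP_MD_CTX *ctx = EVP_MD_CTX_new();
    clock_gettime(CLOCK_MONOTONIC, &t0);
    EVP_DigestInit_ex(ctx, md, NULL);
    EVP_DigestUpdate(ctx, buf, len);
    EVP_DigestFinal_ex(ctx, out, &outlen);
    clock_gettime(CLOCK_MONOTONIC, &t1);
    EVP_MD_CTX_free(ctx);

    double secs = (t1.tv_sec - t0.tv_sec) + (t1.tv_nsec - t0.tv_nsec) / 1e9;
    return (len / 1e6) / secs;
}

int main(void) {
    size_t len = 64 * 1024 * 1024;  /* 64 MiB keeps timing noise low */
    unsigned char *buf = malloc(len);
    memset(buf, 0xAB, len);
    const char *algs[] = { "MD5", "SHA1", "SHA256", "SHA512" };
    for (int i = 0; i < 4; i++)
        printf("%-8s %8.1f MB/s\n", algs[i], bench(algs[i], buf, len));
    free(buf);
    return 0;
}
```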
One thing to note, however, is that some newer x86 and ARM CPUs have dedicated instructions to accelerate SHA1 and SHA256 (the x86 SHA extensions and the ARMv8 cryptography extensions), which obviously helps these algorithms greatly if the instructions are being used.
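On x86 you can check for this at runtime: CPUID leaf 7 (sub-leaf 0) advertises the SHA extensions in EBX bit 29. A minimal GCC/Clang sketch (`has_sha_ni` is just my name for the helper):

```c
#include <cpuid.h>
#include <stdio.h>

/* CPUID leaf 7, sub-leaf 0: EBX bit 29 indicates the x86 SHA extensions,
 * which accelerate SHA1 and SHA256 (not SHA512). */
static int has_sha_ni(void) {
    unsigned int eax, ebx, ecx, edx;
    if (!__get_cpuid_count(7, 0, &eax, &ebx, &ecx, &edx))
        return 0;
    return (ebx >> 29) & 1;
}

int main(void) {
    printf("x86 SHA extensions: %s\n", has_sha_ni() ? "present" : "absent");
    return 0;
}
```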
As an aside, SHA256 and SHA512 performance can exhibit similarly curious behaviour. SHA512 does more "work" than SHA256, however a key difference between the two is that SHA256 operates on 32-bit words, whilst SHA512 operates on 64-bit words. As such, SHA512 will generally be faster than SHA256 on a platform with a 64-bit native word size, as each operation processes twice as much data at once. Conversely, SHA256 should outperform SHA512 on a platform with a 32-bit word size.
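You can see the word-size effect directly in the FIPS 180-4 round functions. The two σ0 functions below have an identical shape, but the SHA512 one chews through 8 bytes of message per word instead of 4 (the function names are mine; the rotation constants are from the spec):

```c
#include <stdint.h>

static inline uint32_t rotr32(uint32_t x, int n) { return (x >> n) | (x << (32 - n)); }
static inline uint64_t rotr64(uint64_t x, int n) { return (x >> n) | (x << (64 - n)); }

/* SHA256's sigma0: one 32-bit word (4 message bytes) per evaluation. */
static inline uint32_t sigma0_256(uint32_t x) {
    return rotr32(x, 7) ^ rotr32(x, 18) ^ (x >> 3);
}

/* SHA512's sigma0: same structure, one 64-bit word (8 message bytes).
 * On a 64-bit CPU both cost about the same per instruction, so the
 * 64-bit variant roughly doubles the bytes processed per unit of work. */
static inline uint64_t sigma0_512(uint64_t x) {
    return rotr64(x, 1) ^ rotr64(x, 8) ^ (x >> 7);
}
```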
Note that all of the above only applies to single buffer hashing (by far the most common use case). If you're fancy and computing multiple hashes in parallel, i.e. a multi-buffer SIMD approach, the behaviour changes somewhat.
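For the curious, the multi-buffer idea looks roughly like this: run the same step over N independent messages, so there are N independent dependency chains instead of one. This is only a scalar sketch of the interleaving (`md5_step_x4` is a hypothetical name); real multi-buffer code keeps each lane in a SIMD register element:

```c
#include <stdint.h>

#define LANES 4  /* e.g. four 32-bit lanes of a 128-bit SIMD register */

static inline uint32_t rotl32(uint32_t x, int s) {
    return (x << s) | (x >> (32 - s));
}

/* One MD5 round-1 step applied lane-wise to four independent hashes.
 * No lane reads another lane's state, so the four (otherwise serial)
 * dependency chains can all be in flight at once. */
static void md5_step_x4(uint32_t a[LANES], const uint32_t b[LANES],
                        const uint32_t c[LANES], const uint32_t d[LANES],
                        const uint32_t m[LANES], uint32_t t, int s) {
    for (int i = 0; i < LANES; i++) {
        uint32_t f = (b[i] & c[i]) | (~b[i] & d[i]);
        a[i] = b[i] + rotl32(a[i] + f + m[i] + t, s);
    }
}
```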