I saw someone who hashes users' passwords multiple times with MD5 to improve security. I'm not sure if this works, but it doesn't look good. So, does it make sense?
There is such a question on crypto.SE, but it is not public now. The answer by Paŭlo Ebermann is:
For password-hashing, you should not use a normal cryptographic hash, but something made specially to protect passwords, like bcrypt.
See How to safely store a password for details.
The important point is that password crackers don't have to brute-force the hash output space (2^160 for SHA-1), but only the password space, which is much, much smaller (depending on your password rules - and often dictionaries help). Thus we don't want a fast hash function, but a slow one. Bcrypt and friends are designed for this.
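For illustration, here is a minimal sketch of the kind of slow, purpose-built password hashing the answer recommends. It assumes the third-party Python bcrypt package; the helper names hash_password and check_password are mine, not part of the answer:

```python
import bcrypt  # third-party package: pip install bcrypt

def hash_password(password: str) -> bytes:
    # gensalt() embeds a random salt and a configurable work factor (cost);
    # a higher cost makes each guess slower, which is exactly what you want
    # when the attacker only has to search the (small) password space.
    return bcrypt.hashpw(password.encode("utf-8"), bcrypt.gensalt(rounds=12))

def check_password(password: str, stored: bytes) -> bool:
    # checkpw re-derives the hash using the salt and cost embedded in `stored`.
    return bcrypt.checkpw(password.encode("utf-8"), stored)
```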
And a similar question, "Guarding against cryptanalytic breakthroughs: combining multiple hash functions", has these answers. By Thomas Pornin:
Combining is what SSL/TLS does with MD5 and SHA-1, in its definition of its internal "PRF" (which is actually a Key Derivation Function). For a given hash function, TLS defines a KDF which relies on HMAC, which relies on the hash function. Then the KDF is invoked twice, once with MD5 and once with SHA-1, and the results are XORed together. The idea was to resist cryptanalytic breaks in either MD5 or SHA-1.

Note that XORing the outputs of two hash functions relies on subtle assumptions. For instance, if I define SHB-256(m) = SHA-256(m) XOR C, for a fixed constant C, then SHB-256 is as good a hash function as SHA-256; but the XOR of both always yields C, which is not good at all for hashing purposes. Hence, the construction in TLS is not really sanctioned by the authority of science (it just happens not to have been broken). TLS 1.2 does not use that combination anymore; it relies on the KDF with a single, configurable hash function, often SHA-256 (which is, in 2011, a smart choice).
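To make the XOR-combiner idea concrete, here is a simplified Python sketch in the spirit of the pre-1.2 TLS PRF. It is not the exact TLS construction (TLS splits the secret into two halves, one per hash; this version reuses the whole secret for brevity), and the names kdf and combined_kdf are illustrative. The last lines demonstrate the SHB-256 pitfall described above.

```python
import hashlib
import hmac

def kdf(hash_name: str, secret: bytes, seed: bytes, length: int) -> bytes:
    # Simplified HMAC-based expansion in the spirit of TLS's P_hash:
    # A(0) = seed, A(i) = HMAC(secret, A(i-1)),
    # output = HMAC(secret, A(1)+seed) || HMAC(secret, A(2)+seed) || ...
    out, a = b"", seed
    while len(out) < length:
        a = hmac.new(secret, a, hash_name).digest()
        out += hmac.new(secret, a + seed, hash_name).digest()
    return out[:length]

def combined_kdf(secret: bytes, seed: bytes, length: int) -> bytes:
    # XOR the MD5-based and SHA-1-based expansions, in the spirit of the
    # pre-1.2 TLS PRF, to hedge against a break in either hash.
    md5_part = kdf("md5", secret, seed, length)
    sha1_part = kdf("sha1", secret, seed, length)
    return bytes(x ^ y for x, y in zip(md5_part, sha1_part))

# The pitfall from the answer: define SHB-256(m) = SHA-256(m) XOR C.
C = b"\x42" * 32

def shb256(m: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(hashlib.sha256(m).digest(), C))

# SHB-256 is individually as good as SHA-256, yet XORing the two hashes of
# any message always yields the constant C, which is useless for hashing.
assert bytes(x ^ y for x, y in zip(hashlib.sha256(b"any input").digest(),
                                   shb256(b"any input"))) == C
```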
As @PulpSpy points out, concatenation is not a good generic way of building hash functions. This was published by Joux in 2004 and then generalized by Hoch and Shamir in 2006, for a large class of constructions involving iterations and concatenations. But mind the fine print: this is not really about surviving weaknesses in hash functions, but about getting your money's worth. Namely, if you take a hash function with a 128-bit output and another with a 160-bit output, and concatenate the results, then collision resistance will be no worse than the strongest of the two; what Joux showed is that it will not be much better either. With 128+160 = 288 bits of output, you could aim at 2^144 resistance, but Joux's result implies that you will not go beyond about 2^87.
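For concreteness, here is a minimal sketch of the concatenation combiner being discussed, using Python's hashlib; the function name concat_hash is mine, not from the answer.

```python
import hashlib

def concat_hash(message: bytes) -> bytes:
    # MD5 (128 bits) || SHA-1 (160 bits) gives 288 bits of output, but per
    # Joux (2004) the collision resistance stays close to that of the stronger
    # component (about 2^87 in the example above), not the 2^144 the output
    # length would suggest.
    return hashlib.md5(message).digest() + hashlib.sha1(message).digest()
```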
So the question becomes: is there a way, if possible an efficient way, to combine two hash functions such that the result is as collision-resistant as the strongest of the two, but without incurring the output enlargement of concatenation? In 2006, Boneh and Boyen published a result which simply states that the answer is no, subject to the condition of evaluating each hash function only once. Edit: Pietrzak lifted the latter condition in 2007 (i.e. invoking each hash function several times does not help).
And by PulpSpy:
I'm sure @Thomas will give a thorough answer. In the interim, I'll just point out that the collision resistance of your first construction, H1(m)||H2(m), is surprisingly not that much better than just H1(m). See section 4 of this paper:
http://web.cecs.pdx.edu/~teshrim/spring06/papers/general-attacks/multi-joux.pdf