I'm using a Java scrypt library for password storage. It calls for an N, r, and p value when I encrypt things, which its documentation refers to as the "CPU cost", "memory cost", and "parallelization cost" parameters. What values should I use so that it takes 250 ms to verify a password?
The memory required for scrypt to operate is calculated as:

128 bytes × N_cost × r_blockSizeFactor

so for the parameters you quote (N=16384, r=8, p=1):

128 × 16384 × 8 = 16,777,216 bytes = 16 MB
You have to take this into account when choosing parameters.
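A minimal sketch of that formula in Java (the class and method names here are mine, purely for illustration):

```java
public class ScryptMemory {
    // Memory used by scrypt's mixing buffer: 128 bytes * N * r.
    static long memoryBytes(int n, int r) {
        return 128L * n * r;
    }

    public static void main(String[] args) {
        // The parameters quoted above: N=16384, r=8 -> 16777216 bytes (16 MB)
        System.out.println(memoryBytes(16384, 8));
    }
}
```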
Bcrypt is "weaker" than Scrypt (although still three orders of magnitude stronger than PBKDF2) because it only requires 4 KB of memory. You want to make it difficult to parallelize cracking in hardware. For example, if a video card has 1.5 GB of on-board memory and you tuned scrypt to consume 1 GB of memory:
128 × 16384 × 512 = 1,073,741,824 bytes = 1 GB

then an attacker could not parallelize it on their video card. But then your application/phone/server would also need to use 1 GB of RAM every time it calculated a password.
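Working backwards from a memory budget, you can rearrange the same formula to solve for r. A hypothetical helper (not part of any library):

```java
// Hypothetical: the largest blockSizeFactor (r) that fits a memory budget,
// from rearranging memory = 128 * N * r.
static int blockSizeFactorFor(long memoryBudgetBytes, int n) {
    return (int) (memoryBudgetBytes / (128L * n));
}
```

For example, `blockSizeFactorFor(1L << 30, 16384)` returns 512, matching the 1 GB calculation above.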
It helps me to think of the scrypt parameters as a rectangle, where one side is the memory required and the other is the number of iterations performed, so the area is the overall work:

- cost (N) increases both memory usage and iterations.
- blockSizeFactor (r) increases memory usage.

The remaining parameter, parallelization (p), means that you have to do the entire thing 2, 3, or more times.
If you had more memory than CPU, you could calculate the three separate paths in parallel, requiring triple the memory. But in all real-world implementations it is calculated in series, tripling the calculations needed.
In reality, nobody has ever chosen a p factor other than p=1.
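If your library happens to be the widely used Lambdaworks scrypt implementation (an assumption on my part; adapt the calls to whatever library you actually have), hashing and verifying with the parameters you quote looks like this:

```java
import com.lambdaworks.crypto.SCryptUtil;

public class ScryptDemo {
    public static void main(String[] args) {
        // N=16384, r=8 (about 16 MB of memory), p=1 as discussed above.
        String hash = SCryptUtil.scrypt("correct horse battery staple", 16384, 8, 1);
        System.out.println(hash); // MCF-style string: $s0$<params>$<salt>$<key>
        System.out.println(SCryptUtil.check("correct horse battery staple", hash)); // true
    }
}
```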
What are the ideal factors?
[Graph: graphical version of the above, targeting ~250 ms]
[Graph notes: r=8 curve]

[Zoomed-in version of the above graph, restricted to the reasonable area, again looking at the ~250 ms magnitude]
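Timings like these depend entirely on your hardware, so rather than reading values off a graph, it's better to benchmark on the machine that will actually verify passwords. A rough calibration sketch, assuming Bouncy Castle's SCrypt class (any implementation that exposes N, r, and p works the same way): walk up the r=8 curve, doubling N until a single hash takes about 250 ms.

```java
import java.nio.charset.StandardCharsets;

import org.bouncycastle.crypto.generators.SCrypt;

public class ScryptCalibration {
    public static void main(String[] args) {
        byte[] password = "benchmark".getBytes(StandardCharsets.UTF_8);
        byte[] salt = new byte[16]; // a fixed salt is fine for timing only

        // Double N along the r=8 curve until one hash costs ~250 ms.
        for (int n = 1 << 12; n <= 1 << 22; n <<= 1) {
            long start = System.nanoTime();
            SCrypt.generate(password, salt, n, 8, 1, 32);
            long ms = (System.nanoTime() - start) / 1_000_000;
            System.out.printf("N=%d (%d MB): %d ms%n",
                    n, 128L * n * 8 / (1024 * 1024), ms);
            if (ms >= 250) break;
        }
    }
}
```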