/dev/random Extremely Slow?

情深已故 2020-12-01 06:23

Some background info: I was looking to run a script on a Red Hat server to read some data from /dev/random and use the Perl unpack() command to convert it to a hex string for …

6 Answers
  • 2020-12-01 06:38

    Use /dev/urandom; it's cryptographically secure.

    Good read: http://www.2uo.de/myths-about-urandom/

    "If you are unsure about whether you should use /dev/random or /dev/urandom, then probably you want to use the latter."

    When in doubt during early boot about whether enough entropy has been gathered, use the getrandom() system call instead [1] (from Linux kernel >= 3.17; a Perl sketch follows the footnote below). It is the best of both worlds:

    • it blocks until enough entropy has been gathered (only once!),
    • after that it will never block again.

    [1] git kernel commit
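
    As a rough illustration only: core Perl has no getrandom() wrapper, but on Linux it can be reached through syscall(). The syscall number below is an assumption that holds for x86_64; other architectures use different numbers.

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Hedged sketch: call getrandom(2) via Perl's syscall().
    # 318 is the x86_64 syscall number (assumption; it is arch-specific).
    my $SYS_getrandom = 318;

    my $buf = "\0" x 16;    # pre-sized buffer for 16 bytes
    my $n   = syscall($SYS_getrandom, $buf, length($buf), 0);
    die "getrandom: $!" if $n < 0;

    # The same hex conversion the question asks about.
    print unpack('H*', substr($buf, 0, $n)), "\n";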

  • 2020-12-01 06:45

    This fixed it for me (in Java). Use new SecureRandom() instead of SecureRandom.getInstanceStrong(): on Linux the latter typically maps to a blocking provider backed by /dev/random, while the former draws from /dev/urandom.

    Some more info can be found here: https://tersesystems.com/blog/2015/12/17/the-right-way-to-use-securerandom/

  • 2020-12-01 06:46

    If you want more entropy for /dev/random, you'll either need to purchase a hardware RNG or use one of the *_entropyd daemons to generate it.

  • 2020-12-01 06:49

    If you are using randomness for testing (not cryptography), repeatable randomness is better; you can get it from a pseudo-random generator started at a known seed. Most languages have a good library function for this.

    Repeatability helps when you find a problem and are trying to debug it, and it does not eat up entropy. You might seed the pseudo-random generator from /dev/urandom and record the seed in the test log. Perl has a pseudo-random number generator you can use (see the sketch below).
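
    A minimal sketch of that idea, assuming Perl's built-in srand()/rand() are acceptable for test data (they are not cryptographic):

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Seed the built-in PRNG from /dev/urandom and log the seed so a
    # failing test run can be replayed exactly with the same seed.
    open my $fh, '<:raw', '/dev/urandom' or die "open /dev/urandom: $!";
    read($fh, my $raw, 4) == 4 or die "short read: $!";
    close $fh;

    my $seed = unpack('L', $raw);   # 32-bit unsigned integer seed
    srand($seed);
    print "test seed: $seed\n";     # record this in the test log

    # Repeatable pseudo-random test values.
    print int(rand(100)), "\n" for 1 .. 3;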

  • 2020-12-01 06:55

    On most Linux systems, /dev/random is powered from actual entropy gathered by the environment. If your system isn't delivering a large amount of data from /dev/random, it likely means that you're not generating enough environmental randomness to power it.

    I'm not sure why you think /dev/urandom is "slower" or higher quality. It reuses an internal entropy pool to generate pseudorandomness - making it slightly lower quality - but it doesn't block. Generally, applications that don't require high-level or long-term cryptography can use /dev/urandom reliably.

    Try waiting a little while then reading from /dev/urandom again. It's possible that you've exhausted the internal entropy pool reading so much from /dev/random, breaking both generators - allowing your system to create more entropy should replenish them.

    See Wikipedia for more info about /dev/random and /dev/urandom.
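
    For the task in the question, here is a minimal sketch that reads from the non-blocking /dev/urandom and hex-encodes the bytes with Perl's unpack(); the 16-byte read size is an arbitrary assumption:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Read raw bytes from the non-blocking pool and hex-encode them.
    open my $fh, '<:raw', '/dev/urandom' or die "open /dev/urandom: $!";
    read($fh, my $bytes, 16) == 16 or die "short read: $!";
    close $fh;

    print unpack('H*', $bytes), "\n";   # e.g. "9f2c4e..."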

  • 2020-12-01 07:00

    This question is pretty old, but still relevant, so I'm going to give my answer. Many CPUs today come with a built-in hardware random number generator (RNG), and many systems come with a trusted platform module (TPM) that also provides an RNG. There are other options that can be purchased, but chances are your computer already has something.

    You can use rngd from the rng-utils package on most Linux distros to feed the kernel pool with more random data. For example, on Fedora 18 all I had to do to enable seeding from the TPM and the CPU RNG (the RDRAND instruction) was:

    # systemctl enable rngd
    # systemctl start rngd
    

    You can compare the speed with and without rngd (see the sketch below). It's a good idea to run rngd -v -f from the command line; that will show you the detected entropy sources. Make sure all modules needed to support your sources are loaded. To use the TPM, it needs to be activated through the tpm-tools. Update: here is a nice howto.
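
    One way to watch the effect (a small sketch; these /proc paths are standard on Linux) is to read the kernel's entropy estimate with rngd stopped and then running:

    #!/usr/bin/perl
    use strict;
    use warnings;

    # Print the kernel's current entropy estimate and the pool size.
    # Compare the readings with rngd stopped vs. running.
    for my $name (qw(entropy_avail poolsize)) {
        my $path = "/proc/sys/kernel/random/$name";
        open my $fh, '<', $path or die "$path: $!";
        chomp(my $value = <$fh>);
        close $fh;
        print "$name: $value\n";
    }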

    BTW, I've read some concerns on the Internet about TPM RNGs often being broken in various ways, but nothing concrete against the RNGs found in Intel, AMD, and VIA chips. Using more than one source would be best if you really care about randomness quality.

    urandom is good for most use cases (except sometimes during early boot). Most programs nowadays use urandom instead of random; even OpenSSL does that. See myths about urandom and the comparison of random interfaces.

    In recent Fedora and RHEL/CentOS, rng-tools also supports jitter entropy. You can use it if you lack hardware options, or if you just trust it more than your hardware.

    UPDATE: another option for more entropy is HAVEGED (its quality has been questioned). On virtual machines there is the kvm/qemu VirtIO RNG (recommended).
