How to purge disk I/O caches on Linux?

隐瞒了意图╮ 2021-01-30 01:41

I need to do it for more predictable benchmarking.

5 Answers
  •  长发绾君心
    2021-01-30 02:09

    Short, good-enough answer (copy-paste friendly):

    DISK=/dev/sdX # <===ADJUST THIS===
    sync
    echo 3 > /proc/sys/vm/drop_caches
    blockdev --flushbufs $DISK
    hdparm -F $DISK
    

    Explanation:

    sync: from the man page: flush file system buffers; force changed blocks to disk and update the super block.

    echo 3 > /proc/sys/vm/drop_caches: from the kernel docs, this causes the kernel to drop the clean caches (pagecache, dentries and inodes).

    blockdev --flushbufs $DISK: from the man page: call block device ioctls [to] flush buffers.

    hdparm -F $DISK: from the man page: flush the on-drive write cache buffer (older drives may not implement this).

    Although the blockdev and hdparm commands look similar, according to another answer here they issue different ioctls to the device.
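
    Two practical notes (not from the man pages, just tips): writing to /proc/sys/vm/drop_caches needs root, and with a plain sudo the redirection is still performed by your non-root shell, so from a normal account use something like:

    sudo sh -c 'echo 3 > /proc/sys/vm/drop_caches'
    # or equivalently
    echo 3 | sudo tee /proc/sys/vm/drop_caches

    And if you want to confirm that the page cache was really dropped, compare the kernel's counters before and after; the Cached and Buffers values in /proc/meminfo should shrink dramatically:

    grep -E '^(Buffers|Cached):' /proc/meminfo
    sync; echo 3 > /proc/sys/vm/drop_caches
    grep -E '^(Buffers|Cached):' /proc/meminfo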

    Longer, probably better way:

    (I'll assume that you have formatted and mounted the disk, but you can adapt these commands if you want to write directly to the raw device.)

    Run this only once, before the first benchmark:

    MOUNT=/mnt/test # <===ADJUST THIS===
    # create a file with pseudo-random data. We will read it
    # to fill the read cache of the HDD with garbage
    dd if=/dev/urandom of=$MOUNT/temp-hddread.tmp bs=64M count=16
    

    Run this every time you want to empty the caches:

    DISK=/dev/sdX # <===ADJUST THIS===
    MOUNT=/mnt/test # <===AND THIS===
    # create a file with pseudo-random data to fill the write cache
    # of the disk with garbage. Delete it afterwards; it's not useful anymore
    dd if=/dev/urandom of=$MOUNT/temp-hddwrite.tmp bs=64M count=16
    rm $MOUNT/temp-hddwrite.tmp
    # see short good enough answer above
    sync
    echo 3 > /proc/sys/vm/drop_caches
    blockdev --flushbufs $DISK
    hdparm -F $DISK
    # read the file with pseudo-random data to fill any read-cache
    # the disk may have with garbage
    dd if=$MOUNT/temp-hddread.tmp of=/dev/null
    

    Run this when you're done:

    MOUNT=/mnt/test # <===ADJUST THIS===
    # delete the temporary file with pseudo-random data
    rm $MOUNT/temp-hddread.tmp
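
    Putting it together: if you run many benchmark iterations, it can be handy to wrap the per-run steps above in a small shell function so your benchmark script stays readable. This is just a sketch of the exact same commands; the function name is a placeholder, and it has to run as root:

    purge_disk_caches() {
        local disk="$1"    # e.g. /dev/sdX
        local mount="$2"   # e.g. /mnt/test
        # fill the drive's write cache with garbage, then discard the file
        dd if=/dev/urandom of="$mount/temp-hddwrite.tmp" bs=64M count=16
        rm "$mount/temp-hddwrite.tmp"
        # flush OS and drive caches (see the short answer above)
        sync
        echo 3 > /proc/sys/vm/drop_caches
        blockdev --flushbufs "$disk"
        hdparm -F "$disk"
        # fill any read cache on the drive with garbage
        dd if="$mount/temp-hddread.tmp" of=/dev/null
    }
    # usage: purge_disk_caches /dev/sdX /mnt/test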
    

    Explanation:

    The disk will probably have some hardware cache of its own. Some disks, by design or due to bugs, may not clear their caches when you issue the blockdev and hdparm commands. To compensate, we write and read pseudo-random data, hoping to fill these caches so that any previously cached data is evicted from them.

    How much data you need to fill the cache depends on its size. In the commands above I'm using dd to read/write 16*64MB=1024MB; adjust the arguments if your HDD may have a bigger cache (data sheets and experimentation are your friends, and it doesn't hurt to specify values above the actual size of the cache).

    I'm using /dev/urandom as the source of random data because it's fast and we don't care about true randomness (we only care about high entropy, because the disk firmware may compress data before storing it in the cache). I'm creating /mnt/test/temp-hddread.tmp once at the start and reading it every time I need enough random data; I'm creating and deleting /mnt/test/temp-hddwrite.tmp each time I want to write enough random data.
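
    If you want a starting point for sizing those dd transfers, many (not all) ATA drives report their on-board cache size through hdparm; treat the value only as a hint and round up generously:

    # prints a line like "cache/buffer size  = 16384 KBytes" on drives that report it
    hdparm -I /dev/sdX | grep -i 'cache/buffer size'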

    Credits

    I wrote this answer based on the best parts of the existing answers.
