Why is using BufferedInputStream to read a file byte by byte faster than using FileInputStream?

挽巷 2020-11-28 19:49

I was trying to read a file into an array using FileInputStream, and an ~800KB file took about 3 seconds to read into memory. I then tried the same code, except with the FileInputStream wrapped in a BufferedInputStream, and it ran far faster. Why is reading the file byte by byte so much faster with a BufferedInputStream?

3 Answers
  •  情歌与酒
    2020-11-28 20:19

    In FileInputStream, the method read() reads a single byte. From the source code:

    /**
     * Reads a byte of data from this input stream. This method blocks
     * if no input is yet available.
     *
     * @return     the next byte of data, or -1 if the end of the
     *             file is reached.
     * @exception  IOException  if an I/O error occurs.
     */
    public native int read() throws IOException;
    

    This is a native call into the OS that reads the single byte from the file; making one such call per byte is a heavy way to read a file.

    With a BufferedInputStream, the method delegates to an overloaded read() that reads up to 8192 bytes at once and buffers them until they are needed. It still returns only a single byte (but keeps the rest in reserve). This way the BufferedInputStream makes far fewer native calls to the OS to read from the file.

    For example, suppose your file is 32768 bytes long. To get all the bytes into memory with a plain FileInputStream, you would need 32768 native calls to the OS. With a BufferedInputStream, you would need only 4, regardless of how many read() calls you make (still 32768).
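    As a rough illustration (a sketch, not a proper benchmark; the file path is a placeholder), the two loops below do the same byte-by-byte work, but the first issues one native read per byte while the second is mostly served from the 8192-byte buffer:

    import java.io.BufferedInputStream;
    import java.io.FileInputStream;
    import java.io.IOException;
    import java.io.InputStream;

    public class ReadComparison {
        public static void main(String[] args) throws IOException {
            String path = "somefile.bin"; // placeholder path

            // One native call to the OS per byte read.
            try (InputStream in = new FileInputStream(path)) {
                long start = System.nanoTime();
                while (in.read() != -1) { /* consume byte by byte */ }
                System.out.println("FileInputStream:     " + (System.nanoTime() - start) / 1_000_000 + " ms");
            }

            // Same loop, but reads are served from an internal 8192-byte buffer,
            // so only roughly fileSize / 8192 native calls are made.
            try (InputStream in = new BufferedInputStream(new FileInputStream(path))) {
                long start = System.nanoTime();
                while (in.read() != -1) { /* consume byte by byte */ }
                System.out.println("BufferedInputStream: " + (System.nanoTime() - start) / 1_000_000 + " ms");
            }
        }
    }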

    As to how to make it even faster, you might want to consider NIO's FileChannel class (FileChannel.open was added in Java 7), but I have no evidence to support this.
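    A sketch of what that could look like (path and buffer sizing are assumptions; this reads the whole file in a few large native reads rather than one per byte):

    import java.io.IOException;
    import java.nio.ByteBuffer;
    import java.nio.channels.FileChannel;
    import java.nio.file.Path;
    import java.nio.file.Paths;
    import java.nio.file.StandardOpenOption;

    public class NioRead {
        public static void main(String[] args) throws IOException {
            Path path = Paths.get("somefile.bin"); // placeholder path
            try (FileChannel channel = FileChannel.open(path, StandardOpenOption.READ)) {
                // Allocate a buffer for the whole file (assumes it fits in an int-sized array)
                // and fill it with as few native reads as the OS allows.
                ByteBuffer buffer = ByteBuffer.allocate((int) channel.size());
                while (buffer.hasRemaining() && channel.read(buffer) != -1) {
                    // keep reading until the buffer is full or EOF
                }
                buffer.flip();
                byte[] bytes = new byte[buffer.remaining()];
                buffer.get(bytes);
                System.out.println("Read " + bytes.length + " bytes");
            }
        }
    }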


    Note: if you used FileInputStream's read(byte[], int, int) method directly instead, with a byte[] of 8192 bytes or more, you wouldn't need a BufferedInputStream wrapping it.
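    For example (again a sketch with a placeholder path; the 64 KB chunk size is an arbitrary choice larger than the default 8192-byte buffer):

    import java.io.ByteArrayOutputStream;
    import java.io.FileInputStream;
    import java.io.IOException;

    public class BulkRead {
        public static void main(String[] args) throws IOException {
            byte[] chunk = new byte[64 * 1024]; // larger than BufferedInputStream's default buffer
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            try (FileInputStream in = new FileInputStream("somefile.bin")) { // placeholder path
                int n;
                // Each call is a single native read that fills up to chunk.length bytes,
                // so no BufferedInputStream is needed.
                while ((n = in.read(chunk, 0, chunk.length)) != -1) {
                    out.write(chunk, 0, n);
                }
            }
            byte[] all = out.toByteArray();
            System.out.println("Read " + all.length + " bytes");
        }
    }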
