fwrite() - effect of size and count on performance

≯℡__Kan透↙ posted on 2019-11-29 09:12:11

The performance should not depend on which way you call it, because any reasonable implementation of fwrite will multiply size and count to determine how much I/O to do.

This is exemplified by FreeBSD's libc implementation of fwrite.c, which in its entirety reads (include directives elided):

/*
 * Write `count' objects (each size `size') from memory to the given file.
 * Return the number of whole objects written.
 */
size_t
fwrite(buf, size, count, fp)
    const void * __restrict buf;
    size_t size, count;
    FILE * __restrict fp;
{
    size_t n;
    struct __suio uio;
    struct __siov iov;

    /*
     * ANSI and SUSv2 require a return value of 0 if size or count are 0.
     */
    if ((count == 0) || (size == 0))
        return (0);

    /*
     * Check for integer overflow.  As an optimization, first check that
     * at least one of {count, size} is at least 2^16, since if both
     * values are less than that, their product can't possibly overflow
     * (size_t is always at least 32 bits on FreeBSD).
     */
    if (((count | size) > 0xFFFF) &&
        (count > SIZE_MAX / size)) {
        errno = EINVAL;
        fp->_flags |= __SERR;
        return (0);
    }

    n = count * size;

    iov.iov_base = (void *)buf;
    uio.uio_resid = iov.iov_len = n;
    uio.uio_iov = &iov;
    uio.uio_iovcnt = 1;

    FLOCKFILE(fp);
    ORIENT(fp, -1);
    /*
     * The usual case is success (__sfvwrite returns 0);
     * skip the divide if this happens, since divides are
     * generally slow and since this occurs whenever size==0.
     */
    if (__sfvwrite(fp, &uio) != 0)
        count = (n - uio.uio_resid) / size;
    FUNLOCKFILE(fp);
    return (count);
}
Aleš Kotnik

The purpose of the two arguments becomes clearer if you consider the return value, which is the count of objects successfully written to, or read from, the stream:

fwrite(src, 1, 50000, dst); // will return 50000
fwrite(src, 50000, 1, dst); // will return 1
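
One practical consequence, sketched below (this is my own illustration rather than part of the answer; write_block is a placeholder name): with size set to 1 the return value counts bytes, so a partial write can be detected precisely, while with count set to 1 a partial write is simply reported as 0.

#include <stdio.h>

/* Hypothetical helper: write len bytes and report how many got out. */
size_t write_block(const char *src, size_t len, FILE *dst)
{
    /* size == 1, count == len: the return value is the exact number of
     * bytes written, so a short write can be detected and resumed. */
    size_t written = fwrite(src, 1, len, dst);

    /* By contrast, fwrite(src, len, 1, dst) would return 0 on any short
     * write, because no complete len-byte object was written. */
    return written;
}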

The speed might be implementation dependent, although I don't expect any considerable difference.
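
If you want to check on your own libc, a rough timing sketch like the one below will do (the file name, the buffer size, and the use of clock() are arbitrary choices of mine, and clock() measures CPU time rather than wall time):

#include <stdio.h>
#include <stdlib.h>
#include <time.h>

int main(void)
{
    enum { N = 50000000 };              /* ~50 MB of zeroes, arbitrary */
    char *buf = calloc(N, 1);
    FILE *f = fopen("test.bin", "wb");  /* placeholder file name */
    if (buf == NULL || f == NULL)
        return 1;

    clock_t t0 = clock();
    fwrite(buf, 1, N, f);               /* N objects of size 1 */
    clock_t t1 = clock();
    fwrite(buf, N, 1, f);               /* 1 object of size N */
    clock_t t2 = clock();

    printf("size=1, count=N: %.3f s\n", (double)(t1 - t0) / CLOCKS_PER_SEC);
    printf("size=N, count=1: %.3f s\n", (double)(t2 - t1) / CLOCKS_PER_SEC);

    fclose(f);
    free(buf);
    return 0;
}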

darda

I'd like to point you to my question, which ended up exposing an interesting performance difference between calling fwrite once and calling fwrite multiple times to write a file "in chunks".

My problem was that there's a bug in Microsoft's implementation of fwrite, so files larger than 4 GB cannot be written in one call (it hangs inside fwrite). I had to work around this by writing the file in chunks, calling fwrite in a loop until the data was completely written. I found that this latter method always returns faster than the single fwrite call.
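
A rough sketch of such a chunked write loop (the 64 MB chunk size and the name write_in_chunks are illustrative placeholders, not the exact code from that question):

#include <stdio.h>

/* Write len bytes from buf to fp in fixed-size chunks.
 * Returns the number of bytes actually written. */
size_t write_in_chunks(const void *buf, size_t len, FILE *fp)
{
    const size_t CHUNK = 64 * 1024 * 1024;   /* 64 MB, arbitrary */
    const char *p = buf;
    size_t remaining = len;

    while (remaining > 0) {
        size_t n = remaining < CHUNK ? remaining : CHUNK;
        size_t written = fwrite(p, 1, n, fp);
        p += written;
        remaining -= written;
        if (written < n)        /* short write: stop and let the caller see it */
            break;
    }
    return len - remaining;
}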

I'm on Windows 7 x64 with 32 GB of RAM, which makes write caching pretty aggressive.
