Node reading file in specified chunk size

野性不改 2020-12-30 10:22

The goal: Upload large files to AWS Glacier without holding the whole file in memory.

I'm currently uploading to Glacier using fs.readFileSync() and things are …
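
For context, the readFileSync approach presumably looks something like the sketch below (a reconstruction, not the asker's actual code; the file path is made up). fs.readFileSync returns the entire file as a single Buffer, which is exactly what makes it unsuitable for large archives:

    // Reconstruction of the current approach; the path is illustrative.
    // fs.readFileSync returns the whole file as one Buffer, so the full
    // archive sits in process memory before the upload even starts.
    const fs = require('fs');

    const body = fs.readFileSync('/tmp/archive.tar'); // entire file buffered at once
    console.log(`holding ${body.length} bytes in memory`);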

3 Answers
  •  难免孤独
    2020-12-30 10:38

You may consider using the snippet below, where we read the file in chunks of 1024 bytes:

    const fs = require('fs');

    let data = '';

    // highWaterMark sets the chunk size: each 'data' event delivers up to 1024 bytes
    const readStream = fs.createReadStream('/tmp/foo.txt', { highWaterMark: 1 * 1024, encoding: 'utf8' });

    readStream.on('data', function (chunk) {
        data += chunk;
        console.log('chunk Data : ');
        console.log(chunk); // your per-chunk processing logic goes here
    }).on('end', function () {
        console.log('###################');
        console.log(data); // all of the accumulated data, once the whole file has been read
    });


    Please note: highWaterMark is the parameter that controls the chunk size. Hope this helps!

    Web references: https://stackabuse.com/read-files-with-node-js/ and "Changing readstream chunksize".
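
    A note on the original memory goal: the snippet above still concatenates every chunk into data, so by the end the whole file is in memory again. If the aim is to process each chunk and then drop it, the stream can be consumed directly, for example with for await...of. Below is a minimal sketch under stated assumptions: the 1 MiB highWaterMark and the uploadChunk() helper are placeholders, and no encoding is set so chunks arrive as raw Buffers, which is what a binary upload such as a Glacier multipart part would need.

    const fs = require('fs');

    // Hypothetical upload helper standing in for the real Glacier call.
    async function uploadChunk(chunk, offset) {
        console.log(`would upload ${chunk.length} bytes at offset ${offset}`);
    }

    async function uploadInChunks(path) {
        // No encoding option: chunks arrive as Buffers of up to highWaterMark bytes.
        const stream = fs.createReadStream(path, { highWaterMark: 1024 * 1024 });
        let offset = 0;
        for await (const chunk of stream) {
            await uploadChunk(chunk, offset); // only this one chunk is held in memory
            offset += chunk.length;           // the final chunk may be shorter
        }
        console.log(`done: ${offset} bytes in total`);
    }

    uploadInChunks('/tmp/foo.txt').catch(console.error);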
