Node.js fs.open() hangs after trying to open more than 4 named pipes (FIFOs)


Question


I have a Node.js process that needs to read from multiple named pipes (FIFOs), each fed by a different process, as an IPC mechanism.

I realized that after opening, and creating read streams from, more than four FIFOs, fs no longer seems able to open FIFOs and simply hangs.

This number seems rather low, considering that it is possible to open thousands of regular files concurrently without trouble (for instance by replacing mkfifo with touch in the following script).

I tested with Node.js v10.1.0 on macOS 10.13 and with Node.js v8.9.3 on Ubuntu 16.04, with the same result.


The faulty script

Here is a script that reproduces the behavior:

var fs = require("fs");
var net = require("net");
var child_process = require('child_process');

// Generates a random 32-character hex string used to name each FIFO
var uuid = function() {
    for (var i = 0, str = ""; i < 32; i++) {
        var number = Math.floor(Math.random() * 16);
        str += number.toString(16);
    }
    return str;
}

function setupNamedPipe(cb) {
    var id = uuid();
    var fifoPath = "/tmp/tmpfifo/" + id;

    // Create the FIFO, open it, and attach a read stream to it
    child_process.exec("mkfifo " + fifoPath, function(error, stdout, stderr) {
        if (error) {
            return;
        }

        // Once four FIFOs have been opened, this callback never fires again
        fs.open(fifoPath, 'r+', function(error, fd) {
            if (error) {
                return;
            }

            var stream = fs.createReadStream(null, {
                fd
            });
            stream.on('data', function(data) {
                console.log("FIFO data", data.toString());
            });
            stream.on("close", function(){
                console.log("close");
            });
            stream.on("error", function(error){
                console.log("error", error);
            });

            console.log("OK");
            cb();
        });
    });
}

// Keep setting up FIFOs, one after the other
var i = 0;
function loop() {
    ++i;
    console.log("Open ", i);
    setupNamedPipe(loop);
}

child_process.exec("mkdir -p /tmp/tmpfifo/", function(error, stdout, stderr) {
    if (error) {
        return;
    }

    loop();
});

This script doesn't clean up after itself; don't forget to rm -r /tmp/tmpfifo afterwards.



NOTE: the following part of this question describes what I have already tried in order to answer it, but it might not be central to the question itself.


Two interesting facts with this script

  • when writing twice to one of the FIFOs (i.e. echo hello > fifo), Node is then able to open one more FIFO, but no longer receives data from the one that was written to
  • when the read stream is created by providing the path to the FIFO directly (instead of an fd), the script no longer blocks, but apparently also no longer receives anything written to any of the FIFOs (a minimal sketch of this variant follows)
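
For reference, a minimal sketch of that second, path-based variant (using the same fifoPath variable as in the script above):

// Stream created from the path instead of an fd: the loop no longer
// blocks, but the 'data' event apparently never fires for anything
// written to the FIFOs
var stream = fs.createReadStream(fifoPath);
stream.on('data', function(data) {
    console.log("FIFO data", data.toString());
});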

Debug informations

I then tried to verify whether this could be related to some OS limit, for instance the number of open file descriptors.

Output of ulimit -a on the Mac is

core file size          (blocks, -c) 0
data seg size           (kbytes, -d) unlimited
file size               (blocks, -f) unlimited
max locked memory       (kbytes, -l) unlimited
max memory size         (kbytes, -m) unlimited
open files                      (-n) 256
pipe size            (512 bytes, -p) 1
stack size              (kbytes, -s) 8192
cpu time               (seconds, -t) unlimited
max user processes              (-u) 1418
virtual memory          (kbytes, -v) unlimited

Nothing here points to a limit of 4.
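
One limit that does default to exactly 4, however, is the size of libuv's threadpool, which Node.js uses for file-system operations (the answer below explains why that matters). As a hedged side experiment, resizing the pool should move the hang point accordingly; setting process.env at the very top of the script works on Linux and macOS because libuv only creates the pool on first use:

// Hypothetical experiment: with 8 threads in the pool, the script should
// open 8 FIFOs before hanging, instead of 4. This line has to run before
// the first fs operation.
process.env.UV_THREADPOOL_SIZE = 8;
var fs = require("fs");
// ... rest of the reproduction script unchanged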


C++ attempt

I then tried to write a similar script in C++. The C++ version successfully opens a hundred FIFOs.

Note that there are a few differences between the two implementations. In the C++ one,

  • the script only opens the FIFOs,
  • there is no attempt to read from them,
  • and there is no multithreading

#include <string>
#include <cstring>
#include <sys/stat.h>
#include <fcntl.h>
#include <iostream>

int main(int argc, char** argv)
{
    // Open 100 pre-created FIFOs in read-write mode and print the file
    // descriptor returned for each one (-1 would indicate a failure)
    for (int i = 0; i < 100; i++) {
        std::string filePath = "/tmp/tmpfifo/" + std::to_string(i);
        int fd = open(filePath.c_str(), O_RDWR);
        std::cout << filePath << " " << fd << std::endl;
    }

    return 0;
}

As a side note, the FIFOs need to be created before executing the script, for instance with

for i in $(seq 0 100); do mkfifo /tmp/tmpfifo/$i; done


Potential Node.js related issue

After a bit of searching, the problem also seems to be linked to this issue on the Node.js GitHub:

https://github.com/nodejs/node/issues/1941.

But people there seem to be complaining about the opposite behavior (fs.open() throwing EMFILE errors rather than hanging silently...)


As you can see, I have tried searching in many directions, and all of this led me to my question:

Do you know what could cause this behavior?

Thank you


Answer 1:


So I asked the question on the Node.js GitHub: https://github.com/nodejs/node/issues/23220

From the solution:

Dealing with FIFOs is currently a bit tricky.

The open() system call blocks on FIFOs by default until the other side of the pipe has been opened as well. Because Node.js uses a threadpool for file-system operations, opening multiple pipes where the open() calls don’t finish exhausts this threadpool.

The solution is to open the file in non-blocking mode, but that has the difficulty that the other fs calls aren’t built with non-blocking file descriptors in mind; net.Socket is, however.

So, the solution would look something like this:

fs.open('path/to/fifo/', fs.constants.O_RDONLY | fs.constants.O_NONBLOCK, (err, fd) => {
  // Handle err
  const pipe = new net.Socket({ fd });
  // Now `pipe` is a stream that can be used for reading from the FIFO.
});
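
For completeness, here is a slightly expanded, untested sketch of that workaround (my own adaptation, not part of the quoted answer). One assumption worth flagging: per the net.Socket documentation, a socket constructed from an existing fd is not readable by default, so readable: true is passed explicitly.

const fs = require("fs");
const net = require("net");

function openFifo(fifoPath, onData) {
    // O_NONBLOCK makes open() return immediately even when no writer has
    // opened the FIFO yet, so no threadpool thread sits blocked in open()
    fs.open(fifoPath, fs.constants.O_RDONLY | fs.constants.O_NONBLOCK, (err, fd) => {
        if (err) {
            return console.error("open failed", err);
        }
        // net.Socket polls the non-blocking fd through the event loop
        // instead of issuing blocking read() calls on the threadpool
        const pipe = new net.Socket({ fd, readable: true });
        pipe.on("data", onData);
        pipe.on("error", (err) => console.error("pipe error", err));
    });
}

With this approach, neither open() nor the subsequent reads occupy threadpool threads, so the original loop should be able to open far more than four FIFOs.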


Source: https://stackoverflow.com/questions/52608586/node-js-fs-open-hangs-after-trying-to-open-more-than-4-named-pipes-fifos
