Using named pipes with bash - Problem with data loss

余生分开走 2020-12-14 10:16

Did some searching online and found simple 'tutorials' for using named pipes. However, when I do anything with background jobs I seem to lose a lot of data.

[Edit: found a

6 Answers
  •  無奈伤痛
    2020-12-14 11:11

    Your problem is the if statement below:

    while true
    do
        if read txt <"$pipe"
        ....
    done
    

    What is happening is that your job queue server is opening and closing the pipe each time around the loop. This means that some of the clients are getting a "broken pipe" error when they try to write to the pipe - that is, the reader of the pipe goes away after the writer opens it.
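    The per-iteration reopen is the core issue. The same redirection semantics can be seen with a regular file instead of a pipe (paths here are illustrative): putting the redirection on the read command itself reopens the file every time, so the read position never advances.

    ```shell
    # Each `read` carries its own redirection, so the file is
    # reopened (and the read position reset) on every iteration.
    tmpfile=$(mktemp)
    printf 'a\nb\nc\n' > "$tmpfile"

    out=""
    for i in 1 2 3
    do
        read -r line < "$tmpfile"   # reopened each time: always line 1
        out="$out$line"
    done
    echo "$out"                     # prints "aaa", never "abc"
    rm -f "$tmpfile"
    ```

    With a named pipe the reopen is worse than a position reset: closing the read end mid-stream is what gives the writers their "broken pipe" errors.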

    To fix this, change the loop in your server so that it opens the pipe once for the entire loop:

    while true
    do
        if read txt
        ....
    done < "$pipe"
    

    Done this way, the pipe is opened once and kept open.

    You will need to be careful about what you run inside the loop, since everything inside the loop has its stdin attached to the named pipe. Make sure you redirect stdin of any processes inside the loop from somewhere else; otherwise they may consume the data from the pipe.

    Edit: Now that the problem is that you get EOF on your reads when the last client closes the pipe, you can use jilles' method of duplicating the file descriptors, or you can just make sure you are a client too and keep the write side of the pipe open:

    while true
    do
        if read txt
        ....
    done < "$pipe" 3> "$pipe"
    

    This will hold the write side of the pipe open on fd 3. The same caveat applies to this file descriptor as to stdin: you will need to close it so that child processes don't inherit it. It probably matters less than with stdin, but it would be cleaner.
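    Putting the pieces together, here is a minimal end-to-end sketch of this pattern (pipe path, log location, and messages are all illustrative, not the asker's actual script). The server holds the write side open on fd 3, so it never sees EOF between clients, and each command inside the loop detaches from both stdin and fd 3.

    ```shell
    #!/bin/bash
    # Server holds the write side of the FIFO open on fd 3, so the
    # read loop does not hit EOF when the last client disconnects.
    pipe=$(mktemp -u)          # illustrative FIFO path
    mkfifo "$pipe"
    log=$(mktemp)              # illustrative output location

    while true
    do
        if read -r txt
        then
            [ "$txt" = "quit" ] && break
            # </dev/null detaches stdin and 3>&- closes fd 3, so the
            # child inherits neither end of the pipe.
            echo "got: $txt" </dev/null 3>&-
        fi
    done < "$pipe" 3> "$pipe" > "$log" &
    server=$!

    # Clients: each one opens the pipe, writes, and closes it.
    for i in 1 2 3
    do
        echo "job $i" > "$pipe"
    done
    echo "quit" > "$pipe"

    wait "$server"
    cat "$log"                 # got: job 1 / got: job 2 / got: job 3
    rm -f "$pipe"
    ```

    Note the redirection order on the done line matters: the read end is opened first (blocking until the first client connects), and only then is fd 3 opened for writing, which succeeds immediately because the reader already exists.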
