mkfifo

O_RDWR on named pipes with poll()

时光总嘲笑我的痴心妄想 submitted on 2019-11-28 16:55:17
I have gone through a variety of different Linux named pipe client/server implementations, but most of them use the blocking defaults on reads/writes. As I am already using poll() to check other flags, I thought it would be a good idea to check for incoming FIFO data via poll() as well... After all the research I think that opening the pipe in O_RDWR mode is the only way to prevent an indefinite number of EOF events on a pipe when no writer has opened it. This way both ends of the pipe are held open, and other clients can open the writable end as well. To respond back I would use separate pipes...
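The O_RDWR trick described above can be sketched as follows (in Python rather than C, since the mechanics are identical; the FIFO path is illustrative). Because the process holds a write descriptor itself, poll() reports POLLIN when data arrives instead of an endless stream of hangup/EOF events when external writers come and go:

```python
import os
import select
import tempfile

# Create a FIFO in a temporary directory (path is illustrative).
fifo_path = os.path.join(tempfile.mkdtemp(), "server_fifo")
os.mkfifo(fifo_path)

# Opening O_RDWR means this process is always one of the FIFO's writers,
# so poll() never reports a spurious EOF when external writers disconnect.
fd = os.open(fifo_path, os.O_RDWR | os.O_NONBLOCK)

poller = select.poll()
poller.register(fd, select.POLLIN)

# No data yet: poll() times out with no events instead of reporting POLLHUP.
assert poller.poll(0) == []

# Simulate a client: open the writable end, send a message, disconnect.
client = os.open(fifo_path, os.O_WRONLY)
os.write(client, b"hello")
os.close(client)           # client goes away...

events = poller.poll(100)  # ...but we still get POLLIN, not POLLHUP
data = os.read(fd, 1024)
print(events, data)
```

The same pattern translates line-for-line to C's open()/poll()/read().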

Implementing pipelining in C. What would be the best way to do that?

时间秒杀一切 submitted on 2019-11-28 12:42:39
I can't think of any way to implement pipelining in C that would actually work. That's why I've decided to write in here. I have to say that I understand how pipe/fork/mkfifo work. I've seen plenty of examples implementing 2-3 pipelines. It's easy. My problem starts when I have to implement a shell, and the pipeline count is unknown. What I've got now: e.g. ls -al | tr a-z A-Z | tr A-Z a-z | tr a-z A-Z I transform such a line into something like this: array[0] = {"ls", "-al", NULL} array[1] = {"tr", "a-z", "A-Z", NULL} array[2] = {"tr", "A-Z", "a-z", NULL} array[3] = {"tr", "a-z", "A-Z",
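The generalization from 2-3 pipes to an arbitrary count is a loop: for each command except the last, create a pipe; in each child, dup2() the previous pipe's read end onto stdin and the current pipe's write end onto stdout; in the parent, close ends as soon as they are handed off. A sketch of that loop, written in Python's os module because it mirrors the C fork/pipe/dup2/execvp calls one-to-one (the demo command list is illustrative):

```python
import os

def run_pipeline(commands):
    """Run a list of argv arrays connected by pipes, like cmd1 | cmd2 | ...
    Mirrors the classic C fork/pipe/dup2 loop for an unknown pipeline length."""
    prev_read = None                    # read end of the previous pipe
    pids = []
    for i, argv in enumerate(commands):
        if i < len(commands) - 1:
            read_end, write_end = os.pipe()
        else:
            read_end, write_end = None, None   # last command keeps stdout
        pid = os.fork()
        if pid == 0:                    # child
            if prev_read is not None:
                os.dup2(prev_read, 0)   # stdin <- previous pipe
                os.close(prev_read)
            if write_end is not None:
                os.dup2(write_end, 1)   # stdout -> next pipe
                os.close(write_end)
            if read_end is not None:
                os.close(read_end)      # child never reads its own output
            os.execvp(argv[0], argv)
        # parent: close ends it has finished handing off
        pids.append(pid)
        if prev_read is not None:
            os.close(prev_read)
        if write_end is not None:
            os.close(write_end)
        prev_read = read_end
    for pid in pids:
        os.waitpid(pid, 0)

# Demo: capture `echo abc | tr a-z A-Z` by pointing fd 1 at a pipe.
r, w = os.pipe()
saved_stdout = os.dup(1)
os.dup2(w, 1)
run_pipeline([["echo", "abc"], ["tr", "a-z", "A-Z"]])
os.dup2(saved_stdout, 1)
os.close(saved_stdout)
os.close(w)
result = os.read(r, 64)
os.close(r)
print(result)
```

The crucial detail, in C as here, is that the parent closes every pipe end it no longer needs; a leaked write end keeps the downstream command from ever seeing EOF.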

python os.mkfifo() for Windows

只愿长相守 submitted on 2019-11-28 12:35:46
Short version (if you can answer the short version it does the job for me; the rest is mainly for the benefit of other people with a similar task): In Python on Windows, I want to create 2 file objects, attached to the same file (it doesn't have to be an actual file on the hard drive), one for reading and one for writing, such that if the reading end tries to read it will never get EOF (it will just block until something is written). I think on Linux os.mkfifo() would do the job, but on Windows it doesn't exist. What can be done? (I must use file objects.) Some extra details: I have a python
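One portable way to get the two connected file objects the short version asks for, without os.mkfifo(), is an anonymous pipe: os.pipe() exists on both Windows and POSIX, and os.fdopen() wraps each descriptor in a file object. The read side blocks until the write side produces data, just like a FIFO. A minimal sketch:

```python
import os
import threading

# os.pipe() works on both Windows and POSIX; wrap the two fds in file
# objects to get a read side that blocks until the write side delivers
# data (it only sees EOF if the write side is closed).
read_fd, write_fd = os.pipe()
reader = os.fdopen(read_fd, "r")
writer = os.fdopen(write_fd, "w")

def produce():
    # The flush matters: file objects buffer, and the reader blocks
    # until bytes actually reach the pipe.
    writer.write("hello\n")
    writer.flush()

# readline() blocks here until the producer thread writes.
t = threading.Thread(target=produce)
t.start()
line = reader.readline()
t.join()
print(line)
```

The limitation versus a named FIFO is that both endpoints must live in (or be inherited by) related processes; unrelated processes cannot open the pipe by name.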

How do I use exec 3>myfifo in a script, and not have echo foo>&3 close the pipe?

安稳与你 submitted on 2019-11-28 05:22:44
Question Why can't I use exec 3>myfifo in the same manner in a bash script as I can in my terminal? I'm using named pipes to turn an awk filter into a simple "server" that should be able to take text input from clients, filter it, and flush on NUL. In terminal 1, the server is running like this: $ mkfifo to_server from_server; $ while true; do # Really, this awk script BEGIN's with reading in a huge file, # thus the client-server model awk '{sub("wrong", "correct");print;} /\0/ {fflush();}' <to

PhantomJS: pipe input

断了今生、忘了曾经 submitted on 2019-11-27 17:35:25
Question I am trying to use PhantomJS to render an HTML page to PDF. I do not want to write the files to disk; I have the HTML in memory, and I want the PDF in memory. Using the excellent answer from Pooria Azimi at this question, I am able to get the PDF from a named pipe. When trying the same on the other end (replacing the input file with a named pipe), I end up with a blank PDF. This is what I am doing now (simplified): mkfifo in_pipe.html out_pipe.pdf ./phantomjs rasterize.js in_pipe.html out
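Whatever PhantomJS does internally, the general pattern for feeding in-memory content to a consumer through a named pipe is that a writer must be streaming into the FIFO concurrently, before or while the consumer opens it, and must close its end so the consumer sees EOF. A runnable sketch of that pattern, with `cat` standing in for the phantomjs invocation so it runs without PhantomJS installed (file names are illustrative):

```python
import os
import subprocess
import tempfile
import threading

html = b"<html><body>hello</body></html>"
fifo = os.path.join(tempfile.mkdtemp(), "in_pipe.html")
os.mkfifo(fifo)

def feed():
    # open() blocks until the consumer opens the read side, then streams
    # the in-memory HTML; closing signals EOF so the consumer finishes.
    with open(fifo, "wb") as f:
        f.write(html)

t = threading.Thread(target=feed)
t.start()   # start the writer BEFORE launching the consumer

# `cat` stands in for: ./phantomjs rasterize.js in_pipe.html out_pipe.pdf
proc = subprocess.run(["cat", fifo], capture_output=True)
t.join()
print(proc.stdout)
```

If the consumer instead opens the input path more than once, or seeks in it, a FIFO cannot satisfy it, and that failure mode typically shows up as empty output.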

How do I properly write to FIFOs in Python?

末鹿安然 submitted on 2019-11-27 17:30:10
Question Something very strange is happening when I open FIFOs (named pipes) in Python for writing. Consider what happens when I try to open a FIFO for writing in an interactive interpreter: >>> fifo_write = open('fifo', 'w') The above line blocks until I open another interpreter and type the following: >>> fifo_read = open('fifo', 'r') >>> fifo_read.read() I don't understand why I had to wait for the pipe to be opened for reading, but let's skip that. The above code will block until there's data available
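The blocking open('fifo', 'w') above is documented POSIX behaviour: opening a FIFO for writing waits until a reader exists. Opening with O_NONBLOCK makes the condition visible instead of hanging, failing with ENXIO while there is no reader. A sketch:

```python
import errno
import os
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(fifo)

# open('fifo', 'w') blocks because POSIX open(O_WRONLY) on a FIFO waits
# for a reader. With O_NONBLOCK the kernel reports the situation instead:
try:
    os.open(fifo, os.O_WRONLY | os.O_NONBLOCK)
    no_reader_errno = None
except OSError as e:
    no_reader_errno = e.errno     # expected: errno.ENXIO (no reader yet)

# Once a reader exists, the same nonblocking open succeeds immediately.
r = os.open(fifo, os.O_RDONLY | os.O_NONBLOCK)
w = os.open(fifo, os.O_WRONLY | os.O_NONBLOCK)
os.write(w, b"data")
data = os.read(r, 16)
print(no_reader_errno, data)
```

So the interactive session blocks not because of Python's open(), but because the kernel is rendezvousing the two ends of the pipe.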

What conditions result in an opened, nonblocking named pipe (fifo) being “unavailable” for reads?

白昼怎懂夜的黑 submitted on 2019-11-27 14:48:10
Question Situation: new_pipe = os.open(pipe_path, os.O_RDONLY | os.O_NONBLOCK) # pipe_path points to a FIFO data = os.read(new_pipe, 1024) The read occasionally raises errno 11 (EAGAIN): Resource temporarily unavailable. When is this error raised? It seems very rare, as the common cases return data: If no writer has the pipe opened, an empty str ('') is returned. If the writer has the pipe opened but no data is in the FIFO, an empty str ('') is also returned. And of course if the writer puts data in the FIFO, that
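A quick experiment (on Linux) suggests the middle case in the question is actually the EAGAIN case: with no writer attached, a nonblocking read returns EOF (empty), but with a writer attached and the FIFO empty, it raises EAGAIN (BlockingIOError in Python 3). A sketch reproducing all three cases deterministically:

```python
import errno
import os
import tempfile

fifo = os.path.join(tempfile.mkdtemp(), "fifo")
os.mkfifo(fifo)
r = os.open(fifo, os.O_RDONLY | os.O_NONBLOCK)

# Case 1 - no writer attached: read() reports EOF, i.e. b''.
no_writer = os.read(r, 1024)

# Case 2 - a writer is attached but the FIFO is empty: read() raises
# EAGAIN ("Resource temporarily unavailable"), the error in question.
w = os.open(fifo, os.O_WRONLY | os.O_NONBLOCK)
try:
    empty_with_writer = os.read(r, 1024)
except BlockingIOError as e:
    empty_with_writer = e.errno   # expected: errno.EAGAIN

# Case 3 - the writer puts data in the FIFO: read() returns it.
os.write(w, b"x")
data = os.read(r, 1024)
print(no_writer, empty_with_writer, data)
```

So the "occasional" EAGAIN corresponds to moments when some writer holds the FIFO open but has not yet written (or the reader has already drained everything written so far).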
