Capture both stdout & stderr via pipe

Submitted by 可紊 on 2020-12-30 02:44:04

Question


I want to read both stderr and stdout from a child process, but it doesn't work.

main.rs

use std::process::{Command, Stdio};
use std::io::{BufRead, BufReader};

fn main() {
    let mut child = Command::new("./1.sh")
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .unwrap();

    let out = BufReader::new(child.stdout.take().unwrap());
    let err = BufReader::new(child.stderr.take().unwrap());

    out.lines().for_each(|line|
        println!("out: {}", line.unwrap())
    );
    err.lines().for_each(|line|
        println!("err: {}", line.unwrap())
    );

    let status = child.wait().unwrap();
    println!("{}", status);
}

1.sh

#!/bin/bash
counter=100
while [ $counter -gt 0 ]
do
   sleep 0.1
   echo "on stdout"
   echo "on stderr" >&2
   counter=$(( $counter - 1 ))
done
exit 0

This code only reads stdout:

out: on stdout

If I remove everything related to stdout from this code and leave only stderr, it reads stderr:

let mut child = Command::new("./1.sh")
    .stdout(Stdio::null())
    .stderr(Stdio::piped())
    .spawn()
    .unwrap();

let err = BufReader::new(child.stderr.take().unwrap());

err.lines().for_each(|line|
    println!("err: {}", line.unwrap())
);

Produces

err: on stderr

It seems like it can read either stdout or stderr, but not both at the same time. What am I doing wrong?

I'm using Rust 1.26.0-nightly (322d7f7b9 2018-02-25)


Answer 1:


When I run this program on my computer under Linux, it prints a line from stdout roughly every 0.1 seconds until all 100 stdout lines have been read, then the 100 lines from stderr are all printed at once, and finally the program prints the called program's exit code and terminates.

When you read from a pipe and there is no incoming data, your program will by default block until some data becomes available. When the other program terminates or closes its end of the pipe, a read performed after everything it sent has been consumed returns a length of zero bytes, signaling "end of file" (the same mechanism as for regular files).

When a program writes to a pipe, the operating system stores the data in a buffer until the other end of the pipe reads it. That buffer has a limited size, so if it fills up, the write blocks. What can then happen, for example, is that your program blocks reading from stdout while the child blocks writing to stderr. The shell script you posted doesn't output enough data to trigger this, but if I change the counter to start at 10000, it blocks at 5632 on my system, because the stderr pipe is full and the Rust program hasn't started reading it yet.
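
To make that concrete, here is a minimal sketch that reproduces the deadlock without editing 1.sh; the inline `sh -c` loop and the 10000 iteration count are stand-ins for the modified script, not something from the original question. The child keeps writing to both streams while the parent only drains stdout, so once the stderr pipe buffer (typically 64 KiB on Linux) is full, both sides wait on each other forever.

use std::io::{BufRead, BufReader};
use std::process::{Command, Stdio};

fn main() {
    let mut child = Command::new("sh")
        .arg("-c")
        // Write far more to stderr than a single pipe buffer can hold.
        .arg("for i in $(seq 1 10000); do echo 'on stdout'; echo 'on stderr' >&2; done")
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .unwrap();

    let out = BufReader::new(child.stdout.take().unwrap());

    // Only stdout is drained here. Once the stderr pipe buffer fills up, the
    // child blocks on `echo ... >&2` and stops writing to stdout, so this
    // loop never reaches end-of-file: parent and child wait on each other.
    for line in out.lines() {
        println!("out: {}", line.unwrap());
    }

    // Never reached once the deadlock occurs.
    let status = child.wait().unwrap();
    println!("{}", status);
}

With the original 1.sh, the same situation arises once the counter is large enough, as described above.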

I know of two solutions to this problem:

  1. Set the pipes to nonblocking mode. Nonblocking mode means that if a read or a write would block, it instead returns immediately with a distinct error code signaling this condition. When this condition occurs, you can then switch to the next pipe and try that one. To avoid consuming all CPU when both pipes have no data yet, you usually want to use a function like poll to wait until either pipe has data.

    The Rust standard library doesn't expose nonblocking mode for these pipes, but it provides the convenient wait_with_output method that does exactly what I just described! However, as the name implies, it only returns when the program has ended. Also, stdout and stderr are read into Vecs, so if the output is big, your program will consume a lot of memory; you can't process the data in a streaming fashion.

    use std::io::{BufRead, BufReader};
    use std::process::{Command, Stdio};
    
    fn main() {
        let child = Command::new("./1.sh")
            .stdout(Stdio::piped())
            .stderr(Stdio::piped())
            .spawn()
            .unwrap();
    
        let output = child.wait_with_output().unwrap();
    
        let out = BufReader::new(&*output.stdout);
        let err = BufReader::new(&*output.stderr);
    
        out.lines().for_each(|line|
            println!("out: {}", line.unwrap());
        );
        err.lines().for_each(|line|
            println!("err: {}", line.unwrap());
        );
    
        println!("{}", output.status);
    }
    

    If you want to use nonblocking mode manually, you can recover the raw file descriptor on Unix-like systems with AsRawFd, or the raw handle on Windows with AsRawHandle, and then pass those to the appropriate operating system APIs (see the sketch after this list).

  2. Read each pipe on a separate thread. We can keep reading one of them on the main thread and spawn a thread for the other pipe.

    use std::io::{BufRead, BufReader};
    use std::process::{Command, Stdio};
    use std::thread;
    
    fn main() {
        let mut child = Command::new("./1.sh")
            .stdout(Stdio::piped())
            .stderr(Stdio::piped())
            .spawn()
            .unwrap();
    
        let out = BufReader::new(child.stdout.take().unwrap());
        let err = BufReader::new(child.stderr.take().unwrap());
    
        let thread = thread::spawn(move || {
            err.lines().for_each(|line|
                println!("err: {}", line.unwrap());
            );
        });
    
        out.lines().for_each(|line|
            println!("out: {}", line.unwrap());
        );
    
        thread.join().unwrap();
    
        let status = child.wait().unwrap();
        println!("{}", status);
    }
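
For completeness, here is a rough, Unix-only sketch of the manual nonblocking approach mentioned in solution 1. It is not from the original answer: it assumes the `libc` crate as an extra dependency (plus `extern crate libc;` on the 2015 edition), it flips both pipes to nonblocking mode through `AsRawFd` and `fcntl` with `O_NONBLOCK`, and the `set_nonblocking` and `drain` helpers are names made up for this example. A real implementation would wait on both descriptors with `poll(2)` instead of sleeping in a loop.

use std::io::{ErrorKind, Read};
use std::os::unix::io::{AsRawFd, RawFd};
use std::process::{Command, Stdio};
use std::thread::sleep;
use std::time::Duration;

// Switch a file descriptor to nonblocking mode via fcntl(2).
fn set_nonblocking(fd: RawFd) {
    unsafe {
        let flags = libc::fcntl(fd, libc::F_GETFL);
        libc::fcntl(fd, libc::F_SETFL, flags | libc::O_NONBLOCK);
    }
}

// Read whatever is currently available from one pipe; returns true once
// end-of-file has been reached (the child closed its end).
fn drain<R: Read>(label: &str, pipe: &mut R, buf: &mut [u8]) -> bool {
    loop {
        match pipe.read(buf) {
            Ok(0) => return true,
            Ok(n) => print!("{}: {}", label, String::from_utf8_lossy(&buf[..n])),
            Err(ref e) if e.kind() == ErrorKind::WouldBlock => return false,
            Err(e) => panic!("read failed: {}", e),
        }
    }
}

fn main() {
    let mut child = Command::new("./1.sh")
        .stdout(Stdio::piped())
        .stderr(Stdio::piped())
        .spawn()
        .unwrap();

    let mut out = child.stdout.take().unwrap();
    let mut err = child.stderr.take().unwrap();
    set_nonblocking(out.as_raw_fd());
    set_nonblocking(err.as_raw_fd());

    let mut buf = [0u8; 4096];
    let (mut out_done, mut err_done) = (false, false);
    while !(out_done && err_done) {
        out_done = out_done || drain("out", &mut out, &mut buf);
        err_done = err_done || drain("err", &mut err, &mut buf);
        // Crude busy-wait avoidance; poll(2) on both descriptors is the
        // proper way to wait until one of the pipes has data.
        sleep(Duration::from_millis(10));
    }

    let status = child.wait().unwrap();
    println!("{}", status);
}

Compared to wait_with_output, this processes the data as it arrives instead of buffering everything in memory, at the cost of platform-specific code.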
    


Source: https://stackoverflow.com/questions/49062707/capture-both-stdout-stderr-via-pipe
