Why does reading and writing to the same file in a pipeline produce unreliable results?

予麋鹿
2020-12-04 03:04

I have a bunch of files that contain many blank lines, and I want to squeeze any runs of repeated blank lines to make the files easier to read. I wrote the following script:

    for file in *; do
        cat "$file" | cat -s > "$file"
    done
2 Answers
  •  感情败类
    2020-12-04 03:45

    The unpredictability comes from a race condition between the two stages of the pipeline, cat "$file" and cat -s > "$file".

    The first stage opens the file and reads from it, while the shell, in setting up the redirection for the second stage, empties (truncates) that same file.

    • If it's emptied before it's read, you get an empty file.
    • If it's read before it's emptied, you get some data (but the file is emptied shortly after and the result is truncated unless it's very short).
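    The truncation itself is visible even without a pipeline, because the shell opens the redirection target with O_TRUNC before the command runs. A minimal sketch (the /tmp path and sample contents here are made up for illustration):

        # The shell truncates the redirection target before cat -s starts,
        # so cat -s reads an already-empty file. (demo path is hypothetical)
        printf 'a\n\n\n\nb\n' > /tmp/squeeze-demo.txt
        cat -s /tmp/squeeze-demo.txt > /tmp/squeeze-demo.txt
        wc -c < /tmp/squeeze-demo.txt    # the file is now empty: 0 bytes

    With a single command the outcome is deterministic; in the original pipeline the two stages run concurrently, which is what makes the result unpredictable rather than always empty.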

    If you have GNU sed, you can simply do sed -i 'expression' *
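    If GNU sed isn't available, a portable alternative is to write the squeezed output to a temporary file and then move it over the original, so nothing ever reads and writes the same file at once. A sketch (the sample directory and file are hypothetical, created only for the demo):

        # Squeeze blank lines safely: write to a temp file first, then
        # replace the original. (sample dir/file are hypothetical)
        dir=$(mktemp -d)
        printf 'a\n\n\n\nb\n' > "$dir/sample.txt"

        for file in "$dir"/*; do
            tmp=$(mktemp) || exit 1
            cat -s "$file" > "$tmp" && mv "$tmp" "$file"
        done

        cat "$dir/sample.txt"    # runs of blank lines squeezed to one

    The mv at the end replaces the original only after cat -s has finished writing, so an interrupted run leaves the original file untouched.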
