Why does reading and writing to the same file in a pipeline produce unreliable results?
Question: I have a bunch of files that contain many blank lines, and I want to collapse any repeated blank lines to make the files easier to read. I wrote the following script:

```bash
#!/bin/bash
for file in * ; do cat "$file" | sed 's/^ \+//' | cat -s > "$file" ; done
```

However, this had very unreliable results: most files ended up completely empty, and only a few had the intended result. What's more, the files that did work seemed to change randomly every time I retried, as different files would get emptied each time.
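The root of the problem is that `> "$file"` truncates the file as soon as the shell sets up the pipeline, usually before `cat`/`sed` get a chance to read it; whether any data survives is a race. A common workaround is to write to a temporary file and only then replace the original. Here is a minimal sketch of that approach; `squeeze_blanks` is a hypothetical helper name, not part of the original script:

```shell
#!/bin/bash
# Strip leading whitespace and collapse runs of blank lines, writing to a
# temporary file first so the original is never truncated mid-pipeline.
squeeze_blanks() {
    local file tmp
    for file in "$@" ; do
        [ -f "$file" ] || continue          # skip directories and specials
        tmp=$(mktemp) || return 1
        # sed can read the filename directly; no leading cat is needed.
        sed 's/^ \+//' "$file" | cat -s > "$tmp" && mv "$tmp" "$file"
    done
}
```

Called as `squeeze_blanks *`, this produces the same transformation the original pipeline intended, but deterministically, because the source file is read to completion before it is replaced. (GNU `sed -i` and `sponge` from moreutils use the same temp-file trick under the hood.)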