Question:
For example, right now I'm using the following to change a couple of files whose Unix paths I wrote to a file:
cat file.txt | while read in; do chmod 755 "$in"; done
Is there a more elegant, safer way?
Answer 1:
There is not only one answer to "read a file line by line and execute commands". Four ways:

- shell command-line expansion
- xargs, the dedicated tool
- while read, with some remarks
- while read -u, using a dedicated fd, for interactive processing (sample below)

Regarding the OP's request (running chmod on all targets listed in a file), xargs is the indicated tool. But for some other applications, small numbers of files, and so on, the alternatives may fit better.
1. Read the entire file as a command-line argument

If your file is not too big and all files are well named (without spaces or other special chars like quotes), you can use shell command-line expansion. Simply:

chmod 755 $(<file.txt)

For a small number of files (lines), this command is the lightest one.
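A quick sketch of why the "well named" caveat matters (the file names alpha and beta are illustrative): the expansion is split on whitespace, so simple names work, while a name containing a space would be split into two separate arguments.

```shell
# Sketch (bash): expansion works only for whitespace-free names.
cd "$(mktemp -d)"
touch alpha beta
printf '%s\n' alpha beta > file.txt
chmod 755 $(<file.txt)    # expands to: chmod 755 alpha beta
ls -l alpha               # alpha is now executable
```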
2. xargs is the right tool

For a bigger number of files, or almost any number of lines in your input file, and for many coreutils tools like chown, chmod, rm, cp -t...:

xargs chmod 755 <file.txt

If you have special chars and/or a lot of lines in file.txt:

xargs -0 chmod 755 < <(tr \\n \\0 <file.txt)

If your command needs to be run exactly once per entry:

xargs -0 -n 1 chmod 755 < <(tr \\n \\0 <file.txt)

This is not needed for this sample, as chmod accepts multiple files as arguments, but it matches the title of the question.

For some special cases, you can even define the location of the file argument in the commands generated by xargs:

xargs -0 -I '{}' -n 1 myWrapper -arg1 -file='{}' wrapCmd < <(tr \\n \\0 <file.txt)

Test with seq 1 5 as input. Try this:

xargs -n 1 -I{} echo Blah {} blabla {}.. < <(seq 1 5)
Blah 1 blabla 1..
Blah 2 blabla 2..
Blah 3 blabla 3..
Blah 4 blabla 4..
Blah 5 blabla 5..

Here the command is run once per line.
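To see the NUL-delimiting at work, here is a small sketch (the file names are made up) with a name containing a space. Plain xargs would split it into two arguments; converting newlines to NULs and using xargs -0 keeps it whole:

```shell
cd "$(mktemp -d)"
touch "with space" plain
printf '%s\n' "with space" plain > file.txt
# NUL-delimited input: "with space" stays a single argument.
tr '\n' '\0' < file.txt | xargs -0 chmod 755
ls -l "with space"    # now executable
```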
3. while read and variants

As the OP suggests,

cat file.txt | while read in; do chmod 755 "$in"; done

will work, but there are 2 issues: cat | is a useless fork, and | while ...; done runs in a subshell, whose environment disappears after done.

So this is better written:

while read in; do chmod 755 "$in"; done < file.txt
But you should be aware of $IFS and read's flags; see help read:

read: read [-r] ... [-d delim] ... [name ...]
    Reads a single line from the standard input...
    The line is split into fields as with word splitting, and the first
    word is assigned to the first NAME, the second word to the second
    NAME, and so on... Only the characters found in $IFS are recognized
    as word delimiters.
    ...
    Options:
      -d delim  continue until the first character of DELIM is read,
                rather than newline
      -r        do not allow backslashes to escape any characters
    ...
    Exit Status:
    The return code is zero, unless end-of-file is encountered...
In some cases, you may need to use

while IFS= read -r in; do chmod 755 "$in"; done <file.txt

to avoid problems with strange filenames. And if you encounter problems with UTF-8, maybe:

while LANG=C IFS= read -r in; do chmod 755 "$in"; done <file.txt

But while you use STDIN for reading file.txt, your script cannot be interactive (you cannot use STDIN anymore).
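A minimal sketch of what IFS= and -r change (names.txt and its contents are illustrative): without them, read strips leading and trailing whitespace and treats backslashes as escape characters.

```shell
cd "$(mktemp -d)"
printf '  two leading spaces\n' > names.txt
# IFS= read -r keeps the line byte-for-byte:
while IFS= read -r in; do printf '[%s]\n' "$in"; done < names.txt
# prints: [  two leading spaces]

# plain read trims the leading whitespace:
while read in; do printf '[%s]\n' "$in"; done < names.txt
# prints: [two leading spaces]
```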
4. while read -u, using a dedicated fd

Syntax: while read ...; done <file.txt will redirect STDIN to file.txt. That means you won't be able to interact with the process until it finishes. If you plan to create an interactive tool, you have to avoid using STDIN and use some alternative file descriptor instead.

The constant file descriptors are 0 for STDIN, 1 for STDOUT and 2 for STDERR. You can see them with:

ls -l /dev/fd/

or

ls -l /proc/self/fd/

From there, you have to choose an unused number between 0 and 63 (more, in fact, depending on sysctl) as your file descriptor. For this demo, I will use fd 7:

exec 7<file.txt    # Without spaces between `7` and `<`!
ls -l /dev/fd/

Then you can use read -u 7 this way:

while read -u 7 filename; do
    ans=
    while [ -z "$ans" ]; do
        read -p "Process file '$filename' (y/n)? " -sn1 foo
        [ "$foo" ] && [ -z "${foo/[yn]}" ] && ans=$foo || echo '??'
    done
    if [ "$ans" = "y" ]; then
        echo Yes
        echo "Processing '$filename'."
    else
        echo No
    fi
done 7<file.txt

To close fd 7:

exec 7<&-    # This will close file descriptor 7.
ls -l /dev/fd/

Note: I keep the exec version because this syntax can be useful when doing a lot of I/O with parallel processes:

mkfifo sshfifo
exec 7> >(ssh -t user@host sh >sshfifo)
exec 6<sshfifo
Answer 2:
Yes:

while read in; do chmod 755 "$in"; done < file.txt

This way you avoid a cat process. cat is almost always bad for a purpose such as this; you can read more about the Useless Use of Cat.
Answer 3:
If you have a nice selector (for example, all .txt files in a dir), you could do:
for i in *.txt; do chmod 755 "$i"; done
(a bash for loop)
or a variant of yours:
while read line; do chmod 755 "$line"; done <file.txt
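The glob form above can be sketched like this (file names are illustrative): only the files matching *.txt are touched, no listing file needed.

```shell
cd "$(mktemp -d)"
touch a.txt b.txt c.log
for i in *.txt; do chmod 755 "$i"; done
# a.txt and b.txt are now executable; c.log is untouched
```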
Answer 4:
If you know you don't have any whitespace in the input:
xargs chmod 755 < file.txt
If there might be whitespace in the paths, and if you have GNU xargs:
tr '\n' '\0' < file.txt | xargs -0 chmod 755
Answer 5:
If you want to run your command in parallel for each line, you can use GNU Parallel:

parallel -a <your file> <program>

Each line of your file will be passed to the program as an argument. By default, parallel runs as many jobs as you have CPU cores, but you can specify the count with -j.
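If GNU Parallel is not installed, xargs -P gives a similar fan-out (a sketch, assuming a GNU or BSD xargs that supports -P; the input items are illustrative):

```shell
printf '%s\n' one two three > items.txt
# -P 4: up to 4 concurrent processes; -n 1: one argument per invocation
xargs -P 4 -n 1 echo < items.txt
```

Note that with -P the output order is not guaranteed to match the input order.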
Answer 6:
I see that you tagged bash, but Perl would also be a good way to do this:

perl -p -e '`chmod 755 $_`' file.txt

You could also apply a regex to make sure you're getting the right files, e.g. to process only .txt files:

perl -p -e '`chmod 755 $_` if /\.txt$/' file.txt

To "preview" what's happening, just replace the backticks with double quotes and prepend print:

perl -p -e 'print "chmod 755 $_" if /\.txt$/' file.txt
Answer 7:
You can also use awk, which gives you more flexibility in handling the file:

awk '{ print "chmod 755 " $0 | "/bin/sh" }' file.txt

If your file has a field separator, like:

field1,field2,field3

then to act on only the first field, do:

awk -F, '{ print "chmod 755 " $1 | "/bin/sh" }' file.txt

You can find more details in the GNU documentation: https://www.gnu.org/software/gawk/manual/html_node/Very-Simple.html#Very-Simple
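A testable sketch of the awk pipeline (run in a scratch directory; the file names are made up). Two caveats worth noting: the generated commands are unquoted, so names with spaces would still break, and close() flushes the pipe so /bin/sh finishes before awk returns:

```shell
cd "$(mktemp -d)"
touch a.txt b.txt
printf 'a.txt\nb.txt\n' > file.txt
# Generate one chmod command per line and feed them to /bin/sh;
# close() waits for the shell to finish processing them.
awk '{ print "chmod 755 " $0 | "/bin/sh" } END { close("/bin/sh") }' file.txt
ls -l a.txt    # now executable
```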
Answer 8:
The same logic applies to many other objectives. For example: how would you read the .sh_history of each user under the /home/ filesystem? What if there are thousands of them?
#!/bin/ksh
last |head -10|awk '{print $1}'|
while IFS= read -r line
do
su - "$line" -c 'tail .sh_history'
done
Here is the script https://github.com/imvieira/SysAdmin_DevOps_Scripts/blob/master/get_and_run.sh
Source: https://stackoverflow.com/questions/49432555/how-to-pipeline-permission-denied-file-folders-into-sudo-chmod-orw