awk

awk remote ssh command

倾然丶 夕夏残阳落幕 submitted on 2020-01-07 06:18:51
Question: I have a bash script that runs a remote awk command, but I guess I haven't correctly escaped special characters, since no file is generated on the remote server. Still, I get no error. My variables are declared locally and can be used remotely without issue (other parts of the script confirm this).

ssh -q -t server ' logfiles=$(find /var/log/httpd/ -type f -name *access_log -cmin -'"$past"') for log in $logfiles; awk -vDate=\`date -d'now-'"$past"' minutes' +[%d/%b/%Y:%H:%M:%S\` ' { if \(\$4 >
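Not an answer from the thread, but one sketch of a way to sidestep the nested-quoting problem: send the remote script to bash -s over stdin with a here-document, so only $past is expanded locally and every escaped \$ is expanded on the server. The host name "server" and the .filtered output name are placeholders, and date -d assumes GNU date on the remote host.

```shell
# Sketch, not from the thread: feed the remote script on stdin so only
# $past expands locally; every \$ below expands on the server instead.
# "server" and the .filtered suffix are placeholders; date -d is GNU date.
past=30
run_remote_filter() {
  ssh -q server "bash -s" <<EOF
logfiles=\$(find /var/log/httpd/ -type f -name '*access_log' -cmin -$past)
for log in \$logfiles; do
  awk -v Date="\$(date -d "now-$past minutes" '+[%d/%b/%Y:%H:%M:%S')" \\
      '\$4 > Date' "\$log" > "\$log.filtered"
done
EOF
}
```

The string comparison \$4 > Date is the questioner's own approach; it only orders correctly within a single day/month, which may be acceptable for a "last N minutes" window.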

Awk: Sum up column values across multiple files with identical column layout

孤街浪徒 submitted on 2020-01-07 05:30:51
Question: I have a number of files with the same header: COL1, COL2, COL3, COL4. You can ignore COL1-COL3; COL4 contains a number. Each file contains about 200 rows. I am trying to sum up across the rows. For example:

File 1

COL1 COL2 COL3 COL4
x y z 3
a b c 4

File 2

COL1 COL2 COL3 COL4
x y z 5
a b c 10

Then a new file is returned:

COL1 COL2 COL3 COL4
x y z 8
a b c 14

Is there a simple way to do this without AWK? I will use AWK if need be, I just thought there might be a simple one-liner that I could
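The question hopes for a non-awk one-liner, but as a hedged sketch awk keeps the repeated-header handling simple. The printf lines recreate the question's sample files, and whitespace-separated columns are assumed:

```shell
# Recreate the question's sample input.
printf 'COL1 COL2 COL3 COL4\nx y z 3\na b c 4\n'  > file1
printf 'COL1 COL2 COL3 COL4\nx y z 5\na b c 10\n' > file2

awk '
  FNR == 1 { if (NR == 1) print; next }     # keep only the first header
  { k = $1 FS $2 FS $3                      # COL1-COL3 identify the row
    if (!(k in sum)) order[++n] = k         # remember first-seen order
    sum[k] += $4 }
  END { for (i = 1; i <= n; i++) print order[i], sum[order[i]] }
' file1 file2
```

This assumes matching rows share the same COL1-COL3 values (rather than the same line position) across files.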

Replace string in XML file with awk/sed with string from another file

半世苍凉 submitted on 2020-01-07 02:44:15
Question: Sorry for the long/weird title, but I'm stuck on a problem. I have this XML file:

</member>
<member>
<name>TransactionID</name>
<value><string>123456789123456</string></value>
</member>
<member>
<name>Number</name>
<value><string>765101293</string></value>
</member>

There, I have to replace "765101293" with another value from another file, file2:

765003448
765885388
764034143
784478101
762568592
769765134
767200702
769550613
784914007
762333840

So, the XML file will change at each
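The question is cut off, so as an assumed reading: each time a <name>Number</name> member appears, the next line's <string> value is replaced with the next number from file2, in order. The filenames input.xml and file2 and this exact line layout are assumptions, and treating XML line by line like this is fragile:

```shell
# Sketch: after each <name>Number</name> line, swap the number in the
# following <string> line for the next value from file2. input.xml
# recreates part of the question's snippet; the layout is assumed fixed.
printf '<member>\n<name>Number</name>\n<value><string>765101293</string></value>\n</member>\n' > input.xml
printf '765003448\n765885388\n' > file2

awk 'NR == FNR { repl[NR] = $1; next }        # first file: load the values
     /<name>Number<\/name>/ { hit = 1; print; next }
     hit { sub(/<string>[0-9]+<\/string>/, "<string>" repl[++i] "</string>"); hit = 0 }
     { print }' file2 input.xml
```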

How can I adjust the length of a column field in bash using awk or sed?

两盒软妹~` submitted on 2020-01-07 02:19:45
Question: I have an input.csv file in which columns 2 and 3 have variable length.

100,Short Column, 199
200,Meeedium Column,1254
300,Loooooooooooong Column,35

I'm trying to use the following command to achieve a clean tabulation, but I need to pad the 2nd column with a certain number of blank spaces in order to get a fixed-length column (let's say a total length of 30 is enough).

awk -F, '{print $1 "\t" $2 "\t" $3;}' input.csv

My current output looks like this: 100 Short Column 199 200 Meeedium
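One sketch: awk's printf can do the padding itself, so no spaces need to be counted. %-30s left-justifies column 2 in a fixed 30-character field (assuming 30 is the agreed width); the printf below recreates the question's rows:

```shell
# Sketch: %-30s pads field 2 to a fixed 30 characters, left-justified.
printf '100,Short Column, 199\n200,Meeedium Column,1254\n300,Loooooooooooong Column,35\n' |
awk -F, '{ printf "%s\t%-30s%s\n", $1, $2, $3 }'
```

Columns longer than 30 are not truncated by %-30s; %-30.30s would also clip them if that is wanted.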

Scripting in Bash

China☆狼群 submitted on 2020-01-07 00:33:49
Question: I have a bash function, say parse, which takes one argument; the function name is f. My file to be processed looks like:

a@b@c@
a@d@e@g@
m@n@
t@

I want the output to be:

a@f(b)@f(c)@
a@f(d)@f(e)@f(g)@
m@f(n)@
t@

That is, apply the function f to everything except the first field. Any clues on how I can do this?

Answer 1: Maybe this is what you want? E.g. you have a script called sqr.sh:

kent$ cat sqr.sh
#!/bin/bash
echo $(($1*$1))

Now you want to apply the function above to your input:

kent$ echo "foo@2@3@4
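A sketch completing the answer's idea: awk splits on @ and pipes every field after the first through the helper script (the answer's sqr.sh, recreated locally here). Note the field value is interpolated into a shell command line, so this assumes trusted input:

```shell
# Recreate the answer's helper script in the current directory.
cat > sqr.sh <<'EOF'
#!/bin/bash
echo $(($1*$1))
EOF
chmod +x sqr.sh

printf 'foo@2@3@\n' |
awk -F'@' -v OFS='@' '{
  for (i = 2; i <= NF; i++) {
    if ($i == "") continue          # trailing @ makes an empty last field
    cmd = "./sqr.sh " $i            # field value is shell-interpolated here
    cmd | getline tmp               # read f(field) back from the helper
    close(cmd)
    $i = tmp
  }
  print                            # foo@2@3@ becomes foo@4@9@
}'
```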

Edit .csv file with AWK

冷暖自知 submitted on 2020-01-06 20:23:52
Question: I have a csv file in which I have to make some changes, which you will see in the examples I will put. I think I can do it with arrays, but I don't know how to structure it. Any ideas?

Original file:

"1033reto";"V09B";"";"";"";"";"";"QVN";"V09B"
"1033reto";"V010";"";"";"";"";"";"QVN";"V010"
"1033reto";"V015";"";"";"";"";"";"QVN";"V015"
"1033reto";"V08C";"";"";"";"";"";"QVN";"V08C"
"1040reto";"V03D";"";"";"";"";"";"QVN";"V03D"
"1040reto";"V01C";"";"";"";"";"";"QVN";"V01C"
"1050reto";"V03D"

To sum column with condition

我与影子孤独终老i submitted on 2020-01-06 20:03:43
Question: I have data in a text file, for example:

A B C D E F
10 0 0.9775 39.3304 0.9311 60.5601
10 1 0.9802 32.3287 0.9433 56.1201
10 2 0.9816 39.9759 0.9446 54.0428
10 3 0.9737 37.8779 0.9419 56.3865
10 4 0.9798 34.9152 0.905 69.0879
10 5 0.9803 50.057 0.9201 64.6289
10 6 0.9805 39.1062 0.9093 68.4061
10 7 0.9781 33.8874 0.9327 60.7631
10 8 0.9802 32.5734 0.9376 60.9165
10 9 0.9798 32.3466 0.94 54.7645
11 0 0.9749 40.2712 0.9042 71.2873
11 1 0.9755 35.6546 0.9195 63.7436
11 2 0.9766 36.753 0.9507 51.7864
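The question text is cut off, so the exact condition is an assumption. As one sketch, this sums column D grouped by the value in column A, skipping the header line (data.txt recreates a shortened sample of the data):

```shell
# Sketch under an assumed reading of the truncated question: total of
# column D per distinct value in column A, header line skipped.
printf 'A B C D E F\n10 0 0.9775 39.3304 0.9311 60.5601\n10 1 0.9802 32.3287 0.9433 56.1201\n11 0 0.9749 40.2712 0.9042 71.2873\n' > data.txt

awk 'NR > 1 { sum[$1] += $4 }
     END { for (a in sum) printf "%s %.4f\n", a, sum[a] }' data.txt
```

A fixed condition such as summing only rows where $1 == 10 would be awk '$1 == 10 { s += $4 } END { print s }' data.txt instead.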

how can I find a matching pattern between two words using sed or awk

↘锁芯ラ submitted on 2020-01-06 19:56:32
Question: I want to search for a pattern in a paragraph that begins with word1 and ends with word2, and print the first line of the paragraph if the pattern matches. I am not sure if I can do it using grep. For example, if I have the following file and I am looking for aaa:

Word1 this is paragraph number 1
aaa
bbb
ccc
word2

Word1 this is paragraph number 2
bbb
ccc
ddd
word2

The answer should be:

Word1 this is paragraph number 1

Answer 1: You can try this awk:

awk '/^Word1/{f=1;l="";hold=$0} /word2$/{f=0; if(l
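The answer as scraped is cut off; a sketch of the same idea in full, with the pattern passed in as a variable (pat) and the question's file recreated on stdin:

```shell
# Sketch of the truncated answer's approach: remember the first line of
# each Word1...word2 paragraph and print it if the paragraph matched pat.
printf 'Word1 this is paragraph number 1\naaa\nbbb\nccc\nword2\nWord1 this is paragraph number 2\nbbb\nccc\nddd\nword2\n' |
awk -v pat='aaa' '
  /^Word1/ { head = $0; found = 0 }   # paragraph starts: remember line 1
  $0 ~ pat { found = 1 }              # pattern seen inside the paragraph
  /word2$/ { if (found) print head }  # paragraph ends: print if it matched
'
```

This prints Word1 this is paragraph number 1, the expected answer from the question.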

Expect Escaping with Awk

别说谁变了你拦得住时间么 submitted on 2020-01-06 19:51:07
Question: I need to process the output of a single-record psql query through awk before assigning it to a value in my expect script. The relevant code:

spawn $env(SHELL)
send "psql -U safeuser -h db test -c \"SELECT foo((SELECT id FROM table where ((table.col1 = \'$user\' AND table.col2 IS NULL) OR table.col2 = \'$user\') AND is_active LIMIT 1));\" | /bin/awk {{NR=3}} {{ print $1 }}; \r"
expect "assword for user safeuser:"
send "$safeuserpw\r"
expect -re '*'
set userpass $expect_out(0, string)

When I
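The awk program in the send line is malformed in two ways: NR=3 assigns the record number instead of testing it (so it is always true), and the Tcl brace/quote mix mangles $1. A sketch of the intended filter on its own, with a printf faking psql's header, separator, and data row:

```shell
# Sketch: NR == 3 tests the record number (NR=3 would assign it); the
# printf stands in for psql's usual header/separator/row/footer output.
printf '   foo   \n---------\n secret_value\n(1 row)\n' |
awk 'NR == 3 { print $1 }'
```

Inside the expect send string, one hedged form is | awk 'NR == 3 { print \$1 }' , escaping the dollar so Tcl does not substitute $1 before the remote shell sees it. psql's -t -A flags would also drop the header and make the row number predictable.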