awk

Strange behavior when parsing result from curl + awk

混江龙づ霸主 submitted on 2021-02-11 05:58:41

Question: Using curl on Ubuntu I am trying to fetch the Jenkins version, inspired by https://wiki.jenkins.io/display/JENKINS/Remote+access+API. In a bash script I do:

VERSION=$(curl -k -i -X GET --insecure --silent --header "Authorization: Bearer $TOKEN " $URL | grep -Fi X-Jenkins: | awk '{print $2}')
echo "__A__[${VERSION}]__B__"

But when I run the script I get:

]__B__2.89.2

So for some reason the prefix __A__[ gets swallowed and the suffix gets turned into a prefix. I have also tried to trim the
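The symptom (`]__B__` printed over the front of the line) is the classic sign of a trailing carriage return: HTTP header lines end in CRLF, so `awk '{print $2}'` leaves a `\r` at the end of `$VERSION`, and the terminal returns to column 0 before printing the rest. A minimal sketch of one fix, with a `printf` standing in for the real `curl -i` header output (`TOKEN` and `URL` omitted):

```shell
# Strip the carriage return inside awk before printing the field.
# The printf line below stands in for the real `curl -i` output.
VERSION=$(printf 'X-Jenkins: 2.89.2\r\n' \
  | grep -Fi 'X-Jenkins:' \
  | awk '{ sub(/\r$/, "", $2); print $2 }')
echo "__A__[${VERSION}]__B__"
```

With the `sub(/\r$/, "", $2)` in place the prefix and suffix print in the expected order.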

subtract columns from 2 files and output to new files

放肆的年华 submitted on 2021-02-10 18:38:55

Question: I have 2 files in the formats below.

File1_Stored.txt
ABC:100, 83
ABC:84, 53
ABC:14, 1222

File2_Stored.txt
ABC:100 , 83
ABC:84 , 1553
ABC:524 , 2626

I am trying to get a 3rd file in the format below. Whenever the difference is 0 it shouldn't appear, but if the difference is not 0 then it should appear like:

Value , File1 Value , File2 Value , Difference
----------------------------------------------
ABC:84, 53 ,1553 , -1500
ABC:14, 1222 , 0 , 1222
ABC:524, 0 ,2626 ,-2626

I tried doing it using awk to
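The asker's attempt is cut off, but the usual shape of this task in awk is a two-pass join: load File1 into an array keyed by the `ABC:n` label, then walk File2 and print only the non-zero differences. A sketch under that assumption (the field separator `[, ]+` absorbs the inconsistent spacing around the commas; `File3_Diff.txt` is a hypothetical output name):

```shell
# Recreate the two sample files from the question.
cat > File1_Stored.txt <<'EOF'
ABC:100, 83
ABC:84, 53
ABC:14, 1222
EOF
cat > File2_Stored.txt <<'EOF'
ABC:100 , 83
ABC:84 , 1553
ABC:524 , 2626
EOF

awk -F'[, ]+' '
  NR == FNR { file1[$1] = $2; next }      # pass 1: remember File1 values
  {
    d = file1[$1] - $2                    # keys missing from File1 count as 0
    if (d != 0) printf "%s, %s ,%s , %d\n", $1, file1[$1] + 0, $2, d
    done[$1] = 1
  }
  END {                                   # keys present only in File1
    for (k in file1)
      if (!(k in done) && file1[k] != 0)
        printf "%s, %s ,0 , %d\n", k, file1[k], file1[k]
  }
' File1_Stored.txt File2_Stored.txt > File3_Diff.txt
cat File3_Diff.txt
```

ABC:100 is suppressed because its difference is 0, while ABC:14 (only in File1) and ABC:524 (only in File2) are treated as differences against 0, matching the sample output.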

awk FPAT to ignore commas in csv

百般思念 submitted on 2021-02-10 17:02:00

Question: Sample.csv data

"2-Keyw-Bllist, TerrorViolencetest",vodka,ZETA+GLOBAL 4(ID: ZETA+GLOBAL),,105629,523,flag
"2-Keyw-Bllist, TerrorViolencetest",vodka,Captify (ID: Captify),,94676,884,flag
"2-Keyw-Bllist, TerrorViolencetest",vodka,QuantCast (ID: QuantCast),,46485,786,flag
TerrorViolencetest,germany,QuantCast (ID: QuantCast),,31054,491,flag
EY-Keyword-Blacklist,BBQ,MIQ+RON (ID: MIQ+RON),,26073,149,flag
TerrorViolencetest,chips,Captify (ID: Captify),,23737,553,flag
"2-Keyw-Bllist,

awk: preserve row order and remove duplicate strings (mirrors) when generating data

我与影子孤独终老i submitted on 2021-02-10 15:51:50

Question: I have two text files

g1.txt
alfa beta;www.google.com
Light Dweller - CR, Technical Metal;http://alfa.org;http://beta.org;http://gamma.org;

g2.txt
Jack to ride.zip;http://alfa.org;
JKr.rui.rar;http://gamma.org;
Nofj ogk.png;http://gamma.org;

I use this command to run my awk script

awk -f ./join2.sh g1.txt g2.txt > "g3.txt"

and I obtain this output

Light Dweller - CR, Technical Metal;http://alfa.org;http://beta.org;http://gamma.org;;Jack to ride.zip;http://alfa.org;JKr.rui.rar;http://gamma.org
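join2.sh itself is not shown, so here is only the de-duplication half as a generic sketch: split each `;`-separated record, keep the first occurrence of every field, and emit fields in their original order. Because records are processed as they arrive, row order is preserved automatically:

```shell
# Drop repeated ';'-separated entries from a line; first occurrence wins.
printf '%s\n' 'Light Dweller - CR, Technical Metal;http://alfa.org;http://beta.org;http://alfa.org;' |
awk -F';' '{
  split("", seen)                     # reset the per-record lookup table
  out = ""
  for (i = 1; i <= NF; i++)
    if ($i != "" && !seen[$i]++)      # skip empty fields and repeats
      out = out $i ";"
  print out
}'
```

`split("", seen)` is the portable way to empty an array between records; `seen[$i]++` is zero (false) only the first time a field value appears on the line.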

awk syntax errors with double star **

烂漫一生 submitted on 2021-02-10 14:59:45

Question: I'm using Debian 6 and GNU sed, trying to get awk to convert the output of du from a long string of bytes to a more human-readable number with suffixes like Mb and Kb. (I know you can use the -h option, but I want to do this manually with awk.) So far, my command looks like this (I put in the newlines to make it more readable):

du /test.img | grep [0-9]* | awk "{ sum=$1 ; hum[1024**3]='Gb';hum[1024**2]='Mb'; hum[1024]='Kb'; for (x=1024**3; x>=1024; x/=1024){ if (sum>=x) { printf '%.2f %s\n'
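Two separate problems are likely at play (hedged, since the error text is cut off): the awk program is wrapped in double quotes, so the shell expands `$1` before awk ever sees it, and single quotes are not string delimiters in awk. Quoting the program in single quotes and using awk's portable `^` exponent operator (POSIX; `**` is a non-portable extension) sidesteps both. A sketch with a `printf` standing in for `du`'s output on a hypothetical 1 GiB file:

```shell
# 1073741824 bytes = 1024^3; the printf stands in for `du -b /test.img`.
printf '1073741824\t/test.img\n' |
awk '{
  sum = $1
  hum[1024^3] = "Gb"; hum[1024^2] = "Mb"; hum[1024] = "Kb"
  for (x = 1024 ^ 3; x >= 1024; x /= 1024)
    if (sum >= x) { printf "%.2f %s\n", sum / x, hum[x]; break }
}'
```

The original `grep [0-9]*` stage is dropped here: unquoted, it is also subject to shell glob expansion, and du's output always starts with digits anyway.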
