awk

How to convert date format using bash and printf?

Question: I want to convert 2019-02-16 to Feb 16 15:29 in bash using awk and printf. For example:

    [root@localhost ~]# who | awk '{print $1, $3, $4}'
    root 2019-02-16 15:29
    root 2019-02-16 15:30
    john 2019-02-01 10:34
    emmett 2019-01-12 09:45

Desired output:

    root Feb 16 15:29
    root Feb 16 15:30
    john Feb 1 10:34
    emmett Jan 12 09:45

Please help and provide an explanation with your solution.

Answer 1: With any awk in any shell on any UNIX box:

    $ who | awk '{split($3,d,/-/); print $1, substr(…
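The quoted answer breaks off mid-expression. A sketch of the same split-plus-substr idea (a plausible completion, not necessarily the original answer) indexes a three-letter month table with the numeric month:

    who | awk '{
        split($3, d, /-/)       # d[1]=year, d[2]=month, d[3]=day
        # (d[2]-1)*3+1 locates the month name in the lookup string;
        # d[3]+0 drops the leading zero so "01" prints as "1"
        print $1, substr("JanFebMarAprMayJunJulAugSepOctNovDec", (d[2]-1)*3+1, 3), d[3]+0, $4
    }'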

Min-Max Normalization using AWK

Question: I don't know why I am unable to loop through all the records; currently it only computes the normalization for the last record. Normalization formula:

    New_Value = (value - min[i]) / (max[i] - min[i])

Program:

    {
        for (i = 1; i <= NF; i++) {
            if (min[i] == "") { min[i] = $i }   # initialise min
            if (max[i] == "") { max[i] = $i }   # initialise max
            if ($i < min[i]) { min[i] = $i }    # new min
            if ($i > max[i]) { max[i] = $i }    # new max
        }
    }
    END {
        for (j = 1; j <= NF; j++) {
            normalized_value[j] = ($j - min[j]) / (max[j] - min[j])…
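The post is cut off, but the visible code already shows the bug: the END block runs once, after input is exhausted, so $j there refers only to the last record read. A two-pass sketch (the filename data.txt is a placeholder) reads the input twice, collecting per-column min/max first and normalizing every record second:

    awk '
    NR == FNR {                        # pass 1: per-column min and max
        for (i = 1; i <= NF; i++) {
            if (min[i] == "" || $i < min[i]) min[i] = $i
            if (max[i] == "" || $i > max[i]) max[i] = $i
        }
        next
    }
    {                                  # pass 2: normalize each field
        for (i = 1; i <= NF; i++) {
            range = max[i] - min[i]
            printf "%s%s", (range ? ($i - min[i]) / range : 0), (i < NF ? OFS : ORS)
        }
    }' data.txt data.txt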

Split string on a backslash (“\”) delimiter in awk?

Question: I am trying to split the string in a file based on some delimiter, but I am not able to achieve it correctly. Here is my code:

    awk 'var=split($2,arr,'\'); {print $var}' file1.dat

Here is my sample data:

    Col1 Col2
    abc 123\abc
    abcd 123\abcd

Desired output:

    Col1 Col2
    abc abc
    abcd abcd

Answer 1: The sample data and output below are my best guess at your requirement:

    echo '1:2\\a\\b:3' | awk -F: '{
        n=split($2,arr,"\\")
        # print "#dbg:n=" n
        var=arr[3]
        print var
    }'

Output:

    b

Recall that split returns the number of pieces it produced.
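Applied to the question's actual data (two whitespace-separated columns, column 2 containing a single backslash), a minimal sketch keeps the text after the backslash and passes the header line through unchanged:

    awk 'NR == 1 { print; next }      # header has no backslash
         { n = split($2, a, /\\/)     # /\\/ matches one literal backslash
           print $1, a[n] }' file1.dat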

Getting more decimal positions as a result of division when using awk

Question: I have this problem:

    awk 'BEGIN{ x = 0.703125; p = x/2; print p }' somefile

The output is 0.351562, but a decimal digit is missing: it should be 0.3515625. Is there a way to improve the division so that all the decimals are stored in the variable and not only printed, i.e. so that p really holds the value 0.3515625?

Answer 1: It is because the built-in values of CONVFMT (in newer versions of awk) and OFMT give only 6 digits of precision. You need to change that by setting the variable to a higher-precision format…
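A short sketch of that fix: p already holds the full double-precision result internally; the default OFMT of "%.6g" only limits what print shows, so widening it (or calling printf directly) reveals the missing digit:

    awk 'BEGIN { OFMT = "%.10g"; x = 0.703125; p = x/2; print p }'
    # 0.3515625
    awk 'BEGIN { x = 0.703125; printf "%.7f\n", x/2 }'
    # 0.3515625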

filter ping result with grep and awk altogether doesn't work

Question: I want to pipe the ping output, keeping only its delay time, to a text file. When I run ping I get this, as expected:

    $ ping somesite
    PING somesite (220.181.57.217) 56(84) bytes of data.
    64 bytes from 220.181.57.217: icmp_seq=1 ttl=52 time=43.4 ms
    64 bytes from 220.181.57.217: icmp_seq=2 ttl=52 time=43.7 ms
    64 bytes from 220.181.57.217: icmp_seq=3 ttl=52 time=43.4 ms

Then I do this:

    $ ping somesite | awk -F '[:=]' '{print $5}'
    43.3 ms
    43.2 ms
    43.3 ms
    43.2 ms
    43.2 ms
    43.1 ms
    43.1 ms
    43.3 ms
    43.2 ms
    43.6 ms
    43.3…
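The post is cut off before the failing grep stage, but the symptom in the title ("grep and awk altogether doesn't work") is the classic pipe-buffering trap: when grep writes to a pipe instead of a terminal it block-buffers, so nothing reaches awk (or the file) until the buffer fills. A sketch of the usual remedy (delays.txt is a placeholder name):

    # Force grep to flush per line, and have awk flush too, so each delay
    # value lands in the file as soon as the reply arrives.
    ping somesite | grep --line-buffered 'time=' \
                  | awk -F '[:=]' '{ print $5; fflush() }' > delays.txt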

How to make awk produce columns

Question: I have a file with two columns; each line has words of different lengths.

    awk '$5=$9' FS="#" OFS="\t" file1

where file1 contains:

    Tomato#Vegetable
    Orange#Fruit
    Cucumber#Vegetable
    Sweat Potato#Fruit

This line of code produces tab-separated output, so the second column is ragged:

    Tomato	Vegetable
    Orange	Fruit
    Cucumber	Vegetable
    Sweat Potato	Fruit

I am trying to make the file display with the columns aligned:

    Tomato        Vegetable
    Orange        Fruit
    Cucumber      Vegetable
    Sweat Potato  Fruit

What am I doing wrong that does not give me this result?

Answer 1: With Unix / Linux…
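The quoted answer is cut off, but a common Unix/Linux approach (a sketch, not necessarily the original answer) is to let column(1) do the padding instead of emitting a tab from awk:

    # -t aligns fields into a table; -s '#' makes '#' the input separator,
    # so no awk pass is needed at all.
    column -t -s '#' file1

A pure-awk alternative is a fixed-width printf, e.g. awk -F'#' '{printf "%-14s%s\n", $1, $2}' file1.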

Using pipe character as a field separator

Question: I'm trying different commands to process a CSV file where the separator is the pipe | character. While those commands work when the separator is a comma, they throw an error when I replace it with the pipe:

    awk -F[|] "NR==FNR{a[$2]=$0;next}$2 in a{ print a[$2] [|] $4 [|] $5 }" OFS=[|] file1.csv file2.csv
    awk "{print NR "|" $0}" file1.csv

I tried "|", [|] and /| to no avail. I'm using Gawk on Windows. What am I missing?

Answer 1: You tried "|", [|] and /|. /| does not work because the escape…
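The answer is truncated, so here is an independent sketch for Windows cmd.exe, where the whole script must sit in double quotes and inner quotes are escaped as \". Setting FS and OFS once in a BEGIN block means no bare | ever appears on the command line:

    awk "BEGIN{FS=OFS=\"|\"} NR==FNR{a[$2]=$0; next} $2 in a{print a[$2], $4, $5}" file1.csv file2.csv

Because print uses commas, OFS supplies the | between the output fields, avoiding the [|] concatenations from the question.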

remove entirely same duplicate columns in unix

Question: Say I have a file as below:

    number 2 6 7 10 number 6 13
    name1 A B C D name1 B E
    name2 A B C D name2 B E
    name3 B A D A name3 A F
    name4 B A D A name4 A F

I wish to remove the columns that are entirely the same as an earlier column, so the output is:

    number 2 6 7 10 13
    name1 A B C D E
    name2 A B C D E
    name3 B A D A F
    name4 B A D A F

I use the sort and uniq commands for duplicate lines but have no idea how to do this for columns. Can anyone suggest a good way?

Answer 1: Here is a way with awk that preserves the order:

    awk 'NR=…
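Since the quoted answer breaks off at awk 'NR=, here is an independent sketch of an order-preserving approach (the filename file is a placeholder): pass 1 builds a signature string per column, pass 2 keeps only the first column carrying each signature:

    awk '
    NR == FNR { for (i = 1; i <= NF; i++) sig[i] = sig[i] SUBSEP $i; next }
    FNR == 1  { for (i = 1; i <= NF; i++)
                    if (!(sig[i] in seen)) { seen[sig[i]] = 1; keep[i] = 1 } }
    {   line = ""
        for (i = 1; i <= NF; i++)
            if (keep[i]) line = line (line == "" ? "" : OFS) $i
        print line
    }' file file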

Make awk efficient (again)

Question: I have the code below, which works successfully (kudos to @EdMorton), and is used to parse and clean very large log files, splitting their contents into smaller output files. The output filename is the first 2 characters of each line; if either of those 2 characters is a special character, it is replaced with a '_', which ensures there is no illegal character in the filename. Next, the script checks whether any of the output files is larger than a certain size; if so, that file is sub…
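The code itself is missing from this excerpt, so here is a minimal sketch of just the bucketing step described above (the .log suffix and the "non-alphanumeric counts as special" rule are assumptions). Sorting first groups lines by prefix, so each output file is opened and closed once, which matters for speed and for awks with a low open-file limit:

    sort big.log | awk '{
        pfx = substr($0, 1, 2)            # first two characters of the line
        gsub(/[^[:alnum:]]/, "_", pfx)    # special characters -> "_"
        out = pfx ".log"
        if (out != prev) { if (prev != "") close(prev); prev = out }
        print >> out                      # >> so a reopened file keeps earlier lines
    }'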

Average of multiple files having different row sizes

Question: I have a few files with different numbers of rows, but the number of columns in each file is the same, e.g.

ifile1.txt:

    1  1001 ? ?
    2  1002 ? ?
    3  1003 ? ?
    4  1004 ? ?
    5  1005 ? 0
    6  1006 ? 1
    7  1007 ? 3
    8  1008 5 4
    9  1009 3 11
    10 1010 2 9

ifile2.txt:

    1  2001 ? ?
    2  2002 ? ?
    3  2003 ? ?
    4  2004 ? ?
    5  2005 ? 0
    6  2006 6 12
    7  2007 6 5
    8  2008 9 10
    9  2009 3 12
    10 2010 5 7
    11 2011 2 ?
    12 2012 9 ?

ifile3.txt:

    1 3001 ? ?
    2 3002 ? 6
    3 3003 ? ?
    4 3004 ? ?
    5 3005 ? 0
    6 3006 1 25
    7 3007 2 3
    8 3008 ? ?

In each file the 1st column…
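The question is truncated at "In each file the 1st column", so the exact requirement is unclear. One plausible reading of the title (average the files row by row, treating ? as missing) can be sketched with the row index in column 1 as the key; column 4 is used purely for illustration:

    awk '{ if ($1 > maxr) maxr = $1 }              # remember the largest row index
         $4 != "?" { sum[$1] += $4; cnt[$1]++ }    # accumulate non-missing values
         END { for (r = 1; r <= maxr; r++)
                   print r, (r in cnt ? sum[r] / cnt[r] : "?") }
        ' ifile1.txt ifile2.txt ifile3.txt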