awk

Delete nodes in XML if they contain certain text using sed

孤街醉人 submitted on 2020-01-06 04:26:09
Question: I have an XML file that looks like the following:

    <rootNode>
      <appender name="SERVER_FILE" class="org.apache.log4j.RollingFileAppender">
        <param name="File" value="C:/COM_FIND.log"/>
        <param name="Threshold" value="INFO"/>
        <param name="Append" value="true"/>
        <param name="MaxFileSize" value="5000KB"/>
        <param name="MaxBackupIndex" value="5"/>
        <layout class="org.apache.log4j.PatternLayout">
          <param name="ConversionPattern" value="%d %-5p [%c] %m%n"/>
        </layout>
      </appender>
      <appender name="CAT_FILE" class="org
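A hedged sketch of one approach: since sed is line-oriented, a range delete from the opening <appender> tag to the next closing tag removes the whole node, provided the tags sit on lines of their own. The file name log4j.xml and the shortened appender bodies below are illustrative, not from the question.

```shell
# Illustrative input; the real file and appender bodies are longer.
cat > log4j.xml <<'EOF'
<rootNode>
  <appender name="SERVER_FILE" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="C:/COM_FIND.log"/>
  </appender>
  <appender name="CAT_FILE" class="org.apache.log4j.RollingFileAppender">
    <param name="File" value="C:/CAT.log"/>
  </appender>
</rootNode>
EOF
# Delete everything from the SERVER_FILE opening tag to the next </appender>.
sed '/<appender name="SERVER_FILE"/,/<\/appender>/d' log4j.xml
```

For anything beyond a simple, regular layout, an XML-aware tool (xmlstarlet, xsltproc) is the safer choice, since sed cannot track nesting.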

awk - remove character in regex

回眸只為那壹抹淺笑 submitted on 2020-01-05 21:31:10
Question: I want awk to remove the leading 1 from any field that matches the regex ^1[0-9]{10}$. I've been trying to make it work with sub or substr for a few hours now, but I am unable to find the correct logic. I already have the solution for sed:

    s/^1\([0-9]\{10\}\)$/\1/

and I need to make this work with awk.

Edit, with an input and output example. Input:

    10987654321
    2310987654321
    1098765432123

(awk twisted and overcomplicated syntax)

Output:

    0987654321
    2310987654321
    1098765432123

Basically
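One possible awk translation of that sed command, sketched here against the question's own sample lines. The length() test stands in for the {10} interval, which some awk variants (e.g. mawk, or older gawk without --re-interval) do not support:

```shell
# For every field that is all digits, starts with 1, and is 11 chars
# long (i.e. matches ^1[0-9]{10}$), drop the leading 1 with substr().
# The trailing 1 pattern prints every record, modified or not.
printf '10987654321\n2310987654321\n1098765432123\n' |
  awk '{ for (i = 1; i <= NF; i++)
           if ($i ~ /^1[0-9]+$/ && length($i) == 11) $i = substr($i, 2)
       } 1'
```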

Unable to filter rows which contain “Is a directory” by SED/AWK

时光总嘲笑我的痴心妄想 submitted on 2020-01-05 17:57:18
Question: Running the following gives me the sample data below:

    md5deep find * | awk '{ print $1 }'

A sample of the output:

    /Users/math/Documents/Articles/Number theory: Is a directory
    258fe6853b1bfb2d07f512ff6bec52b1
    /Users/math/Documents/Articles/Probability and statistics: Is a directory
    4811bfb2ad04b9f4318049c01ebb52ef
    8aae4ac3694658cf90005dbdea37b4d5
    258fe6853b1bfb2d07f512ff6bec52b1

I have tried, unsuccessfully, to filter out the rows which contain "Is a directory" by SED:

    md5deep find * | awk '{ print $1 }' |
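A sketch of one way to do the filtering, with two fabricated sample lines standing in for the real md5deep output. awk can both filter and select in one step, so no separate sed stage is needed; note that if md5deep actually prints these notices on stderr, redirecting with 2>/dev/null is the fix instead:

```shell
# Skip any line containing the "Is a directory" notice, then print
# only the first field (the hash) of the lines that remain.
printf '%s\n' \
  '/Users/math/Documents/Articles/Number theory: Is a directory' \
  '258fe6853b1bfb2d07f512ff6bec52b1  /some/file' |
  awk '!/Is a directory/ { print $1 }'
```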

Odd text stops an awk command from working

只愿长相守 submitted on 2020-01-05 14:03:40
Question: I use an awk command to count lines with the same beginning. For instance, try1.txt contains:

    b : c
    b : c

When I launch the following command in a terminal:

    awk -F ' : ' '$1=="b"{a[$2]++} END{for (i in a) print " ", i,a[i]}' try1.txt

it returns

    c 2

which is good, because b : c appears twice in try1.txt. The output of my tool is a huge output.txt, much more complicated than try1.txt. Some parts of output.txt contain the following characters:

    ^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
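A likely cause, stated as an assumption: ^@ is how pagers and editors display NUL bytes, and embedded NULs can make some awk implementations mis-read the rest of the stream. A common workaround is to strip them with tr before awk sees the data; the sketch below reuses the question's command on input salted with NULs:

```shell
# The \0 escapes inject NUL bytes into the second record; tr -d '\0'
# removes them so awk sees two clean "b : c" lines and counts both.
printf 'b : c\n\0\0\0b : c\n' |
  tr -d '\0' |
  awk -F ' : ' '$1=="b"{a[$2]++} END{for (i in a) print " ", i, a[i]}'
```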

AWK to Consolidate Files

て烟熏妆下的殇ゞ submitted on 2020-01-05 13:07:51
Question: I'm hacking some AWK, and I'm a beginner with it. I have done my homework on the following problem and just can't get it to work.

RAW DATA SAMPLE:

    Start Date 12/3/17
    End Date 12/30/17
    Report Type Report1
    Currency ZAR
    Country Identifier MType Quantity Net Net Net Code Title Contrib I_Type M_Type Vendor Identifier Offline Indicator LSN
    ZA 44057330 FMP 1 0.050666 0.050666 USYYYYYYYYYY ABC Tom 1 1 USYYYYYYYYYY 0 SUT
    ZA 1267456726 SIMT 1 0.03 0.03 USXXXXXXXXXX DEF Frances 1 1 USXXXXXXXXXX 0 XYZ

Row

Concatenation of AWK array variables - unexpected behaviour

安稳与你 submitted on 2020-01-05 12:09:24
Question: Input file:

    steve,apples
    steve,oranges
    john,pears
    john,oranges
    mary,bananas
    steve,plums
    mary,nactarines

I want to get output like this:

    steve:apples,oranges,plums
    john:pears,oranges
    mary:bananas,nectarines

Here is the one-liner I have been trying to get to work:

    awk -F, '{if(a[$1])a[$1]=a[$1]","$2; else a[$1]=$2;}END{for (i in a)print i ":" a[i];}' OFS=, inputfile

The output it gives is

    ,orangesrs
    ,plumsesples
    ,nactariness

It would appear that the string concatenation a[$1]=a[$1]","$2 is

I want to execute an awk command from a Tcl script

烈酒焚心 submitted on 2020-01-05 10:27:10
Question: I want to execute the following command from my Tcl script:

    exec /bin/awk '/Start/{f=1;++file}/END/{f=0}f{print > "/home/user/report/"file }' input

I'm getting this error:

    awk: ^ invalid char ''' in expression

Is it possible to execute such a command from Tcl? Thanks

Answer 1: Quoting the Tcl man page:

    When translating a command from a Unix shell invocation, care should be taken over the fact that single quote characters have no special significance to Tcl. Thus:

    awk '{sum += $1} END {print sum}'

Replace string without regex

不想你离开。 submitted on 2020-01-05 08:43:37
Question: I would like a shell script that does a find-and-replace on anything without using regular expressions (regex), including all special characters such as:

    !@#$%¨&*()?^~]}[{´`>.<,"'

For example, the function would receive two parameters: the first is what I want to find, and the second is what it will be replaced with.

Example 1. File1:

    BETWEEN TO_DATE('&1','YYYY-MM-DD') +1 and TO_DATE('&2','YYYY-MM-DD') +15

First parameter: TO_DATE('&1','YYYY-MM-DD')
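A sketch of one approach, with two caveats stated up front: replace_literal is a hypothetical helper name, and the SYSDATE replacement text is invented for the demo. awk's index() and substr() do purely literal matching, so characters like .*[]$&' need no escaping; the strings travel via the environment (ENVIRON) rather than -v, because -v would interpret backslash escapes:

```shell
# Literal (non-regex) find-and-replace on stdin.
# Usage: replace_literal FIND REPLACEMENT < infile > outfile
replace_literal() {
  [ -n "$1" ] || return 1    # an empty FIND would loop forever
  FIND="$1" REP="$2" awk '
    BEGIN { find = ENVIRON["FIND"]; rep = ENVIRON["REP"] }
    {
      s = $0; out = ""
      while ((i = index(s, find)) > 0) {
        out = out substr(s, 1, i - 1) rep   # copy up to the match, emit rep
        s = substr(s, i + length(find))     # continue after the match
      }
      print out s
    }'
}

printf "BETWEEN TO_DATE('&1','YYYY-MM-DD') +1\n" |
  replace_literal "TO_DATE('&1','YYYY-MM-DD')" "SYSDATE"
```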

Convert every n rows to columns repeatedly using awk

跟風遠走 submitted on 2020-01-05 08:32:28
Question: My data is a large text file that consists of 12 rows repeating. It looks something like this:

    {
    1
    2
    3
    4
    5
    6
    7
    8
    9
    10
    }

repeating over and over. I want to turn every 12 rows into columns, so the data would look like this:

    { 1 2 3 4 5 6 7 8 9 10 }
    { 1 2 3 4 5 6 7 8 9 10 }
    { 1 2 3 4 5 6 7 8 9 10 }

I have found some examples of how to convert all the rows to columns using awk, such as awk '{printf("%s ", $0)}', but no examples of how to convert every 12 rows into columns and then repeat the process.
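A sketch of the usual extension of that printf idiom: emit a space after each line, except after every 12th line, where a newline ends the assembled row. seq stands in for the real data here:

```shell
# NR % 12 is zero on every 12th record, so the separator becomes a
# newline there; the END rule finishes a partial last row, if any.
seq 1 24 | awk '{ printf "%s%s", $0, (NR % 12 ? " " : "\n") }
                END { if (NR % 12) print "" }'
```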

awk: create list of destination ports seen for each source IP from a bro log (conn.log)

孤人 submitted on 2020-01-05 08:15:12
Question: I'm trying to solve a problem in awk as an exercise, but I'm having trouble. I want awk (or gawk) to be able to print all unique destination ports for a particular source IP address. The source IP address is field 1 ($1) and the destination port is field 4 ($4). Cut for brevity:

    SourceIP        SrcPort  DstIP            DstPort
    192.168.1.195   59508    98.129.121.199   80
    192.168.1.87    64802    192.168.1.2      53
    10.1.1.1        41170    199.253.249.63   53
    10.1.1.1        62281    204.14.233.9     443

I imagine you would store each Source IP as in