awk

Unix help to extract/print 50 lines after every 3rd occurrence of a pattern till end of file

Submitted by 馋奶兔 on 2020-01-25 14:31:28
Question: I need help to extract/print 4 lines after every 3rd occurrence of a pattern, till end of file. Consider this example of a log file:

    ERROR_FILE_NOT_FOUND
    ERROR_FILE_NOT_FOUND
    ERROR_FILE_NOT_FOUND
    Extract line 1
    Extract line 2
    Extract line 3
    Extract line 4
    ERROR_FILE_NOT_FOUND
    ERROR_FILE_NOT_FOUND
    ERROR_FILE_NOT_FOUND
    Extract line 5
    Extract line 6
    Extract line 7
    Extract line 8
    ERROR_FILE_NOT_FOUND
    ERROR_FILE_NOT_FOUND
    ERROR_FILE_NOT_FOUND
    Extract line 9
    Extract line 10
    Extract line 11
    Extract
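A minimal awk sketch of the counting approach (the pattern name is taken from the sample log; `logfile` is a placeholder):

```shell
# Count pattern hits; after every 3rd one, grab the next 4 lines.
awk '
  /ERROR_FILE_NOT_FOUND/ { n++; if (n % 3 == 0) take = 4; next }
  take > 0               { print; take-- }
' logfile
```

For the 50-line variant in the title, change `take = 4` to `take = 50`.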

How to replace catch block in *.java using sed?

Submitted by 做~自己de王妃 on 2020-01-25 13:07:22
Question: How to replace the following pattern in a Java project:

    catch(SQLException e) { \\TO DO }

with

    catch(SQLException e) { S.O.P(); }

Please note that the file will have other patterns like

    catch(IOException e) { // To Do }

which should not be changed. I tried

    sed 's/catch\(SQLException[^\}]*}/catch(SQLException e)\{S.O.P();\}/g' file.java

but it does not work.

Answer 1: You can use awk:

    $ more file
    catch(SQLException e) { \\TO DO }
    catch(IOException e) { // To Do }
    $ awk -vRS="}" '/catch\(SQLException
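The sed attempt fails because in a basic regular expression `(` is already literal, while `\(` opens a capture group. A sketch with the backslashes dropped (it assumes each catch block sits on a single line, as in the sample):

```shell
# In a BRE, "(" is literal and "\(" starts a group, so do not escape it.
# Only works while the whole catch block fits on one line.
sed 's/catch(SQLException[^}]*}/catch(SQLException e) { S.O.P(); }/g' file.java
```

For blocks that span lines, the awk approach with `RS="}"` from the answer is the safer route, since sed works line by line by default.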

List differences in two files using awk

Submitted by 浪尽此生 on 2020-01-25 12:47:33
Question: Say I have two files:

File1:

    1|abc
    2|cde
    3|pkr

File2:

    1|abc
    2|cde
    4|lkg

How can I list the true difference between both files using awk? If the second file is a subset of the first file, I can do the following:

    awk -F"|" 'NR==FNR{a[$1]=$2;next} !($1 in a)' file{1,2}

But this would give me only 4|lkg. I would like to get the output below, since that is the true difference:

    3|pkr
    4|lkg

Criteria for difference: field 1 present in file1 but not in file2; field 1 present in
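One way to get the symmetric difference on field 1 (a sketch; `file1`/`file2` as in the question):

```shell
# Load file1 keyed on field 1. For each file2 line, print it if the key is
# new, otherwise drop the matched key. END prints file1's leftovers; their
# order is not guaranteed, so pipe through sort if order matters.
awk -F'|' '
  NR == FNR { a[$1] = $0; next }
  { if ($1 in a) delete a[$1]; else print }
  END { for (k in a) print a[k] }
' file1 file2
```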

Making csv from txt files

Submitted by ﹥>﹥吖頭↗ on 2020-01-25 12:17:09
Question: I have a lot of txt files like this:

    Title 1
    Text 1 (more than 1 line)

And I would like to make one csv file from all of them, so that it will look like this:

    Title 1,Text 1
    Title 2,Text 2
    Title 3,Text 3
    etc

How could I do it? I think that awk is good for it but don't know how to realize it.

Answer 1: May I suggest:

    paste -d, file1 file2 file3

To handle large numbers of files, max 40 per output file (untested, but close):

    xargs -n40 files... echo >tempfile
    num=1
    for line in $(<tempfile)
    do paste -d,
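An awk-based sketch, assuming each .txt file holds the title on line 1 and the text on the following lines (`combined.csv` is a placeholder name):

```shell
# One CSV row per .txt file: line 1 becomes the title field, the remaining
# lines are joined with spaces into the text field. Note: commas or quotes
# inside the text are not CSV-escaped here.
for f in *.txt; do
  awk 'NR == 1 { title = $0; next }
       { text = text (text ? " " : "") $0 }
       END { printf "%s,%s\n", title, text }' "$f"
done > combined.csv
```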

How to display Html Tabular Data in email Body using Mailx or Mutt

Submitted by 百般思念 on 2020-01-25 09:42:10
Question: I have a scenario where I have one .csv file called Studentdetails.csv. The file has the following data:

    Ram,Mumbai,MBA
    Viraj,Delhi,MCA
    Vilas,Kolkata,MMS
    Priya,Agra,BCA

My code below converts the .csv file to an HTML table format:

    awk 'BEGIN{
      FS=","
      print "MIME-Version: 1.0"
      print "Content-Type: text/html"
      print "Content-Disposition: inline"
      print "<HTML>""<TABLE border="1"><TH>Name</TH><TH>City</TH><TH>Course</TH>"
    }
    {
      printf "<TR>"
      for(i=1;i<=NF;i++)
        printf "<TD>%s</TD>", $i
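The excerpt cuts off before the table is closed. A sketch completing the script (the To/Subject headers and the address are placeholders, not from the question; note the `border` attribute needs escaped quotes inside the awk string):

```shell
# Emit mail headers, the MIME headers from the question, one <TR> per CSV
# row, and the closing tags. Pipe the result to "sendmail -t"; plain mailx
# would put the MIME headers in the body unless told otherwise.
awk 'BEGIN {
       FS = ","
       print "To: someone@example.com"
       print "Subject: Student details"
       print "MIME-Version: 1.0"
       print "Content-Type: text/html"
       print "Content-Disposition: inline"
       print "<HTML><TABLE border=\"1\"><TR><TH>Name</TH><TH>City</TH><TH>Course</TH></TR>"
     }
     {
       printf "<TR>"
       for (i = 1; i <= NF; i++) printf "<TD>%s</TD>", $i
       print "</TR>"
     }
     END { print "</TABLE></HTML>" }' Studentdetails.csv
```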

extract last section of data from file using linux command

Submitted by 早过忘川 on 2020-01-25 09:28:08
Question: I have the input file (myfile) as:

    >> Vi 'x' found in file /data/152.916612:2,/proforma invoice.doc
    >> Vi 'x' found in file /data/152.48152834/Bank T.T Copy 12 d3d.doc
    >> Vi 'x' found in file /data/155071755/Bank T.T Copy.doc
    >> Vi 'x' found in file /data/1521/Quotation Request.doc
    >> Vi 'x' found in file /data/15.462/Quotation Request 2ds.doc
    >> Vi 'y' found in file /data/15.22649962_test4/Quotation Request 33 zz (.doc
    >> Vi 'x' found in file /data/15.226462_test6/Quotation Request.doc

and I
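The excerpt is cut off before the desired output is stated. One plausible reading, given the title, is pulling the last section of each path, i.e. the file name after the final "/". A sketch under that assumption:

```shell
# Split each line on the fixed text "found in file "; $2 is then the path.
# sub() strips everything up to and including the last "/", leaving the name.
awk -F'found in file ' '{ path = $2; sub(/.*\//, "", path); print path }' myfile
```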

How to add zeros between hex numbers with sed/awk?

Submitted by [亡魂溺海] on 2020-01-25 08:54:32
Question: I have a file that contains these strings:

    abc = <0x12345678>;
    abc = <0x01234 0x56789>;
    abc = <0x123 0x456 0x789>;
    abc = <0x0 0x01234 0x0 0x56789>;
    abc = <0x012 0x345>, <0x678 0x901>;
    def = <0x12345 0x67890>;

I need to convert it to a file that contains:

    abc = <0 0x12345678>;
    abc = <0 0x01234 0 0x56789>;
    abc = <0x123 0x456 0x789>;
    abc = <0x0 0x01234 0x0 0x56789>;
    abc = <0 0x012 0 0x345>, <0 0x678 0 0x901>;
    def = <0x12345 0x67890>;

So I need to add zeros before hex numbers if the string starts with 'abc = ',
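The excerpt cuts off mid-condition, but from the before/after samples the rule appears to be: on "abc = " lines, prefix each hex number with "0 " inside any <...> group holding at most two numbers, and leave longer groups alone. An awk sketch under that reading:

```shell
# Walk each <...> group on "abc = " lines. gsub returns how many hex numbers
# it prefixed with "0 "; if the group held more than two, restore it.
awk '/^abc = / {
       out = ""; line = $0
       while (match(line, /<[^>]*>/)) {
         grp = substr(line, RSTART, RLENGTH)
         n = gsub(/0x[0-9a-fA-F]+/, "0 &", grp)
         if (n > 2) grp = substr(line, RSTART, RLENGTH)
         out = out substr(line, 1, RSTART - 1) grp
         line = substr(line, RSTART + RLENGTH)
       }
       $0 = out line
     }
     { print }' file
```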

Match specific pattern and print just the matched string in the previous line: Updated

Submitted by 若如初见. on 2020-01-25 08:48:05
Question: I have a .fastq file formatted in the following way:

    @M01790:39:000000000-C3C6P:1:1101:14141:1618 1:N:0:8   (name)
    AACATCTACATATTCACATATAGACATGAAACACCTGTGGTTCTTCCTCAGTATGTAGGACTGTAACATAG   (sequence)
    +
    GGACCCGGGGGGGGGDGGGFGGGGGGFGGGGGGGGGGGFGGGGFGFGFFFGGGGGGFGGGGGGGGGGGFGG   (quality)

For each sequence the format is the same (a repetition of 4 lines). What I am trying to do is search for a specific regex pattern in a window of n=35 characters of the 2nd line, cut it if found, and report it at the end
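A sketch of the windowed search-and-cut step ("GATTACA" is a stand-in for the actual pattern, and `reads.fastq` is a placeholder):

```shell
# NR % 4 == 2 selects the sequence line of each 4-line FASTQ record. The
# pattern is searched only within the first 35 characters; if found, the
# match is cut out and re-appended after the remaining sequence.
awk 'NR % 4 == 2 {
       window = substr($0, 1, 35)
       if (match(window, /GATTACA/)) {
         hit = substr($0, RSTART, RLENGTH)
         $0  = substr($0, 1, RSTART - 1) substr($0, RSTART + RLENGTH) " " hit
       }
     }
     { print }' reads.fastq
```

Note that the quality line (line 4 of each record) would need the same positions removed to keep sequence and quality strings in sync; that step is left out of this sketch.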

Printing text in awk conditions

Submitted by 爱⌒轻易说出口 on 2020-01-25 07:06:57
Question: I would like to print text according to some conditions: print lines starting with \hello (that works), and I don't know how to add a condition that would write "\a some text \b" from this:

    \item[\word{\small 1}]: \a some text \b
    \item[\word{\small 3}]: \a some text \b

I am looking for a condition that would delete the first part of a line, which is always the same except for a number (\item[\word{\small 1}]:), or a condition that would print the text between \a and \b (included) from any line that includes \a and \b.

    awk ' /\
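A sketch of the second option, printing the "\a ... \b" span inclusive (`file.tex` is a placeholder; each backslash is doubled inside the awk regex):

```shell
# match() finds the span from "\a " to "\b"; RSTART/RLENGTH then let
# substr() print exactly the matched text, markers included.
awk 'match($0, /\\a .*\\b/) { print substr($0, RSTART, RLENGTH) }' file.tex
```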

Best way to pull repeated table data from a pcap http file (could awk handle the disruptive breaks)?

Submitted by 拜拜、爱过 on 2020-01-25 06:52:26
Question: I am collecting data readings from my PV system. The web client will graph one day of data; I want to collect a whole year or two in one file, for spotting patterns etc. So far I capture lines into a cap file with Wireshark and just filter the data I want, with headers and a few retransmitted packets. The data of interest is being sent to a JS app, but I want to lift out the data which repeats in each packet as date time=watts; see the sample below... I was hoping to use AWK to parse the data into an array
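The packet payload format is not shown in the excerpt, so assume "date time=watts" records shaped like 2020-01-24 13:05=1480 scattered through each packet. A sketch that pulls every such pair from a text export of the capture (`capture.txt` and `year.dat` are placeholders):

```shell
# Repeatedly match "YYYY-MM-DD HH:MM=watts" anywhere in each line; sort -u
# collapses the duplicates that retransmitted packets produce. A pair split
# across a TCP-segment boundary is still lost, so let Wireshark reassemble
# the stream before exporting.
awk '{
  while (match($0, /[0-9][0-9][0-9][0-9]-[0-9][0-9]-[0-9][0-9] [0-9][0-9]:[0-9][0-9]=[0-9]+/)) {
    print substr($0, RSTART, RLENGTH)
    $0 = substr($0, RSTART + RLENGTH)
  }
}' capture.txt | sort -u > year.dat
```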