awk

Search multiple patterns in a file and delete lines that match

删除回忆录丶 submitted on 2020-01-10 03:09:06
Question: For example, a file contains:

10.45.56.84 raj
10.49.31.81 mum
10.49.31.86 mum
10.81.51.92 guj
10.45.56.116 raj
10.45.56.84 raj

I want to search for 10.45.56.84 and 10.81.51.92 in the above file and delete the lines that match either pattern, and I want to do it in a single command.

Answer 1: Another solution: awk '!/10.45.56.84|10.81.51.92/' file

Answer 2: grep -Fv -f <(echo $'10.45.56.84\n10.81.51.92') filename

Answer 3: You could do this: sed -e '/10[.]45[.]56[.]84/d;/10[.]81[.]51[.]92/d' file
This has two sed "d" (delete) commands, one for each address, so a line matching either pattern is removed.
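Note that the awk one-liner above leaves the dots unescaped, so they are regex metacharacters and can in principle match more than intended. A minimal sketch of two stricter variants, assuming the addresses sit in the first whitespace-separated field as in the sample (bad.txt is a hypothetical file name):

    # literal comparison on the first field avoids the unescaped-dot issue
    awk '$1 != "10.45.56.84" && $1 != "10.81.51.92"' file

    # or keep the addresses in a file and use fixed-string, inverted grep
    printf '10.45.56.84\n10.81.51.92\n' > bad.txt
    grep -Fv -f bad.txt file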

Add a header to a tab-delimited file

a 夏天 submitted on 2020-01-09 19:09:30
Question: I'd like to add a header to a tab-delimited file, but I am not sure how to do it in one line on Linux. Let us say my file is:

roger\t18\tcolumbia\tnew york\n
albert\t21\tdartmouth\tnew london\n
etc...

and now I'd like to add a header that says: name\tage\tuniversity\tcity. How would I do that in one line on Linux? I am OK with awk, sed, cat, etc.; I am not familiar with perl, though.

Answer 1: There isn't a "prepend" operator like the "append" operator >>, but you can write the header to a temporary file, append the original file after it, and then move the temporary file back over the original.
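A minimal sketch of that temp-file approach, plus a GNU sed alternative; the file name data.tsv is only a placeholder:

    # write the header first, then the original data, then swap the files
    printf 'name\tage\tuniversity\tcity\n' > data.with_header.tsv
    cat data.tsv >> data.with_header.tsv
    mv data.with_header.tsv data.tsv

    # GNU sed alternative: insert before line 1 in place (\t in the inserted text is a GNU extension)
    sed -i '1i name\tage\tuniversity\tcity' data.tsv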

Fast alternative to grep -f

試著忘記壹切 submitted on 2020-01-09 11:36:46
Question: file.contain.query.txt:

ENST001
ENST002
ENST003

file.to.search.in.txt:

ENST001 90
ENST002 80
ENST004 50

Because ENST003 has no entry in the 2nd file and ENST004 has no entry in the 1st file, the expected output is:

ENST001 90
ENST002 80

To grep multiple queries in a particular file we usually do the following:

grep -f file.contain.query <file.to.search.in >output.file

Since I have about 10000 queries and almost 100000 rows in file.to.search.in, it takes a very long time to finish (around 5 hours). Is there a faster alternative?
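A minimal sketch of the usual fast alternative, assuming the query file holds one ID per line and that ID is the first field of the data file: load the queries into an awk hash once, then do a constant-time lookup for every data line.

    # NR==FNR is true only while the first file (the query list) is being read
    awk 'NR==FNR { want[$1]; next } $1 in want' file.contain.query.txt file.to.search.in.txt > output.file

Simply adding -F to the original grep -f (treat the patterns as fixed strings rather than regexes) also tends to speed it up considerably on its own.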

Fixed width to CSV

只愿长相守 submitted on 2020-01-09 10:56:18
Question: I know how to use awk to change fixed width to CSV. What I have is a hard drive with a few thousand fixed-width files. They all contain different column-width formats, but the layout is "encoded" on the second line, like this:

Name           DOB      GENDER
============== ======== ======
JOHN DOE       19870130 M
MARY DOE       19850521 F
MARTY MCFLY    19790320 M

I want to convert ALL the files to CSV. I can write a program that reads in the first line and holds it for column names. Then it loads the second line to get the column widths.
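A minimal sketch of that idea with GNU awk (FIELDWIDTHS is gawk-specific), assuming the files really do follow the layout above: header line, a ruler line of = groups, then data. File names are placeholders, fields are trimmed, and quoting of embedded commas is not handled.

    gawk '
    NR == 2 {
        # build FIELDWIDTHS such as "15 9 6" from the ===== ruler line
        n = split($0, ruler, " ")
        fw = ""
        for (i = 1; i <= n; i++)
            fw = fw (i > 1 ? " " : "") (length(ruler[i]) + (i < n ? 1 : 0))
        FIELDWIDTHS = fw            # takes effect from the next record on
        next                        # do not output the ruler line itself
    }
    {
        out = ""
        for (i = 1; i <= NF; i++) {
            f = $i
            gsub(/^[ \t]+|[ \t]+$/, "", f)   # strip the fixed-width padding
            out = out (i > 1 ? "," : "") f
        }
        print out
    }' file.txt > file.csv

The header line is read before FIELDWIDTHS is set, so it is split on whitespace and comes out as Name,DOB,GENDER, which is exactly what the CSV header should be.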

Print just the last line of a file?

旧街凉风 submitted on 2020-01-09 09:02:47
Question: How do I print just the last line of a file?

Answer 1: END{print} should do it. Thanks to Ventero, who was too lazy to submit this answer.

Answer 2: Use the right tool for the job. Since you want the last line of a file, tail is the appropriate tool, especially for a large file; tail's file-processing algorithm is more efficient in this case: tail -n 1 file. If you really want to use awk: awk 'END{print}' file. EDIT: the tail -1 file form is deprecated.

Answer 3: Is it a must to use awk for this?
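For comparison, a few equivalent one-liners (the file name is a placeholder); tail is the cheapest on large files because it reads from the end instead of scanning the whole file, while the awk and sed forms read every line:

    tail -n 1 file            # reads from the end of the file; cheapest on big files
    awk 'END { print }' file  # in END, print emits the last record that was read
    sed -n '$p' file          # same idea in sed: print only the last line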

How to filter logs easily with awk?

痴心易碎 submitted on 2020-01-09 08:04:46
Question: Suppose I have a log file mylog like this:

[01/Oct/2015:16:12:56 +0200] error number 1
[01/Oct/2015:17:12:56 +0200] error number 2
[01/Oct/2015:18:07:56 +0200] error number 3
[01/Oct/2015:18:12:56 +0200] error number 4
[02/Oct/2015:16:12:56 +0200] error number 5
[10/Oct/2015:16:12:58 +0200] error number 6
[10/Oct/2015:16:13:00 +0200] error number 7
[01/Nov/2015:00:10:00 +0200] error number 8
[01/Nov/2015:01:02:00 +0200] error number 9
[01/Jan/2016:01:02:00 +0200] error number 10

And I want to extract only the entries whose timestamp falls within a given range.
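Assuming the goal is a date-range filter (the question is cut off above), a minimal sketch: turn each bracketed timestamp into a sortable YYYYMMDDHHMMSS string and compare it against the bounds passed in with -v. The bounds shown are example values.

    awk -v from="20151001000000" -v to="20151010235959" '
    BEGIN {
        # map month abbreviations to two-digit numbers
        split("Jan Feb Mar Apr May Jun Jul Aug Sep Oct Nov Dec", m, " ")
        for (i in m) mon[m[i]] = sprintf("%02d", i)
    }
    {
        # $1 looks like [01/Oct/2015:16:12:56 -- strip the bracket, split the parts
        ts = substr($1, 2)
        split(ts, p, /[\/:]/)        # day, month name, year, hh, mm, ss
        key = p[3] mon[p[2]] p[1] p[4] p[5] p[6]
        if (key >= from && key <= to) print
    }' mylog

With these example bounds (1 Oct 2015 through 10 Oct 2015) the command keeps errors 1 through 7 and drops the rest.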

Insert multiple lines into a file after a specified pattern using a shell script

不羁的心 submitted on 2020-01-09 06:08:20
Question: I want to insert multiple lines into a file using a shell script. Consider this input file:

input.txt:

abcd
accd
cdef
line
web

Now I have to insert four lines after the line 'cdef' in input.txt. After inserting, my file should look like this:

abcd
accd
cdef
line1
line2
line3
line4
line
web

Can anyone help me?

Answer 1: Another sed:

sed '/cdef/r add.txt' input.txt

input.txt:

abcd
accd
cdef
line
web

add.txt:

line1
line2
line3
line4
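The r command reads add.txt after every line that matches /cdef/. If the lines to insert are short enough to keep inline, a sketch without the auxiliary file:

    # portable awk equivalent: print every line, and print the block after each match
    awk '{ print } /cdef/ { print "line1"; print "line2"; print "line3"; print "line4" }' input.txt

    # GNU sed can append the text inline (escape handling in the a command text is a GNU extension)
    sed '/cdef/a line1\nline2\nline3\nline4' input.txt

Both commands write to standard output; to change input.txt itself, redirect to a temporary file and move it back, or use GNU sed's -i option.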