awk

UNIX Bash - Removing double quotes from specific strings within a file

淺唱寂寞╮ submitted on 2020-07-09 03:55:07
Question: Apologies if I've formatted this terribly, as I've not posted here before. I'm attempting to edit a file to remove double quotation marks that are wrapped around multiple strings of varied lengths. Some of these strings include capitalisation and whitespace. Normally I would just use a global search and replace; however, some of the strings CAN'T have the double quotes removed, as they're required. An extract of the file in question is here: "tplan"."external_plan_ref" "Plan ID", 'CMP' CMP,
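As a minimal sketch (assuming, hypothetically, that the alias "Plan ID" is one of the strings that should lose its quotes while quoted identifiers such as "tplan" must keep theirs), sed can target the exact quoted strings rather than all double quotes:

```shell
# Hypothetical sketch: strip quotes only around the alias "Plan ID",
# leaving other quoted identifiers (e.g. "tplan") untouched.
printf '%s\n' "\"tplan\".\"external_plan_ref\" \"Plan ID\", 'CMP' CMP," |
  sed 's/"Plan ID"/Plan ID/g'
```

Each string to de-quote gets its own `s/"…"/…/g` expression (or `-e` clause), so quotes that are required are never touched.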

Using awk or perl to extract specific columns from CSV (parsing)

半城伤御伤魂 submitted on 2020-07-08 06:04:09
Question: Background: I want to extract specific columns from a CSV file. The CSV file is comma-delimited, uses double quotes as the text qualifier (optional, but when a field contains special characters the qualifier will be there; see example), and uses backslashes as the escape character. It is also possible for some fields to be blank. Example input and desired output: for example, I only want columns 1, 3, and 4 to be in the output file. The final extract of the columns from the csv file

How to compare 2 files having random numbers in non-sequential order?

試著忘記壹切 submitted on 2020-07-08 04:23:20
Question: There are 2 files, named compare1.txt and compare2.txt, containing random numbers in non-sequential order. cat compare1.txt 57 11 13 3 889 014 91 cat compare2.txt 003 889 13 14 57 12 90 Aim: output a list of all the numbers which are present in compare1 but not in compare2, and vice versa. If any number has zeros in its prefix, ignore the zeros while comparing (basically, the absolute values of the numbers must differ to be treated as a mismatch). Example: 3 should be considered matching with 003, and 014
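The leading-zero requirement can be handled by forcing each value into numeric context (`$0 + 0`), so 014 and 14 compare equal. A sketch assuming one number per line:

```shell
# Create the two sample files from the question.
printf '57\n11\n13\n3\n889\n014\n91\n' > compare1.txt
printf '003\n889\n13\n14\n57\n12\n90\n' > compare2.txt

# Normalise each value numerically, remember file 1's values, then report
# values unique to either file.
awk '{ n = $0 + 0 }
     NR == FNR { seen[n]; next }
     { if (n in seen) delete seen[n]; else print n, "only in compare2.txt" }
     END { for (n in seen) print n, "only in compare1.txt" }' compare1.txt compare2.txt
```

With the sample data this reports 12 and 90 as only in compare2.txt, and 11 and 91 as only in compare1.txt (the `for (n in seen)` order is unspecified).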

Standard deviation of multiple files having different row sizes

老子叫甜甜 submitted on 2020-07-08 03:41:26
Question: This question is related to my previous one, Average of multiple files having different row sizes. I have a few files with different row sizes, but the number of columns in each file is the same. e.g. ifile1.txt 1 1001 ? ? 2 1002 ? ? 3 1003 ? ? 4 1004 ? ? 5 1005 ? 0 6 1006 ? 1 7 1007 ? 3 8 1008 5 4 9 1009 3 11 10 1010 2 9 ifile2.txt 1 2001 ? ? 2 2002 ? ? 3 2003 ? ? 4 2004 ? ? 5 2005 ? 0 6 2006 6 12 7 2007 6 5 8 2008 9 10 9 2009 3 12 10 2010 5 7 11 2011 2 ? 12 2012 9 ? ifile3.txt 1 3001 ? ? 2 3002 ? 6 3
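The excerpt is truncated, so the exact aggregation wanted is not recoverable; as an illustrative assumption, here is a sketch of a population standard deviation for one column pooled across several files, skipping the `?` placeholders the question's data uses:

```shell
# Illustrative sketch (the original question is truncated): population
# standard deviation of column 3 pooled across all input files, treating
# '?' as a missing value to be skipped.
printf '1 1001 ?\n2 1002 5\n3 1003 3\n' > ifile1.txt
printf '1 2001 2\n2 2002 6\n' > ifile2.txt
awk '$3 != "?" { s += $3; ss += $3 * $3; n++ }
     END { m = s / n; printf "%.4f\n", sqrt(ss / n - m * m) }' ifile1.txt ifile2.txt
# prints: 1.5811  (values 5, 3, 2, 6)
```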

AWK to print every nth line from a file

限于喜欢 submitted on 2020-07-02 18:05:50
Question: I want to print every Nth line of a file using awk. I tried modifying the general format awk '0 == NR % 4' results.txt to awk '0 == NR % $ct' results.txt, where ct is the number of lines to be skipped. It's not working; can anyone please help me out? Thanks in advance. Answer 1: A shell variable is not expanded inside a single-quoted awk program; pass it in with -v instead (note the condition must still test for zero to print every Nth line): awk -v ct="$ct" 'NR % ct == 0' results.txt Explanation: Given a file like the following: $ cat -n a 1 hello1 2 hello2 3 hello3 4 hello4 5 hello5 ... 37 hello37 38 hello38 39 hello39 40
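The fix above can be demonstrated end to end; `-v` passes the shell variable into awk, since `$ct` is never expanded inside a single-quoted program:

```shell
# Print every 4th line of a 10-line input.
ct=4
seq 1 10 | awk -v ct="$ct" 'NR % ct == 0'
# prints: 4 and 8, each on its own line
```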

How can I replace the 'bc' tool in my bash script?

一曲冷凌霜 submitted on 2020-06-29 13:49:09
Question: I have the following command in my bash script: printf '\n"runtime": %s' "$(bc -l <<<"($a - $b)")" I need to run this script on around 100 servers, and I have found that bc is not installed on a few of them. I am not an admin and cannot install bc on those servers. In that case, what alternative can I use to perform the same calculation? Please let me know what the new command should look like. Answer 1: In case you need a solution which works for floating-point arithmetic, you can always fall back to
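The answer is cut off, but awk is the usual fallback here: it does floating-point arithmetic natively and is required by POSIX, so it should exist on all 100 servers. A sketch with sample values for a and b:

```shell
# Sketch: replace the bc subtraction with a BEGIN-block awk calculation.
a=10.5; b=3.2
printf '\n"runtime": %s' "$(awk -v a="$a" -v b="$b" 'BEGIN { print a - b }')"
# prints: "runtime": 7.3  (on a new line)
```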

AWK inside a loop for making multiple files from a single file

这一生的挚爱 submitted on 2020-06-29 03:58:47
Question: I have the below file, named ABCD.vasp: # A B C D 1.000000 13.85640621 0.00000000 0.00000000 4.61880236 13.06394496 0.00000000 0.00000000 0.00000000 45.25483322 A B C D 32 32 32 32 Selective dynamics Direct 0.00000000 0.00000000 0.00000000 F F F 0.00000000 0.00000000 0.12500000 F F T 0.00000000 0.00000000 0.25000000 F F T 0.00000000 0.00000000 0.37500000 F F T 0.50000000 0.00000000 0.00000000 F F F 0.50000000 0.00000000 0.12500000 F F T 0.50000000 0.00000000 0.25000000 F F T 0.50000000 0
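The excerpt ends before the actual task, but the underlying technique for making multiple files from a single input is awk's redirection form `print expr > filename-expr`, where the filename is computed per record. A tiny hypothetical sketch:

```shell
# Sketch: route each record to an output file named after its first field,
# splitting one input stream into several files.
printf 'a 1\nb 2\na 3\n' |
  awk '{ print $2 > ($1 ".txt") }'
# a.txt now holds 1 and 3; b.txt holds 2
```

For a VASP file, the filename expression would instead be derived from the block being processed (e.g. a running counter incremented at each block header).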