awk

How do I split a text file into an array by blank lines?

Submitted by 眉间皱痕 on 2021-02-16 14:19:28

Question: I have a bash command that outputs text in the following format:

Header 1
- Point 1
- Point 2

Header 2
- Point 1
- Point 2

Header 3
- Point 1
- Point 2
...

I want to parse this text into an array, separating on the empty lines, so that array[0], for example, contains:

Header 1
- Point 1
- Point 2

Then I want to edit some of the data in the array if it satisfies certain conditions. I was looking at something like this: "Separate by blank lines in bash", but I'm completely new to bash so I don't ...
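A minimal sketch of one pure-bash way to do this (some_command is a placeholder for whatever command produces the text above): collect lines into the current block and push the block into the array whenever a blank line is reached.

#!/bin/bash
# Sketch: split command output into a bash array on blank lines.
blocks=()
block=""
nl=$'\n'
while IFS= read -r line; do
    if [[ -z $line ]]; then                  # blank line closes the current block
        [[ -n $block ]] && blocks+=("$block")
        block=""
    elif [[ -z $block ]]; then
        block=$line
    else
        block+="$nl$line"
    fi
done < <(some_command)
[[ -n $block ]] && blocks+=("$block")        # flush the final block

# blocks[0] now holds "Header 1", "- Point 1" and "- Point 2" joined by
# newlines; each element can be tested and edited before further use.
printf '%s\n-----\n' "${blocks[@]}"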

What is the meaning of $0 = $0 in Awk?

Submitted by 柔情痞子 on 2021-02-16 09:17:37

Question: While going through a piece of code I saw the command below:

grep "r" temp | awk '{FS=","; $0=$0} {print $1,$3}'

The temp file contains lines like:

r,1,5
r,4,5
...

I could not understand what the statement $0=$0 means in the awk command. Can anyone explain what it means?

Answer 1: When you do $1=$1 (or any other assignment to a field) it causes record recompilation, where $0 is rebuilt with every FS replaced with OFS, but it does not change NF (unless there was no $1 previously and ...
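A quick illustration of why the $0 = $0 matters there (a sketch runnable with any POSIX awk): assigning FS inside the main block only affects how later records are split, so reassigning $0 forces the current record to be re-split using the new FS.

# Without the reassignment, the first record was already split on whitespace:
printf 'r,1,5\nr,4,5\n' | awk '{FS=","} {print $1}'
# -> r,1,5
#    r
# With $0 = $0 the current record is re-split with FS="," before printing:
printf 'r,1,5\nr,4,5\n' | awk '{FS=","; $0=$0} {print $1, $3}'
# -> r 5
#    r 5

The usual tidier form is simply awk -F',' '{print $1, $3}', which sets FS before any record is read.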

Add Column to end of CSV file using 'awk' in BASH script

Submitted by 我们两清 on 2021-02-15 10:49:15

Question: How do you add a column to the end of a CSV file using a string held in a variable?

input.csv:

2012-02-29,01:00:00,Manhattan,New York,234
2012-02-29,01:00:00,Manhattan,New York,843
2012-02-29,01:00:00,Manhattan,New York,472
2012-02-29,01:00:00,Manhattan,New York,516

output.csv:

2012-02-29,01:00:00,Manhattan,New York,234,2012-02-29 16:13:00
2012-02-29,01:00:00,Manhattan,New York,843,2012-02-29 16:13:00
2012-02-29,01:00:00,Manhattan,New York,472,2012-02-29 16:13:00
2012-02-29,01:00:00,Manhattan ...
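A sketch of one common way to do this (the variable name "now" and its value are illustrative only): pass the shell variable into awk with -v and print it as an extra field on every line.

#!/bin/bash
# Sketch: append the value of a shell variable as a trailing CSV column.
now="2012-02-29 16:13:00"
awk -v val="$now" -v OFS=',' '{print $0, val}' input.csv > output.csv

Since the whole line $0 is printed followed by the new value, OFS supplies the comma between them without re-splitting the existing fields.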

Instead of using awk in a for loop in bash through all files - do something only in awk

Submitted by 大城市里の小女人 on 2021-02-15 07:37:18

Question: I have a question. This is my input file:

#!/bin/bash
name="eq6"
tmp=$(mktemp) || exit 1
for index in {1..2}
do
    awk 'f;/hbonds_Other-SOL/{f=1}' "${name}_$index.ndx" > "$tmp" && mv "$tmp" "${name}_$index.ndx"
done

Is it possible to do this only in awk? I don't want to call awk in every pass of the for loop. This is my input:

eq6_1.ndx
98536 98539 98542 98545 98548
[ hbonds_Other-SOL ]
8 9 76759

eq6_2.ndx
98542 98545 98548
[ hbonds_Other-SOL ]
8 9 65281

Expected output - print all lines from all files which ...
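A sketch of how the whole loop can collapse into one awk call (this assumes GNU awk, whose -i inplace extension rewrites each input file in place): reset the flag whenever a new file starts, so each file is handled exactly as it was inside the loop.

# GNU awk only: FNR==1 marks the first line of each input file, so the
# flag f is cleared per file; "f" prints lines after the marker was seen.
gawk -i inplace 'FNR==1{f=0} f; /hbonds_Other-SOL/{f=1}' eq6_*.ndx

Without the in-place extension, the same per-file logic still works in a single call, only the combined output goes to stdout:

awk 'FNR==1{f=0} f; /hbonds_Other-SOL/{f=1}' eq6_*.ndx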

Rename multiple files while keeping the same extension on Linux

Submitted by 廉价感情. on 2021-02-13 16:37:09

Question: I have 100s of files in a directory with the following naming convention:

00XYZCD042ABCDE20141002ABCDE.XML
00XYZCC011ABCDE20141002.TXT
00XYZCB165ABCDE20141002ABCDE.TXT
00XYZCB165ABCDE20141002ABCDE.CSV

I want to rename these files using bash, awk, cut, or sed so that I get the output:

XYZCD042.XML
XYZCC011.TXT
XYZCB165.TXT
XYZCB165.CSV

So basically: always remove the first two 0s, then keep everything until ABCDE starts, then remove everything including ABCDE and keep the file ...
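A sketch of one way to do the rename with plain bash parameter expansion, no awk needed (it assumes every name starts with "00", that the part to keep ends at the first "ABCDE", and that the extension follows the last dot):

#!/bin/bash
# Sketch: 00XYZCD042ABCDE20141002ABCDE.XML -> XYZCD042.XML
for f in 00*ABCDE*.*; do
    base=${f#00}             # drop the leading "00"
    keep=${base%%ABCDE*}     # keep everything before the first "ABCDE"
    ext=${f##*.}             # keep the extension after the last dot
    mv -n -- "$f" "$keep.$ext"
done

mv -n refuses to overwrite an existing file, which matters here because two different source names could shorten to the same target name.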

How to delete rows from a csv file based on a list of values from another file?

Submitted by 佐手、 on 2021-02-13 12:16:43

Question: I have two files:

candidates.csv:

id,value
1,123
4,1
2,5
50,5

blacklist.csv:

1
2
5
3
10

I'd like to remove all rows from candidates.csv in which the first column (id) has a value contained in blacklist.csv. id is always numeric. In this case I'd like my output to look like this:

id,value
4,1
50,5

So far, my script for identifying the duplicate lines looks like this:

cat candidates.csv | cut -d \, -f 1 | grep -f blacklist.csv -w

This gives me the output:

1
2

Now I somehow need to pipe this ...
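A sketch of the usual two-file awk idiom for this kind of filtering (blacklist.csv is read first, then candidates.csv; filtered.csv is just an illustrative output name): store the blacklisted ids in an array, then print the header plus every candidate row whose first field is not in that array.

# NR==FNR is true only while the first file (blacklist.csv) is being read,
# so its ids are loaded into the array "bad" and the line is skipped.
# For candidates.csv, keep the header (FNR==1) and any non-blacklisted row.
awk -F',' 'NR==FNR {bad[$1]; next} FNR==1 || !($1 in bad)' blacklist.csv candidates.csv > filtered.csv

With the sample data above, filtered.csv ends up containing the header line, 4,1 and 50,5.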
