awk

Search for a pattern in a file and replace another pattern on the same line

Submitted by 谁都会走 on 2020-06-08 17:23:08
Question: I would like to search for a pattern in a bunch of source files. The pattern acts as a marker: if it is found, I would like to process that line by substituting another string. For example:

private const String myTestString = @"VAL15"; // STRING-REPLACE-VAL##

Here, I want to search my source file for the pattern STRING-REPLACE-VAL and then replace VAL15 with VAL20 on the same line. Output:

private const String myTestString = @"VAL20"; // STRING-REPLACE-VAL##

I tried the command below
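A minimal sketch of the marker-then-substitute idea (not the asker's exact command; the sample line is taken from the question, and piping from printf stands in for the real source file): `sub()` runs only on lines that match the marker pattern.

```shell
# Replace VAL15 with VAL20 only on lines carrying the STRING-REPLACE-VAL marker.
printf '%s\n' 'private const String myTestString = @"VAL15"; // STRING-REPLACE-VAL##' \
  | awk '/STRING-REPLACE-VAL/ { sub(/VAL15/, "VAL20") } { print }'
```

The same edit can be applied in place across many files with sed, e.g. `sed -i '/STRING-REPLACE-VAL/s/VAL15/VAL20/' *.cs` (GNU sed).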

Specify other flags in awk script header

Submitted by 喜欢而已 on 2020-06-06 08:26:11
Question: I want to write an awk script file using the #!/bin/awk -f header, but I want this script to always use : as the field separator. For some reason, writing #!/bin/awk -F: -f gives me a syntax error. I also want this script to always run on the same file, so I'd like to hardcode that as well. Basically, what I want to work is this: #!/bin/awk -F: -f -- /etc/passwd followed by some awk code.
Answer 1: Something like this should do:

#!/bin/awk -f
# using the #!/bin/awk -f header
BEGIN { FS = ":"  # always use :
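The usual workaround, sketched below: most kernels pass everything after the interpreter path as a single argument, which is why `-F: -f` on the #! line fails. Setting FS in BEGIN, and pushing the hardcoded file onto ARGV there too, avoids the shebang entirely (the script path, the fake passwd file, and the column printed are assumptions for illustration).

```shell
cat > /tmp/firstfield.awk <<'EOF'
#!/bin/awk -f
BEGIN {
    FS = ":"                      # always split on ":"
    if (ARGC == 1)                # no file named: default to /etc/passwd
        ARGV[ARGC++] = "/etc/passwd"
}
{ print $1 }                      # e.g. print the first field
EOF
printf 'root:x:0:0\ndaemon:x:1:1\n' > /tmp/fakepasswd
awk -f /tmp/firstfield.awk /tmp/fakepasswd
```

Modifying ARGC/ARGV in BEGIN is standard awk behavior, so this works in gawk, mawk, and POSIX awk alike.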

gawk regex to find any record with characters other than those allowed by a character class

Submitted by 隐身守侯 on 2020-06-02 23:34:49
Question: I have a list of email addresses in a text file, and a pattern of character classes that specifies which characters are allowed in the addresses. From that input file, I want to find only the email addresses that contain characters other than the allowed ones. I am trying to write a gawk command for this, but I am not able to get it to work properly. Here is the gawk that I am trying:

gawk -F "," ' $2!~/[[:alnum:]@\.]]/ { print "has invalid chars" }' emails.csv

The problem I am facing

Treat first column with spaces as one column using awk

Submitted by ∥☆過路亽.° on 2020-06-02 09:33:26
Question: I have data that I want to extract. However, I'm having trouble with the first column, since some values contain spaces, which makes them hard to parse with awk:

6_MB06 SA003 1550 None uats admin 1270 1478640 1211360 none 2957656064 no 0 no 60021AA29H38028200000000000521D3 no no 0 no yes no no supported no no 18 2.60 193 no 0 Active optimized 0000 Not Available
6_VLS01 G 516 None uats admin 0 492880 176 none 1008291840 no 0 no 60021AA29H38028200000000000521D4 no no 0 no yes no no
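One common approach, sketched here under the assumption that every record ends with the same fixed number of trailing fields (3 in this toy example; the real report would use its own count): count fields from the end, and glue everything before them back together as the first column, spaces and all.

```shell
# Rebuild column 1 by joining fields 1..NF-3; the last 3 fields are fixed.
printf '%s\n' '6_MB06 SA003 1550 None' 'Not Available G 516 None' \
  | awk '{ name = $1; for (i = 2; i <= NF - 3; i++) name = name " " $i; print name }'
```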

Append columns in a CSV file by matching fields against another CSV file in bash

Submitted by 无人久伴 on 2020-06-01 07:36:12
Question: I have 2 CSV files. File1 is an existing list of private IP addresses and their hostnames. File2 is a daily report with 8 columns, 2 of which contain the private IPs. I want to compare File2 with File1 by matching fields 4 and 7 of File2 against field 2 of File1. Then, upon a match, I want to append field 3 and field 6 of File2 according to the matches of field 4 and field 7 with field 2 of File1.

File1.csv
PC1,192.168.3.1
PC2,192.168.3.2
PC3,192.168.3.3

File2.csv (has about 50 lines)
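A sketch of the classic awk two-file join, with the column positions taken from the question and File2's exact layout assumed: the first pass (NR==FNR) loads File1 into an IP-to-hostname map, the second pass appends the hostnames matching fields 4 and 7.

```shell
# Build the lookup from File1, then annotate File2 (/tmp paths are assumptions).
printf 'PC1,192.168.3.1\nPC2,192.168.3.2\n' > /tmp/file1.csv
printf 'a,b,c,192.168.3.1,e,f,192.168.3.2,h\n' > /tmp/file2.csv
awk -F, -v OFS=, 'NR == FNR { host[$2] = $1; next }
                  { print $0, host[$4], host[$7] }' /tmp/file1.csv /tmp/file2.csv
```

Unmatched IPs simply produce empty appended fields, which is often the desired behavior for a daily report.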

How to add a new column with a header to a CSV with awk

Submitted by 别说谁变了你拦得住时间么 on 2020-05-29 04:59:12
Question: I'm using some awk inside a bash script that handles CSVs. The awk does this:

ORIG_FILE="score_model.csv"
NEW_FILE="updates/score_model.csv"
awk -v d="2017_01" -F"," 'BEGIN {OFS = ","} {$(NF+1)=d; print}' $ORIG_FILE > $NEW_FILE

which performs this transformation:

# before
model_description, type, effective_date, end_date
Inc <= 40K, Retired, 08/05/2016, 07/31/2017
Inc > 40K Age <= 55 V5, Retired, 04/30/2016, 07/31/2017
Inc > 40K Age > 55 V5 , Retired, 04/30/2016, 07/31/2017

# after, bad model
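The command above appends d to the header row as well. A sketch of the usual fix is to branch on NR so line 1 gets a column name instead of the value (the header text "load_month" is an assumption; the question does not name it):

```shell
# Name the new column on the header line, fill d everywhere else.
printf 'model_description,type\nInc <= 40K,Retired\n' \
  | awk -v d="2017_01" -F, 'BEGIN { OFS = "," }
        NR == 1 { $(NF+1) = "load_month" }
        NR > 1  { $(NF+1) = d }
        { print }'
```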
Use awk to compute the difference of two columns

Submitted by 眉间皱痕 on 2020-05-28 08:29:07
Question: I wanted to compute some column differences in a CSV file, say file:

item1,0.01,0.1
item2,0.02,0.2
item3,0.03,0.3

Expected output file:

item1,0.01,0.1,-0.09
item2,0.02,0.2,-0.18
item3,0.03,0.3,-0.27

I tried something like this:

awk -F, '{print $2-$3 "," $0}'

and got the difference in the first column, but was unable to put it in the 4th. The following didn't work and gave me a strange result like ',$0[original line]':

awk -F, '{print $0 "," $2-$3}'

What's happening here, and how do I fix it? I'm
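The ',$0' symptom described above is typical of Windows line endings, which is an assumption here since the question was cut off: a carriage return at the end of $3 makes anything printed after $0 appear at the start of the terminal line. A sketch that strips the \r first and then appends the difference:

```shell
# Remove a trailing \r, then append $2 - $3 as a fourth column.
printf 'item1,0.01,0.1\r\nitem2,0.02,0.2\r\n' \
  | awk -F, -v OFS=, '{ sub(/\r$/, ""); print $0, $2 - $3 }'
```

Modifying $0 with sub() also re-splits the fields, so $3 is clean before the subtraction.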
