awk

How to only add unique elements to an array in awk from several input text files

巧了我就是萌 submitted on 2021-01-29 07:32:43

Question: As the topic suggests, how do I read information from multiple text files and add elements to an array only once, regardless of whether they occur multiple times across the different text files? I have started with this script, which reads in and prints out all elements in the order they occur in the different documents. For example, take a look at these 3 different text files containing the following data. File 1: 2011-01-22 22:12 test1 22 1312 75 13.55 1399 2011-01-23 22:13 test4 22 1112 72 12.55
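A minimal sketch of the usual awk idiom for this, assuming the value of interest is the third whitespace-separated field (the file names and the field number are only placeholders, adjust $3 to whatever column you need):
awk '!seen[$3]++ { print $3 }' file1.txt file2.txt file3.txt
The array is indexed by the field itself, so the first occurrence across all files stores and prints it, and every later occurrence is skipped.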

How can I extract only the name field from JSON using the gcloud command?

ぃ、小莉子 submitted on 2021-01-29 06:04:51

Question: I am new to GCP and Linux. I am trying to get the name from the JSON output using the gcloud command. Below are the command and the output: gcloud firestore indexes composite list --format=json Output: [ { "fields": [ { "fieldPath": "is_active", "order": "ASCENDING" }, { "fieldPath": "created_at", "order": "DESCENDING" }, { "fieldPath": "__name__", "order": "ASCENDING" } ], "name": "projects/emc-ema-simcs-d-286404/databases/(default)/collectionGroups/customer/indexes/CICAgNjbhowK", "queryScope":
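A sketch of two ways to pull out just the name values, assuming the same index list as above; gcloud's own projection syntax avoids a second tool, and jq works on the JSON output:
gcloud firestore indexes composite list --format="value(name)"
gcloud firestore indexes composite list --format=json | jq -r '.[].name'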

Can't close after awk command inside for loop to read and write multiple files

牧云@^-^@ submitted on 2021-01-29 05:59:23

Question: I have a set of CSV files; one of them has a timestamp, all the other ones just have data, but they all have the same number of rows. The timestamp is in the 2nd column of the CSV. I would like to append the timestamp to the first column of all CSV files. This is currently working, but when I try to close the files I get an error. There can be 50-500 CSV files, and each can have thousands of rows, so that is why I wonder if the close() is required. Also, can anyone suggest any ways of
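A minimal sketch, assuming ts.csv is the file whose 2nd column holds the timestamp and the new- prefix marks the rewritten copies (all of these names are placeholders); a single awk run with close() keeps only one output file open at a time:
awk -F, -v OFS=, '
    NR == FNR { ts[FNR] = $2; next }        # first file: remember the timestamp for each row
    FNR == 1  { if (out) close(out)         # finished the previous file, release its descriptor
                out = "new-" FILENAME }
    { print ts[FNR], $0 > out }             # prepend the timestamp as a new first column
' ts.csv data1.csv data2.csv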

AWK - syntax error: unexpected end of file

大兔子大兔子 submitted on 2021-01-29 05:13:22

Question: I have about 300 files in a folder and am trying to remove commas in the CSV files, but when I run it in a loop I get an error. My code: #!/bin/bash FILES=/home/whoisdat/all-data/* { for f in $FILES do { awk -F'"' -v OFS='' '{ for (i=2; i<=NF; i+=2) gsub(",", "", $i) } 1' $f > allan-$f } done Error: root@s1.allheartweb.com [all-data]# sh /home/all-data/unique.sh /home/whoisdat/all-data/unique.sh: line 12: syntax error: unexpected end of file Answer 1: The right way to do what you're doing is: awk -F'"' -v OFS='' ' FNR=
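A minimal sketch of the loop without the stray braces that leave the shell waiting for a closing } at end of file; note the output name uses only the file's basename, since prefixing a full path with allan- would not produce a usable filename (the directory is taken from the question):
#!/bin/bash
for f in /home/whoisdat/all-data/*; do
    # drop commas that appear inside double-quoted CSV fields, as in the question
    awk -F'"' -v OFS='' '{ for (i=2; i<=NF; i+=2) gsub(",", "", $i) } 1' "$f" > "allan-${f##*/}"
done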

bash using awk or sed to search backwards from an occurrence to a specific string

自古美人都是妖i submitted on 2021-01-29 03:56:40

Question: I have an XML file and am searching for a string in it. Once (and if) the string is found, I need to be able to search back to the position of another string and output the data, i.e.: <xml> <packet> <proto> <field show="bob"> </proto> </packet> <packet> <proto> <field show="rumpelstiltskin"> </proto> </packet> <packet> <proto> <field show="peter"> </proto> </packet> My input would be known: show="rumpelstiltskin" and <packet> I need to get the following result (which is basically the
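One possible sketch with GNU awk, treating everything up to each </packet> as one record so the whole enclosing block can be printed when the attribute matches (a multi-character RS is a gawk extension, and capture.xml is a placeholder name):
gawk -v RS='</packet>' '/show="rumpelstiltskin"/ { print $0 "</packet>" }' capture.xml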

How to do uniq -d without presorting (or something similar)

…衆ロ難τιáo~ submitted on 2021-01-29 03:54:13

Question: I am aware that I can remove duplicated lines without presorting, for example: awk '!x[$0]++' file However, my goal is to print only lines which are duplicated, and only once. If it were not for the presorting problem, sort | uniq -d would be perfect. BUT the order is of great importance to me. Is there a way to do this with awk, grep or something similar? I am looking for a one-liner which does not require writing a script if possible. Answer 1: Just check the value of x[$0]: awk 'x[$0]++ == 1'
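A quick demonstration of that idea with made-up input: a line prints the second time it is seen, so each duplicated line appears exactly once, in its original order:
printf 'a\nb\na\nc\nb\na\n' | awk 'x[$0]++ == 1'
# prints a, then b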

Find the difference between two files

本小妞迷上赌 submitted on 2021-01-29 03:26:47

Question: I have the following situation: file1.dat is like: 1 2 1 3 1 4 2 1 and file2.dat is like: 1 2 2 1 2 3 3 4 I want to find the differences of the second file from the first. I tried with grep -v -f file1 file2, but my real files are bigger than these two, and when I tried it the shell never finished its work. The result should be: 2 3 3 4 The files are sorted and they have the same number of elements. Is there any way to find a solution with awk? Answer 1: Seems like you want lines in file2 that
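A sketch of the standard two-file awk idiom: it loads file1.dat into an array on the first pass, then prints only the lines of file2.dat that were never seen, preserving their order:
awk 'NR==FNR { seen[$0]; next } !($0 in seen)' file1.dat file2.dat
# with the sample data this prints 2 3 and then 3 4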

awk difference between 1 and {print}?

自古美人都是妖i submitted on 2021-01-28 22:23:45

Question: This comment on awk change once per file made me think 1 and {print} are equal in awk. But they are not. awk '/^\S/ {core=0} /^_core/ {core=1} !core 1' views.view.who_s_online.yml|head uuid: 50715f68-3b13-4a15-8455-853110fd1d8b langcode: en status: true dependencies: module: - user _core: default_config_hash: DWLCIpl8ku4NbiI9t3GgDeuW13KSOy2l1zho7ReP_Bg id: who_s_online label: 'Who''s online block' Compare to (and this is what I wanted btw): awk '/^\S/ {core=0} /^_core/ {core=1} !core {print}'
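A likely explanation, since the excerpt is cut off here: in !core 1 the 1 is not a second pattern; awk concatenates the two expressions, and the resulting string ("01" or "11") is never empty, so the combined pattern is always true and every line is printed. A minimal illustration:
printf 'a\nb\n' | awk '0'            # pattern 0 is false: prints nothing
printf 'a\nb\n' | awk '0 1'          # concatenation "01" is a non-empty string: prints a and b
printf 'a\nb\n' | awk '0 {print}'    # pattern 0 with an explicit action: prints nothing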