awk

Finding a range of numbers of a file in another file using awk

Submitted by 99封情书 on 2020-01-11 12:18:13
Question: I have lots of files like this:

3
10
23
.
.
.
720
810
980

And a much bigger file like this:

2 0.004
4 0.003
6 0.034
.
.
.
996 0.01
998 0.02
1000 0.23

What I want to do is find which range of the second file my first file falls into, and then estimate the mean of the values in the 2nd column of that range. Thanks in advance. NOTE: The numbers in the files do not necessarily follow an easy pattern like 2, 4, 6...

Answer 1: Since your smaller files are sorted you can pull out the first row and the last row
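The truncated answer hints at using the smaller (sorted) file's first and last rows as range bounds. A minimal sketch of that idea, with made-up sample data standing in for the real files:

```shell
# Take the first and last value of the small, sorted file as a range,
# then average column 2 of the big file's rows whose column 1 falls
# inside that range. Sample data below is illustrative only.
small='3
10
23'
big='2 0.004
4 0.003
6 0.034
24 0.5'
lo=$(printf '%s\n' "$small" | head -1)
hi=$(printf '%s\n' "$small" | tail -1)
printf '%s\n' "$big" | awk -v lo="$lo" -v hi="$hi" \
    '$1 >= lo && $1 <= hi { sum += $2; n++ } END { if (n) print sum / n }'
```

Here rows 4 and 6 fall inside [3, 23], so the mean of 0.003 and 0.034 is printed.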

No output from awk only when redirected to pipe or file [duplicate]

Submitted by 半腔热情 on 2020-01-11 11:50:30
Question: This question already has an answer here: awk not printing to file (1 answer). Closed 12 months ago.

I have a rather simple script (it prints content from a tty after adding a timestamp to every row). It outputs nicely on the command line, but redirecting the output with > does not work. Why not? Here is the script:

#!/bin/bash
awk '{ print strftime("%Y-%m-%d %H:%M:%S |"), $0; }' "$1"

Running it as is, like timecat /dev/ttyACM0, works fine; I see the content in my terminal. But if I run timecat
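The question is cut off, but the classic cause here is output buffering: awk block-buffers its output when stdout is a pipe or a file rather than a terminal. A hedged sketch of the usual fix, flushing after each line (strftime and fflush are gawk features):

```shell
#!/bin/bash
# Same script, but flush awk's output buffer after every record so that
# redirected or piped output appears immediately instead of in large blocks.
awk '{ print strftime("%Y-%m-%d %H:%M:%S |"), $0; fflush() }' "$1"
```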

awk flow control

Submitted by 梦想与她 on 2020-01-11 10:42:05
4 Case 4: awk flow control

4.1 Problem

This case asks you to get familiar with awk's flow-control operations; you can write your own awk statements to verify them: if branches (single-branch, double-branch, multi-branch); practice using awk arrays.

4.2 Steps

Follow the steps below to implement this case.

Step 1: if branches in awk filtering

1) Single branch

Count the users in /etc/passwd whose UID is less than or equal to 1000:

[root@svr5 ~]# awk -F: '{if($3<=1000){i++}}END{print i}' /etc/passwd
39

Count the users in /etc/passwd whose UID is greater than 1000:

[root@svr5 ~]# awk -F: '{if($3>1000){i++}}END{print i}' /etc/passwd
8

Count the users in /etc/passwd whose login shell is /bin/bash:

[root@svr5 ~]# awk -F: '{if($7~/bash$/){i++}}END{print i}' /etc/passwd
29

2) Double branch

Count the users in /etc/passwd with UID <= 1000 and with UID > 1000 in a single pass:

[root@svr5 ~]# awk -F: '{if($3<=1000){i++}else{j++}}END{print i,j}' /etc/passwd
39 8

Separately count
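The problem statement also lists multi-branch if, which the excerpt cuts off before reaching. A hedged sketch of that form, run on sample passwd-style records rather than a live /etc/passwd:

```shell
# Multi-branch if/else if/else: classify passwd-style records by UID range.
# The three sample records and the UID cutoffs are illustrative only.
printf 'root:x:0:0::/root:/bin/bash\napp:x:500:500::/home:/bin/bash\nsvc:x:2000:2000::/srv:/sbin/nologin\n' |
awk -F: '{
    if ($3 == 0)         { r++ }   # root
    else if ($3 <= 1000) { s++ }   # system/legacy users
    else                 { u++ }   # ordinary users
} END { print r, s, u }'
```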

Combine lines with matching first field

Submitted by 会有一股神秘感。 on 2020-01-11 09:09:50
Question: For a few years I have often needed to combine lines of (sorted) text with a matching first field, and I have never found an elegant (i.e. one-liner unix command line) way to do it. What I want is similar to what's possible with the unix join command, but join expects 2 files, with each key appearing at most once. I want to start with a single file, in which a key might appear multiple times. I have both a ruby and a perl script that do this, but there's no way to shorten my algorithm into a one
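One way to get this down to a one-liner, sketched here under the assumption of sorted input with two whitespace-separated fields per line:

```shell
# Merge consecutive lines that share field 1, appending each line's second
# field to the accumulated record; print a record whenever the key changes.
printf 'a 1\na 2\nb 3\n' | awk '
$1 == prev { line = line " " $2; next }       # same key: keep accumulating
{ if (NR > 1) print line; prev = $1; line = $0 }
END { if (NR) print line }'                   # flush the last record
```

For the sample input this prints "a 1 2" and "b 3".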

Longest line using awk

Submitted by 情到浓时终转凉″ on 2020-01-11 03:14:27
Question: Can someone show how to use the awk command to identify the longest line in a text file? Thanks

Answer 1: To print the longest line:

awk 'length > m { m = length; a = $0 } END { print a }' input-file

To simply identify the longest line by line number:

awk 'length > m { m = length; a = NR } END { print a }' input-file

Answer 2:

awk '{ if (length($0) > longest) longest = length($0); } END { print longest }'

Source: https://stackoverflow.com/questions/12607776/longest-line-using-awk
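A quick demonstration of answer 1 on sample input:

```shell
# Track the maximum length seen so far (m) and the line that produced it (a);
# the END block prints the longest line itself.
printf 'short\na much longer line\nmid\n' |
awk 'length > m { m = length; a = $0 } END { print a }'
```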

AWK - Getting Started Guide

Submitted by 若如初见. on 2020-01-11 01:35:22
Awk is an easy-to-use, expressive programming language that can be applied to all kinds of computation and data-processing tasks.

1.1 Getting started

Suppose we have a file named emp.txt that stores each employee's hourly pay rate and hours worked, one employee per line, as shown below:

Beth 4.00 0
Dan 3.75 0
kathy 4.00 10
Mark 5.00 20
Mary 5.50 22
Susie 4.25 18

Now you want to print the name and pay (rate times hours) of every employee who worked more than zero hours. Command and output:

[root@localhost ~]# awk '$3 > 0 { print $1, $2 * $3 }' emp.txt
kathy 40
Mark 100
Mary 121
Susie 76.5

Explanation: $3 > 0 is the selection condition; { print $1, $2 * $3 } is the output we want, the employee's name and pay; emp.txt is the name of the file to read.

To print the employees who worked zero hours:

[root@localhost ~]# awk '$3 == 0 { print $1 }' emp.txt
Beth
Dan

When no action is given, awk prints each matching line in full, for example:

[root@localhost ~]# awk '$3==0' emp.txt
Beth 4.00 0
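The first example above can be reproduced end to end; the printf below simply recreates the emp.txt contents shown in the text:

```shell
# Recreate the guide's emp.txt (name, hourly rate, hours) and print the
# name and pay of everyone who worked more than zero hours.
printf 'Beth 4.00 0\nDan 3.75 0\nkathy 4.00 10\nMark 5.00 20\nMary 5.50 22\nSusie 4.25 18\n' > emp.txt
awk '$3 > 0 { print $1, $2 * $3 }' emp.txt
```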

Parsing json with awk/sed in bash to get key value pair

Submitted by 孤街浪徒 on 2020-01-10 20:06:29
Question: I have read many existing questions at SO, but none of them answers what I am looking for. I know it is difficult to parse json in bash using sed/awk, but I only need a few key-value pairs per record out of a whole list of key-value pairs per record. I want to do this because it will be faster, as the main JSON is pretty big, with millions of records. The JSON format is like the following:

{ "documents": [ { "title":"a", //needed "description":"b", //needed "id":"c", //needed ....(some more:not
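The excerpt ends before any answer. A hedged grep/sed sketch of the kind of extraction the asker describes, which is fragile compared with a real JSON parser such as jq and assumes no escaped quotes inside values:

```shell
# Pull out every "title":"..." pair, then strip the key and quotes.
# Sample one-line JSON stands in for the asker's large file.
printf '{"documents":[{"title":"a","description":"b","id":"c"}]}' |
grep -o '"title":"[^"]*"' |
sed 's/"title":"\(.*\)"/\1/'
```

The same two-stage pattern works for "description" and "id"; anything beyond flat string values (nesting, escaped quotes) is where jq becomes the safer tool.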

Linux system inspection script

Submitted by 孤街浪徒 on 2020-01-10 18:01:20
#!/bin/bash
# auth: Bertram
# created Time: 2019-12-26
# func: sys info check
# sys: centos6.x/7.x

[ $(id -u) -ne 0 ] && echo "Please run this script as root!" && exit 1

sysversion=$(rpm -q centos-release|cut -d- -f3)
line="-------------------------------------------------"

[ -d logs ] || mkdir logs

#sys_check_file="logs/$(ip a show dev eth0|grep -w inet|awk '{print $2}'|awk -F '/' '{print $1}')-`date +%Y%m%d`.txt"
sys_check_file="logs/$(ifconfig |awk 'NR==2{print $2}')-`date +%Y
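The NR==2 selection the script uses to grab the IP address can be seen in isolation; sample ifconfig-style text stands in for a live interface here:

```shell
# NR==2 restricts the action to the second input line; $2 is the address
# field of ifconfig's "inet" line. The two sample lines are illustrative.
printf 'eth0: flags=4163<UP>\n        inet 192.168.1.10  netmask 255.255.255.0\n' |
awk 'NR==2{print $2}'
```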

awk error “makes too many open files”

Submitted by 梦想与她 on 2020-01-10 05:49:25
Question: I have an awk-based splitter that splits a huge file based on a regex. But the problem is that I am getting a "makes too many open files" error, even though I have a conditional close. If you could help me figure out what I am doing wrong, I would be much grateful.

awk 'BEGIN { system("mkdir -p splitted/sub"++j) }
/<doc/ { x = "F"++i".xml" }
{
  if (i % 5 == 0) {
    ++i
    close("splitted/sub"j"/"x)
    system("mkdir -p splitted/sub"++j"/")
  } else {
    print > ("splitted/sub"j"/"x)
  }
}' wiki_parsed.xml

Answer 1: The simple answer is

A detailed guide to the linux awk command

Submitted by 余生颓废 on 2020-01-10 05:20:53
Reposted from: http://www.cnblogs.com/ggjucheng/archive/2013/01/13/2858470.html#3292588

Introduction

awk is a powerful text-analysis tool. Where grep searches and sed edits, awk excels at analyzing data and generating reports. Simply put, awk reads a file line by line, splits each line into fields using whitespace as the default separator, and then runs various kinds of processing over those fields.

awk comes in three main variants: awk, nawk, and gawk. Unless stated otherwise, "awk" below means gawk, the GNU version of AWK.

awk takes its name from the first letters of its creators' surnames: Alfred Aho, Peter Weinberger, and Brian Kernighan. AWK really is a language in its own right, the AWK programming language, which its three creators formally defined as a "pattern scanning and processing language". It lets you write short programs that read input files, sort data, process it, perform calculations on the input, and generate reports, among countless other things.

Usage

awk 'pattern { action }' filenames

Although the operations can be complex, the syntax always follows this shape: pattern is what awk looks for in the data, and action is the series of commands executed when a match is found. The braces ({}) do not always have to appear, but they group a series of instructions to be applied for a particular pattern.
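The pattern { action } form can be illustrated with a passwd-style sample, echoing the /bash$/ test used elsewhere on this page:

```shell
# pattern: lines ending in "bash"; action: print field 1 (the username).
# The two sample records are illustrative, not a real /etc/passwd.
printf 'root:x:0:0:root:/root:/bin/bash\nnobody:x:99:99::/:/sbin/nologin\n' |
awk -F: '/bash$/ { print $1 }'
```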