uniq

Linux text processing tools, part 1

一个人想着一个人, submitted on 2019-12-06 01:07:20
File contents: cat, more, less. File slicing: head, tail. Column extraction: cut. Sorting and counting: sort, wc.

cat [OPTION]... [FILE]...    view text files
    -E: display the end-of-line marker $
    -n: number every displayed line
    -A: display all control characters
    -b: number non-blank lines only
    -s: squeeze consecutive blank lines into one

Paged viewing

more: view a file page by page
    more [OPTIONS...] FILE...
    -d: display paging and exit prompts

less: view a file, or STDIN output, one page at a time
    Useful keys while viewing:
        /text    search for "text"
        n / N    jump to the next or previous match
    less is the pager used by the man command.
        Space       scroll one page
        Enter       scroll one line
        PageDown    scroll down one page
        PageUp      scroll up one page
    You can also type /keyword directly at the colon prompt to highlight matches, then press n to search forward or N to search backward.
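A few illustrative invocations (a quick sketch; /etc/hosts is just a convenient sample file):

cat -n /etc/hosts        # view the file with every line numbered
cat -A /etc/hosts        # show control characters: $ at line ends, ^I for tabs
ls -al /etc | less       # page through long output; press / to search, q to quit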

Common nginx log analysis commands for ops

跟風遠走, submitted on 2019-12-05 20:17:29
Common nginx log analysis commands, must-haves for ops staff.

1. Total number of requests:
   wc -l access.log | awk '{print $1}'
2. Number of distinct IPs:
   awk '{print $1}' access.log | sort | uniq | wc -l
3. Top 5 busiest seconds by client request count:
   awk -F'[ []' '{print $5}' access.log | sort | uniq -c | sort -rn | head -5
4. Top 5 most frequent visitor IPs:
   awk '{print $1}' access.log | sort | uniq -c | sort -rn | head -5
5. Top 5 most frequently requested URLs:
   awk '{print $7}' access.log | sort | uniq -c | sort -rn | head -5
6. Top 5 URLs with response times over 10 seconds:
   awk '{if ($12 > 10){print $7}}' access.log | sort | uniq -c | sort -rn | head -5
7. Top 5 non-200 HTTP status codes:
   awk '{if ($13 != 200){print $13}}' access.log | sort | uniq -c | sort -rn | head -5
8. Analyze the behavior of source IPs with more than 50,000 requests (the command is truncated here; a sketch of a common completion follows below):
   awk '{print $1}'
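The command for item 8 is cut off in the source. A common way to finish this kind of analysis, offered as a sketch rather than the author's exact pipeline: count requests per source IP, then keep only the IPs above the threshold:

awk '{print $1}' access.log | sort | uniq -c | sort -rn | awk '$1 > 50000 {print $2}'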

Shell programming: intersection, union, and complement

随声附和, submitted on 2019-12-05 08:51:13
1. Method 1: grep -f (intersection of two files; not fully reliable, because plain grep -f matches patterns as substrings; see the sketch below)

File a:
cat a
12
34
13
25

File b:
cat b
13
35
78
14

(1) Intersection:
grep -f a b
13

(2) Complement a-b:
grep -v -f b a
12
34
25

Complement b-a:
grep -v -f a b
35
78
14

(3) Union

2. Method 2: sort

(1) Intersection:
sort a b | uniq -d

Union:
sort a b | uniq

Complement a-b:
sort a b b | uniq -u

Source: https://www.cnblogs.com/sun5/p/11917371.html
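A runnable demonstration with the sample files (a sketch; adding -x and -F makes grep match whole lines literally, which addresses the reliability caveat noted for method 1):

printf '%s\n' 12 34 13 25 > a
printf '%s\n' 13 35 78 14 > b
grep -xFf a b             # intersection, whole-line literal matches: 13
sort a b | uniq -d        # intersection: 13
sort a b | uniq           # union: each value once
sort a b b | uniq -u      # complement a-b: 12 25 34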

【leetcode】740. Delete and Earn

折月煮酒, submitted on 2019-12-05 01:54:20
The problem is as follows:

Given an array nums of integers, you can perform operations on the array. In each operation, you pick any nums[i] and delete it to earn nums[i] points. After that, you must delete every element equal to nums[i] - 1 or nums[i] + 1. You start with 0 points. Return the maximum number of points you can earn by applying such operations.

Example 1:
Input: nums = [3, 4, 2]
Output: 6
Explanation: Delete 4 to earn 4 points; consequently, 3 is also deleted. Then delete 2 to earn 2 points. 6 total points are earned.

Example 2:
Input: nums = [2, 2, 3, 3, 3, 4]
Output: 9
Explanation: Delete 3 to earn 3
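The excerpt cuts off in the middle of Example 2 and includes no solution. As a sketch of the standard dynamic-programming approach (a house-robber recurrence over values), here is a bash version, kept in the same language as the shell examples elsewhere on this page; the input array is hard-coded for illustration:

#!/bin/bash
# Delete and Earn, DP sketch: taking value v earns v * (count of v),
# but forbids also taking v-1; same recurrence as the house-robber problem.
nums=(2 2 3 3 3 4)
declare -A gain                        # gain[v] = v * (count of v in nums)
max=0
for n in "${nums[@]}"; do
  gain[$n]=$(( ${gain[$n]:-0} + n ))
  (( n > max )) && max=$n
done
take=0; skip=0                         # best scores if current value is taken / skipped
for (( v = 1; v <= max; v++ )); do
  g=${gain[$v]:-0}
  new_take=$(( skip + g ))             # taking v requires having skipped v-1
  new_skip=$(( take > skip ? take : skip ))
  take=$new_take
  skip=$new_skip
done
echo $(( take > skip ? take : skip ))  # prints 9 for this input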

Linux pipe commands: notes on 《鸟哥的Linux私房菜》

拜拜、爱过, submitted on 2019-12-04 20:43:02
0 Preface

After finishing the book I felt that if I didn't write things down, I would soon forget them and end up digging through the book and other references all over again, so I'm recording the content here for later review. Most of this article comes from the book, with some of my own understanding and worked examples mixed in.

1 Basic usage

A pipe uses the delimiter "|" between two commands; it feeds the standard output of the left-hand command into the standard input of the right-hand command.

For example, to inspect the files under /etc in detail: the listing is too long and scrolls off the screen, so pipe it through less to page the output comfortably:

ls -al /etc | less

Points to note:

A pipe only handles standard output; standard error output is ignored.
The command on the right side of a pipe must be able to accept the previous command's data on its standard input.

2 Selection commands

2.1 cut

2.1.1 Introduction

The cut command "cuts" a chosen piece out of a stream of text, processing the input line by line.

Usage:

cut -d 'delimiter' -f fields   # split each line on 'delimiter', then keep the fields named by the fields argument
cut -c range                   # for neatly aligned output, keep the characters in the given range of each line

Options:
-d : the delimiter character; used together with -f
-f : which fields to keep, counted according to the -d delimiter
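A few cut invocations in the spirit of the book's examples (sketches; the exact output of last and export varies by system):

echo "$PATH" | cut -d ':' -f 1,2     # keep the 1st and 2nd colon-separated fields
last | cut -d ' ' -f 1               # first space-separated field: the login names
export | cut -c 12-                  # keep character 12 through the end of each line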

nginx logs

*爱你&永不变心*, submitted on 2019-12-04 07:59:36
A sample access-log line:

192.168.31.250 - - [13/Nov/2019:08:38:07 +0800] "GET /aa HTTP/1.1" 404 571 "-" "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/70.0.3538.102 Safari/537.36 Edge/18.18362" "-"

The corresponding log_format definition:

log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                '$status $body_bytes_sent "$http_referer" '
                '"$http_user_agent" "$http_x_forwarded_for"';

Field               Description
$remote_addr        client address
$remote_user        client user name
$time_local         access time and time zone
$request            request URI and HTTP protocol
$http_host          requested host, i.e. the address (IP or domain) typed into the browser
$status             HTTP response status
$upstream_status    upstream status
$body_bytes_sent    size of the response body sent to the client
$http_referer       referring URL
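With the main format above, $status is the 9th whitespace-separated field of each log line, so a quick status-code summary can be sketched like this (assuming the log file is access.log):

awk '{print $9}' access.log | sort | uniq -c | sort -rn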

using Linux cut, sort and uniq

。_饼干妹妹, submitted on 2019-12-04 07:19:14
I have a list with population, year, and county, and I need to cut the list and then find the number of unique counties. The list starts off like this:

#Population, Year, County
3900, 1969, Beaver
3798, 1970, Beaver
3830, 1971, Beaver
3864, 1972, Beaver
3993, 1973, Beaver
3976, 1974, Beaver
4064, 1975, Beaver

There is much more to this list, and many more counties. I have to cut out the county column, sort it, and then output the number of unique counties. I tried this command:

cut -c3- list.txt | sort -k3 | uniq -c

But this does not cut out the third column, nor does it sort alphabetically. What am I doing wrong?
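The post is cut off before any answer. A sketch of the usual fix, offered as an assumption rather than the accepted answer: cut -c selects character positions, while comma-separated fields need -d and -f, and the header line has to be dropped before counting:

grep -v '^#' list.txt | cut -d ',' -f 3 | sort -u | wc -l    # number of unique counties

The extracted field carries a leading space (", Beaver"), but since it is consistent across lines it does not affect the count.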

How to sort, uniq and display lines that appear more than X times

我的未来我决定, submitted on 2019-12-04 01:41:32
I have a file like this:

80.13.178.2
80.13.178.2
80.13.178.2
80.13.178.2
80.13.178.1
80.13.178.3
80.13.178.3
80.13.178.3
80.13.178.4
80.13.178.4
80.13.178.7

I need to display unique entries for repeated lines (similar to uniq -d), but only entries that occur more than twice (twice being an example; the lower limit should be flexible). Output for this example should be as follows when looking for entries with three or more occurrences:

80.13.178.2
80.13.178.3

With pure awk:

awk '{a[$0]++}END{for(i in a){if(a[i] > 2){print i}}}' a.txt

It iterates over the file and counts the occurrences of each line.
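An equivalent sort/uniq-based pipeline (a sketch, assuming the same file name a.txt as above): uniq -c prefixes each line with its count, and a final awk keeps only lines whose count exceeds the chosen threshold:

sort a.txt | uniq -c | awk '$1 > 2 {print $2}'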

Week 4 assignment: count the occurrences of each word in /etc/init.d/functions and sort the result (implemented with grep and with sed)

这一生的挚爱, submitted on 2019-12-03 22:43:09
Count the number of occurrences of each word in /etc/init.d/functions and sort the result (implemented separately with grep and with sed).

Method 1: grep

grep -o "\<[[:alpha:]]\+\>" /etc/init.d/functions | sort | uniq -c
grep -o '[[:alpha:]]\+' /etc/init.d/functions | sort | uniq -c

[root@centos7 data]# grep -o "\<[[:alpha:]]\+\>" /etc/init.d/functions | sort | uniq -c
     21 a
      5 A
      1 aA
      1 abnormally
      5 action
      1 active
      1 ActiveState
      1 adjust
      1 alive
      2 all
      1 already
      2 and
      1 And
      3 any
      1 anywhere
      1 Apply
      1 are
      1 arg
      2 at
      1 avoid
      7 awk
      8 b
      1 backup
      2 bak
     29 base
      2 basename
      2 bash
      7 be
      6 BEGIN
      1 bg
     13 bin
     11 binary
      1 bit
      2 booleans
      1 boot
     21 BOOTUP
      2 break
      3 but
      1 by
      6 c
      1 caller
      3 can
      8
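Method 2 (sed) is cut off in this excerpt. A sketch of one common GNU-sed approach, offered as an assumption rather than the author's original solution: replace every run of non-letter characters with a newline, drop the empty lines, then count as before:

sed 's/[^[:alpha:]]\+/\n/g' /etc/init.d/functions | sed '/^$/d' | sort | uniq -c | sort -nr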