uniq

System inspection script

别说谁变了你拦得住时间么 Submitted on 2019-12-13 11:54:19
#!/bin/bash
# Inspection script
leixing=`uname`
echo "System type: $leixing"
banben=`cat /etc/redhat-release`
echo "System version: $banben"
neihe=`uname -a | awk '{print $3}'`
echo "System kernel: $neihe"
shijian=`date +%F_%T`
echo "System time: $shijian"
yunxing=`uptime | awk '{print $3,$4}' | awk -F , '{print $1}'`
echo "System uptime: $yunxing"
chongqi=`who

combine like terms in bash

一个人想着一个人 Submitted on 2019-12-13 06:40:17
Question: I have a list of domain names in a text file with the number of times they occur in a collection of email files. For example: 598 aol.com 1 aOL.COM 4 Aol.com 1 AOl.com 6 AOL.com 39 AOL.COM There were 598 emails sent to aol.com, 1 sent to aOL.COM, and so on. I was wondering if there is a way in bash to combine aol.com and aOL.COM and all the other aliases, since they are in fact the same thing. Any help would be greatly appreciated! This is the line of code that produced that output: grep -E
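A minimal sketch of one way to do this (the file name domains.txt is an assumption, not something given in the question): fold every domain to lower case with awk's tolower(), sum the counts per folded key, and print the merged totals.

    # Input lines look like "<count> <domain>"; sum the counts per lower-cased domain.
    awk '{ total[tolower($2)] += $1 } END { for (d in total) print total[d], d }' domains.txt | sort -rn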

Remove all lines from file with duplicate value in field, including the first occurrence

痴心易碎 Submitted on 2019-12-12 18:33:36
Question: I would like to remove all the lines in my data file whose column 2 value is repeated in column 2 of other lines. I've sorted by the value in column 2, but can't figure out how to use uniq on just the values in one field, as the values are not necessarily of the same length. Alternatively, I can remove lines with duplicates using an awk one-liner like awk -F"[,]" '!_[$2]++', but this retains the line with the first occurrence of the repeated value in column 2. As an example, if
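A minimal sketch of the usual two-pass approach (the file name data.csv is an assumption; the question only says the fields are comma-separated): the first pass counts each column-2 value, the second pass prints only the lines whose value occurred exactly once, so every copy of a repeated value is dropped, including the first.

    # Pass 1 (NR==FNR): count occurrences of field 2.
    # Pass 2: print a line only if its field-2 value was seen exactly once.
    awk -F, 'NR==FNR { seen[$2]++; next } seen[$2] == 1' data.csv data.csv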

How to use uniq -cd in bash scripting and extract only the count and not the line?

。_饼干妹妹 Submitted on 2019-12-12 12:26:41
Question: I have a .sh file that takes a log file, extracts data, and produces reports. I would like to calculate what percentage of the total lines each error accounts for (top talkers). So far I have this: awk '// {print $4, substr($0, index($0,$9))}' | sort \ | uniq -cd | sort -nr | head -n20 > $filename-sr1.tmp This outputs two columns, the count followed by the line. How can I take just the count to do the calculation? E.g. count / total_lines = 0.000000 ... Answer 1: First I looked to get some similar
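One way to isolate the count is to let awk read the uniq -cd output, where the count is always field 1. A sketch, assuming the lines to be counted live in $logfile and the total is taken from wc -l (both names are assumptions, not from the script in the question):

    # Total number of lines in the log.
    total=$(wc -l < "$logfile")
    # uniq -cd puts the count in field 1; divide it by the total for a fraction.
    sort "$logfile" | uniq -cd | sort -nr | head -n 20 |
    awk -v total="$total" '{ printf "%.6f  %s\n", $1 / total, $0 }'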

Common Linux operations and maintenance commands

ぐ巨炮叔叔 Submitted on 2019-12-12 12:15:12
1 Delete zero-byte files: find . -type f -size 0 -exec rm -rf {} \;
2 List processes sorted by memory usage, largest first: ps -e -o "%C : %p : %z : %a" | sort -k5 -nr
3 Sorted by CPU usage, largest first: ps -e -o "%C : %p : %z : %a" | sort -nr
4 Print the URLs of the jpg files in the cache: grep -r -a jpg /data/cache/* | strings | grep "http:" | awk -F 'http:' '{print "http:"$2;}'
5 Show the number of concurrent HTTP requests and their TCP connection states: netstat -n | awk '/^tcp/ {++S[$NF]} END {for(a in S) print a, S[a]}'
6 sed -i '/Root/s/no/yes/' /etc/ssh/sshd_config — sed matches the lines containing Root in this file and replaces no with yes on them.
7 How to kill the mysql processes: ps aux | grep mysql | grep -v grep | awk '{print $2}' | xargs kill -9 (a good illustration of what awk is useful for); or pgrep mysql | xargs kill -9; or killall -TERM mysqld; or kill -9 `cat /usr
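As an illustration of item 5 above, here is the same one-liner spread out with comments (purely explanatory; the state labels in the output are whatever netstat reports, e.g. ESTABLISHED or TIME_WAIT):

    # $NF is the last field of each netstat line, i.e. the TCP connection state.
    # S[] accumulates a per-state count, printed once at the end.
    netstat -n | awk '
        /^tcp/ { ++S[$NF] }
        END    { for (state in S) print state, S[state] }
    '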

Removing lines containing a unique first field with awk?

爱⌒轻易说出口 Submitted on 2019-12-12 09:43:02
Question: Looking to print only lines that have a duplicate first field, e.g. from data that looks like this: 1 abcd 1 efgh 2 ijkl 3 mnop 4 qrst 4 uvwx it should print out: 1 abcd 1 efgh 4 qrst 4 uvwx (FYI - the first field is not always 1 character long in my data). Answer 1: awk 'FNR==NR{a[$1]++;next}(a[$1] > 1)' ./infile ./infile Yes, you give it the same file as input twice. Since you don't know ahead of time whether the current record is unique or not, you build up an array keyed on $1 on the first pass, then you only
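For comparison, a single-pass sketch of the same idea (this is an alternative, not the answerer's method; it buffers the file in memory and does not preserve group order; ./infile is the name used in the answer):

    # Collect lines under their first field; print only groups with more than one line.
    awk '{ lines[$1] = lines[$1] $0 ORS; count[$1]++ }
         END { for (k in count) if (count[k] > 1) printf "%s", lines[k] }' ./infile

The two-pass version in the answer avoids the memory cost and keeps the input order, which is usually the better trade-off for large files.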

get an array of arrays with unique elements

左心房为你撑大大i Submitted on 2019-12-12 03:07:19
Question: I have an array like this: [1, 2, 3, 3, 4, 4, 5, 6, 6, 6, 7] I want to know if there's a method to get this: [[1, 2, 3, 4, 5, 6, 7], [3, 4, 6], [6]] I know there is Array#uniq, but this removes the duplicate elements, and I would like to keep them. Answer 1: Not sure about performance, but this works: Code: $ cat foo.rb require 'pp' array = [1, 2, 3, 3, 4, 4, 5, 6, 6, 6, 7] result = [] values = array.group_by{|e| e}.values while !values.empty? result << values.map{|e| e.slice!(0,1)}.flatten values

Sorting and counting method faster than cat file | sort | uniq -c

ぐ巨炮叔叔 Submitted on 2019-12-12 02:58:00
Question: I have the following script that parses some |-delimited field/value pairs. Sample data looks like |Apple=32.23|Banana =1232.12|Grape=12312|Pear=231|Grape=1231| I am just looking to count how many times the A, B, or C field names appear in the log file. The field list needs to be dynamic. Log files are 'big', about 500 megs each, so it takes a while to sort each file. Is there a faster way to do the count once I do the cut and get a file with one field per line? cat /bb/logs/$dir/$file.txt | tr -s "
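A sketch of a hash-based count that skips sorting the raw 500 MB file entirely (the path comes from the excerpt; splitting records on '|' and names from values on '=' is an assumption about the full format). The only sort left runs over the handful of summary lines:

    # Split each record on '|' into NAME=VALUE pairs and count the names in an
    # associative array -- no sort of the raw data is needed.
    awk -F'|' '{
        for (i = 1; i <= NF; i++) {
            split($i, kv, "=")
            if (kv[1] != "") count[kv[1]]++
        }
    }
    END { for (k in count) print count[k], k }' "/bb/logs/$dir/$file.txt" | sort -nr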

What is the meaning of the delimiter in cut, and why does this command sort twice?

妖精的绣舞 Submitted on 2019-12-11 14:49:11
Question: I am trying to understand the purpose of this command, and since I only know the basics, this is what I have worked out so far: last | cut -d" " -f 1 | sort | uniq -c | sort. last searches back through the file /var/log/wtmp (or the file designated by the -f flag) and displays a list of all users logged in (and out) since that file was created. cut shows the desired column. The option -d specifies the field delimiter used in the input file, and -f specifies which field you want to extract; 1 is the output I
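A commented rendering of the same pipeline, purely illustrative (the names it prints are whatever last finds in /var/log/wtmp on the machine it runs on). sort appears twice because uniq -c only collapses adjacent duplicates, so the user names have to be grouped first; the second sort then orders the count/name summary lines:

    last |               # login history from /var/log/wtmp
      cut -d" " -f 1 |   # keep field 1 (space-delimited): the user name
      sort |             # group identical names so uniq can collapse them
      uniq -c |          # turn each group into "count name"
      sort               # order the summary lines (add -n to sort numerically by count)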

Ruby 1.8.6 Array#uniq not removing duplicate hashes

坚强是说给别人听的谎言 Submitted on 2019-12-10 23:58:36
Question: I have this array, in a Ruby 1.8.6 console: arr = [{:foo => "bar"}, {:foo => "bar"}] Both elements are equal to each other: arr[0] == arr[1] => true #just in case there's some "==" vs "===" oddness... arr[0] === arr[1] => true But arr.uniq doesn't remove the duplicates: arr.uniq => [{:foo=>"bar"}, {:foo=>"bar"}] Can anyone tell me what's going on here? EDIT: I can write a not very clever uniqifier which uses include? as follows: uniqed = [] arr.each do |hash| unless uniqed.include?(hash)