bash

Linux bash script to find and delete the oldest file with special characters and whitespace in a directory tree if a condition is met

Submitted by 和自甴很熟 on 2021-02-20 00:47:08
Question: I need some help building a Linux bash script to find and delete the oldest file with special characters and whitespace in a directory tree if a condition is met. I have been searching the forum for questions like this, and thanks to users here I came up with the output seen below. So far I can't figure out how to pipe the output filename to rm so that it gets deleted. The goal is to check whether the disk is running full and, if so, delete the oldest file until the free-space requirement is met. The problem is,
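A minimal sketch of the approach described above (not the poster's actual script), assuming GNU findutils and coreutils for the NUL-delimited options, a hypothetical target directory /srv/data, and a 10% free-space threshold; NUL delimiters keep whitespace and special characters in file names intact:

    #!/bin/bash
    dir=/srv/data       # hypothetical directory tree to prune
    min_free=10         # keep deleting until at least 10% of the filesystem is free

    free_pct() {
        # df reports used%, so convert it to free%
        df --output=pcent "$dir" | awk 'NR==2 { gsub("%", ""); print 100 - $1 }'
    }

    while [ "$(free_pct)" -lt "$min_free" ]; do
        # oldest file by modification time; %T@ = mtime, %p = path, NUL-terminated
        oldest=$(find "$dir" -type f -printf '%T@\t%p\0' \
                 | sort -z -n | head -z -n 1 | cut -z -f2- | tr -d '\0')
        [ -n "$oldest" ] || break       # nothing left to delete
        rm -- "$oldest"
    done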

Using awk to put a header in a text file

Submitted by 青春壹個敷衍的年華 on 2021-02-19 08:48:05
Question: I have lots of text files and need to put a header on each one of them depending on the data in each file. This awk command accomplishes the task: awk 'NR==1{first=$1}{sum+=$1;}END{last=$1;print NR,last,"L";}' my_text.file But this prints to the screen, and I want to put this output in the header of each of my files, saving the modifications under the same file name. Here is what I've tried: for i in *.txt do echo Processing ${i} cat awk 'NR==1{first=$1}{sum+=$1;}END{last=$1;print NR,last
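A minimal sketch of one way to finish the loop above, reusing the awk command from the question and writing each file back under its own name via a temporary file (the *.txt glob is the question's own assumption):

    #!/bin/bash
    for f in *.txt; do
        echo "Processing $f"
        # header: record count, first field of the last line, literal "L"
        # (first and sum are kept from the question's awk even though unused here)
        header=$(awk 'NR==1{first=$1} {sum+=$1} END{last=$1; print NR, last, "L"}' "$f")
        # prepend the header and overwrite the original file under the same name
        { printf '%s\n' "$header"; cat "$f"; } > "$f.tmp" && mv "$f.tmp" "$f"
    done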

JQ, Hadoop: taking command from a file

Submitted by 房东的猫 on 2021-02-19 08:33:00
Question: I have been enjoying the powerful filters provided by JQ (Doc). Twitter's public API gives nicely formatted JSON files. I have access to a large amount of it, and I have access to a Hadoop cluster. So instead of loading them into Pig using Elephantbird, I decided to try out JQ in mapper streaming to see if it is any faster. Here is my final query: nohup hadoop jar $HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.5.1.jar\ -files $HOME/bin/jq \ -D mapreduce.map.memory.mb=2048\ -D
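Since the title asks about taking the command from a file, here is a hedged sketch of one way to do that: jq can read its filter program from a file with -f/--from-file, and that filter file can be shipped to the task working directory with -files alongside the jq binary. The filter contents, input/output paths, and job options below are placeholders, not the poster's actual query:

    # filter.jq holds the jq program, e.g. a single line such as:  .text
    nohup hadoop jar "$HADOOP_HOME/share/hadoop/tools/lib/hadoop-streaming-2.5.1.jar" \
        -files "$HOME/bin/jq,filter.jq" \
        -D mapreduce.map.memory.mb=2048 \
        -D mapreduce.job.reduces=0 \
        -input /user/me/tweets \
        -output /user/me/tweets_out \
        -mapper './jq -c -f filter.jq' &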

tput cols doesn't work properly in a script

Submitted by 五迷三道 on 2021-02-19 07:10:17
Question: I'm using "tput cols" in a script, and everything works except when the window is maximized. My script gets any window size correctly, but when the window is maximized it gets a wrong value (80). When I type "tput cols" directly into the terminal, I get the correct size (158). So my question is: how can I get the right value even when the window is maximized? Thanks in advance. Answer 1: tput cols may be reading from the shell environment variable $COLUMNS instead of the TIOCGWINSZ
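A small sketch of the two workarounds suggested by the answer, assuming a bash script run on an interactive terminal: clear a possibly stale $COLUMNS so tput queries the terminal itself, or read the size straight from the controlling terminal with stty (which uses the TIOCGWINSZ ioctl):

    #!/bin/bash
    unset COLUMNS                  # drop a stale value inherited from the environment
    cols=$(tput cols)

    # fallback: ask the controlling terminal directly; stty prints "rows cols"
    read -r rows cols < <(stty size < /dev/tty)
    echo "terminal width: $cols columns"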

GREP by result of awk

Submitted by 有些话、适合烂在心里 on 2021-02-19 06:22:44
Question: The output of awk '{print $4}' is b05808aa-c6ad-4d30-a334-198ff5726f7c 59996d37-9008-4b3b-ab22-340955cb6019 2b41f358-ff6d-418c-a0d3-ac7151c03b78 7ac4995c-ff2c-4717-a2ac-e6870a5670f0 I need to grep the file st.log by these records. Something like awk '{print $4}' | xargs -i grep -w "pattern from awk" st.log I don't know how to pass the pattern correctly. Answer 1: What about awk '{print $4}' | grep -F -f - st.log Credits to Eric Renouf, who noticed that -f - can be used for standard input instead of -f <(cat),
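A usage sketch of the accepted answer, with source.log standing in as a hypothetical file holding the original awk input; -F treats each pattern as a fixed string and -f - reads the pattern list from standard input:

    awk '{print $4}' source.log | grep -F -f - st.log
    # add -w to restrict matches to whole words, as in the original attempt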