cat

Using awk to put a header in a text file

Submitted by 青春壹個敷衍的年華 on 2021-02-19 08:48:05
Question: I have lots of text files and need to put a header on each one of them, depending on the data in each file. This awk command accomplishes the task:

    awk 'NR==1{first=$1}{sum+=$1;}END{last=$1;print NR,last,"L";}' my_text.file

But this prints to the screen, and I want to put this output in the header of each of my files, saving the modifications under the same file name. Here is what I've tried:

    for i in *.txt
    do
      echo Processing ${i}
      cat awk 'NR==1{first=$1}{sum+=$1;}END{last=$1;print NR,last
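
The post is cut off above, but one way to prepend the awk output to each file in place (a minimal sketch of my own, not the thread's answer; the temporary-file step is an assumption about how the rewrite could be done safely) is:

    # Compute the header with awk, then rewrite each file with that header on top.
    for i in *.txt; do
      header=$(awk 'NR==1{first=$1} {sum+=$1} END{last=$1; print NR, last, "L"}' "$i")
      { printf '%s\n' "$header"; cat "$i"; } > "$i.tmp" && mv "$i.tmp" "$i"
    done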

replace path value of a variable in file with bash

Submitted by 一笑奈何 on 2021-02-11 07:42:43
Question: I have a namelist containing inputs for simulations, and I have to change the path of some variables using bash. The text file has the following value that I would like to change:

    opt_path = '/home/gle/test_workflow',

I would like to replace the value of opt_path with cat and sed, and I tried the following, but it doesn't work:

    new_path=/my/new/path
    cat namelist.wps | sed "s/^opt_path.*/opt_path = '${new_path}'/g"

It returns the following error:

    sed: -e expression #1, char 29: unknown option to `s
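
The error occurs because the slashes in $new_path become extra s/// delimiters once the variable is expanded. A minimal sketch of a fix (my own, not necessarily the thread's answer; the trailing comma mirrors the original line and is an assumption about the desired output):

    # Use '|' as the s-command delimiter so slashes in the path stay literal,
    # and let GNU sed edit the namelist in place instead of piping through cat.
    new_path=/my/new/path
    sed -i "s|^opt_path.*|opt_path = '${new_path}',|" namelist.wps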

Merge files with same name in different folders

Submitted by 一曲冷凌霜 on 2021-01-29 04:34:02
Question: I'm new to Linux and I'm looking for a command to merge files with the same name but from different folders, like this: folder 1 and folder l1. Folder 1 contains folder 2 and the files 1.txt, 2.txt, 3.txt, ...; folder 2 contains the files 1.txt, 2.txt, 3.txt, ... I want to merge the two texts from folder 1 and subfolder 2, then put them into folder l1. I got this:

    ls ./1 | while read FILE; do
      cat ./1/"$FILE" ./1/2/"$FILE" >> ./l1/"$FILE"
    done

This one seems to work well, the two files are merged; however, a new
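
The question is cut off above, but note that ls ./1 also lists the subdirectory 2 itself, so the loop ends up trying to cat a directory. A minimal sketch that avoids parsing ls and only touches regular files (assuming the .txt naming from the question):

    # Glob regular files in folder 1 and merge each with its namesake in 1/2.
    mkdir -p ./l1
    for f in ./1/*.txt; do
      name=$(basename "$f")
      cat "./1/$name" "./1/2/$name" > "./l1/$name"
    done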

Cat * and order of files

Submitted by 北城以北 on 2021-01-28 03:03:32
Question: I need to concatenate all files in a directory into one file, but files with specified names must be at the top of the output. Just doing

    cat * > result

concatenates all the files in alphabetical order. Is there any way to tell cat to place the file vars.css, or any other, at the beginning of the output? For now I just renamed the files that need to come first to 000-filename, but I wonder if there is a better solution that doesn't involve renaming files.

Answer 1: There are many ways to achieve this, most of which would involve invoking
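
The answer is truncated above; one such approach (a sketch of my own, not necessarily what the answer goes on to describe) is to name the priority file explicitly and then append everything else:

    # Put vars.css first, then append the remaining regular files,
    # skipping the priority file and the output file itself.
    cat vars.css > result
    for f in *; do
      [ -f "$f" ] || continue
      [ "$f" = vars.css ] && continue
      [ "$f" = result ] && continue
      cat "$f" >> result
    done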

Why does python exit immediately when I pipe code in with echo but not with cat?

Submitted by 那年仲夏 on 2021-01-27 12:52:33
Question:

    #!/bin/bash
    echo "print('Hello 1')" | python3
    cat | python3 -u <<EOF
    print('Hello 2')
    EOF
    echo "print('Hello 3')" | python3

This outputs:

    Hello 1
    Hello 2

It will wait for me to press enter before printing the final Hello 3. It also does this with python's -u flag for unbuffered output. Why does it do this for cat but not for echo?

Answer 1: You aren't using cat. You're using a here-doc, and cat is waiting for input separately. Just remove the cat | and try it again:

    echo "print('Hello 1')" |
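
The answer's code is cut off above; following its suggestion to drop "cat |", the corrected script would look roughly like this (my reconstruction, not the thread's verbatim code):

    #!/bin/bash
    echo "print('Hello 1')" | python3
    # Feed the here-doc directly to python3; no cat is left waiting on stdin.
    python3 -u <<EOF
    print('Hello 2')
    EOF
    echo "print('Hello 3')" | python3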

Can I make a shell function in a pipeline conditionally “disappear”, without using cat?

Submitted by 只谈情不闲聊 on 2020-08-08 18:19:14
Question: I have a bash script that produces some text from a pipe of commands. Based on a command-line option, I want to do some validation on the output. For a contrived example...

    CHECK_OUTPUT=$1
    ...
    check_output() {
      if [[ "$CHECK_OUTPUT" != "--check" ]]; then
        # Don't check the output. Passthrough and return.
        cat
        return 0
      fi
      # Check each line exists in the fs root
      while read line; do
        if [[ ! -e "/$line" ]]; then
          echo "Error: /$line does not exist"
          return 1
        fi
        echo "$line"
      done
      return 0
    }
    ls /usr |
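
The example is cut off above; one way to let the passthrough stage "disappear" entirely (a sketch based on my own assumption, not the thread's accepted answer) is to build the pipeline conditionally, so the extra stage only exists when checking is requested:

    # Only insert check_output into the pipeline when --check was given;
    # otherwise run the producing command on its own, with no extra process.
    if [[ "$CHECK_OUTPUT" == "--check" ]]; then
      ls /usr | check_output
    else
      ls /usr
    fi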

How to concatenate a huge number of files

Submitted by 痴心易碎 on 2020-08-02 07:11:23
Question: I would like to concatenate my files. I use

    cat *txt > newFile

but I have almost 500,000 files and it complains that the argument list is too long. Is there an efficient and fast way of merging half a million files? Thanks

Answer 1: If your directory structure is shallow (there are no subdirectories) then you can simply do:

    find . -type f -exec cat {} \; > newFile

If you have subdirectories, you can limit the find to the top level, or you might consider putting some of the files in the sub
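
A related sketch (my own addition, not part of the truncated answer): ending -exec with + instead of \; passes many files to each cat invocation, which is much faster for half a million files, and -maxdepth 1 keeps the search at the top level:

    # Batch many files per cat call and stay in the current directory only.
    find . -maxdepth 1 -type f -name '*txt' -exec cat {} + > newFile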

cat multiple files based on ID in filename

Submitted by 浪子不回头ぞ on 2020-06-01 07:20:09
Question: I would like to combine files that share the same ID before the first underscore into one file using cat. How do I do this for multiple files like those below? I thought of something like this:

    for f in *.R1.fastq.gz; do cat "$f" > "${f%}.fastq.gz"; done

in:

    9989_L004_R1.fastq.gz
    9989_L005_R1.fastq.gz
    9989_L009_R1.fastq.gz
    9873_L008_R1.fastq.gz
    9873_L005_R1.fastq.gz
    9873_L001_R1.fastq.gz

out:

    9989.fastq.gz
    9873.fastq.gz

Answer 1:

    for f in *_R1.fastq.gz; do cat "$f" >> "${f%%_*}.fastq.gz"; done

>> for appending, ${f%%
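
The answer's explanation is cut off above; as an illustration of the parameter expansion it uses (my own example, not part of the original answer), ${f%%_*} strips the longest suffix matching _*, leaving only the ID before the first underscore:

    # Demonstration of the expansion used in the answer.
    f=9989_L004_R1.fastq.gz
    echo "${f%%_*}"    # prints 9989, so output is appended to 9989.fastq.gz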

How to avoid 'sink stack is full' error when sink() is used to capture messages in foreach loop

Submitted by 江枫思渺然 on 2020-05-27 04:26:45
Question: In order to see the console messages output by a function running in a foreach() loop, I followed the advice of this guy and added a sink() call like so:

    library(foreach)
    library(doMC)
    cores <- detectCores()
    registerDoMC(cores)
    X <- foreach(i=1:100) %dopar% {
      sink("./out/log.branchpies.txt", append=TRUE)
      cat(paste("\n","Starting iteration",i,"\n"), append=TRUE)
      myFunction(data, argument1="foo", argument2="bar")
    }

However, at iteration 77 I got the error 'sink stack is full'. There are well