I've a directory with a large number of 0-byte files in it. I can't even see the files when I use the ls command. I'm using a small script to delete these files, but sometimes that does not even delete these files.
You can even use the -delete option, which will delete the files for you. From man find:

-delete
       Delete files; true if removal succeeded.
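For example, assuming your find supports -delete (GNU and BSD find do; it is not required by POSIX), this removes the 0-byte regular files in the current directory without spawning rm at all:

find . -maxdepth 1 -type f -size 0 -delete

Note that -delete should come last, after the tests that select the files, since find evaluates its expression left to right.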
You can use the following command:
find . -maxdepth 1 -size 0c -exec rm {} \;
And if you are looking to delete the 0-byte files in subdirectories as well, omit -maxdepth 1 from the previous command.
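The recursive form of the same command would then be:

find . -size 0c -exec rm {} \;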
Here is an example; trying it yourself will help this make sense:
bash-2.05b$ touch empty1 empty2 empty3
bash-2.05b$ cat > fileWithData1
Data Here
bash-2.05b$ ls -l
total 0
-rw-rw-r-- 1 user group 0 Jul 1 12:51 empty1
-rw-rw-r-- 1 user group 0 Jul 1 12:51 empty2
-rw-rw-r-- 1 user group 0 Jul 1 12:51 empty3
-rw-rw-r-- 1 user group 10 Jul 1 12:51 fileWithData1
bash-2.05b$ find . -size 0 -exec rm {} \;
bash-2.05b$ ls -l
total 0
-rw-rw-r-- 1 user group 10 Jul 1 12:51 fileWithData1
If you have a look at the man page for find (type man find), you will see an array of powerful options for this command.
Going up a level, it's worthwhile to figure out why the files are there; you're just treating a symptom by deleting them. What if some program is using them to lock resources? If so, deleting them could lead to corruption.
lsof is one way you might figure out which processes have a handle on the empty files.
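As a rough sketch, assuming lsof is installed and the files live under /path/to/dir (a placeholder path), this lists every process with an open handle on anything beneath that directory:

lsof +D /path/to/dir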
"...sometimes that does not even delete these files" makes me think this might be something you do regularly. If so, this Perl script will remove any zero-byte regular files in your current directory. It avoids rm altogether by using a system call (unlink), and is quite fast.
#!/usr/bin/env perl
use warnings;
use strict;

# Gather both ordinary and hidden entries in the current directory.
my @files = glob "* .*";

for (@files) {
    next unless -e and -f;    # skip anything that is not an existing regular file
    unlink if -z;             # delete the file if it is zero bytes
}
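To run it, you could save it as delete_empty.pl (the filename here is just an example), make it executable, and invoke it from the directory you want to clean:

chmod +x delete_empty.pl
./delete_empty.pl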
Delete all files whose names begin with file in the current directory:
find . -maxdepth 1 -name 'file*' -exec rm {} \;
This will still take a long time, as it starts a new rm process for every file.
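If your find supports the + terminator for -exec (POSIX requires it, and GNU and BSD find provide it), it will batch many filenames into each rm invocation, which avoids the per-file process startup cost:

find . -maxdepth 1 -name 'file*' -exec rm {} +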