Is `ls -f | grep -c .` the fastest way to count files in directory, when using POSIX / Unix system (Big Data)?

Submitted by 江枫思渺然 on 2019-12-23 16:28:07

Question


I used to do `ls path-to-whatever | wc -l`, until I discovered that it actually consumes a huge amount of memory. Then I moved to `find path-to-whatever -name "*" | wc -l`, which seems to consume a much more modest amount of memory, regardless of how many files you have.

Then I learned that `ls` is mostly slow and less memory-efficient because it sorts its results. Using `ls -f | grep -c .` gives very fast results; the only problem is filenames that might have line breaks in them. However, that is a very minor problem for most use cases.
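As an aside, the line-break problem can be avoided entirely by counting one character per file instead of one line per file. A minimal sketch (note the assumptions: `-printf` and `-maxdepth` are GNU find extensions, not POSIX; the `/tmp/fcount_demo` directory is just a throwaway example):

```shell
# Set up a demo directory containing three regular files,
# one of which has a newline embedded in its name.
rm -rf /tmp/fcount_demo
mkdir -p /tmp/fcount_demo
touch /tmp/fcount_demo/a /tmp/fcount_demo/b "/tmp/fcount_demo/with
newline"

# Print one '.' per regular file and count characters, not lines,
# so embedded newlines in filenames cannot inflate the count.
find /tmp/fcount_demo -maxdepth 1 -type f -printf '.' | wc -c
# → 3  (ls | wc -l would report 4 here)
```

On systems without GNU find, `find dir -print0 | tr -dc '\0' | wc -c` achieves the same newline-safe counting.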

Is this the fastest way to count files?

EDIT / Possible answer: It seems that when it comes to Big Data, some versions of `ls`, `find`, etc. have been reported to hang with more than 8 million files (this still needs to be confirmed). To succeed with very large file counts (my guess is more than 2.2 billion), one should use the `getdents64` system call instead of `getdents`, which can be done from most programming languages that support the POSIX standards. Some filesystems might offer faster, non-POSIX methods for counting files.


Answer 1:


One way would be to use `readdir` and count the entries (in one directory). Below I'm counting regular files using `d_type == DT_REG`, which is available only on some OSes and filesystems (see `man readdir`, NOTES), but you could just comment out that check and count all the directory entries:

#include <stdio.h>
#include <dirent.h>

int main (int argc, char *argv[]) {

  struct dirent *entry;
  DIR *dirp;

  long long c = 0;                      // 64-bit counter, initialized

  if(argc<=1)                             // require dir
    return 1;

  dirp = opendir (argv[1]);

  if (dirp == NULL) {                     // dir not found
    return 2;
  }

  while ((entry = readdir(dirp)) != NULL) {
    if (entry->d_type == DT_REG)        // count regular files only
      c++;
    // printf ("%s\n", entry->d_name);  // for outputting filenames
  }
  printf ("%lli\n", c);

  closedir (dirp);
  return 0;
}

Compile and run:

$ gcc code.c
$ ./a.out ~
254

(I need to clean my home dir :)

Edit:

I touched 1,000,000 files into a dir and ran a quick comparison (best user+sys of 5 runs presented):

$ time ls -f | grep -c .
1000005

real    0m1.771s
user    0m0.656s
sys     0m1.244s

$ time ls -f | wc -l
1000005

real    0m1.733s
user    0m0.520s
sys     0m1.248s

$ time ../a.out  .
1000003

real    0m0.474s
user    0m0.048s
sys     0m0.424s

Edit 2:

As requested in the comments (with the `printf` of filenames uncommented, piped to `wc -l`):

$ time ./a.out testdir | wc -l
1000004

real    0m0.567s
user    0m0.124s
sys     0m0.468s


Source: https://stackoverflow.com/questions/44084805/is-ls-f-grep-c-the-fastest-way-to-count-files-in-directory-when-using-p
