I wrote kind of a test suite which is heavily file intensive. After some time (~2 h) I get an IOError: [Errno 24] Too many open files: '/tmp/tmpxsqYPm'. I double-checked that all file handles get closed again, but the error still occurs.
resource.RLIMIT_NOFILE is indeed 7, but that is a symbolic constant used as an index into resource.getrlimit(), not the limit itself. resource.getrlimit(resource.RLIMIT_NOFILE) returns the actual (soft, hard) limits, and the soft limit is what you want as the upper bound of your range().
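A quick interactive check makes the difference obvious (the printed values are illustrative and will differ per system):

import resource

# RLIMIT_NOFILE is a symbolic constant naming the resource, not a limit.
print(resource.RLIMIT_NOFILE)  # e.g. 7 on Linux

# getrlimit() returns the actual (soft, hard) descriptor limits.
soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
print(soft, hard)  # e.g. 1024 4096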
Your test script overwrites f each iteration, so the previous file object is garbage-collected and its file gets closed each time; that is not your leak. Logging to files and subprocesses with pipes, however, both consume descriptors, and that can lead to exhaustion. See the sketch below.
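As a minimal sketch of both leak sources (the log path, logger name, and command here are placeholders): each logging.FileHandler keeps a descriptor open until the handler is closed, and each subprocess pipe holds descriptors until the output is consumed and the process is reaped.

import logging
import subprocess

# A FileHandler opens its log file immediately and keeps the
# descriptor until close() is called on the handler.
handler = logging.FileHandler('/tmp/example.log')
logging.getLogger('demo').addHandler(handler)

# Popen with a pipe allocates descriptors for the pipe; they are
# released once the output is read and the process has exited.
proc = subprocess.Popen(['echo', 'hi'], stdout=subprocess.PIPE)
proc.communicate()

handler.close()  # releases the log file's descriptor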
The corrected code is:
import fcntl
import os
import resource


def get_open_fds():
    # Probe every descriptor up to the soft limit; fcntl() raises
    # for descriptors that are not open.
    fds = []
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    for fd in range(soft):
        try:
            fcntl.fcntl(fd, fcntl.F_GETFD)
        except IOError:
            continue
        fds.append(fd)
    return fds


def get_file_names_from_file_number(fds):
    # Resolve each descriptor to a path via /proc (Linux only).
    names = []
    for fd in fds:
        names.append(os.readlink('/proc/self/fd/%d' % fd))
    return names


fds = get_open_fds()
print(get_file_names_from_file_number(fds))
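Note that /proc/self/fd only exists on Linux. For a portable view, the third-party psutil package exposes similar information (a sketch, assuming psutil is installed; open_files() covers regular files only, not sockets or pipes):

import psutil

# List regular files held open by the current process.
for f in psutil.Process().open_files():
    print(f.fd, f.path)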