I wrote a test suite that is heavily file intensive. After some time (about 2 hours) it fails with an IOError: [Errno 24] Too many open files: '/tmp/tmpxsqYPm'. I double-checked that I close every file handle I open, but the error persists.
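For reference, the files in the suite are opened and closed with the usual with-statement pattern; this is a simplified sketch (run_one_case and its path argument are made up for illustration), not the actual test code:

    def run_one_case(path):
        # The with-statement closes the handle when the block
        # exits, even if the body raises an exception.
        with open(path, "w") as f:
            f.write("test")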
I tried to figure out the number of allowed file descriptors using resource.RLIMIT_NOFILE, and to list the currently open file descriptors:
    import fcntl
    import resource

    def get_open_fds():
        # Probe every fd above stderr; fcntl raises IOError (EBADF)
        # for descriptors that are not actually open.
        fds = []
        for fd in range(3, resource.RLIMIT_NOFILE):
            try:
                flags = fcntl.fcntl(fd, fcntl.F_GETFD)
            except IOError:
                continue
            fds.append(fd)
        return fds
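As a side note, resource.RLIMIT_NOFILE is also the constant that resource.getrlimit takes to report the actual limits; for reference (not part of the test itself):

    import resource

    # getrlimit returns a (soft, hard) pair; exceeding the soft
    # limit is what raises EMFILE, "Too many open files".
    soft, hard = resource.getrlimit(resource.RLIMIT_NOFILE)
    print "soft limit: %d, hard limit: %d" % (soft, hard)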
So if I run the following test:
    print get_open_fds()
    for i in range(0, 100):
        print "/tmp/test_%i" % i
        f = open("/tmp/test_%i" % i, "w")
        f.write("test")
        print get_open_fds()
I get this output:
    []
    /tmp/test_0
    [3]
    /tmp/test_1
    [4]
    /tmp/test_2
    [3]
    /tmp/test_3
    [4]
    /tmp/test_4
    [3]
    /tmp/test_5
    [4]
    ...
That's strange: I expected the number of open file descriptors to keep growing, since the loop never closes the files it opens. Is my script correct?
I'm also using Python's logging module and subprocess in the suite. Could either of those be the reason for my fd leak?
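For completeness, on Linux the open descriptors can also be listed straight out of /proc, which I could use to cross-check the fcntl loop above (a minimal sketch; it assumes a mounted /proc, and the helper name is made up):

    import os

    def get_open_fds_proc():
        # Linux-only: each entry in /proc/self/fd is one open
        # descriptor of this process (including, transiently, the
        # fd used to read the directory itself).
        return sorted(int(name) for name in os.listdir("/proc/self/fd"))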
Thanks, Daniel