Django + FastCGI - randomly raising OperationalError

Front-end · Unresolved · 13 answers · 1337 views
Asked by 忘掉有多难 on 2020-12-31 19:24

I'm running a Django application. Had it under Apache + mod_python before, and it was all OK. Switched to Lighttpd + FastCGI. Now I randomly get the following exception (ne…

13 Answers
  •  不知归路
     2020-12-31 19:46

    Possible solution: http://groups.google.com/group/django-users/browse_thread/thread/2c7421cdb9b99e48

    Until recently I was curious to test this on Django 1.1.1 to see whether the exception would be thrown again... surprise, there it was. It took me some time to debug this; the helpful hint was that it only shows up when (pre)forking.

    So for those who are randomly getting these exceptions, I can say... fix your code :) OK, seriously, there are always a few ways of doing this, so let me first explain where the problem is. If you access the database while any of your modules is being imported, e.g. to read configuration from the database, then you will get this error. When your fastcgi-prefork application starts, it first imports all modules, and only after that forks the children. If you established a db connection during import, all child processes have an exact copy of that connection object. This connection is closed at the end of the request phase (the request_finished signal). So the first child that is called to process a request will close this connection. But what happens to the rest of the child processes? They believe they still have an open and working connection to the db, so any db operation causes an exception.

    Why doesn't this show up in the threaded execution model? I suppose because threads use the same connection object and know when another thread closes it.

    How to fix this? The best way is to fix your code... but that can be difficult sometimes. Another option, in my opinion quite clean, is to put a small piece of code somewhere in your application:

    # Close any connection inherited from the parent process at the
    # *start* of each request, so every prefork child reconnects fresh.
    from django.db import connection
    from django.core import signals

    def close_connection(**kwargs):
        connection.close()

    signals.request_started.connect(close_connection)
    

    Not ideal though; connecting twice to the DB is a workaround at best.
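    The "fix your code" route usually means deferring any import-time database read until the first request actually needs it. A minimal sketch of that pattern follows; the `load_config_from_db` name and its return value are made up for illustration, standing in for a real query such as reading a config model:

```python
# Sketch of lazy, per-process initialization instead of import-time DB
# access. Nothing here touches the database at import time, so nothing
# stale is inherited across a fork.

_config_cache = None

def load_config_from_db():
    # Hypothetical stand-in for a real query (e.g. a config model's
    # objects.all()); in a Django app this runs inside a request.
    return {"site_name": "example"}

def get_config():
    """Load config on first use, after any prefork has happened."""
    global _config_cache
    if _config_cache is None:
        _config_cache = load_config_from_db()
    return _config_cache
```

    Each prefork child then opens its own connection the first time `get_config()` runs in that child, after the fork, instead of inheriting one from the parent.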


    Possible solution: using connection pooling (pgpool, pgbouncer), so you have DB connections pooled and stable, and handed fast to your FCGI daemons.
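    As a rough illustration of that setup (every value below is an assumption for the sketch, not something from this thread), the FastCGI daemons would point Django's `DATABASE_PORT` at pgbouncer instead of Postgres:

```ini
; Minimal pgbouncer sketch: clients (the FCGI workers) connect to
; 127.0.0.1:6432; pgbouncer multiplexes them onto a small pool of
; real connections to Postgres on 5432.
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_addr = 127.0.0.1
listen_port = 6432
pool_mode = session
max_client_conn = 100
default_pool_size = 20
```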

    The problem is that this triggers another bug, psycopg2 raising an InterfaceError because it's trying to disconnect twice (pgbouncer already handled this).

    Now the culprit is the Django signal request_finished triggering connection.close(), and failing loudly even if the connection was already disconnected. I don't think this behavior is desired: if the request has already finished, we don't care about the DB connection anymore. A patch correcting this should be simple.

    The relevant traceback:

     /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/core/handlers/wsgi.py in __call__(self, environ={'AUTH_TYPE': 'Basic', 'GATEWAY_INTERFACE': 'CGI/1.1', 'HTTP_HOST': 'www.rede-colibri.com', ...}, start_response)
      246                 response = self.apply_response_fixes(request, response)
      247         finally:
      248             signals.request_finished.send(sender=self.__class__)
      249 
      250         try:
     /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/dispatch/dispatcher.py in send(self, sender, **named={})
      164 
      165         for receiver in self._live_receivers(_make_id(sender)):
      166             response = receiver(signal=self, sender=sender, **named)
      167             responses.append((receiver, response))
      168         return responses
     /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/__init__.py in close_connection(**kwargs)
       63 # when a Django request is finished.
       64 def close_connection(**kwargs):
       65     connection.close()
       66 signals.request_finished.connect(close_connection)
       67 
     /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/__init__.py in close(self)
       74     def close(self):
       75         if self.connection is not None:
       76             self.connection.close()
       77             self.connection = None
       78 
    

    The exception handling here could be made more lenient:

    /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/__init__.py

       63 # when a Django request is finished.
       64 def close_connection(**kwargs):
       65     connection.close()
       66 signals.request_finished.connect(close_connection)
    

    Or it could be handled better on the psycopg2 side, so it doesn't raise fatal errors when all we're trying to do is disconnect a connection that is already closed:

    /usr/local/lib/python2.6/dist-packages/Django-1.1.1-py2.6.egg/django/db/backends/__init__.py

       74     def close(self):
       75         if self.connection is not None:
       76             self.connection.close()
       77             self.connection = None
    
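    The leniency both excerpts are asking for can be sketched in isolation. Everything below is a stand-in: the local `InterfaceError` and `FakeConnection` mimic psycopg2's exception and connection object, and in Django 1.1 this logic would belong in `close_connection` or the backend's `close()`; it is the proposed behavior, not the shipped code:

```python
class InterfaceError(Exception):
    """Stand-in for psycopg2.InterfaceError (raised on double-disconnect)."""

class FakeConnection:
    """Stand-in for a psycopg2 connection that pgbouncer may already have torn down."""
    def __init__(self):
        self.closed = False

    def close(self):
        if self.closed:
            raise InterfaceError("connection already closed")
        self.closed = True

def close_leniently(conn):
    """Close conn, treating 'already closed' as a harmless no-op."""
    try:
        conn.close()
    except InterfaceError:
        pass  # the request is finished; we no longer care about this connection

conn = FakeConnection()
close_leniently(conn)  # first close actually closes
close_leniently(conn)  # second close is silently ignored instead of a 500
```

    With `FakeConnection.close()` swapped for the real `connection.close()`, the second teardown during request_finished would no longer surface as a fatal error.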

    Other than that, I'm short on ideas.
