My service is based on Flask + PostgreSQL + Gunicorn + Supervisor + Nginx.
When deploying with Docker, after running the service and then accessing the API
My db configuration:
import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {"pool_pre_ping": True}
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
# Play with the following options:
app.config['SQLALCHEMY_POOL_SIZE'] = 10
app.config['SQLALCHEMY_MAX_OVERFLOW'] = 20
app.config['SQLALCHEMY_POOL_RECYCLE'] = 1800
db = SQLAlchemy(app)
The same logic applies to plain sqlalchemy.orm (on which Flask-SQLAlchemy is based, by the way):

import sqlalchemy

engine = sqlalchemy.create_engine(connection_string, pool_pre_ping=True)
More protection strategies can be set up, as described in the docs: https://docs.sqlalchemy.org/en/13/core/pooling.html#disconnect-handling-pessimistic
For example, here is my engine instantiation:
engine = sqlalchemy.create_engine(
    connection_string,
    pool_size=10,
    max_overflow=2,
    pool_recycle=300,
    pool_pre_ping=True,
    pool_use_lifo=True,
)
Session = sqlalchemy.orm.sessionmaker(bind=engine, query_cls=RetryingQuery)
For the RetryingQuery code, cf.: Retry failed sqlalchemy queries
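That code is not reproduced in the linked answer here, but a minimal sketch of what such a query class might look like is below (assuming the classic sqlalchemy.orm.Query API; the retry count, back-off, and logging are illustrative assumptions, not the linked author's exact code):

import logging
from time import sleep

from sqlalchemy.exc import OperationalError
from sqlalchemy.orm.query import Query


class RetryingQuery(Query):
    # Illustrative retry budget; tune for your workload
    __max_retry_count__ = 3

    def __iter__(self):
        attempts = 0
        while True:
            attempts += 1
            try:
                return super().__iter__()
            except OperationalError as ex:
                if attempts > self.__max_retry_count__:
                    raise
                logging.warning("Query failed (%s), retrying (attempt %d)", ex, attempts)
                # Roll back the failed transaction before retrying on a fresh connection
                self.session.rollback()
                sleep(2 ** (attempts - 1))  # simple exponential back-off

Passed as query_cls to sessionmaker (as above), every query issued through that session will transparently retry on OperationalError a few times before giving up.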
Building on the solution in the answer above and the info from @MaxBlax360's answer, I think the proper way to set these config values in Flask-SQLAlchemy is by setting app.config['SQLALCHEMY_ENGINE_OPTIONS']:
from flask import Flask
from flask_sqlalchemy import SQLAlchemy
app = Flask(__name__)
# pool_pre_ping should help handle DB connection drops
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {"pool_pre_ping": True}
# POSTGRES_USER, dbpass, POSTGRES_HOST, etc. are assumed to come from your own config/environment
app.config['SQLALCHEMY_DATABASE_URI'] = \
    f'postgresql+psycopg2://{POSTGRES_USER}:{dbpass}@{POSTGRES_HOST}:{POSTGRES_PORT}/{POSTGRES_DBNAME}'
db = SQLAlchemy(app)
See also the Flask-SQLAlchemy docs on Configuration Keys.
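For example, the pool options from the question could be folded into that same dict, since SQLALCHEMY_ENGINE_OPTIONS is passed straight through to create_engine() (a sketch; the values are simply the ones from the question, not recommendations):

import os

from flask import Flask
from flask_sqlalchemy import SQLAlchemy

app = Flask(__name__)
app.config['SQLALCHEMY_ENGINE_OPTIONS'] = {
    "pool_pre_ping": True,   # test connections before handing them out
    "pool_size": 10,
    "max_overflow": 20,
    "pool_recycle": 1800,    # recycle connections after 30 minutes
}
app.config['SQLALCHEMY_DATABASE_URI'] = os.environ['DATABASE_URL']
app.config['SQLALCHEMY_TRACK_MODIFICATIONS'] = False
db = SQLAlchemy(app)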