I am creating a Flask application. For one request I need to run a long-running job that the UI does not need to wait for. I will create a thread and send a message to
Check out Flask-Executor, which uses concurrent.futures in the background and makes your life very easy.
from flask_executor import Executor

executor = Executor(app)

@app.route('/someJob')
def index():
    executor.submit(long_running_job)
    return 'Scheduled a job'

def long_running_job():
    # some long-running processing here
    pass
This not only runs jobs in the background but gives them access to the app context. It also provides a way to store jobs so users can check back in to get statuses.
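The status-checking idea can be sketched with plain concurrent.futures, which Flask-Executor wraps (its submit_stored feature keeps futures keyed by name for you). The jobs dict and the schedule/status helpers below are hypothetical names used only for illustration:

```python
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(max_workers=2)
jobs = {}  # hypothetical in-memory job store, keyed by job id

def long_running_job(x):
    time.sleep(0.1)  # stand-in for real work
    return x * 2

def schedule(job_id, x):
    # Submit returns a Future immediately; the request handler need not wait.
    jobs[job_id] = executor.submit(long_running_job, x)

def status(job_id):
    # A status endpoint can poll the stored future later.
    future = jobs[job_id]
    if future.done():
        return {'state': 'done', 'result': future.result()}
    return {'state': 'running'}

schedule('job-1', 21)
time.sleep(0.5)  # give the worker thread time to finish
print(status('job-1'))
```

In a real app the store would need to evict finished jobs (Flask-Executor's stored-futures support handles this), but the pattern is the same.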
If you'd like to execute the long-running operation within the Flask application context, then it's a bit easier (as opposed to using ThreadPoolExecutor and taking care of exceptions) to build a command-line interface to your application (cli.py) - because all web applications should have an admin CLI anyway - and then subprocess.Popen (no wait) that command line from the web request. For example:
# cli.py
import click

import yourpackage.app
import yourpackage.domain

app = yourpackage.app.create_app()

@click.group()
def cli():
    pass

@click.command()
@click.argument('foo_id')
def do_something(foo_id):
    with app.app_context():
        yourpackage.domain.do_something(foo_id)

if __name__ == '__main__':
    cli.add_command(do_something)
    cli()
Then,
# admin.py (flask view / controller)
from flask import Blueprint, flash, redirect, url_for
# roles_required comes from your auth extension, e.g. Flask-Security

import yourpackage.domain

bp = Blueprint('admin', __name__, url_prefix='/admin')

@bp.route('/do-something/<int:foo_id>', methods=["POST"])
@roles_required('admin')
def do_something(foo_id):
    yourpackage.domain.process_wrapper_do_something(foo_id)
    flash("Something has started.", "info")
    return redirect(url_for("..."))
And:
# domain.py
import subprocess

def process_wrapper_do_something(foo_id):
    command = ["python3", "-m", "yourpackage.cli", "do_something", str(foo_id)]
    subprocess.Popen(command)

def do_something(foo_id):
    print("I am doing something.")
    print("This takes some time.")
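The fire-and-forget behaviour of subprocess.Popen can be checked in isolation; the inline -c script below is just a stand-in for the yourpackage.cli invocation above:

```python
import subprocess
import sys

# Popen returns immediately; the child process keeps running on its own,
# so the web request can finish without waiting for the job.
child = subprocess.Popen(
    [sys.executable, "-c", "import time; time.sleep(0.1)"],
)
print("request handler can return now; child pid:", child.pid)
```

One caveat of this pattern: if the parent never calls wait() (or reaps children some other way), finished children linger as zombies until the parent exits.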
The best thing to do for stuff like this is to use a message broker. There is some excellent software in the Python world meant for doing just this:

Celery
RQ

Both are excellent choices.
It's almost never a good idea to spawn a thread the way you're doing it, as this can cause issues processing incoming requests, among other things.
If you take a look at the celery or RQ getting started guides, they'll walk you through doing this the proper way!
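The broker pattern those guides teach can be sketched, as a toy, with the standard library alone - a queue plus a worker loop. Celery and RQ replace the in-process queue with a persistent broker (RabbitMQ/Redis) and run the worker in a separate process, which is what keeps the web workers free:

```python
import queue
import threading

job_queue = queue.Queue()
results = {}

def worker():
    # Pull jobs forever; a broker-backed worker does the same thing,
    # but reads from Redis/RabbitMQ in a separate process.
    while True:
        job_id, fn, args = job_queue.get()
        results[job_id] = fn(*args)
        job_queue.task_done()

threading.Thread(target=worker, daemon=True).start()

def enqueue(job_id, fn, *args):
    # Enqueueing is cheap; a web request would return right after this.
    job_queue.put((job_id, fn, args))

enqueue('j1', lambda x: x + 1, 41)
job_queue.join()  # block here only for the demo, to see the result
print(results['j1'])
```

Unlike this in-process toy, a real broker survives restarts and lets you scale workers independently of the web process.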
Try this example, tested with Python 3.4.3 / Flask 0.11.1:
from flask import Flask
from time import sleep
from concurrent.futures import ThreadPoolExecutor

# DOCS https://docs.python.org/3/library/concurrent.futures.html#concurrent.futures.ThreadPoolExecutor
executor = ThreadPoolExecutor(2)

app = Flask(__name__)

@app.route('/jobs')
def run_jobs():
    executor.submit(some_long_task1)
    executor.submit(some_long_task2, 'hello', 123)
    return 'Two jobs were launched in background!'

def some_long_task1():
    print("Task #1 started!")
    sleep(10)
    print("Task #1 is done!")

def some_long_task2(arg1, arg2):
    print("Task #2 started with args: %s %s!" % (arg1, arg2))
    sleep(5)
    print("Task #2 is done!")

if __name__ == '__main__':
    app.run()
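One consequence of ThreadPoolExecutor(2): at most two tasks run concurrently, and further submissions queue until a worker frees up. A quick stdlib check of that cap (nothing Flask-specific here; the counter names are just for the demo):

```python
import threading
import time
from concurrent.futures import ThreadPoolExecutor

executor = ThreadPoolExecutor(2)
lock = threading.Lock()
active = 0
peak = 0

def task():
    global active, peak
    with lock:
        active += 1
        peak = max(peak, active)  # record highest concurrency seen
    time.sleep(0.1)
    with lock:
        active -= 1

# Submit more tasks than workers; extras wait in the pool's queue.
futures = [executor.submit(task) for _ in range(6)]
for f in futures:
    f.result()
print("peak concurrent tasks:", peak)
```

Size the pool to match the workload: too small and jobs back up behind slow ones, too large and long tasks can starve the process of threads.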